Expired Domain Finder is a software tool that helps you find powerful expired domains to turbocharge your search engine rankings. For those of you who don't know, expired domains are simply domains that were registered but have since expired because the owner did not renew them. This can happen for many reasons: the owner lost interest in the project, the company went bust, the company rebranded, and so on. So what would you use these domains for? Their backlink profile. It's no secret that the more backlinks a website has, the higher it appears in Google. Once you find an expired domain with a strong backlink profile, you have two choices. You can use it as a money website, meaning a website you use to generate a profit; this will be your main website. Or you can use it to build a private blog network (PBN): a network of websites you own that all link to your main website (your money website) with the intention of making it rank higher.
Expired Domain Finder has three methods of finding expired domains.
If you have a list of highly niche-related websites, you can enter that list and this free software will crawl every page on each website entered, looking for external links and checking whether each externally linked domain has expired.
Enter a list of niche-related keywords and the tool will search Google for each one, then crawl every page of every website returned, helping you find niche-related expired domains.
If niche-related domains are of no interest to you, simply enter a small list of seed websites (they can be random) and the software will follow all external links checking for expired domains; each new domain found is added to the crawl list, giving you an endless crawl. Where else can you get an expired domain crawler of this power? Once an expired domain is found, if you have entered your Moz API key (it's free and highly recommended), the domain's DA and PA are automatically fetched and displayed in the results. So simply enter your settings, leave it running overnight and come back to a list of high-DA expired domains.
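The endless crawl described above is essentially a breadth-first queue over domains. Here is a minimal sketch in Python; the `get_external_domains` and `is_expired` callables stand in for the tool's real crawler and registrar check, which are not public, so this is an illustration of the idea rather than the actual implementation:

```python
from collections import deque

def endless_crawl(seed_domains, get_external_domains, is_expired):
    # get_external_domains(domain) -> list of domains linked from that site
    # is_expired(domain) -> bool; both are placeholders, not the tool's API
    queue = deque(seed_domains)
    seen = set(seed_domains)
    expired = []
    while queue:
        domain = queue.popleft()
        if is_expired(domain):
            expired.append(domain)
            continue  # an expired domain has nothing live left to crawl
        # live domains get crawled in turn, feeding the queue endlessly
        for found in get_external_domains(domain):
            if found not in seen:
                seen.add(found)
                queue.append(found)
    return expired
```

With real websites the queue keeps growing as new domains are discovered, which is why the crawl is effectively endless.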
SEO is not as easy as it used to be. The market is very saturated now, and content is incredibly easy to generate and publish; everyone and his dog (or gran!) can make a WordPress website. But the main ranking factor for Google is still backlinks, and will be for the foreseeable future. Your content might be better than your competitors', but if they have more links they'll simply outrank you and receive more traffic. Fair or not, it's a fact. This method can get you the strong backlink profile you need to rank #1. Please note that a PBN is considered black hat, so diversify your backlink profile and do your research first.
Once a scrape (crawl) of a website has started, every page is checked for external links. The domains are stripped out of these external links and checked to see whether they have expired. Mining for expired domains has never been this easy.
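Stripping a domain out of an external link is a small normalisation step. A rough sketch of how it might look, using Python's standard `urllib.parse` (the tool's exact normalisation rules are not documented, so this is an assumption):

```python
from urllib.parse import urlparse

def extract_domain(link: str) -> str:
    # Reduce a full external link to a bare, lower-cased host name.
    host = urlparse(link).netloc.lower().split(":")[0]  # drop any port
    return host[4:] if host.startswith("www.") else host  # drop "www."
```

Each extracted domain can then be queued for an expiry check.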
This is the transcript from the video above; it needs a little tidying up as it comes from YouTube subtitles.
Hi guys, it's Jamie from SuperGrowth.com, and in this video I'm going to show you how to use my expired domain finder, so the title kind of gives it away. Basically, this tool allows you to find expired domains that no one else wants, which hopefully have good or worthwhile backlink profiles. You can either build your money site on one to give you a competitive advantage in the search engines, or use them to build a PBN. So we'll work our way through these tabs and the associated settings, and I'll show you what it can do.
Let's look at some settings. You can enter a list of websites to crawl for expired domains, or you can enter a search query. Say you're looking to build a PBN to boost your money site, which is about horse riding or something; you could put "horse riding" down there and it'll search Google for horse riding and then crawl all the domains Google brings back. Then we've got the endless crawl: you basically put in a few seed websites and it will just crawl endlessly from those. It takes all the domains off those websites and first checks whether they've expired. If they haven't expired, it starts crawling them, then crawls the ones it finds from them, and so on and so on. It will just keep going, okay.
So, let's start off with the simple website list crawl. The settings for this are covered by the general crawl settings, and these apply to all the other types of crawl as well, such as the search crawl and the endless crawl, so it's pretty simple really. Delay between each request to a website: one second (this is in seconds). Then concurrent websites crawled: how many websites you want to crawl at any one point in time, and then how many threads will concurrently crawl per website. So that's a crawl of ten websites at once, and on each of those websites there are three different threads crawling; that's 30 concurrent connections you've got going.
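The arithmetic above (10 concurrent websites times 3 threads each = 30 connections) can be captured in a small settings object. The names here are illustrative, not the tool's actual internals:

```python
from dataclasses import dataclass

@dataclass
class CrawlSettings:
    # Values from the walkthrough; field names are assumptions.
    request_delay_s: float = 1.0   # delay between requests to one website
    concurrent_websites: int = 10  # websites crawled at the same time
    threads_per_website: int = 3   # concurrent threads on each website

    @property
    def total_connections(self) -> int:
        # Upper bound on simultaneous connections the crawl can open.
        return self.concurrent_websites * self.threads_per_website
```

Tuning these three numbers is how you trade crawl speed against the load on your PC and connection.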
Now, the number of errors before we give up crawling a website, in case they've got some kind of anti-scraping technology or the website is just completely knackered or something; thirty is generally fine. Limit the number of pages crawled per website: most of the time you want that ticked. I have it at about a hundred thousand, unless you're going to be crawling some particularly big websites. I like having it there because if you have an endless crawl and there's some kind of weird URL structure on a website, like an old-school date picker or something, you don't want to be endlessly stuck on that website. Show URLs being crawled: if you just want to see what it's doing, you can have it on for debugging sometimes, but I generally leave it off because it makes it slightly faster. Write results into a text file: as it goes along and finds expired domains, as well as showing them in the GUI here, it can write them into a text file, just in case there's a crash or your PC shuts down or something like that.
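The two give-up conditions just described, an error cap and a per-website page cap, boil down to a simple check. This is a sketch with the values suggested in the video, not the tool's actual code:

```python
MAX_ERRORS = 30        # errors before giving up on a website
MAX_PAGES = 100_000    # page cap per website, as suggested in the video

def should_abandon(error_count: int, pages_crawled: int) -> bool:
    # Stop crawling a site that keeps erroring (anti-scraping, broken site)
    # or that would trap the crawler in an endless URL structure.
    return error_count >= MAX_ERRORS or pages_crawled >= MAX_PAGES
```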
Okay, so let's move on to our next tab, crawl from search query, and look at the settings. If you want to limit the number of results that you crawl per search query, you can tick that and specify; so if you only want to crawl the top three or ten results from each search query, you can enter that there. Otherwise it'll just go to the end of the search results. Next setting: skip the domain if it's in the Majestic Million. The Majestic Million is a free resource that Majestic SEO makes available listing the million most popular domains.
Some people might want to skip those because they think they've already been crawled, which is a possibility, and you can apply a manual exclusion to any results you think are going to be returned on Google for your search query. So things like YouTube that you don't want to crawl, you can add in there, and they'll be ignored if they're returned in the search results. Okay, so that's pretty much the search query settings. Your search terms go here; you can have as many as you like, just enter them line-separated.
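The two filters above, the Majestic Million skip and the manual exclusion list, amount to a simple membership test per returned domain. A hypothetical sketch (names are mine, not the tool's):

```python
def keep_search_result(domain: str, majestic_million: set, exclusions: set) -> bool:
    # Skip anything in the Majestic Million or in the user's manual
    # exclusion list; everything else goes on to be crawled.
    return domain not in majestic_million and domain not in exclusions
```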
Now, a lot of the time you'll search for things and think you're getting niche websites back, but in fact, because of Google's shift towards big authority websites, you'll get things like Amazon listings. So if you don't want to end up crawling those big authority websites, and you want just the smaller ones, you can make sure that the websites you crawl from the search engine results are relevant by putting in a metadata requirement here. For any result that comes back from the scrape of Google for any of these search terms, you can say that it must contain one of these things. So what you can do is just put your search terms back into the metadata requirement. Then, when a result comes back from Google, it will loop through these line-separated terms and check the homepage metadata, so the title, the keywords, the description: does it contain any of them?
If it contains any of those, it will be crawled; otherwise it won't, because it's probably just something like an Amazon listing. Okay, let's move on to the endless crawl. Here you put your seed websites. One will do if it's a big website, because there are probably loads of domains on it; if it's a tiny website, then you might want to stick a few more in. As I was saying before, it will crawl all the pages on these websites, and for each external domain it finds, it will check whether it's expired. If it's not expired, it'll try to crawl it, and then the loop starts again: it takes all the domains from there and checks whether they've expired.
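The metadata requirement described a moment ago can be sketched as a one-line filter: the homepage's title, keywords, and description are treated as one string, and at least one of the line-separated terms must appear in it. This is an assumption about how the check works, based on the description above:

```python
def metadata_matches(homepage_metadata: str, required_terms: list) -> bool:
    # homepage_metadata: concatenated title + keywords + description.
    # required_terms: the line-separated entries from the settings box.
    meta = homepage_metadata.lower()
    return any(term.lower() in meta for term in required_terms)
```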
It will just go on and on endlessly, so that really is a set-and-forget setting. Okay, now before I show you it in action, I'll just skim over some of this stats stuff. It's pretty self-explanatory really: that's the number of pages crawled so far; that's how many pages we're crawling per minute, so once it's been running for a minute that'll update, and it'll update every minute after that. That shows whether the websites are blocking our crawl. That's how many websites are currently being crawled, and how many websites have been crawled. That's whether we've had any registrar errors: we fire off some domains to see if they're available or not, and if there have been any errors in checking them, it displays there.
There is a secondary registrar check built in, so that shouldn't happen very often. That's the number of external domains we've found, and out of those, how many expired domains we've found, which will appear in here, or in any of the other tabs depending on which type of crawl we're running; and that's simply how long this crawl has been running. We've got a lot of similar settings here in the search query crawl. We have how many queries have been blocked; you shouldn't get many blocked, because you're not hammering Google: you send one query, process the results, and then send another query, so you're not scraping Google all at once. Then that's how many queries we've done and how many we've got left to process.
Okay, so let's run it. I'm just going to copy those in there to give it a bit of a kick start. I'm on modern UK fibre, so that might actually be too slow still, but just trial and error with the number of crawling threads, because it depends what kind of resources your PC has and what kind of connection you're on. Okay, so we'll come back in a minute. Oh, by the way, you can also get the DA and the PA for each result, for any of the crawl types, as long as you put in your Moz key in the settings. I just paused the video because I wanted to play around with the settings; I haven't used this on my home fibre, and I found on my UK fibre these settings worked pretty well, but like I said, it will vary from connection to connection and machine to machine. Because I want some results to show you pretty quickly, I've swapped it to a website list and we're going to crawl some directories, because they're always really good for finding tons of expired domains.
Okay, so let's set that going. You can see we're tearing through these directories; once we've been crawling for a minute, pages crawled per minute will update, and you can see we're finding things already. That's it, thanks for watching guys.