Black hole SEO scripts are missing the point

nohatter
Hi.

I know this is my first post, but I hope that won't affect your take on what I'm about to say.

I've been testing various scripts and programs for the "black hole SEO" backlink technique, and I find that they all completely miss the point or use far too many resources.

Some of these programs create hundreds of XML files from non-niche-specific titles, burning resources, when the same working result could be achieved with just one XML feed per site. Some of them use generated titles and descriptions, and some even create useless pages for this theory/method.

A working program should do something like the following:
First of all, it should do all the title scraping on the local machine so your paid hosting doesn't die. It should take your keywords from a list and find about 100 titles containing each keyword from aggregators, using some regex magic to filter out all the junk titles. It should create a single XML feed from those titles and add your links and descriptions from a list. It should then upload the generated XML file to your paid hosting and ping all the major services. After some period of time it should run the process again: update the titles, update the RSS file, and ping.
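
Something like this minimal Python sketch is what I have in mind. The aggregator endpoint, the junk-title regex, and all the URLs here are placeholder assumptions, not any real service's API:

```python
import re
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical aggregator search endpoint; swap in a real one.
AGGREGATOR = "https://aggregator.example.com/search?q={q}&format=rss"

def scrape_titles(keyword, want=100):
    """Run this locally so the paid host does no scraping work."""
    url = AGGREGATOR.format(q=urllib.parse.quote_plus(keyword))
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    titles = [t.text for t in tree.iter("title") if t.text]
    junk = re.compile(r"[<>{}|]|^\W+$")  # toy "regex magic"; tune per niche
    return [t for t in titles
            if keyword.lower() in t.lower() and not junk.search(t)][:want]

def build_feed(titles, my_link, my_desc, out="feed.xml"):
    """One RSS 2.0 feed per site, with our link/description on every item."""
    rss = ET.Element("rss", version="2.0")
    chan = ET.SubElement(rss, "channel")
    ET.SubElement(chan, "title").text = "updates"
    ET.SubElement(chan, "link").text = my_link
    ET.SubElement(chan, "description").text = my_desc
    for t in titles:
        item = ET.SubElement(chan, "item")
        ET.SubElement(item, "title").text = t
        ET.SubElement(item, "link").text = my_link
        ET.SubElement(item, "description").text = my_desc
    ET.ElementTree(rss).write(out, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    build_feed(scrape_titles("poker"), "http://mysite.example.com/", "my blurb")
    # next steps: upload feed.xml to the paid host, then ping (see below)
```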

:bigear:
 


Now that we've agreed the spec, feel free to start coding.
 

Heh, I've already coded mine; I did so even before I made this thread.
However, it's a FrankenBuild, meaning it was first built by some guy
who missed the point (I think) and had it make index pages with
crappy generated content from each title. He had also taken the cURL
part from some other script, and so on.

I took some of the crap out of it and modified it a little.

The funny part is that I tried to make it exactly as Eli described, but then
I started to think: would the scrapers honestly take the same titles again?
Since I have never looked into Autoblog, I'm not sure about it, so I have
to study it first. That made me wonder whether the people using Autoblog
hand-select individual blog feeds for their autoblogs instead of just scraping straight out of aggregators.

This leaves me with more questions than answers, which means I have to
make a few cycle sites and study them before finishing that FrankenBuild.

If you or anyone else has answers to my questions, I'm all :bigear:. Even though
I don't have much experience with autoblogs, I understood
(I think) what Eli was talking about in the black hole SEO article, and all
the scripts I saw were not on point.

Long post, bad English... I know. :rasta:
 
I think you missed on this one. Here's what I understood the post to mean:

joe random blog --> ping --> syndicator --> (kwd tgt)autoblog --> aggregator

You want as many autoblog-keyword-heavy titles as possible, mixed up into multiple XML files and pinged out, so they get picked up by the autoblogs again. Eli's article suggests that aggregators are a good place to find those titles. It is definitely a resource-heavy process: naively scraping titles from weblogs.com or wherever just gives you the blog title; you have to go hit each RSS feed individually and parse out the real title(s).
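
In rough Python, the two-step fetch I mean looks something like this. The changes-list layout and the feed-path guesses are my assumptions, not a documented format:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Recently-pinged blogs; assuming the usual <weblog name="" url=""/> layout.
CHANGES = "http://rpc.weblogs.com/shortChanges.xml"
FEED_GUESSES = ("/feed", "/rss.xml", "/atom.xml")  # common feed locations

def updated_blog_urls(limit=20):
    with urllib.request.urlopen(CHANGES) as resp:
        tree = ET.parse(resp)
    return [w.get("url") for w in tree.iter("weblog")][:limit]

def real_post_titles(blog_url):
    """The changes list only names the blog; hit its own feed for post titles."""
    for path in FEED_GUESSES:
        try:
            with urllib.request.urlopen(blog_url.rstrip("/") + path, timeout=5) as r:
                tree = ET.parse(r)
        except Exception:
            continue  # no feed at this guess; try the next one
        # matches <title> in RSS and the namespaced Atom title alike
        return [t.text for t in tree.iter()
                if isinstance(t.tag, str) and t.tag.endswith("title") and t.text]
    return []
```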

Would the autoblogs take the same titles again? Sure they would: to avoid repeats, an autoblog would have to keep an index of every title it ever scraped across your basement-level blog farm, and almost none bother. Lots of people hand-select feeds; lots of other people don't.

It's cool that you're making your own tool; that effort will be well rewarded. But the black hole post was about getting as many links as possible, not about targeting. Read a little more slowly through the comments on the black hole post; I think you'll come to the same conclusion.

Hope that helps,
etothei
 

Do you know if RSS Exploiter does this (the process spec'd in the first post)? It seems to be pretty popular here.
 
I really think this is the genius behind Eli's posts: he NEVER gives information away. He simplifies his methods, removes his trade secrets, and then posts skeletons that others may build from. The step-by-step method that he suggests is ludicrous: all you get is a bunch of blog titles in foreign languages that, as stated in the first post, would have very little positive effect as backlinks.

Eli's posts require your own ingenuity to truly put them into practice. The comments that follow his posts are generally by those who cannot understand what he is getting at and need further explanation. I don't know Eli, but I do know this: he is a fucking genius. To me (metaphorically), he is a red-hot poker to the left side of the brain.

If you just want a huge quantity of backlinks from any old bullshit site, then go ahead and get your site banned. Don't forget that there is a point of diminishing returns in absolutely every earthly scenario. The key is bending the rules AND staying beneath the radar. Do the intelligent thing: build your own scraper that pulls out specific keywords.

@NoHatter - I would recommend multiple RSS feeds, like Eli does, for the following reason: autoblogs generally scrape data at specific intervals and times through cron jobs. Adding multiple RSS feeds adds depth to your site, letting RSS scrapers reach data as far back as 1,000 posts (if you have 10 RSS feeds at 100 posts each) or more. If you have updated your single RSS feed before sites have had the chance to parse through it, then you are losing potential backlinks.
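
As a minimal sketch of that splitting, reusing the hypothetical build_feed() helper from the sketch earlier in the thread:

```python
def write_feed_set(titles, my_link, my_desc, per_feed=100):
    """Split one title list across feed1.xml, feed2.xml, ... so scrapers
    that only wake up on an occasional cron still find the older items."""
    files = []
    for i in range(0, len(titles), per_feed):
        name = f"feed{i // per_feed + 1}.xml"
        build_feed(titles[i:i + per_feed], my_link, my_desc, out=name)
        files.append(name)
    return files  # e.g. 1,000 titles -> feed1.xml ... feed10.xml
```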
 
That technique is really hard to get right if you've never built a splog or scraper site before; most people do get it backwards on their first try. Put in the simplest possible way, it's a splog or scraper site in reverse.
How a splog/scraper site works:
1) pick keywords
2) scrape content related to those keywords
3) output the content
4) try to get the content indexed

The most popular ways of doing each step are:
1) use a popular and widely available keyword research tool
2) scrape RSS feeds
3) throw it into a popular CMS such as WordPress, Blog Solution, YACG, SSEC, etc.
4) ping aggregators

So the obvious route to getting your links onto splog/scraper sites is to do the reverse:
1) scrape the places where they get the content that gets indexed
2) output the content in an easily shared and scrapable form
3) syndicate the content where scrapers/splogs look the most
4) cover as many keywords as possible

Matching the most widely used splog/scraper scripts, the technique follows the most popular way of doing this process:
1) scrape ping services
2) output the content into RSS feeds
3) publish the content on RSS aggregators and ping (sketched below)
4) cover as many keywords as possible
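
For the publish/ping step, here's a bare-bones sketch of the standard weblogUpdates.ping XML-RPC call that the big ping services accept; the endpoint list and site values are just examples:

```python
import xmlrpc.client

# A couple of well-known ping endpoints; extend the list as needed.
PING_ENDPOINTS = [
    "http://rpc.pingomatic.com/",
    "http://rpc.weblogs.com/RPC2",
]

def ping_all(site_name, site_url):
    """Send the de facto standard weblogUpdates.ping(name, url) to each service."""
    for endpoint in PING_ENDPOINTS:
        try:
            proxy = xmlrpc.client.ServerProxy(endpoint)
            result = proxy.weblogUpdates.ping(site_name, site_url)
            print(endpoint, "->", result.get("message", result))
        except Exception as exc:
            print(endpoint, "failed:", exc)

ping_all("my site", "http://mysite.example.com/feed.xml")
```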

The objective is link quantity, NOT link quality. Both are needed to rank, so both must be covered in a link campaign, and although keyword targeting is not a necessary factor, it was mentioned as a possibility in the post. Read the follow-up post to SEO Empire Part 1 for more detailed info on that.

Yes, there are definitely better ways of doing this technique than the details given in the post; they are nothing more than, as someone mentioned, a framework for the technique. The better ways are pretty easy to figure out. Think about other popular scripts that wouldn't be affected by this technique. What about the mass influx of cycle sites since the SEO Empire post came out (there are literally thousands coming out every day)? How can they be easily identified? What about other types of sites that aren't necessarily black hat scrapers or splogs? Some may not even be a widely used program, just something used by single companies that deal with hundreds of thousands or millions of sites *cough* domain parkers *cough*. Is there any way to get your content where they get their content? What about targeting not the sites where they get the content, but where they get the keywords, which the post failed to mention? :)

To say the technique is bunk is kind of ridiculous, although some people have. Many of the people using the technique are getting hundreds of newly indexed links per day through it. Since most are doing the technique in the same exact way, just look at the math: thousands of scripts are scraping the same titles, getting the same keywords, and submitting the RSS feeds at the same time, 24/7. Splog sites that follow the format usually only grab the first 10-100 results for whatever particular keywords they are after, so less than a tenth to a hundredth of the possible links are getting grabbed across those thousands of people. Of those, I'm sure less than a tenth are actually getting indexed; it may even be in the lower thousandths. Yet individual people are still getting hundreds of links per day from it. That gives you a rough idea of the absolutely insane number of splogs/scraper sites being built and updated every day; the size of the actual playground is just phenomenal. Saying "hundreds of links per hour" for a virgin technique is a huge understatement, as my estimates tend to be. So targeting, schmargeting; it's not even needed. More sites for the links to point to is what's needed.

Just some shit to think about :)


Glad you're taking an interest in the technique and building your own. Let us know how it works out for you. Yes, rudedogg's version of the process works very well and is well built.
 
OMG I have a man-crush on you Eli. I've been reading your blog solid for a week and I can actually feel myself getting smarter.

Are you going to update it again?
 
How about scraping the titles straight out of blogsearch.google.com
with a query like "wrote an interesting post today"?

Just clean out the crap and you're left
with the real titles scraped earlier by cycle sites that were able to leave a trackback.
Now you would know which titles were hot, and you could also do targeting by adding extra words before the quoted phrase. I'm not sure, but I think some version of Autoblog uses that
string ("wrote an interesting post today") by default.

My script did this, but I only ran a small test to see that everything works and never
bothered to run a full-scale test to see the results, since I had other things to do.

Hope it makes sense; sorry about my English!
 