Millions of Pages Google Hijacked using ODP Feed 427
The Real Nick W writes "Threadwatch reports that millions of pages are being Google Hijacked using the 302 redirect exploit and the ODP's RDF dump. The problem has been around for a couple of years and is just recently starting to make major headlines. By using the Open Directory's data dump of around 4 million sites, and 302'ing each of those sites, the havoc being wreaked on the Google database could have catastrophic effects for both Google and the websites involved."
Ugh. This is so not true. (Score:2, Informative)
(Yes, I am GoogleGuy.)
Re:Ugh. This is so not true. (Score:5, Funny)
Re:Ugh. This is so not true. (Score:5, Funny)
Re:Ugh. This is so not true. (Score:3, Funny)
Can anybody provide a working example? (Score:3, Interesting)
Re:Ugh. This is so not true. (Score:2, Troll)
Re:Ugh. This is so not true. (Score:2, Insightful)
Re:Ugh. This is so not true. (Score:5, Informative)
Here's the skinny on "302 hijacking" from my point of view, and why you pretty much only hear about it on search engine optimizer sites and webmaster forums. When you see two copies of a url or site (or you see redirects from one site to another), you have to choose a canonical url. There are lots of ways to make that choice, but it often boils down to wanting to choose the url with the most reputation. PageRank is a pretty good proxy for reputation, and incorporating PageRank into the decision for the canonical url helps to choose the right url.
A lot of sites that try to spam search engine indices get caught, and their PageRank goes lower and lower as their reputation suffers. We do a very good job of picking canonical urls for normal sites; sites with their PageRank going toward zero are more likely to have a different canonical url picked, though, and I understand that to a webmaster it can look like "hijacking" even though the base cause is usually your reputation declining. For a long time, it was hard to get anyone to report canonicalization problems, because the site that got "hijacked" would be free-cheap-texas-holdem-plus-viagra-and-payday-lo
But even though I suspected that this issue affected very few sites, we still wanted to collect feedback to see how big of a problem it was, and to see if we could improve our url canonicalization. So starting a while ago, we offered a way to report "302 hijacking" to Google; I mentioned the method on several webmaster forums. You contact user support and use the keyword "canonicalpage" in your report. Then I created a little mailing list with some engineers on it, and user support passes on emails that meet the criteria to the mailing list.
So how many reports has all this work (including posting multiple times on lots of webmaster boards to request data) gotten me? The last time I checked, it was under 30. Not a million pages. Not even a hundred reports. Under 30. Don't get me wrong, we're still looking at how we can do better: one engineer proposed a way that might help these sites, and he's got a test set of sites that would be affected by changes in how we canonicalize urls. A few of us have been looking through it to see if we can improve things, but please know that this is not a wildfire issue that will result in the web melting down.
As a side note, I'm getting a little tired of debunking the source of this story (NickW at threadwatch). For example, he claimed that Google had removed Greg Duffy from Google's index. When I pointed out that he was making an assertion of fact without evidence, he started out revising the story by sprinkling in words like "appears" and eventually pulled the story at http://www.threadwatch.org/node/1822 off his front page. But given that this is the third link to NickW's site from Slashdot in the last couple weeks, I'm guessing that he's tasted the Slashdot effect and wants more.
Re:Ugh. This is so not true. (Score:5, Insightful)
Well shucks GG, not every webmaster is glued to WMW and other forums... and even if they were, the signal/noise ratio on this topic is so low that you probably couldn't find the information even if you were looking for it. It's hardly an obvious reporting mechanism. Although posting it on /. should help some, so that's appreciated. Thanks.
But look - what we have here are a whole bunch of webmasters who have been nuked off the face of the earth by 302 redirects and just don't have the technical knowledge to try and fix it. Mom and Pop stores, hobbyists, nonprofits etc etc. These people are just gonna get pasted.. they'll just be wondering why they don't get any visitors any more.
This is a HUGELY serious problem - and it's getting worse all the time as more and more people deliberately try to exploit the 302 bug. I've been hit by this bug myself, and let me tell you that unless you know EXACTLY what to look for you'd be stuffed - all you'd see is your traffic flatlining.
The key issue here - and it's the kind of issue that will really, really hit the headlines when it's exploited - is redirection. Sure, I can use a 302 and send Googlebot to the correct page... so first of all I basically 0wn the content of that page, not the publisher. *Then* I insert an exploit into the 302 redirect... and hey presto, I've 0wned hundreds of thousands if not millions of computers. *That's* going to make unpleasant reading for Google when it hits the headlines - "Use Google and Get Owned". Nasty.
Kindly extract your head from wherever it is (Score:5, Informative)
What it needs is a rapid and satisfactory answer or Google will find themselves at the receiving end of more angst than they even know is possible.
A concrete example. My company's web site has been in existence since 1995. So we have pretty good page ranking. Our main page has one phrase, very distinct, unique.
When I search for this phrase (in quotes), Google reports hundreds of matches. These sites (except our own) do not contain the phrase but are sites that sell traffic boosting.
The 302 problem is real.
Incidentally, I just spent 15 minutes at Google.com looking for a way to report the problem. Where is that mention of "canonicalpage"? In the bottom shelf of a filing cabinet, behind a locked door that says "beware of the tiger"?
I'm not surprised you got only 30 reports. What I am surprised at is that you appear to speak for Google yet have such an inane response to what is a real (and for many people, a terrifying) problem.
Re:Kindly extract your head from wherever it is (Score:3, Informative)
Re:OK, an example (Score:3, Insightful)
Re:OK, an example (Score:4, Informative)
- for the search imatix [google.com] I see you at number one.
- for the search "Strategic solutions for a complex world" [google.com] I see you at number one.
- for the search allinurl:imatix.com [google.com], that search (and its sister operator inurl:) only looks for the words in the url. So it's perfectly fine to show results like "real-imatix.com/" because they contain the word imatix. These results are not hijacking results--this is expected behavior for inurl and allinurl.
Hope this helps,
GoogleGuy
Re:OK, an example (Score:3, Informative)
Re:Ugh. This is so not true. (Score:5, Informative)
But even though I suspected that this issue affected very few sites, we still wanted to collect feedback to see how big of a problem it was, and to see if we could improve our url canonicalization. So starting a while ago, we offered a way to report "302 hijacking" to Google; I mentioned the method on several webmaster forums. You contact user support and use the keyword "canonicalpage" in your report.
I'm sorry, but this is a flat-out lie. If you are the GoogleGuy, then there were 1000+ post threads on WebmasterWorld where people were begging you for input, and you essentially disappeared. I think I might remember seeing one post from you about this "canonicalurl" on a short, almost unrelated thread. You certainly didn't make it clear where to send problem reports, at least not on any of the threads that people were actually reading.
The fact is, this is a huge problem, and has totally fucked a lot of legitimate site rankings. I honestly believe Google was doing everything in their power to ignore the problem up until now, hoping that it was just a figment of people's imagination, or worse, that it would help increase advertising revenue. And now that it's turning out to be a PR disaster for you, you're in damage control mode.
I run one of the sites that was affected by the 302 bug. I sent a message to Google about it, and got a canned response essentially telling me there was nothing wrong. I read through no less than 10 threads on WebmasterWorld about this, many with hundreds or even thousands of posts. I saw maybe, maybe, two or three from GoogleGuy. Where were you? Did you somehow miss those threads that spanned 80+ pages??? Why weren't you posting on those threads about this "canonicalurl" thing?
Luckily there was only one site 302-ing me, and they were doing it by accident and were happy to remove me from their directory. Now I'm back up at the top of the rankings. But I know it's going to be nowhere near as easy for many of the thousands of people who are still affected by this.
Seriously, that you would come on here and try to discredit someone for bringing attention to a very big problem with Google is pretty distasteful. To me it indicates either a cover-up or having your head buried firmly in the sand. Either way, it doesn't bode well for the future of Google. Instead of flaming people now that the problem is getting mainstream press, why not try and actually fix things.
Re:Ugh. This is so not true. (Score:3, Insightful)
It's EXTREMELY informative, because it tells you what Google's official position is. Whether you like it or not, you need to know that. "Informative" doesn't mean "good".
If Bill Gates posted here in defence of some MS policy, it would hopefully similarly be modded "informative".
You got an email from me! (Score:3, Informative)
Re:Ugh. This is so not true. (Score:5, Interesting)
Google has login accounts, so let logged-in users have a link saying "report spam site". Track who files the most reliable reports, and if a few of those people all agree that a site is spam, nuke its pagerank.
See how OpenRatings does reliability calculations for more info. Or buy them
Re:Ugh. This is so not true. (Score:4, Insightful)
As an alternative, I'd love a cookie-based version of this that you could click "ignore all results from this domain". After a couple of weeks you'd get rid of most of them on your personal browser. Make the lists sharable even. All the pagerank wannabes can do is start from scratch with new URLs.
OK, I'll bite ... (Score:4, Insightful)
However, if this is Google's PR method, I think you are kind of asking for it! In the absence of information, the internet community will speculate until the cows come home. I'm not saying it's right, I'm just saying that's reality. Even though I said on my site that I thought Google didn't do anything underhanded I bet a lot of people were still not convinced. Google can do a little better than this, and although you have been fairly nice to me (thanks) this response is a little flamebaity for PR. Please understand that I mean no offense, it's just constructive criticism. Even if everything you say is true, a representative of the company should always at least attempt to sugar coat something like your last paragraph.
Also, on a more personal note, maybe Google should embrace the people that are involved [clsc.net] in researching [gregduffy.com] these problems instead of using this broken communications policy. I know that in my case I contacted you guys 5 *months* ago about the Google Print problem I described and never got any followup except for my t-shirt (which I really like). I have some great ideas about possible solutions to the problem I described, and as far as I can see Google has not fixed the root of the problem. When are you guys going to contact me?
-Greg Duffy
Re:Ugh. This is so not true. (Score:4, Informative)
Robot.txt (Score:3, Insightful)
Re:Robot.txt (Score:5, Informative)
No, it's not about redirecting the user... (Score:5, Informative)
For instance: I have a site with excellent page ranking. Now a new site sets up, and does a 302 to my site. Google now gives this new site my page ranking. When the new site is indexed, it removes the 302 redirection.
When you search for my site, you now find these new sites instead. There is no redirection when you click on a link, but the "cached text" that Google shows is wrong.
Basically this technique allows people to get high page rankings without earning them. It's very widespread - I counted over 60 such parasites for my company's web site (which has excellent page ranking).
Re:Robot.txt (Score:5, Informative)
Site A can return a 302 HTTP redirect to site B when Googlebot crawls their site. The googlebot will then index site B as site A. Site A could have no affiliation whatsoever with Site B; people could be clicking on SesameStreet.com and get AsianHookers.com, etc.
I do think the figure of millions of pages being hijacked is a little steep, though.
Re:Robot.txt (Score:5, Insightful)
Why? It can be completely automated. A million is no harder than four.
Re:Robot.txt (Score:5, Informative)
This isn't about fooling people, it's about fooling a flawed technology to get false listings in the search engine results pages. It's about getting a lot of traffic. Yes, some people will be really pissed off when they get redirected to an affiliate program or something of the sort, but some small percentage of people will buy. If the cost to bring in a million visitors is miniscule because you're stealing search engine placement, and you get 50 people to sign up to something that pays you $50 a person, then you're up $2500 minus your hosting costs.
$2500 to someone in Malaysia is a lot of dough for a little coding... they could work for $200/mo in some kind of outsourcing plan or make a year's wages in their spare time. What do you think they're going to do?
Re:Robot.txt (Score:5, Funny)
couldn't you have made that a link so I can just click on it?
Re:Robot.txt (Score:2)
Isn't the fix then to provide preference to the real URL over 'copies' when culling duplicate data and/or pageranking the results? This seems easy, so the problem must be that Google isn't storing HTTP response codes with their page indexes such
Re:Robot.txt (Score:5, Informative)
A 302 is a "temporary redirect". Basically, it says that the content normally lives at the URL you requested but that, just this once, you should look at this other URL for the content. Googles response to a 302 is actually very reasonable. I suppose the best thing they could do is just not follow 302s.
A 301 is a permanent redirect, indicating that the page isn't at the original URL and that all future requests should be made to the new one. I don't know what Googlebot does in this case but I assume it discards the original URL, which is what the standard recommends.
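For the curious, here's roughly what that decision looks like from a crawler's side. A toy Python sketch (standard library only) of the behaviour just described - adopt the new URL for a 301, keep the original as canonical for a 302 - which is my reading of the spec, not a claim about what Googlebot actually runs:

    import http.client
    from urllib.parse import urlsplit, urljoin

    def resolve_canonical(url, max_hops=5):
        """Follow redirects by hand; return (canonical_url, final_url)."""
        canonical = url
        current = url
        for _ in range(max_hops):
            parts = urlsplit(current)
            conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                        else http.client.HTTPConnection)
            conn = conn_cls(parts.netloc, timeout=10)
            path = parts.path or "/"
            if parts.query:
                path += "?" + parts.query
            conn.request("HEAD", path)
            resp = conn.getresponse()
            location = resp.getheader("Location")
            conn.close()
            if resp.status == 301 and location:
                # Permanent: discard the old URL and index under the new one.
                current = urljoin(current, location)
                canonical = current
            elif resp.status in (302, 307) and location:
                # Temporary: fetch the new location, but keep indexing the
                # content under the ORIGINAL URL - the behaviour the hijack
                # leans on.
                current = urljoin(current, location)
            else:
                break
        return canonical, current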
Re:Robot.txt (Score:2)
So if some phisher has access to put a redirect on sesamestreet.com, he could simply upload the content of asianhookers.com
My understanding is that it doesn't work this way at all. I believe what happens is that the hijacker sets up a page/site that redirects to your own site. Google then crawls the link, and erroneously indexes the content from your page with the URL of the redirecting page. From there, it's trivial to change the redirect on the fake page to someplace else, and maintain the appearan
Re:Robot.txt (Score:5, Informative)
Aside from a filter on Google's end to resolve this, it would be nice if the practice of using 302 redirects also included a means of confirming the setup on the site being redirected to. If the site actually hosting the data does not in some way confirm the redirection - either through a tag in the header of the HTML, or perhaps in a separate, predictably placed file (much like a robots.txt file) - then the redirect simply shouldn't be credited. Of course, this would first require the standard to be rewritten, and then would require people to actually abide by it.
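A rough sketch of how that confirmation could work, in Python. The file name (redirects-ok.txt) and format are pure invention here, just to show how cheap the check would be for a crawler; nothing like this exists in any spec today:

    import urllib.request
    from urllib.parse import urlsplit

    def redirect_confirmed(source_url, target_url):
        """Only credit a cross-site redirect if the target site lists the
        source host in a well-known confirmation file (hypothetical)."""
        target = urlsplit(target_url)
        confirm_url = "%s://%s/redirects-ok.txt" % (target.scheme, target.netloc)
        try:
            with urllib.request.urlopen(confirm_url, timeout=10) as resp:
                allowed = {line.strip() for line in
                           resp.read().decode("utf-8", "replace").splitlines()}
        except OSError:
            return False   # no confirmation file, so don't credit the redirect
        return urlsplit(source_url).netloc in allowed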
Re:Robot.txt (Score:3, Informative)
Re:RTFA (Score:5, Insightful)
The article is confused and badly written. It never explains the exploit being used. So stop dumping on people. It is not at all surprising that people don't get what is going on when the description is crud.
What is really going on has nothing to do with 302, or at least very little. What these people are doing is to set up fake web sites using content filched from genuine Web sites. This allows (or is believed to allow) them to climb the google rankings.
I don't see why someone would use a 302 response when they can just copy the entire content unless there is some sort of bug in Google's pagerank that is not being explained. Copying the entire content is much simpler.
So what the attacker does is to set up their site so that when the googlebot comes round it publishes some legitimate content, then when other folk follow the site from a google search they get pages infested with spyware or the like.
This would certainly explain the number of times I have done a Google search and ended up at an idiotic 'search site' that does nothing for me.
Re:RTFA (Score:5, Informative)
No, the way it works is with the 302, but only for the googlebot.
For this to work the scammer has to give the 302 only to the googlebot; all other browsers need to get the content of the scammer's page. If you google for "cheapest car insurance" (IIRC) you can find an example of this. Change your User Agent to Googlebot's and click on the top Google link and you'll end up at another site. Change back to Mozilla and you'll get the scammer's site.
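To make the mechanics concrete: the cloaking side is only a handful of lines. A minimal Python sketch, with made-up hostnames and content, just to show why ordinary visitors never see anything odd - not a claim about how any particular scammer does it:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET = "http://victim.example/"   # the site whose ranking gets borrowed
    SCAM_PAGE = b"<html><body>affiliate junk goes here</body></html>"

    class CloakingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ua = self.headers.get("User-Agent", "")
            if "Googlebot" in ua:
                # The crawler sees a temporary redirect, so it files the
                # TARGET's content under THIS server's URL.
                self.send_response(302)
                self.send_header("Location", TARGET)
                self.end_headers()
            else:
                # Everyone else gets the scammer's own page and never sees
                # a redirect at all.
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.end_headers()
                self.wfile.write(SCAM_PAGE)

    if __name__ == "__main__":
        HTTPServer(("", 8000), CloakingHandler).serve_forever()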
Re:RTFA (Score:3, Interesting)
If I find both articles confused and confusing then it is a bit much to expect other people to follow them - and I am listed as an original contributor to the design of HTTP.
The real problem here is not the 302, it's a bug in the googlebot - fortunately a relatively easy one to fix. When googlebot sees a 302 redirect to a page it treats the actual page and
Re:Robot.txt (Score:5, Informative)
(Sorry for dumbing down my post so much, too much experience explaining things to my grandmother)
I've had it with Google! (Score:5, Funny)
*duck*
Google Cookie last until 2038! (Score:2)
Re:Google Cookie last until 2038! (Score:5, Funny)
Re:I've had it with Google! (Score:2, Informative)
How and when Yahoo fixed it (Score:3, Informative)
Sorry for not writing this in the article - it's pretty long already and you just have to cut somewhere, but here goes:
Yahoo was exactly as vulnerable as the rest of the search engines. In fact this problem was pretty bad with Yahoo at one point. What Yahoo did was simply to fix it by implementing some internal rules about how to interpret redirects.
I believe it was fixed around June 2004 - at that time the problem had already been known (and abused) for a long time, but use was not widespread yet. The
Easy to prosecute, hmmm? (Score:5, Interesting)
site exists with behavior dependent on browser name being GoogleBot or not. The replacement site will generally have some way of making money, which can be tracked via financial transactions.
Re:Easy to prosecute, hmmm? (Score:5, Insightful)
Re:fraud, copyright, phishing, decency laws (Score:3, Insightful)
prosecution can't fix this problem.
Law of the Internet (Score:5, Insightful)
302 (Score:5, Informative)
Re:302 (Score:5, Informative)
Re:302 (Score:2)
The REAL answer would be to have google not index redirects (which is pretty stupid, all things considered. Why link searchers to the "wrong" URL, instead of the destination URL of the redirect?)
Re-re-explained (Score:5, Informative)
302 redirections are temporary redirections - the idea is that a 302 is supposed to be used when someone needs to be redirected to a new page, but should still use the original URL if they want to come back later. As an example, the page http://purl.oclc.org/OCLC/PURL/CONTRIBUTORS [oclc.org] performs a 302 redirect to http://purl.oclc.org/docs/contributors.html [oclc.org]. This means that although your web browser needs to go to some other URL for the content at the moment, they really should remember the first url as the permanent one.
Contrast this with what happens when your browser visits http://snowplow.org/martin [snowplow.org] - you get sent a 301 redirect to http://snowplow.org/martin/ [snowplow.org]. (Note the extra slash) In this case, the server is saying "the url with the slash on the end is the real location, and you should not try to come back here without the final slash in the future."
Ideally, if every web browser behaved according to spec., bookmarks (remember bookmarks?) would get automatically updated to the new URL when you selected them and the redirect was a 301 redirect. However, for a 302 redirect, the bookmark would stay as is.
302 redirects can be very useful when you want to set up a hierarchy of "logical" URLs that will permanently point to the correct location. 301 redirects are useful when you're obsoleting an old URL and wish people to go and use the new URL from now on.
Okay, so how does this relate to google? Well, let's suppose that you have a great site on fruitbats. I can set up http://www.example.com/topics/fruitbats to be a 302-style redirect to your site, essentially saying "The information at http://www.example.com/topics/fruitbats is temporarily being hosted by http://www.yoursite.com/". Now, when google spiders pages it will see that, go retrieve the text from your page and then index it under http://www.example.com/topics/fruitbats, since after all I just gave a temporary (302) redirect.
But it gets worse, because a final part of google's indexing process is to compare pages for identical text, and throw out all but one of the URLs. Apparently this stage has nothing to go on other than the text and the recorded URLs, and so your URL stands a fifty-fifty chance of being thrown out.
Except that I've not just redirected http://www.example.com/topics/fruitbats to your site, but also http://www.example.com/topics/fruitbat, http://www.example.com/topics/fruit_bat, and http://www.example.com/topics/fruit_bats. Now your lone URL doesn't stand much of a chance of being the one kept by the "throw out duplicates" processor, does it?
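To see why piling on extra redirecting URLs matters, here's a toy model of a "throw out duplicates" pass in Python (invented data, and obviously not google's real algorithm): with one redirecting URL your page survives the coin flip half the time, with four of them it survives only one time in five.

    import random
    from collections import defaultdict

    def pick_canonical(records):
        """records: list of (url, content_hash); keep one URL per hash."""
        by_hash = defaultdict(list)
        for url, content_hash in records:
            by_hash[content_hash].append(url)
        # Nothing to go on but the text and the recorded URLs, so pick one.
        return {h: random.choice(urls) for h, urls in by_hash.items()}

    records = [
        ("http://www.yoursite.com/", "fruitbat-page"),
        ("http://www.example.com/topics/fruitbats", "fruitbat-page"),
        ("http://www.example.com/topics/fruitbat", "fruitbat-page"),
        ("http://www.example.com/topics/fruit_bat", "fruitbat-page"),
        ("http://www.example.com/topics/fruit_bats", "fruitbat-page"),
    ]
    print(pick_canonical(records))   # your URL survives only 1 time in 5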
In a sense, of course, there's little google can do to prevent this, because even if they weighted 302-redirects lower in their "throw out duplicates" stage, I could always just go snag a copy of your website each time googlebot visits, in essence doing the redirection myself. (How? Just search the apache mod_rewrite guide [apache.org] for "Dynamic Mirror") However, doing it through 302 redirects means that google pays for the bandwidth to go get your page, not me. (Not that this is necessarily a significant amount of bandwidth, since we're only talking about basic google here and not images. Depending on the revenue you get by misdirecting google queries it might be economical)
Of course, for this to really work, I'd need a list of websites sorted by category to build up my redirect db. But wait! The ODP feed provides exactly that.
I am a little bit wary of doi
Re:Re-re-explained (Score:3, Insightful)
well, a bunch of people have suggested that 302s should only be honored by crawlers if the domain is the same. i think that's a pretty good idea.
It's not Google that's broken--it's the web. It's just that the two-leg
Re:302 (Score:5, Interesting)
Although, they could probably still figure out it's google by their IP, but it's a step in the right direction.
Re:302 (Score:2, Interesting)
Re:302 (Score:5, Informative)
Re:302 (Score:3, Insightful)
Re:302 (Score:2, Informative)
But what's the point? (Score:2)
What in the world does the hijacker gain by having google point to him, only to then load the victim's page?
hawk
Re:But what's the point? (Score:4, Informative)
301 redirects (Score:3, Interesting)
I noticed in my logs that search engines have repeatedly requested the 301 pages, but often don't follow the links to the new pages. And when searched with google, the pages still show up with the old urls. Should I be using 302 redirects instead?
Wrong (Score:5, Informative)
This is why the "302 hack" works. If the redirect is only supposed to be temporary, the search engine keeps the URL of the 302 as the URL for the document, but indexes the content of the page to which the redirect is directed.
301 is what you should be using to point the SEs to your new pages if you've moved them. The behavior is supposed to be for the SEs to replace the old URL in their index with the new one, and furthermore count all links to the 301ed URL as being towards the new one. I don't know why it's not working for the grandparent poster, but it's the way that the functionality is "advertised" for Google and Yahoo, and it should work.
Why? (Score:2, Insightful)
"Oh! Look! Something beautiful! Something impressive! I must destroy it!"
pah. feeling jaded today, i guess.
Re:Why? (Score:2, Insightful)
The people using this exploit to get fake listings (just like all of the spam pages we see in search engines) aren't doing it for the fun of it.
Re:Why? (Score:2)
In a far off time, the Internet was a wonderful place devoid of such mundane things as commerce. Now, fast-forwarding a few years to the present, people are making significant sums of money off of the internet selling "products". One of the best ways to get somebody to buy something is to make them aware of a "need" they have
Do what I'm going to do... (Score:4, Insightful)
Re:Do what I'm going to do... (Score:3, Funny)
Re:Do what I'm going to do... (Score:2)
Free investment tip: Avoid buying stock in any company if an unsophisticated investor, for reasons unrelated to profitability, would think that company is Way Cool.
It appears Google has a sound business plan and competent management. Which probably justifies some particular, perfectly healthy stock price. But I'
Web presence pressure (Score:5, Insightful)
Re:Web presence pressure (Score:3, Insightful)
Gopher (Score:5, Funny)
Re:Gopher (Score:5, Funny)
Re:Gopher (Score:2)
The World Wide Web is the meta-index of (mostly) Internet-accessible content which can be addressed by URI (almost always more specifically by URL).
Since Gopher can be addressed via the URI scheme, "gopher", it's part of the Web.
Re:Gopher (Score:3, Interesting)
Re:Gopher (Score:2)
An excellent reason to use Gopher.
Wait... (Score:5, Funny)
Damn Google!!! Do you mean this is not www.kuro5hin.org ??
The super-slashdotting (Score:5, Funny)
Re:The super-slashdotting (Score:3, Insightful)
How to check if your site is being hijacked... (Score:5, Informative)
1. search Google for a distinctive phrase from your page (in quotes) and look for results on domains that aren't yours.
2. copy and paste any dubious URLs into this tool [thinkhost.com] and check whether they're using 302 redirects or not (or check them yourself - see the sketch at the end of this comment).
3. Panic!
One example: http://www.tradedoubler.it.
Luckily, the phrase in question is complete gibberish and no-one ever finds our site through Google, only by reputation and word of mouth.
Still, I think it's clear Google have a serious problem here...
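If you'd rather not paste URLs into someone else's tool for step 2, a few lines of Python do the same check: request the suspect URL without following redirects and see whether it answers with a 302 and where it points. The example URL is made up, and note that some hijackers only serve the 302 when the User-Agent looks like Googlebot, so it's worth trying that UA string as well.

    import http.client
    from urllib.parse import urlsplit

    def check_redirect(url, user_agent="Mozilla/5.0"):
        parts = urlsplit(url)
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        conn = conn_cls(parts.netloc, timeout=10)
        path = parts.path or "/"
        if parts.query:
            path += "?" + parts.query
        conn.request("GET", path, headers={"User-Agent": user_agent})
        resp = conn.getresponse()
        status, location = resp.status, resp.getheader("Location")
        conn.close()
        return status, location

    print(check_redirect("http://suspect.example/out?url=http://www.mysite.example/"))
    # e.g. (302, 'http://www.mysite.example/')  <- time for step 3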
And how to report this to Google... (Score:3, Interesting)
Google are not taking this problem seriously.
I'd suggest that if your website is affected, you send an email as above.
Good explanation about 302 hijacking (Score:5, Informative)
302 hijacks work because Google goes to http://bad.site/ and gets redirected to http://good.site/. It then treats the contents of the bad.site as identical to that of good.site. The effect seems similar to if somebody simply copied an entire page off of your site (I'm not sure if it's actually more serious than this), but it's easier to do because you're just keeping a small table of redirections.
How serious is it? Don't know. It's pretty easy for a webmaster to check for hijacking and have her pages de-hijacked (see aforementioned article). It's probably not as screamingly awful as the threadwatch.org article suggests, but the redirector sites are rather annoying. Several of the comments in the webmaster article suggest that Google has already started moving on the problem.
Re:Good explanation about 302 hijacking (Score:3, Informative)
The key here is that only googlebot is redirected. If you simply copied someone else's site, everyone would still get the info they were looking for. However, if you only redirect googlebot, you can redirect others to whatever you want.
Comment removed (Score:4, Informative)
Not a surprise (Score:5, Interesting)
The thing is that all they have to do is keep it just good enough that people won't leave. Remember, AdWords is Google's product, everything else [gmail, orkut, etc] they've got is just a way to show you those ads. Google's success is entirely because they had clearly better search results than anyone else. If another company can clearly best them then Google may be in trouble.
Re:Not a surprise (Score:5, Insightful)
Bleh... (Score:4, Funny)
My site is affected (Score:5, Interesting)
Re:My site is affected (Score:4, Informative)
From the Google "Information for Webmasters" (Score:5, Informative)
How about adding "Fiction: Google information for webmasters contains any facts"?
pure FUD the submitter is a spammer (Score:4, Informative)
what major headlines ? millions of pages !! the world is coming to an end !!!!
a quick whois on threadwatch.org (the submitter's site) reveals it's hosted by search engine spammers
platinax.co.uk, which is registered to a UK "company" called BriteCorp
http://www.britecorp.co.uk/ [britecorp.co.uk]
who offer all the usual SE spamming methods
coincidence ?
a whois on britecorp's platinax [platinax.co.uk] site reveals they have removed their address from the whois db, and their website's contact details are a mobile phone number (07963 808470)
further investigation on britecorp reveals they are not a "real" company but trading as "Brian Turner" (pic [platinax.co.uk]), and companies house [companieshouse.gov.uk] doesn't seem to have any records of any of these companies, though I am sure further investigation could find out more
so why would a supposedly reputable marketing company have a cell phone as a primary contact point ?
something to hide egh ?
or perhaps local trading standards would like to hear about them and their "services" ?
northern scum by any other name
Absolute hilarity (Score:4, Informative)
I guess some people have never heard of the term "sole trader".
My internet business is barely a year old - almost everything is communicated with other webmasters via e-mail - phone support is provided as a last option, but it means that if anyone really needs to use it, then they can have my immediate attention wherever I am, to have their concerns addressed immediately.
As for spamming - well, this is one of those "anonymous cowards" some of us are familiar with, who believes that if you purchase a link from another site, or become involved in a link exchange, or register your site in a directory - then you're a spammer.
Thanks for the heads up on the Platinax registration details, though - hadn't realised they'd been left out. I had a run-in with some Belgian Nazis last year, after I booted them from a forum I admin, when they tried to use it for promoting Neo-nazi propaganda. They've tried a few times to get back at me since, so I've been trying to reclaim some privacy online. Platinax reg details should be public, though - I'll put something online, then try and find a PO Box for the hate crap.
Search engines should devalue redirects (Score:5, Insightful)
It will also break many "click trackers", "portals", "directory sites", "search engine optimizers", and other annoyances, which is probably a plus for Google users. You know, those sites where you click on some phrase in Google and, three redirects later, you're at some irrelevant porno site.
Doesn't seem like the end of the world (Score:3, Insightful)
treat redirects as one-link pages (Score:3, Insightful)
It seems that when page A redirects to B, Google not only considers that a hit for A, but also assigns B's content to A (I just skimmed through all the posts here so maybe that's not what happens).
In that case, it seems to make more sense to just ignore A altogether since the hit and content rightfully belong to B.
This could be done by treating redirects as empty one-link pages, thus unifying the handlers and defeating this practice.
Why This is Such a Big Deal (A Summary) (Score:5, Informative)
There seems to be a lot of confusion as to why exactly this is such a big deal. A lot of people saying there's no problem or that this is nothing new... basically just not understanding the issue. Let me explain:
Suppose you have a small business under the domain http://xyz.com/, and search engines bring you a lot of traffic because you rank high for keywords in your market. You have a lot of people out there linking to you, a lot of satisfied customers, good content on your site. You're always in the top 10 somewhere when people search for "xyz widgets".
Well, this issue with Google makes it very easy -- incredibly easy -- for someone to knock your site out of the rankings entirely. And I mean for *everything*, to where searching for your own company name in quotes literally buries you hundreds of pages deep in the results. We're talking sites going from getting 1000 unique hits to 10 overnight.
And here's the kicker: It requires absolutely no technical knowledge, no time investment, and is perfectly legal...
All I have to do is have another domain handy that is roughly as popular as yours. And I make a "links" page, like one of those directory services, that lists your website. But instead of being a normal hyperlink, it's a CGI (or PHP or ASP or whatever) script that generates a 302 redirect to your domain... Now, these are very simple, common scripts. One-liners that you can download from cgiscripts.com and stick on your server. The original intent of these scripts is to track which links are being clicked on your site. But now they've found a new use, because when Google gets that 302, all hell breaks loose.
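For reference, the whole "script" really is about this small. A rough Python CGI sketch of such an exit-link tracker (the parameter name and log path are invented), just to show there's no technical barrier here:

    #!/usr/bin/env python3
    import os
    import sys
    from urllib.parse import parse_qs

    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    target = query.get("url", ["http://example.com/"])[0]

    # The "tracking" part: remember which outbound link got clicked.
    with open("/tmp/clicks.log", "a") as log:
        log.write(target + "\n")

    # The part Google chokes on: a 302 ("Found", i.e. temporary) redirect.
    sys.stdout.write("Status: 302 Found\r\n")
    sys.stdout.write("Location: %s\r\n\r\n" % target)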
See, according to the HTTP spec, 302 is a *temporary* redirect, which means Google is supposed to interpret whatever content it finds at the 302 target (your site) as really belonging to the URL of the source (my site). Google is just obeying the spec strictly here, and with devastating results. Why? BECAUSE THE DUPE FILTER NOW KICKS IN! You see, Google has a "dupe filter" that says if the same exact content is found for two unique URLs, then one of the URLs is obliterated in the rankings. Because after all, searchers don't want to be finding the same content over and over. If that happens, they'll start using a different search engine. But Google, sticking strictly to the HTTP spec, doesn't know who the content really belongs to when it gets a 302.
So Google essentially flips a coin. And if it comes up tails, say bye-bye to your domain in the rankings. Your *entire* domain. Because the dupe filter isn't limited to just the page that the 302 is pointing to -- it applies across your entire domain.
These 302 "exit-link-trackers" are all over the web. They've been used by webmasters for years. But it's just recently that Google has started treating 302 this way, so it didn't have any bad effect before. But now it kills you.
The funny thing is, the solution seems pretty simple: Just stop treating 302s this way if they point to a different domain. But for whatever reason Google isn't listening. Hopefully the press that's being generated now will give them the kick in the ass that they need.
Doesn't affect Yahoo (Score:5, Interesting)
Simple Answer (Score:5, Insightful)
Only honor a 302 as a temporary redirect when the source and the target are on the same domain. In all other cases treat a 302 (temporary) as a 301 (permanent) redirect, thus giving credit for the content to the actual host of the content.
This allows webmasters to continue using 302s to set up logical URLs to mask the organization of underlying content but eliminates the ability to hijack completely.
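That rule fits in a handful of lines. A sketch in Python - the naive hostname comparison stands in for a proper registrable-domain check, and this is one reading of the proposal, not anything Google has said it will do:

    from urllib.parse import urlsplit

    def same_domain(a, b):
        # A real crawler would compare registrable domains (public-suffix
        # list); plain hostname equality keeps the sketch short.
        return urlsplit(a).hostname == urlsplit(b).hostname

    def canonical_for_redirect(source, target, status):
        if status == 301:
            return target        # permanent: always credit the target
        if status == 302 and same_domain(source, target):
            return source        # legitimate "logical URL" use within a site
        return target            # cross-domain 302: treat it like a 301

    # canonical_for_redirect("http://bad.site/x", "http://good.site/", 302)
    # -> "http://good.site/"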
clsc.net seems to be down... (Score:4, Interesting)
personally, i'm ready to give up google maps or something else (autolink?) if they would 'fix' this or at least be more transparent about what's going on.
btw, the word on the net is that the googleguy posting here isn't the real one. anybody have details on this?
-kpaul
I don't get it... (Score:3, Insightful)
It's pretty simple; 302 redirects allow bad guys to exploit Google.
It doesn't matter that it's the wrong way to use a 302 redirect. They are the BAD GUYS. Remember the "spammers lie" truism?
It's the Google rule that is broken. 302 should be treated as "can't find site" in their search rankings rather than assuming that the data sent by the web server is honest. It sucks that some legit users of 302 won't get ranked as well because of it, but boo hoo. Let anybody that has hardware or software problems get better equipment in the first place if their freaking world ends when they don't get ranked in their keyword group. I have NO SYMPATHY for someone that shoestrings their vital revenue stream infrastructure and then wonders why things go bad. It reminds me of my job too much.
Buy Google ADs if you need to make money off your site traffic.
Google will change the rule or they won't. If they want to stay relevant, they'd better. I find myself getting irritated with Google's crappy search results a lot nowadays; sooner or later I will find one of the little startups to use and they can kiss off if it keeps up. So I figure they will get to it. They are Google, they are good at what they do.
Now what I think they should do is download snippets of pages via the Google toolbar which then sends the data to Google to make a massively distributed bot-net spider that is indistinguishable from the web-using masses. At that point, as far as exploiting Google via IP of the bot or user agent of the bot, IT IS ALL OVER.
Move along, nothing to see here but a bunch of people that don't understand redirect and HTTP protocols.
Mod parent up. (Score:3, Insightful)
Re:A Real Question (Score:2, Informative)
Re:Exactly. (Score:4, Insightful)
A "Don't show me any results from this subnet + domain from now on" feature would be nice, as would google banning some of the worst offenders (which it seems to have done).