Millions of Pages Google Hijacked using ODP Feed

The Real Nick W writes "Threadwatch reports that millions of pages are being Google Hijacked using the 302 redirect exploit and the ODP's RDF dump. The problem has been around for a couple of years and is just recently starting to make major headlines. By using the Open Directory's data dump of around 4 million sites, and 302'ing each of those sites, the havoc being wreaked on the Google database could have catastrophic effects for both Google and the websites involved."
  • This is a placeholder. I'll include more details of why you shouldn't listen to Threadwatch.org in a bit, and debunk this some. Let me get this posted and I'll follow up.

    (Yes, I am GoogleGuy.)
    • by Solder Fumes ( 797270 ) on Wednesday March 23, 2005 @11:43AM (#12024117)
      This is a placeholder rebuttal, I'll post why your arguments are COMPLETELY STUPID after you actually post them.
    • This is a placeholder.... I'll include more details of why you shouldn't believe the NEXT slashdot article.... Let me get this posted.... and I'll follow up! (Hey, if the other guy can get modded informative for that.... this, since it's for a future article ought to be insightful). And, no, I'm NOT a GoogleGuy.
    • by Anonymous Coward
      Wow, getting modded up just for leaving a message on our answering machine! I guess it's true, just like with Wil Wheaton, if you claim to be (or are) someone of alleged importance, you too can get +5 Informative on every post, no matter what you say (or don't)!
    • by GoogleGuy ( 754053 ) * on Wednesday March 23, 2005 @12:14PM (#12024599) Homepage
      Okay, I'll talk about this whole "millions of webpages hijacked! Film at 11!" piece of scaremongering. If you RTFA, the author (and the submitter of the story?) claims that some scraper sites have pulled down a copy of the dmoz RDF, gotten the urls, and are doing 302 redirects to sites in an attempt to hijack them. Note that this does not mean that lots of pages were hijacked at all.

      Here's the skinny on "302 hijacking" from my point of view, and why you pretty much only hear about it on search engine optimizer sites and webmaster forums. When you see two copies of a url or site (or you see redirects from one site to another), you have to choose a canonical url. There are lots of ways to make that choice, but it often boils down to wanting to choose the url with the most reputation. PageRank is a pretty good proxy for reputation, and incorporating PageRank into the decision for the canonical url helps to choose the right url.

      A lot of sites that try to spam search engine indices get caught, and their PageRank goes lower and lower as their reputation suffers. We do a very good job of picking canonical urls for normal sites; sites with their PageRank going toward zero are more likely to have a different canonical url picked, though, and to a webmaster I understand that it can look like "hijacking" even though the base cause is usually your reputation declining. For a long time, it was hard to get anyone to report canonicalization problems, because the site that got "hijacked" would be free-cheap-texas-holdem-plus-viagra-and-payday-loans-as-well.com type sites. In fact, I had to offer to ignore the spamminess of any reported sites in order to get people to send in any real data.

      But even though I suspected that this issue affected very few sites, we still wanted to collect feedback to see how big of a problem it was, and to see if we could improve our url canonicalization. So starting a while ago, we offered a way to report "302 hijacking" to Google; I mentioned the method on several webmaster forums. You contact user support and use the keyword "canonicalpage" in your report. Then I created a little mailing list with some engineers on it, and user support passes on emails that meet the criteria to the mailing list.

      So how many reports has all this work (including posting multiple times on lots of webmaster boards to request data) gotten me? The last time I checked, it was under 30. Not a million pages. Not even a hundred reports. Under 30. Don't get me wrong, we're still looking at how we can do better: one engineer proposed a way that might help these sites, and he's got a test set of sites that would be affected by changes in how we canonicalized urls. A few of us have been looking through it to see if we can improve things, but please know that this is not a wildfire issue that will result in the web melting down.
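
      To put what "choosing a canonical url" means in concrete terms, here is a minimal sketch (not Google's actual code; the function name and the scores are invented purely for illustration): among URLs that served identical content, keep the one with the highest reputation.

        # Hypothetical illustration of canonical-URL selection among duplicates.
        # pagerank() stands in for whatever reputation signal the engine uses.
        def choose_canonical(duplicate_urls, pagerank):
            # Given URLs that all returned identical content, keep the one with
            # the highest reputation; the rest are dropped as duplicates.
            return max(duplicate_urls, key=pagerank)

        # Example: a spammy redirect source vs. the real site.
        scores = {
            "http://real-site.example/": 0.8,
            "http://spammy-redirector.example/go?id=42": 0.1,
        }
        print(choose_canonical(scores.keys(), scores.get))   # http://real-site.example/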

      As a side note, I'm getting a little tired of debunking the source of this story (NickW at threadwatch). For example, he claimed that Google had removed Greg Duffy from Google's index. When I pointed out that he was making an assertion of fact without evidence, he started out revising the story by sprinkling in words like "appears" and eventually pulled the story at http://www.threadwatch.org/node/1822 off his front page. But given that this is the third link to NickW's site from Slashdot in the last couple weeks, I'm guessing that he's tasted the Slashdot effect and wants more.
      • by Dynamoo ( 527749 ) on Wednesday March 23, 2005 @12:38PM (#12024915) Homepage
        You contact user support and use the keyword "canonicalpage" in your report.. So how many reports has all this work gotten me? The last time I checked, it was under 30

        Well shucks GG, not every webmaster is glued to WMW and other forums.. and even if they were, the signal/noise ratio on this topic is so low that you probably couldn't find the information even if you were looking. It's hardly an obvious reporting mechanism. Although posting it on /. should help some, so that's appreciated. Thanks.

        But look - what we have here are a whole bunch of webmasters who have been nuked off the face of the earth by 302 redirects and just don't have the technical knowledge to try and fix it. Mom and Pop stores, hobbyists, nonprofits etc etc. These people are just gonna get pasted.. they'll just be wondering why they don't get any visitors any more.

        This is a HUGELY serious problem - and it's getting worse all the time as more and more people deliberately try to exploit the 302 bug. I've been hit by this bug myself, and let me tell you that unless you know EXACTLY what to look for you'd be stuffed - all you'd see is your traffic flatlining.

        The key issue here - and it's the kind of issue that will really, really hit the headlines when it's exploited - is redirection. Sure, I can use a 302 and send Googlebot to the correct page.. so first of all I basically 0wn the content of that page, not the publisher. *Then* I insert an exploit into the 302 redirect.. and hey presto, I've 0wned hundreds of thousands if not millions of computers. *That's* going to make unpleasant reading for Google when it hits the headlines - "Use Google and Get Owned". Nasty.

      • by ites ( 600337 ) on Wednesday March 23, 2005 @12:38PM (#12024918) Journal
        This story does not need "debunking".

        What it needs is a rapid and satisfactory answer or Google will find themselves at the receiving end of more angst than they even know is possible.

        A concrete example. My company's web site has been in existence since 1995. So we have pretty good page ranking. Our main page has one phrase, very distinct, unique.

        When I search for this phrase (in quotes), Google reports hundreds of matches. These sites (except our own) do not contain the phrase but are sites that sell traffic boosting.

        The 302 problem is real.

        Incidentally, I just spent 15 minutes at Google.com looking for a way to report the problem. Where is that mention of "canonicalpage"? In the bottom shelf of a filing cabinet, behind a locked door that says "beware of the tiger"?

        I'm not surprised you got only 30 reports. What I am surprised at is that you appear to speak for Google yet have such an inane response to what is a real (and for many people, a terrifying) problem.
        • Here's where you can file a report [google.com].
      • by Anonymous Coward on Wednesday March 23, 2005 @12:48PM (#12025073)

        But even though I suspected that this issue affected very few sites, we still wanted to collect feedback to see how big of a problem it was, and to see if we could improve our url canonicalization. So starting a while ago, we offered a way to report "302 hijacking" to Google; I mentioned the method on several webmaster forums. You contact user support and use the keyword "canonicalpage" in your report.

        I'm sorry, but this is a flat-out lie. If you are the GoogleGuy, then there were 1000+ post threads on WebmasterWorld where people were begging you for input, and you essentially disappeared. I think I might remember seeing one post from you about this "canonicalurl" on a short, almost unrelated thread. You certainly didn't make it clear where to send problem reports, at least not on any of the threads that people were actually reading.

        The fact is, this is a huge problem, and has totally fucked a lot of legitimate site rankings. I honestly believe Google was doing everything in their power to ignore the problem up until now, hoping that it was just a figment of people's imagination, or worse, that it would help increase advertising revenue. And now that it's turning out to be a PR disaster for you, you're in damage control mode.

        I run one of the sites that was affected by the 302 bug. I sent a message to Google about it, and got a canned response essentially telling me there was nothing wrong. I read through no less than 10 threads on WebmasterWorld about this, many with hundreds or even thousands of posts. I saw maybe, maybe, two or three from GoogleGuy. Where were you? Did you somehow miss those threads that spanned 80+ pages??? Why weren't you posting on those threads about this "canonicalurl" thing?

        Luckily there was only one site 302-ing me, and they were doing it by accident and were happy to remove me from their directory. Now I'm back up at the top of the rankings. But I know it's going to be nowhere near as easy for many of the thousands of people who are still affected by this.

        Seriously, that you would come on here and try to discredit someone for bringing attention to a very big problem with Google is pretty distasteful. To me it indicates either a cover-up or having your head buried firmly in the sand. Either way, it doesn't bode well for the future of Google. Instead of flaming people now that the problem is getting mainstream press, why not try and actually fix things.

      • And I know two other people who sent one. Maybe you should check again? I doubt me and my mates account for 10% of your responses. If you believe that the people affected by this are all "spammers" then perhaps the problem is false positives for your spam detection filters. In fact you should probably take a look at your spam detection filters anyway. Last time I checked--probably much more recently than you checked for canonicalpage emails, there was a bunch of scraper sites running AdSense where good re
      • by metamatic ( 202216 ) on Wednesday March 23, 2005 @01:21PM (#12025496) Homepage Journal
        Frankly, I'd like to see Google start blocking content-free traffic-boosting sites from the page results entirely.

        Google has login accounts, so let logged-in users have a link saying "report spam site". Track who files the most reliable reports, and if a few of those people all agree that a site is spam, nuke its pagerank.

        See how OpenRatings does reliability calculations for more info. Or buy them :-)
        • by glesga_kiss ( 596639 ) on Wednesday March 23, 2005 @08:15PM (#12030486)
          Google has login accounts, so let logged-in users have a link saying "report spam site".

          As an alternative, I'd love a cookie-based version of this where you could click "ignore all results from this domain". After a couple of weeks you'd get rid of most of them on your personal browser. Make the lists sharable even. All the PageRank wannabes can do is start from scratch with new URLs.
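
          As a sketch of that idea (the domains and result format are invented for illustration; this is purely hypothetical client-side filtering, not anything Google offers):

            # Toy sketch of a per-user "ignore this domain" filter for search results.
            from urllib.parse import urlparse

            blocked = {"spammy-scraper.example", "free-viagra-holdem.example"}   # kept in a cookie or account

            def filter_results(result_urls):
                # Drop any result whose host is a blocked domain or a subdomain of one.
                kept = []
                for url in result_urls:
                    host = urlparse(url).hostname or ""
                    if not any(host == d or host.endswith("." + d) for d in blocked):
                        kept.append(url)
                return kept

            print(filter_results([
                "http://spammy-scraper.example/page",
                "http://legit-site.example/article",
            ]))   # only the legit result survives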

      • OK, I'll bite ... (Score:4, Insightful)

        by isometrick ( 817436 ) on Wednesday March 23, 2005 @01:33PM (#12025636)
        Look, there *was* circumstantial evidence for the "Greg Duffy" thing ... i.e. just enough to make it a discussion. I agree that fearmongering is not the way to go. I appreciate that you looked into the issue (and my first instinct is to trust your explanation, that it was a DNS issue).

        However, if this is Google's PR method, I think you are kind of asking for it! In the absence of information, the internet community will speculate until the cows come home. I'm not saying it's right, I'm just saying that's reality. Even though I said on my site that I thought Google didn't do anything underhanded I bet a lot of people were still not convinced. Google can do a little better than this, and although you have been fairly nice to me (thanks) this response is a little flamebaity for PR. Please understand that I mean no offense, it's just constructive criticism. Even if everything you say is true, a representative of the company should always at least attempt to sugar coat something like your last paragraph.

        Also, on a more personal note, maybe Google should embrace the people that are involved [clsc.net] in researching [gregduffy.com] these problems instead of using this broken communications policy. I know that in my case I contacted you guys 5 *months* ago about the Google Print problem I described and never got any followup except for my t-shirt (which I really like). I have some great ideas about possible solutions to the problem I described, and as far as I can see Google has not fixed the root of the problem. When are you guys going to contact me?

        -Greg Duffy
  • Robot.txt (Score:3, Insightful)

    by superpulpsicle ( 533373 ) on Wednesday March 23, 2005 @11:40AM (#12024068)
    I am really extremely entirely confused about the article altogether. Is the hijacking more or less about Google digging into your site even when your robots.txt is refusing Googlebot entrance?

    • Re:Robot.txt (Score:5, Informative)

      by wizbit ( 122290 ) on Wednesday March 23, 2005 @11:44AM (#12024144)
      No, it means Google has indexed a page that appears (to googlebot) to contain something legitimate, and visiting the actual page by clicking the link silently redirects you to an illegitimate site (usually phish/scam copy of same, etc).
      • by ites ( 600337 ) on Wednesday March 23, 2005 @12:49PM (#12025096) Journal
        It's about pushing unrelated sites up in the rankings.

        For instance: I have a site with excellent page ranking. Now a new site will set up, and do a 302 to my site. Google now gives this new site my page ranking. When the new site is indexed, it removes the 302 redirection.

        When you search for my site, you now find these new sites instead. There is no redirection when you click on a link, but the "cached text" that Google shows is wrong.

        Basically this technique allows people to get high page rankings without earning them. It's very widespread - I counted over 60 such parasites for my company's web site (which has excellent page ranking).

    • Re:Robot.txt (Score:5, Informative)

      by pluggo ( 98988 ) on Wednesday March 23, 2005 @11:46AM (#12024183) Homepage
      There was an article a little while back on /. that talked about this exploit.

      Site A can return a 302 HTTP redirect to site B when Googlebot crawls their site. The googlebot will then index site B as site A. Site A could have no affiliation whatsoever with Site B; people could be clicking on SesameStreet.com and get AsianHookers.com, etc.

      I do think the figure of millions of pages being hijacked is a little steep, though.
      • Re:Robot.txt (Score:5, Insightful)

        by PornMaster ( 749461 ) on Wednesday March 23, 2005 @11:49AM (#12024219) Homepage
        I do think the figure of millions of pages being hijacked is a little steep, though.

        Why? It can be completely automated. A million is no harder than four.
      • by catalina ( 213767 ) <jmattclark@@@gmail...com> on Wednesday March 23, 2005 @12:13PM (#12024582) Homepage Journal
        .....and get AsianHookers.com, etc.

        couldn't you have made that a link so I can just click on it?
      • Site A can return a 302 HTTP redirect to site B when Googlebot crawls their site. The googlebot will then index site B as site A. Site A could have no affiliation whatsoever with Site B; people could be clicking on SesameStreet.com and get AsianHookers.com, etc.

        Isn't the fix then to provide preference to the real URL over 'copies' when culling duplicate data and/or pageranking the results? This seems easy, so the problem must be that Google isn't storing HTTP response codes with their page indexes such
        • Re:Robot.txt (Score:5, Informative)

          by arkanes ( 521690 ) <<arkanes> <at> <gmail.com>> on Wednesday March 23, 2005 @12:25PM (#12024763) Homepage
          One problem is that people use 302s when they should be using 301s, like directory sites. No doubt this is because they want to get referral counts up.

          A 302 is a "temporary redirect". Basically, it says that the content normally lives at the URL you requested but that, just this once, you should look at this other URL for the content. Googles response to a 302 is actually very reasonable. I suppose the best thing they could do is just not follow 302s.

          A 301 is a permanent redirect, indicating that the page isn't at the original URL and that all future requests should be made to the new one. I don't know what Googlebot does in this case but I assume it discards the original URL, which is what the standard recommends.
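
          To make the difference concrete, here is a minimal sketch of a server issuing both kinds of redirect (Python's standard library; the paths and the target URL are made up):

            # Minimal sketch: one handler issues a permanent (301) redirect,
            # the other a temporary (302) redirect.  Hit the two paths with
            # curl -I to see the different status lines.
            from http.server import BaseHTTPRequestHandler, HTTPServer

            class RedirectDemo(BaseHTTPRequestHandler):
                def do_GET(self):
                    if self.path == "/old-page":
                        self.send_response(301)            # moved for good: update your links
                        self.send_header("Location", "/new-page")
                    elif self.path == "/tracker":
                        self.send_response(302)            # found elsewhere for now: keep using /tracker
                        self.send_header("Location", "http://other-site.example/target")
                    else:
                        self.send_response(404)
                    self.end_headers()

            if __name__ == "__main__":
                HTTPServer(("localhost", 8000), RedirectDemo).serve_forever()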

    • Re:Robot.txt (Score:5, Informative)

      by northcat ( 827059 ) on Wednesday March 23, 2005 @01:48PM (#12025827) Journal
      This is more like one site hijacking the ranking of another site. Suppose you're Ferrari and I'm the hijacker. You have ferrari.com and I have irule.com. Since you're ferrari.com you get very high rankings when people search for "ferrari" on Google. You're probably the first site displayed. And in the results page on Google, it displays a summary probably like "the official home page of ferrari cars". On my website I set up a 302 redirect to your website. It means, when someone visits my irule.com, they get redirected to ferrari.com. I don't do anything to your website, I don't have access to your website. I hope you know that Google indexes web pages by visiting those webpages with the user agent string "googlebot" and, of course, Google's IPs which are known to people. When Google sees that my page is 302 redirecting to ferrari.com, for certain reasons, it replaces ferrari.com in its index with irule.com. So when someone searches for "ferrari" they get irule.com as the first result instead of ferrari.com, and the summary still says "the official home page of ferrari cars". Now, I only 302 redirect irule.com to ferrari.com when googlebot visits my page. When anyone else visits irule.com, I give them something else, probably lots of ads, or I redirect them to some other site like LotsOfSmut.com. So I'm "hijacking" any references to ferrari.com on Google and its ranking. And when someone searches for "cars", instead of ferrari.com as the ninth result, irule.com is displayed. So... I profit (you do the math).

      (Sorry for dumbing down my post so much; too much experience explaining things to my grandmother)
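
      Mechanically, the cloaked redirect described above boils down to something like this sketch (for understanding only; the domain names are invented and the handler is deliberately minimal):

        # Sketch of the cloaked redirector described above: the crawler is told
        # the victim's content "temporarily" lives here, everyone else sees ads.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        class CloakedRedirect(BaseHTTPRequestHandler):
            def do_GET(self):
                if "Googlebot" in self.headers.get("User-Agent", ""):
                    self.send_response(302)                     # temporary redirect to the victim
                    self.send_header("Location", "http://ferrari.example/")
                    self.end_headers()
                else:
                    self.send_response(200)                     # ordinary visitors get the monetized page
                    self.send_header("Content-Type", "text/html")
                    self.end_headers()
                    self.wfile.write(b"<html><body>ads, ads, ads</body></html>")

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), CloakedRedirect).serve_forever()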
  • by Trolling4Columbine ( 679367 ) on Wednesday March 23, 2005 @11:41AM (#12024080)
    This is the last straw! I'm going back to MSN, where I know that my data and privacy are being protected!!

    *duck*
  • by r00t ( 33219 ) on Wednesday March 23, 2005 @11:41AM (#12024090) Journal
    Google has the records, and probably the original site exists with behavior dependent on the browser name being GoogleBot or not. The replacement site will generally have some way of making money, which can be tracked via financial transactions.
  • by Cytlid ( 95255 ) on Wednesday March 23, 2005 @11:43AM (#12024124)
    For every Good Thing, there are at least 100 different ways to abuse it.
  • 302 (Score:5, Informative)

    by auralrothko ( 836578 ) on Wednesday March 23, 2005 @11:43AM (#12024135)
    I wasn't sure what a 302 hijack was, so here's the obligatory lowdown for those who didn't rtfa (from the article's linked page): This exploit allows any webmaster to have his own "virtual pages" rank for terms that pages belonging to another webmaster used to rank for. Successfully employed, this technique will allow the offending webmaster ("the hijacker") to displace the pages of the "target" in the Search Engine Results Pages ("SERPS"), and hence (a) cause search engine traffic to the target website to vanish, and/or (b) further redirect traffic to any other page of choice.
    • Re:302 (Score:5, Informative)

      by SassyDave ( 557868 ) on Wednesday March 23, 2005 @11:52AM (#12024264) Homepage
      For the full details of the exploit, TFA [clsc.net] gives a pretty decent recipe:
      The technical part: How it is done
      Here is the full recipe with every step outlined. It's extremely simplified to benefit non-tech readers, and hence not 100% accurate in the finer details, but even though I really have tried to keep it simple you may want to read it twice:

      1. Googlebot (the "web spider" that Google uses to harvest pages) visits a page with a redirect script. In this example it is a link that redirects to another page using a click tracker script, but it need not be so. That page is the "hijacking" page, or "offending" page.

      2. This click tracker script issues a server response code "302 Found" when the link is clicked. This response code is the important part; it does not need to be caused by a click tracker script. Most webmaster tools use this response code by default, as it is standard in both ASP and PHP.

      3. Googlebot indexes the content and makes a list of the links on the hijacker page (including one or more links that are really a redirect script)

      4. All the links on the hijacker page are sent to a database for storage until another Googlebot is ready to spider them. At this point the connection breaks between your site and the hijacker page, so you (as webmaster) can do nothing about the following:

      5. Some other Googlebot tries one of these links - this one happens to be the redirect script (Google has thousands of spiders, all are called "Googlebot")

      6. It receives a "302 Found" status code and goes "yummy, here's a nice new page for me"

      7. It then receives a "Location: www.your-domain.tld" header and hurries to your page to get the content.

      8. It heads straight to your page without telling your server on what page it found the link it used to get there (as, obviously, it doesn't know - another Googlebot fetched it)

      9. It has the URL of the redirect script (which is the link it was given, not the page that link was on), so now it indexes your content as belonging to that URL.

      10. It deliberately chooses to keep the redirect URL, as the redirect script has just told it that the new location (That is: The target URL, or your web page) is just a temporary location for the content. That's what 302 means: Temporary location for content.

      11. Bingo, a brand new page is created (never mind that it does not exist IRL, to Googlebot it does)

      12. Some other Googlebot finds your page at your right URL and indexes it.

      13. When both pages arrive at the reception of the "index" they are spotted by the "duplicate filter" as it is discovered that they are identical.

      14. The "duplicate filter" doesn't know that one of these pages is not a page but just a link (to a script). It has two URLs and identical content, so this is a piece of cake: Let the best page win. The other disappears.

      15. Optional: For mischievous webmasters only: For any other visitor than "Googlebot", make the redirect script point to any other page free of choice.
      • So let me get this straight... If I have www.crappywebsite.com and I want to pump up its pagerank, all I need to do is have "www.crappywebsite.com" redirect googlebot to www.cnn.com, and suddenly www.crappywebsite.com is a font of highly-ranked information?

        The REAL answer would be to have google not index redirects (which is pretty stupid, all things considered. Why link searchers to the "wrong" URL, instead of the destination URL of the redirect?)
      • Re-re-explained (Score:5, Informative)

        by fizbin ( 2046 ) <martinNO@SPAMsnowplow.org> on Wednesday March 23, 2005 @01:03PM (#12025291) Homepage
        Okay, so basically this is the problem: when Google encounters a status 302 redirection (as opposed to the status 301 redirection) it then indexes the content as belonging to the initial URL, not the URL at the end result of the 302 redirection. Other things happen later because of google's design.

        302 redirections are temporary redirections - the idea is that a 302 is supposed to be used when someone needs to be redirected to a new page, but should still use the original URL if they want to come back later. As an example, the page http://purl.oclc.org/OCLC/PURL/CONTRIBUTORS [oclc.org] performs a 302 redirect to http://purl.oclc.org/docs/contributors.html [oclc.org]. This means that although your web browser needs to go to some other URL for the content at the moment, it really should remember the first url as the permanent one.

        Contrast this with what happens when your browser visits http://snowplow.org/martin [snowplow.org] - you get sent a 301 redirect to http://snowplow.org/martin/ [snowplow.org]. (Note the extra slash) In this case, the server is saying "the url with the slash on the end is the real location, and you should not try to come back here without the final slash in the future."

        Ideally, if every web browser behaved according to spec., bookmarks (remember bookmarks?) would get automatically updated to the new URL when you selected them and the redirect was a 301 redirect. However, for a 302 redirect, the bookmark would stay as is.

        302 redirects can be very useful when you want to set up a hierarchy of "logical" URLs that will permanently point to the correct location. 301 redirects are useful when you're obsoleting an old URL and wish people to go and use the new URL from now on.

        Okay, so how does this relate to google? Well, let's suppose that you have a great site on fruitbats. I can set up http://www.example.com/topics/fruitbats to be a 302-style redirect to your site, essentially saying "The information at http://www.example.com/topics/fruitbats is temporarily being hosted by http://www.yoursite.com/". Now, when google spiders pages it will see that, go retrieve the text from your page, and then index it under http://www.example.com/topics/fruitbats, since after all I just gave a temporary (302) redirect.

        But it gets worse, because a final part of google's indexing process is to compare pages for identical text, and throw out all but one of the URLs. Apparently this stage has nothing to go on other than the text and the recorded URLs, and so your URL stands a fifty-fifty chance of being thrown out.

        Except that I've not just redirected http://www.example.com/topics/fruitbats to your site, but also http://www.example.com/topics/fruitbat, http://www.example.com/topics/fruit_bat, and http://www.example.com/topics/fruit_bats. Now your lone URL doesn't stand much of a chance of being the one kept by the "throw out duplicates" processor, does it?
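
        A toy model of that "throw out duplicates" stage makes the arithmetic obvious (entirely hypothetical; it just assumes the filter picks a survivor at random when it has nothing else to go on):

          # Toy model: several URLs were indexed with identical content and the
          # duplicate filter keeps one at random, so every extra redirect URL a
          # hijacker creates worsens the real URL's odds of surviving.
          import random

          real_url = "http://www.yoursite.com/"
          hijack_urls = ["http://www.example.com/topics/fruitbats",
                         "http://www.example.com/topics/fruitbat",
                         "http://www.example.com/topics/fruit_bat",
                         "http://www.example.com/topics/fruit_bats"]

          trials, survivals = 10000, 0
          for _ in range(trials):
              kept = random.choice([real_url] + hijack_urls)
              survivals += (kept == real_url)

          print(f"real URL survives in about {survivals / trials:.0%} of trials")   # roughly 20%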

        In a sense, of course, there's little google can do to prevent this, because even if they weighted 302-redirects lower in their "throw out duplicates" stage, I could always just go snag a copy of your website each time googlebot visits, in essence doing the redirection myself. (How? Just search the apache mod_rewrite guide [apache.org] for "Dynamic Mirror") However, doing it through 302 redirects means that google pays for the bandwidth to go get your page, not me. (Not that this is necessarily a significant amount of bandwidth, since we're only talking about basic google here and not images. Depending on the revenue you get by misdirecting google queries it might be economical)

        Of course, for this to really work, I'd need a list of websites sorted by category to build up my redirect db. But wait! The ODP feed provides exactly that.

        I am a little bit wary of doi
      • Re:302 (Score:5, Interesting)

        by Ryan Stortz ( 598060 ) <`ryan0rz' `at' `gmail.com'> on Wednesday March 23, 2005 @02:33PM (#12026444)
        I think a reasonable solution to this would be for Google to send a second spider to the site for every 302 redirect they find, with a user-agent indicating it's IE or some other browser. Then compare the data.

        Although, they could probably still figure out it's google by their IP, but it's a step in the right direction.
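
        Roughly what that would look like, as a sketch (the comparison here is deliberately naive, and the user-agent strings and URL are placeholders):

          # Sketch of the parent's idea: fetch a URL twice, once as a crawler
          # and once as a browser, and flag it if the responses differ.
          import urllib.request

          def fetch(url, user_agent):
              req = urllib.request.Request(url, headers={"User-Agent": user_agent})
              with urllib.request.urlopen(req) as resp:    # note: follows redirects
                  return resp.status, resp.read()

          def looks_cloaked(url):
              bot = fetch(url, "Googlebot/2.1 (+http://www.google.com/bot.html)")
              human = fetch(url, "Mozilla/5.0")
              return bot != human

          # print(looks_cloaked("http://suspect-site.example/"))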
    • Re:302 (Score:2, Interesting)

      by ari_j ( 90255 )
      I'm still not seeing any explanation of how it works, only what happens when it does work.
      • Re:302 (Score:5, Informative)

        by StrongAxe ( 713301 ) on Wednesday March 23, 2005 @12:19PM (#12024665)
        I'm still not seeing any explanation of how it works, only what happens when it does work.

        1. Phisher creates (say) cïtïcorp.com and makes the home page redirect to the real citicorp.com page.
        2. Googlebot browses cïtïcorp.com, gets a redirect to the real citicorp.com, and indexes its contents.
        3. User does a Google search looking for Citicorp, and finds the cïtïcorp.com page that appears to contain the valid data (and it might be the only such page, if the legitimate page gets removed through the duplicate-removal process).
        4. User clicks through to cïtïcorp.com expecting to see the valid web page.
        5. Phisher's server sees that the request is not from a Googlebot, so it serves up a fake page rather than redirecting to the legitimate real one.
        6. User believes he is at the real citicorp.com web site, when he is in fact at the bogus cïtïcorp.com website, legitimized by Google.
        7. Identity theft.
        8. Profit. (Ob. Slashdot joke.)
        • Re:302 (Score:3, Insightful)

          by ari_j ( 90255 )
          Thanks. And remember, identity theft is not a joke, unless you steal the identity of a clown.
    • Re:302 (Score:2, Informative)

      by windowpain ( 211052 )
      Thanks. Both the /. article and the linked story were utterly uninformative. Sometimes it seems that a lot of techies disdain even the merest explanation as baby talk. Even when you're addressing a largely technical audience a little explanation helps, because not everybody knows every technical detail about an entire field.
    • To continue having the victim's hits redirected, the redirect needs to stay in place, doesn't it?

      What in the world does the hijacker gain by having google point to him, only to then load the victim's page?

      hawk
  • 301 redirects (Score:3, Interesting)

    by Anonymous Coward on Wednesday March 23, 2005 @11:45AM (#12024158)
    A few months ago, I rearranged my website. To make sure people could still find things, I put 301 redirects on all the old pages that I moved.

    I noticed in my logs that search engines have repeatedly requested the 301 pages, but often don't follow the links to the new pages. And when searched with google, the pages still show up with the old urls. Should I be using 302 redirects instead?
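
    (For what it's worth, a 301 is the right code for pages that have moved permanently, so switching to 302s shouldn't be necessary; search engines can just take a while to drop the old URLs. A quick way to confirm what your server is actually sending, sketched with Python's standard library and a placeholder URL:)

      # Quick check of what a redirecting URL actually returns, without
      # following the redirect (host and path here are placeholders).
      import http.client

      conn = http.client.HTTPConnection("www.example.com")
      conn.request("HEAD", "/old-page")
      resp = conn.getresponse()
      print(resp.status, resp.reason, resp.getheader("Location"))
      # e.g.  301 Moved Permanently http://www.example.com/new-page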
  • Why? (Score:2, Insightful)

    by dep01 ( 730107 )
    Why is it seemingly man's mission to "bring down" something that seems to provide such a great service for everyone?

    "Oh! Look! Something beautiful! Something impressive! I must destroy it!"

    pah. feeling jaded today, i guess.

    • Re:Why? (Score:2, Insightful)

      by a16 ( 783096 )
      In this case, it's more a case of "I must make money from it".

      The people using this exploit to get fake listings (just like all of the spam pages we see in search engines) aren't doing it for the fun of it.
    • Well, the obvious answer can be paraphrased from Dune: "The ultimate control of something is the ability to destroy it". The more subtle answer deals with our species' desire for "more".

      In a far off time, the Internet was a wonderful place devoid of such mundane things as commerce. Now, fast-forwarding a few years to the present, people are making significant sums of money off of the internet selling "products". One of the best ways to get somebody to buy something is to make them aware of a "need" they have
  • by Not_Wiggins ( 686627 ) on Wednesday March 23, 2005 @11:46AM (#12024181) Journal
    buy GOOG on the dip as many non-techie investors panic sell. 8)
    • Yeah, 'cause the non-techie investors read Slashdot...
    • Right, as long as Google is priced right now, and not insanely overblown. I don't have any idea what their stock (or more to the point their price/earnings) is at; but I know which way I'd bet.

      Free investment tip: Avoid buying stock in any company if an unsophisticated investor, for reasons unrelated to profitability, would think that company is Way Cool.

      It appears Google has a sound business plan and competent management. Which probably justifies some particular, perfectly healthy stock price. But I'
  • by gitana ( 756955 ) on Wednesday March 23, 2005 @11:47AM (#12024195) Homepage
    As web presence (defined as appearing within roughly the first 10-20 results of a search) becomes more and more important to "success," black-hat techniques such as this, used to eliminate competitors, will become more and more common. Google, or any other search tool, needs to be able to stay above the fray and not be subject to hacks such as this.
    • Exactly. And if they'd just stop giving PageRank credit to the redirect destination, it'd all be over. In fact, the algorithm should check to see what the link density is between two disparate domains if it's going to even cache 302'ed content. Because in these scam cases, the perpetrator never has an inbound link from the victim domain and Google could "grade" this relationship as being very one-sided and not generally very trustworthy. The more interlinkages, the more trust. But assigning Pagerank on 302's
  • Gopher (Score:5, Funny)

    by one_i_blind ( 613513 ) on Wednesday March 23, 2005 @11:48AM (#12024199) Homepage
    This is why Gopher will always be better than your feeble world wide web junk.
    • Re:Gopher (Score:5, Funny)

      by ari_j ( 90255 ) on Wednesday March 23, 2005 @11:51AM (#12024236)
      Dude - the single biggest difference between Gopher and the web is that Gopher contains far fewer spelling errors. I hear that there are differences regarding interactivity, graphics, layout, and so forth; but those are all immaterial.
    • Gopher is part of the World Wide Web, as are several other protocols that pre-date the Web. You meant to say, "This is why Gopher will always be better than an HTTP server."

      The World Wide Web is the meta-index of (mostly) Internet-accessible content which can be addressed by URI (almost always more specifically by URL).

      Since Gopher can be addressed via the URI scheme, "gopher", it's part of the Web.
  • Wait... (Score:5, Funny)

    by dark-br ( 473115 ) on Wednesday March 23, 2005 @11:51AM (#12024235) Homepage

    Damn Google!!! Do you mean this is not www.kuro5hin.org ??

  • by kunkie ( 859716 ) on Wednesday March 23, 2005 @11:54AM (#12024304)
    I can imagine it now... The slashdotting to end all slashdots. If every site in google was 302 redirected to RIAA.com, how amazing would that be...
  • by ites ( 600337 ) on Wednesday March 23, 2005 @11:59AM (#12024370) Journal
    1. search Google [slashdot.org] for 'allinurl:', e.g. 'allinurl:slashdot.org'.

    2. copy and paste any dubious URLS into this tool [thinkhost.com] and check whether they're using 302 redirects or not.

    3. Panic! /me notices that my company's web site has been thusly hijacked... and yes! Doing a Google search on the main text on my company's web site shows dozens of unrelated sites high in the ranking. None of these actually have the text on their pages.

    One example: http://www.tradedoubler.it.

    Luckily, the phrase in question is complete gibberish and no-one ever finds our site through Google, only by reputation and word of mouth.

    Still, I think it's clear Google have a serious problem here...
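
    Step 2 can also be done locally; a rough sketch of such a checker (the suspect URL and domain below are placeholders):

      # Rough equivalent of the redirect checker in step 2: request a suspect
      # URL without following redirects and see whether it 302s to your domain.
      import http.client
      from urllib.parse import urlparse

      def is_302_to(suspect_url, your_domain):
          u = urlparse(suspect_url)
          conn_cls = http.client.HTTPSConnection if u.scheme == "https" else http.client.HTTPConnection
          conn = conn_cls(u.netloc)
          path = (u.path or "/") + ("?" + u.query if u.query else "")
          conn.request("HEAD", path)
          resp = conn.getresponse()
          location = resp.getheader("Location") or ""
          return resp.status == 302 and your_domain in location

      # print(is_302_to("http://suspect-directory.example/go?id=123", "mycompany.example"))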
  • by angio ( 33504 ) on Wednesday March 23, 2005 @12:02PM (#12024408) Homepage
    Someone posted a nice explanation of the phenomenon [webmasterworld.com] at webmasterworld.com.

    302 hijacks work because Google goes to http://bad.site/ and gets redirected to http://good.site/. It then treats the contents of bad.site as identical to that of good.site. The effect seems similar to what would happen if somebody simply copied an entire page off your site (I'm not sure if it's actually more serious than this), but it's easier to do because you're just keeping a small table of redirections.

    How serious is it? Don't know. It's pretty easy for a webmaster to check for hijacking and have her pages de-hijacked (see aforementioned article). It's probably not as screamingly awful as the threadwatch.org article suggests, but the redirector sites are rather annoying. Several of the comments in the webmaster article suggest that Google has already started moving on the problem.

  • Not a surprise (Score:5, Interesting)

    by faust2097 ( 137829 ) on Wednesday March 23, 2005 @12:04PM (#12024439)
    For at least the last 18-24 months it's been increasingly difficult to find non-spam/redirect/affiliate program links for a search on any popular consumer product on Google. Maybe they have too much faith in their current PageRank and think it needs to be tweaked instead of overhauled. Maybe they think they have enough momentum and don't care. They certainly should have the talent and resources to do something about this and it's kind of sad that they haven't. I predict we'll see another whizzy side project in a few months instead.

    The thing is that all they have to do is keep it just good enough that people won't leave. Remember, AdWords is Google's product, everything else [gmail, orkut, etc] they've got is just a way to show you those ads. Google's success is entirely because they had clearly better search results than anyone else. If another company can clearly best them then Google may be in trouble.
  • Bleh... (Score:4, Funny)

    by Patrick Mannion ( 782290 ) <patrick DOT mannion AT gmail DOT com> on Wednesday March 23, 2005 @12:10PM (#12024542) Homepage Journal
    I was thinking that some major crisis had broken out and a million pages were hijacked at once, creating something bigger than any other Internet event, and that it caused Google's stock to tank and forced them to go private again, lay off workers and go bankrupt. But that's crazy. But still, word it right. Damn it.
  • My site is affected (Score:5, Interesting)

    by barcodez ( 580516 ) on Wednesday March 23, 2005 @12:11PM (#12024559)
    My site the humor archives [thehumorarchives.com] has been affected by this. I can tell because if you do the following search [google.co.uk] you can see a bunch of sites that are/were 302ing to my domain. I'm pretty pissed off and I seriously hope Google act soon to rectify the matter.
  • by YouMakeMeSoANGRY ( 641079 ) on Wednesday March 23, 2005 @12:12PM (#12024568)
    Google claim [google.com]...
    Fiction: A competitor can ruin a site's ranking somehow or have another site removed from Google's index.
    Fact: There is almost nothing a competitor can do to harm your ranking or have your site removed from our index. Your rank and your inclusion are dependent on factors under your control as a webmaster, including content choices and site design.

    How about adding "Fiction: Google information for webmasters contains any facts"?
  • by Anonymous Coward on Wednesday March 23, 2005 @12:20PM (#12024681)

    what major headlines? millions of pages!! the world is coming to an end!!!!

    a quick whois on threadwatch.org (the submitter's site) reveals it's hosted by search engine spammers
    platinax.co.uk which is registered to a UK "company" called BriteCorp
    http://www.britecorp.co.uk/ [britecorp.co.uk]

    who offer all the usual SE spamming methods
    coincidence?
    a whois on britecorp's platinax [platinax.co.uk] site reveals they have removed their address from the whois db, and their website's contact details are a mobile phone number (07963 808470)
    further investigation on britecorp reveals they are not a "real" company but trading as "Brian Turner" (pic [platinax.co.uk]) and companies house [companieshouse.gov.uk] don't seem to have any records of any of these companies, though I am sure further investigation could find out more

    so why would a supposedly reputable marketing company have a cell phone as a primary contact point?
    something to hide, eh?
    or perhaps local trading standards would like to hear about them and their "services"?

    northern scum by any other name
    • Absolute hilarity (Score:4, Informative)

      by brian_turner ( 851711 ) on Wednesday March 23, 2005 @05:53PM (#12028900)
      Absolutely Roflmao!!

      I guess some people have never heard of the term "sole trader". :)

      My internet business is barely a year old - almost everything is communicated with other webmasters via e-mail - phone support is provided as a last option, but it means that if anyone really needs to use it, then they can have my immediate attention wherever I am, to have their concerns addressed immediately. :)

      As for spamming - well, this is one of those "anonymous cowards" some of us are familiar with, who believes that if you purchase a link from another site, or become involved in a link exchange, or register your site in a directory - then you're a spammer. :)

      Thanks for the heads up on the Platinax registration details, though - hadn't realised they'd been left out. I had a run in with some Belgian Nazis last year, after I booted them from a forum I admin, when they tried to use it for promoting Neo-nazi propaganda. They've tried a few times to get back at me since, so I've been trying to reclaim some privacy online. Platinax reg details should be public, though - I'll put something online, then try and find a PO Box for the hate crap.
  • by Animats ( 122034 ) on Wednesday March 23, 2005 @12:24PM (#12024734) Homepage
    Redirects to a page should be treated as having far less PageRank value than the page itself. That will fix the problem.

    It will also break many "click trackers", "portals", "directory sites", "search engine optimizers", and other annoyances, which is probably a plus for Google users. You know, those sites where you click on some phrase in Google and, three redirects later, you're at some irrelevant porno site.

  • by Hornsby ( 63501 ) on Wednesday March 23, 2005 @12:26PM (#12024767) Homepage
    Why not just fix the bug and then recreate the rankings index? Googlebot hits my sites all the time, so I know that it covers the rest of the internet quite often as well. With their amount of hardware, it probably wouldn't take long.
  • by wotevah ( 620758 ) on Wednesday March 23, 2005 @12:28PM (#12024804) Journal

    It seems that when page A redirects to B, Google not only considers that a hit for A, but also assigns B's content to A (I just skimmed through all the posts here so maybe that's not what happens).

    In that case, it seems to make more sense to just ignore A altogether since the hit and content rightfully belong to B.

    This could be done by treating redirects as empty one-link pages, thus unifying the handlers and defeating this practice.
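
    In rough pseudocode, that suggestion amounts to something like this sketch (the index record format is invented purely for illustration):

      # Sketch of the parent's suggestion: when A redirects to B, index A as an
      # empty page that merely links to B, and credit the content to B.
      def handle_redirect(index, source_url, target_url, target_content):
          index[target_url] = {"content": target_content, "links": []}
          index[source_url] = {"content": "", "links": [target_url]}   # A contributes a link, nothing more

      index = {}
      handle_redirect(index, "http://bad.site/go", "http://good.site/", "<html>the real page</html>")
      print(index["http://bad.site/go"])   # {'content': '', 'links': ['http://good.site/']}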

  • by Anonymous Coward on Wednesday March 23, 2005 @12:57PM (#12025201)
    This was originally posted the first time a story about this ran, but since a lot of people are still confused, here it is again...

    There seems to be a lot of confusion as to why exactly this is such a big deal. A lot of people saying there's no problem or that this is nothing new... basically just not understanding the issue. Let me explain:

    Suppose you have a small business under the domain http://xyz.com/, and search engines bring you a lot of traffic because you rank high for keywords in your market. You have a lot of people out there linking to you, a lot of satisfied customers, good content on your site. You're always in the top 10 somewhere when people search for "xyz widgets".

    Well, this issue with Google makes it very easy -- incredibly easy -- for someone to knock your site out of the rankings entirely. And I mean for *everything*, to where searching for your own company name in quotes literally buries you hundreds of pages deep in the results. We're talking sites going from getting 1000 unique hits to 10 overnight.

    And here's the kicker: It requires absolutely no technical knowledge, no time investment, and is perfectly legal...

    All I have to do is have another domain handy that is roughly as popular as yours. And I make a "links" page, like one of those directory services, that lists your website. But instead of being a normal hyperlink, it's a CGI (or PHP or ASP or whatever) script that generates a 302 redirect to your domain... Now, these are very simple, common scripts. One-liners that you can download from cgiscripts.com and stick on your server. The original intent of these scripts is to track which links are being clicked on your site. But now they've found a new use, because when Google gets that 302, all hell breaks loose.

    See, according to the HTTP spec, 302 is a *temporary* redirect, which means Google is supposed to interpret whatever content it finds at the 302 target (your site) as really belonging to the URL of the source (my site). Google is just obeying the spec strictly here, and with devastating results. Why? BECAUSE THE DUPE FILTER NOW KICKS IN! You see, Google has a "dupe filter" that says if the same exact content is found for two unique URLs, then one of the URLs is obliterated in the rankings. Because after all, searchers don't want to be finding the same content over and over. If that happens, they'll start using a different search engine. But Google, sticking strictly to the HTTP spec, doesn't know who the content really belongs to when it gets a 302.

    So Google essentially flips a coin. And if it comes up tails, say bye-bye to your domain in the rankings. Your *entire* domain. Because the dupe filter isn't limited to just the page that the 302 is pointing to -- it applies across your entire domain.

    These 302 "exit-link-trackers" are all over the web. They've been used by webmasters for years. But it's just recently that Google has started treating 302 this way, so it didn't have any bad effect before. But now it kills you.

    The funny thing is, the solution seems pretty simple: Just stop treating 302s this way if they point to a different domain. But for whatever reason Google isn't listening. Hopefully the press that's being generated now will give them the kick in the ass that they need.
  • Doesn't affect Yahoo (Score:5, Interesting)

    by X ( 1235 ) <x@xman.org> on Wednesday March 23, 2005 @01:30PM (#12025606) Homepage Journal
    I'm surprised nobody has mentioned that Yahoo has already closed the 302 hole.
  • Simple Answer (Score:5, Insightful)

    by rabtech ( 223758 ) on Wednesday March 23, 2005 @01:38PM (#12025694) Homepage
    There is a simple solution for Google: Only honor 302 redirects when the original and target domains match (or the target points to a subdomain of the original domain).

    In all other cases treat a 302 (temporary) as a 301 (permanent) redirect, thus giving credit for the content to the actual hoster of the content.

    This allows webmasters to continue using 302s to set up logical URLs to mask the organization of underlying content but eliminates the ability to hijack completely.
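
    As a sketch of that rule (the helper names are invented, and "same site" here is approximated as same hostname or a subdomain of it):

      # Sketch of the proposed rule: honor a 302 only within the same site;
      # otherwise treat it like a 301 and credit the URL that actually hosts
      # the content.
      from urllib.parse import urlparse

      def same_site(url_a, url_b):
          a = urlparse(url_a).hostname or ""
          b = urlparse(url_b).hostname or ""
          return a == b or a.endswith("." + b) or b.endswith("." + a)

      def canonical_url(source_url, target_url, status):
          if status == 302 and same_site(source_url, target_url):
              return source_url        # genuine temporary redirect within one site
          return target_url            # cross-domain 302, or any 301: credit the real host

      print(canonical_url("http://directory.example/go?id=9", "http://smallbiz.example/", 302))
      # -> http://smallbiz.example/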
  • by luap2000 ( 314919 ) on Wednesday March 23, 2005 @02:25PM (#12026337) Homepage
    here's my write-up on the problem from early February called Google and the Mysterious Case of the 1969 Pagejackers [kuro5hin.org]. the problem has been around for a long, long time.

    personally, i'm ready to give up google maps or something else (autolink?) if they would 'fix' this or at least be more transparent about what's going on. ;)

    btw, the word on the net is that the googleguy posting here isn't the real one. anybody have details on this?

    -kpaul
  • I don't get it... (Score:3, Insightful)

    by jafiwam ( 310805 ) on Wednesday March 23, 2005 @03:19PM (#12026967) Homepage Journal
    Why all the yammering and discussion on this?

    It's pretty simple; 302 redirects allow bad guys to exploit Google.

    It doesn't matter that it's the wrong way to use a 302 redirect. They are the BAD GUYS. Remember the "spammers lie" truism?

    It's the Google rule that is broken. 302 should be treated as "can't find site" in their search rankings rather than assuming that the data sent by the web server is honest. It sucks that some legit users of 302 won't get ranked as well because of it, but boo hoo. Let anybody that has hardware or software problems get better equipment in the first place if their freaking world ends when they don't get ranked in their keyword group. I have NO SYMPATHY for someone that shoestrings their vital revenue stream infrastructure and then wonders why things go bad. It reminds me of my job too much.

    Buy Google ADs if you need to make money off your site traffic.

    Google will change the rule or they won't. If they want to stay relevant, they'd better. I find myself getting irritated with Google's crappy search results a lot nowadays; sooner or later I will find one of the little startups to use, and they can kiss off if it keeps up. So I figure they will get to it. They are Google, they are good at what they do.

    Now what I think they should do is download snippets of pages via the Google toolbar, which then sends the data to Google, to make a massively distributed bot-net spider that is indistinguishable from the web-using masses. At that point, as far as exploiting Google via the IP or user agent of the bot goes, IT IS ALL OVER.

    Move along, nothing to see here but a bunch of people that don't understand redirect and HTTP protocols.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...