Google Crawls The Deep Web

Posted by Zonk
from the delved-too-deeply dept.
mikkl666 writes "In their official blog, Google announces that they are experimenting with technologies to index the Deep Web, i.e. the sites hidden behind forms, in order to be 'the gateway to large volumes of data beyond the normal scope of search engines'. For that purpose, the engine tries to automatically get past the forms: 'For text boxes, our computers automatically choose words from the site that has the form; for select menus, check boxes, and radio buttons on the form, we choose from among the values of the HTML'. Nevertheless, directives like 'nofollow' and 'noindex' are still respected, so sites can still be excluded from this type of search."
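The heuristic in the quote can be sketched in a few lines (a hypothetical illustration, not Google's actual code): scrape candidate words from the hosting page for text boxes, take the declared HTML values for the choice widgets, and enumerate GET URLs from the combinations, capped so one form can't explode the crawl.

```python
from itertools import product
from urllib.parse import urlencode

def candidate_urls(action, text_fields, choice_fields, page_words, max_urls=50):
    """Build GET URLs for a form: text boxes get words scraped from the
    hosting page; selects/checkboxes/radios get their declared HTML values."""
    text_options = [(name, page_words) for name in text_fields]
    choice_options = [(name, values) for name, values in choice_fields.items()]
    names = [n for n, _ in text_options + choice_options]
    pools = [v for _, v in text_options + choice_options]
    urls = []
    # One candidate URL per combination of field values, capped at max_urls.
    for combo in product(*pools):
        urls.append(action + "?" + urlencode(dict(zip(names, combo))))
        if len(urls) >= max_urls:
            break
    return urls
```

For example, a search form with one text box and one two-value select yields four crawlable URLs: `candidate_urls("/search", ["q"], {"color": ["red", "blue"]}, ["widget", "gadget"])`.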
  • Just think! (Score:5, Funny)

    by scubamage (727538) on Wednesday April 16, 2008 @04:14PM (#23095774)
    Soon, they'll start injecting SQL too to help map databases! Google is so useful indeed! :)
    • by AKAImBatman (238306) <akaimbatman@gma i l . c om> on Wednesday April 16, 2008 @04:28PM (#23095906) Homepage Journal
      Hmm... that reminds me of this DailyWTF [thedailywtf.com]. Who knew that Mr. Test User was such a big customer? :-P
    • by CastrTroy (595695)
      Actually, we (the web) have had problems with this before. Web accelerators started following links on pages before you clicked them. If the link happened to be an action deleting something, it would delete it just by visiting a page with the delete link on it. Granted, you should never do anything destructive with a GET request, but now Google is starting to submit forms. I wonder how much stuff they will end up deleting with their program that automatically submits forms with values it thinks should work.
      • I've seen a number of users come crying in the mythtv forum that somehow all of their recordings mysteriously disappeared. Seems having your mythweb completely unsecured isn't such a good thing.

        For those people, this move by Google is great news. You see, the delete links were all simple GET requests, so the spiders were able to delete content. However, the scheduling is all done via POST'ed forms, so nothing would ever get recorded. This move on Google's part is really just an attempt to combat this.
      • Re:Just think! (Score:5, Interesting)

        by jc42 (318812) on Wednesday April 16, 2008 @10:08PM (#23099630) Homepage Journal
        I had similar problems a few years ago. The database had a lot of data in a compact format, and I wrote some retrieval pages that would extract the data and run it through any of a list of formatters to give clients the output format they wanted. Very practical. Over time, the list of output formats slowly grew, as did the database. Then one day, the machine was totally bogged down with http requests. It turned out that a search site had figured out how to use my format-conversion form, and had requested all of our data in every format that my code delivered.

        Google wasn't too bad, because at least they spread the requests out over time. But other search sites hit our poor server with requests as fast as the Internet would deliver them. I ended up writing code that spotted this pattern of requests, and put the offending searcher on a blacklist. From then on, they only got back pages saying that they were blacklisted, with an email address to write if this was an error. That address never got any mail, and the problem went away.

        Since then, I've done periodic scans of the server logs for other bursts of requests that look like an attempt to extract everything in every format. I've had to add a few more gimmicks (kludges) to spot these automatically and blacklist the clients.

        I wonder if google's new code will get past my defenses? I've noticed that googlebot addresses are in the "no CGI allowed" portion of my blacklist, though they are allowed to retrieve the basic data. I'll be on the lookout for symptoms of a breakthrough.
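The log-scanning gimmick described above can be approximated with a sliding-window counter (a sketch; the 60-second window, 100-request threshold, and log format are all assumptions):

```python
from collections import defaultdict, deque

def find_bursts(log_entries, window_seconds=60, threshold=100):
    """log_entries: iterable of (timestamp, client_ip) sorted by time.
    Returns the set of IPs that made more than `threshold` requests
    within any `window_seconds` window -- candidates for the blacklist."""
    recent = defaultdict(deque)   # ip -> timestamps inside the current window
    flagged = set()
    for ts, ip in log_entries:
        q = recent[ip]
        q.append(ts)
        # Drop timestamps that have fallen out of the window for this IP.
        while q and ts - q[0] > window_seconds:
            q.popleft()
        if len(q) > threshold:
            flagged.add(ip)
    return flagged
```

A client hammering 150 requests in 15 seconds gets flagged; one spread over several minutes does not.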

    • Or just deleting those databases in order to reduce the set of information it has to index. Google "Google Purge onion"! :P
  • Bright Planet's DQM (Score:4, Interesting)

    by eldavojohn (898314) * <eldavojohn@nOsPam.gmail.com> on Wednesday April 16, 2008 @04:15PM (#23095788) Journal
    Several years ago, I tried a demo of Bright Planet's Deep Query Manager [brightplanet.com] that would essentially do these searches through a client on your machine in batch-like jobs. Oh, the bandwidth and resources you'll hog!

    Their stats on how much of the web they hit that Google missed were always impressive (true or not), but perhaps their days are numbered with this new venture by Google.

    Quite an interesting concept if you think about it. I always presupposed that companies would hate it, but I never got 'blocked' from doing it to sites.

    Here, suck up my bandwidth without generating ad revenue! Sounds like a losing situation for the data provider, in my mind ...
    • Re: (Score:3, Interesting)

      You could build a really interesting "Deep Web" crawler by ignoring robots.txt. In fact, an index just of robots.txt files would be pretty cool in its own right. Call it "Sweet Sixteen" (10**100 in binary) or something.
      • Re: (Score:3, Interesting)

        by enoz (1181117)
        One time when I was Deep Crawling a particular website I decided to take a peek at their robots.txt file. To my amazement they had listed all the folders that they didn't want anyone to find, yet had provided absolutely no security to prevent you accessing the content if you knew the location.

        It's cases like that where doing a half-arsed job is worse than not trying at all.
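For the curious, harvesting those 'hidden' folders takes nothing more than reading the Disallow lines (a sketch; the sample robots.txt content is invented):

```python
def disallowed_paths(robots_txt):
    """Return the Disallow paths a robots.txt politely asks crawlers to skip --
    which is also a ready-made map of what the site operator wants hidden."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()        # drop trailing comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths

sample = """User-agent: *
Disallow: /private/
Disallow: /unreleased-products/   # oops
"""
print(disallowed_paths(sample))
```

This is exactly why robots.txt is advisory access control at best: it only works against clients that choose to obey it.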
    • The more content they have off your site, the more visitors they send.

      The visitors *do* generate ad revenue. :)
  • Oops... (Score:5, Funny)

    by JohnnyDanger (680986) on Wednesday April 16, 2008 @04:16PM (#23095790)
    They just bought everything on Amazon.
    • Re:Oops... (Score:5, Informative)

      by Bogtha (906264) on Wednesday April 16, 2008 @04:57PM (#23096200)

      This won't post forms of that sort. In the blog post, they say that they are only doing this for GET forms, which are safe to automate as per the HTTP specification.

      This is for things like product catalogue searches where you pick criteria from drop-down boxes. Not so common for run-of-the-mill e-commerce sites, but I've seen a lot on B2B sites.

      • Re: (Score:3, Funny)

        by Firehed (942385)
        HTTP spec be damned - has IE taught you nothing?
      • Re:Oops... (Score:5, Insightful)

        by orkysoft (93727) <orkysoftNO@SPAMmyrealbox.com> on Wednesday April 16, 2008 @06:17PM (#23097390) Journal
        Unfortunately, there are tons of sites whose developers did not understand the part about GET being for looking up stuff, and POST being for making changes on the server.
        • by jrumney (197329)

          Unfortunately there are also tons of sites whose developers did not understand the part about POST being for creating new resources, and PUT being for making changes on the server.

          HTTP verb semantics are a very dangerous thing for Google or any other third party to rely on, unless they are using a documented API where the developers have explicitly followed REST principles.
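The verb split being described, GET for reads, POST for creating, PUT for replacing, looks like this against a toy in-memory resource store (a sketch of the convention, not any particular framework's API):

```python
store = {}
next_id = [1]   # mutable counter for newly created resources

def handle(method, path, body=None):
    """Dispatch on HTTP verb the way the spec intends: GET is safe and
    idempotent, POST creates a new subordinate resource, PUT replaces
    the resource at the given path."""
    if method == "GET":                      # safe: crawlers may call this freely
        return store.get(path)
    if method == "POST":                     # creates a new resource under `path`
        new_path = f"{path}/{next_id[0]}"
        next_id[0] += 1
        store[new_path] = body
        return new_path
    if method == "PUT":                      # replaces the resource at `path`
        store[path] = body
        return path
    raise ValueError(f"unsupported method {method}")
```

Under this discipline a spider issuing only GETs can never create, change, or delete anything, which is precisely the property Google's crawler is relying on.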

          • by FooAtWFU (699187)
            What's it to Google (or a third party) if they mess up your pathetically-designed form? It's not like they're going to "accidentally purchase something" (like some people suggested) unless they have their robots equipped with billing information submission functions (somehow I doubt it).
            • Re: (Score:3, Interesting)

              What's it to Google (or a third party) if they mess up your pathetically-designed form?

              That depends. If they effectively launch a denial-of-service attack and eat zilliabytes of people's limited bandwidth by attempting to submit with all possible combinations of form controls and large amounts of random data in text fields, would that be:

              1. antisocial?
              2. negligent?
              3. the almost immediate end of their reign as most popular search engine as numerous webmasters blocked their robots?
              4. illegal?
              5. all of the above?
              • by rtb61 (674572)
                Well, that does bring up a point. Should you have to include extra coding in your HTML to block Google, or should Google only be allowed to deep search sites whose extra coding invites them in?

                Google is in a way saying that if you fail to properly secure your site, they have a right to data mine it and generate profits from your data. Perhaps, mind you, just perhaps, that really, legally, is not appropriate, and a legal investigation is required to clarify this.

          • by jlarocco (851450)

            HTTP is a documented API.

            What makes you think somebody who's just fucked up HTTP isn't going to go right ahead and fuck up "REST principles" while they're at it?

      • Do you think the DoD uses GET or POST for launching nuclear warheads? Is there a guideline about that?
  • by lastninja (237588) on Wednesday April 16, 2008 @04:17PM (#23095806)
    only half kidding
  • Forums? (Score:5, Funny)

    by fishybell (516991) <fishybell@hotmaCOLAil.com minus caffeine> on Wednesday April 16, 2008 @04:18PM (#23095814) Homepage Journal
    Well, I certainly hope that they put in some decent smarts to prevent it from making posts onto forums, blogs, /., etc.


    On the plus side, this should enable Google to get by the "Must be 18 to view" buttons ;)

    • Re: (Score:3, Informative)

      by brunascle (994197)
      As TFA states, it's only GET requests, not POSTs, so it would mostly be search queries.
      • by fishybell (516991)
        ...and porn. You can't forget the porn.
      • by MenTaLguY (5483)
        Unfortunately a lot of developers misuse GET requests for actions which modify state. (I suppose this'll teach them...)
        • by Bogtha (906264)

          The usual excuse for that is that they want a link — for aesthetic purposes, to put in an email, etc. If you're using a form anyway, those reasons disappear. I'm sure there are a few developers who screw this up, but it won't be anywhere near as common as the problems GWA uncovered.

    • Re: (Score:3, Funny)

      by spintriae (958955)
      Google's only 12 years old. It shouldn't be visiting those sites.
  • by Anonymous Coward on Wednesday April 16, 2008 @04:19PM (#23095828)
    I am just submitting this form to see what's behind it. PLEASE IGNORE ME.
  • This brings up a concern from the description.

    So Googlebot will come across a web page.
    It follows a link.
    The link leads to a page with a form.
    Googlebot fills out the form based on content already on the site.
    Googlebot clicks submit.
    Googlebot goes to the next page, and continues to follow links.

    The problem comes when that form was a post form like the one I am typing in right now for a forum, or some other type of form to create user-generated content. This makes it seem like Google will see the text box and input random content from the site, then post it.
    • Google indexes more than any other search engine by expanding the web themselves. It was moving too slowly for them.

      Really though, I don't think this will be a problem. People at Google are pretty smart and I'm sure they've thought of this. Even if you believe Google is evil, there's no evil corporate benefit to spamming garbled text to the entire Internet.
    • by mmkkbb (816035)
      They will use Markov chains which may end up sounding more intelligent than many forum denizens. Fark, Free Republic, LGF, etc. won't even notice.
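For reference, the word-level Markov chain being joked about really is only a few lines (a toy sketch):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length=10, seed=42):
    """Walk the chain from `start`, picking a random observed follower
    at each step, until `length` words or a dead end."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Train it on a forum's own posts and the output is locally plausible but globally meaningless, which, as the comment notes, may be hard to distinguish from the baseline.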
    • This makes it seem like google will see the text box and input random content from the site, then post it.
      No. Googlebot will only do gets, not posts.
    • I am tempted to copy and paste that and post it as my reply, but I think that would be insufferably clever. So, too, is referring to the fact that I could be insufferably clever, but choose not to be. Etc...
    • What keeps googlebot from becoming a nonsensical spambot? Yes, you can use nofollow, but there is such a huge quantity of web forms that don't have that now because they've never needed it. Retrofitting all of them web wide is not the most realistic of goals.
      The captcha or other anti-bot mechanism. Any forum that can't stop a "good" bot is going to have spam all over it anyway from the "bad" ones...
      • Re: (Score:3, Funny)

        by enoz (1181117)

        Any forum that can't stop a "good" bot is going to have spam all over it anyway from the "bad" ones...
        C'mon there's no point in Google launching a war against phpBB, there are more than enough spambots doing that already.

    • Re: (Score:3, Informative)

      by Z80xxc! (1111479)

      Seems to me it would be easy enough to detect the Googlebot user agent and, if detected, automatically redirect it to the page on the other end (or even send it to a random 404 page or something), all without processing the form data at all.

      <?php if ($_SERVER['HTTP_USER_AGENT'] == "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)") { header('Location: /landing_page.php'); } else { processtheform(); } ?>

      Of course, this would have to be implemented.

  • good and bad (Score:4, Insightful)

    by ILuvRamen (1026668) on Wednesday April 16, 2008 @04:20PM (#23095838)
    Well, first of all, it's about time they learned how to read advanced sites! If your site is dependent on input from the user to display content, you're basically invisible to Google. Now all they need is something to read text in Flash files and they've got something going. But on the other hand, this is almost auto-fuzzing, which could be considered hacking, and I bet they'll often get results they didn't intend and expose data that's supposed to be protected and private.
    • Re:good and bad (Score:5, Insightful)

      by QuoteMstr (55051) <dan.colascione@gmail.com> on Wednesday April 16, 2008 @04:40PM (#23096022)
      And should we not make any progress because we might step on a few toes while doing it? If Google can get into your uber-secret private database, so can a random user, or a random Russian cracker. Fix your damn site if you're worried about this particular attack.
    • Re:good and bad (Score:5, Insightful)

      by Bogtha (906264) on Wednesday April 16, 2008 @05:02PM (#23096264)

      Now all they need is something to read text in flash files and they've got something going.

      They've indexed Flash for about four years now.

      I bet they'll often get results they didn't intend to and expose data that's supposed to be protected and private.

      No doubt. There are a lot of clueless developers out there who insist on ignoring security and specifications time and time again. I have no sympathy for people bitten by this, you'd think they'd have learnt from GWA that GET is not off-limits to automated software.

    • expose data that's supposed to be protected and private
      Ugh, it's the friend class of the entire Internet!
  • Cracking your forms. Sorry, could not help myself.
  • robots.txt (Score:5, Funny)

    by B3ryllium (571199) on Wednesday April 16, 2008 @04:37PM (#23095986) Homepage
    Okay, so how long until the spec for robots.txt is updated to have a "DontBeStupid" directive?
  • by fahrbot-bot (874524) on Wednesday April 16, 2008 @04:38PM (#23095990)
    our computers automatically choose words from the site that has the form; for select menus, check boxes, and radio buttons on the form, we choose from among the values of the HTML...

    ...post invoice forms ordering expensive items to be shipped to Google. Be sure to log incoming IP addresses for verification.

  • While you can already have links that perform actions and change information, submitting forms is a good recipe for massive changes, from comment spam to anything; the sky is the limit.

    Now you can't see what is on the web, by crawling, without changing it.
  • by kiehlster (844523) on Wednesday April 16, 2008 @04:49PM (#23096118) Homepage
    If you haven't already noticed, AdSense has features now to tell Google how to log into your website so it can catalog your user-only pages. You know what that means. Porn sites are going to start using this so that Googlebot can confirm that its age is over 18. We'll be showered with a gigantic wave of pornographic information. We will soon have to press juvenile charges against a corporate entity because it lied about its age on web forms to gain access to pornography and forum discussions.
  • by frovingslosh (582462) on Wednesday April 16, 2008 @04:53PM (#23096160)
    Nevertheless, directions like 'nofollow' and 'noindex' are still respected, so sites can still be excluded from this type of search.

    Maybe they shouldn't be, at least not in all cases. Several years back I did many Google searches for some information that was very important to me, but never could find anything. Then a few months later (too late to be of use), by a fortunate combination of factors but with no help from Google, I came across the exact information on a .GOV website, in a publicly filed IPO document. As far as I can tell, our US government aggressively marks websites not to be indexed, even when they contain information that is posted there to be public record. When these nofollow directives are overused by mindless and unaccountable bureaucrats, perhaps someone needs to decide that these records should be public, and that this isn't best served by hiding them deep down a long list of links where they are hard to locate. In cases like this I would applaud any search engine that ignores the "suggestion" not to index public pages just because of an inappropriate tag in the HTML. In fact, if I knew of any search engine that was indexing in spite of this tag, I would switch to it as my first-choice search engine in an instant. For starters, I would suggest that any .GOV or state TLD website have this tag ignored unless there were a darn good reason to do otherwise.

    • Re: (Score:2, Insightful)

      by QuantumHobbit (976542)
      But they don't want you to find out that the moon landing was faked and that Jimmy Hoffa shot Kennedy while driving a car that runs on water. I agree with you. If you don't want people to know something, don't put it on the web. If you want people to know, put it on the web and let Google send the people to you. It's all bureaucracy inaction.
    • Re: (Score:3, Interesting)

      As far as I can tell, our US government aggressively marks websites not to be indexed, even when they contain information that is posted there to be public record.

      I'd mod you up if I had some points. I'm sure there are ethical implications or something when it comes to respecting the website owner's wishes not to index, but it's all public information anyway. If it's on the web and I can look at it, then Google should be able to look at it and index it.

      I had no idea that government sites don't allow themselves to be indexed. That is BULLSHIT. People often NEED information from .gov sites and ALL of it should be made easy to find.

      • by STrinity (723872)

        Is there a law saying that search engines MUST follow these robots.txt, nofollow, etc?
        No, only Internet standards. No need to follow those antiquated things. Google can become the search equivalent of IE.
        • by enoz (1181117)
          The search equivalent to IE.... so being the dominant player, using a feature-limited interface, and prone to leaking private information?

          I think Google is already there.
    • While I don't see Google doing it because of the backlash, I'm a bit surprised that no other search engine has touted ignoring "nofollow" and "noindex" as a "feature" in an attempt to look better than Google.
  • Wimps. Index it all, who cares if the site doesn't want it? If it's public-facing, it deserves to be indexed.
  • Fuzzing the world (Score:3, Insightful)

    by corsec67 (627446) on Wednesday April 16, 2008 @04:59PM (#23096222) Homepage Journal
    Sweet, now Google will be Fuzzing [wikipedia.org] the entire web.

    How will this work for forms that perform translations, validations and similar kinds of operations on other websites? Try to pull the entire internet through each such site it finds?

    And then, not every web development environment forces GET to not change data. In Ruby on Rails, adding "?method=post" to the end of a URL fakes a POST even though it is actually a GET; I disabled that at the company I work for. Not everyone is going to do that.
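One way to neuter the override corsec67 describes, short of disabling it entirely, is to trust it only when the transport verb is a real POST (a hypothetical sketch; `_method` follows the Rails parameter-name convention):

```python
def effective_method(transport_method, params):
    """Resolve the HTTP method a framework should act on.
    A `_method` override is only trusted when it rides on a real POST;
    a GET claiming to be a POST (e.g. `?_method=post` in a crawled URL)
    stays a GET, so spiders following links cannot trigger state changes."""
    override = params.get("_method", "").upper()
    if transport_method == "POST" and override in ("PUT", "PATCH", "DELETE"):
        return override
    return transport_method
```

The override exists because HTML forms can only emit GET and POST; restricting it to POST transports keeps the safety guarantee of GET intact.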
    • Re: (Score:3, Insightful)

      by Bogtha (906264)

      In Ruby on Rails, adding "?methond=post" to the end of a url fakes a post, even though it is actually a GET, which I disabled in the company I work for. Not everyone is going to do that.

      More precisely: Not everyone has been doing that. I'm sure when Google comes along and exposes all their bugs they will quickly take the hint.

      I don't really see the problem. The developers who know what they are doing, like you, won't be adversely affected, while the incompetent developers will have to scurry around fixing their sites.

  • For text boxes, our computers automatically choose words from the site that has the form


    And a few relevant URLs from helpful sponsors?

    Now you just need to hire a few sweatshop workers to get past those pesky captchas...
  • by arrrrg (902404) on Wednesday April 16, 2008 @05:12PM (#23096424)
    When I interned at Google, someone told me a funny anecdote about a guy who emailed their tech support insisting that the Google crawler had deleted his web site. At first, I think, he was told "Just because we download a copy of your site doesn't mean your local copy is gone" (à la the obligatory bash [bash.org] quote). But the guy insisted, and finally they double-checked and his site was in fact gone. Turns out it was a home-brewed wiki-style site, and each page had a "delete" button. The only problem was, the "delete" button sent its query via GET, not POST, and so the Google spider happily followed those links one by one and deleted the poor guy's entire site. The Google guys were feeling charitable, so they sent him a backup of his site, but told him he wouldn't be so lucky the next time, and that he should change any forms that make changes to POSTs -- GETs are only for queries.

    So, long story short, I wonder how Google will avoid more of this kind of problem if they're really going off the deep end and submitting random data on random forms on the web. Like the above guy, people may not design their site with such a spider in mind, and despite their lack of foresight this could kill a lot of goodwill if done improperly.
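The failure mode in that story is mechanical: a spider cannot tell a "delete" link from any other href. A small demonstration using Python's stdlib HTML parser (the wiki page markup is invented for illustration):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href on a page -- exactly the set a spider follows."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<a href="/wiki/Home">Home</a> <a href="/delete?page=Home">delete</a>'
collector = LinkCollector()
collector.feed(page)
# The destructive link is indistinguishable from the navigational one:
print(collector.links)
```

Both URLs land in the crawl queue; only the server-side choice of GET versus POST decides whether "following a link" is harmless.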
    • by Arimus (198136)
      end and submitting random data on random forms

      Sod worrying about zapping sites, what will happen when they crawl the nuclear launch site and enter random data into the authorisation field, and in a rare feat of sod's law end up getting the code just right....

      (oh and what's the betting they'll put redmond in as a target string?)
    • by RyoShin (610051)
      This could be the incident you speak of. [thedailywtf.com] :)

      (Or at least super similar.)
    • by sootman (158191)
      That happened to me on a database demo site that I did. The 'edit,' 'details,' and, yes, 'delete' buttons were just plain old text links. I posted the URL of the page to a mailing list, Google came in through that and methodically 'clicked' on each link, including the 'delete' ones. (There was even a confirmation page with 'Are you sure you want to delete this? _Yes_ or _No_' -- as links, of course.) I went to show it to someone one day and all the data was gone. It was just sample data, so no great loss.
  • In a few months, there'll be a new blog post - Google will attempt to access and index all sites' password-protected pages by matching usernames found elsewhere on the site (e.g. from email addresses) with intelligent guesses at passwords, based on information it's gleaned regarding those individuals. Failing that, it'll run through entries found in various cracker dictionaries.
  • In other news, (Score:2, Insightful)

    by mbstone (457308)
    Google has announced that Google Phones (beta) will soon unveil the results of its having wardialed all 6,800,000,000 U.S. telephone numbers. Visitors to the Google Phones site will be able to search individual phone numbers to determine (without personally dialing the number) whether the number belongs to a landline telephone, cell phone, fax, or modem.

    On phone numbers where a VMS is detected, Google plans to dial "#0#" and other codes in order to determine how to reach a human.

    "Since we are a big, rich e
  • Repeatedly querying to extract every permutation of their API could generate far more traffic than the underlying data itself (think of the combinatorics of only 5 query fields of only 5 values each, against only a couple of hundred values in the database, as at many small sites), and far too much traffic for small sites (and probably for big sites, too, if the combinations of queries at all match their traffic).
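The arithmetic here is easy to check: five query fields of five values each already yields 5^5 = 3125 distinct queries, an order of magnitude more requests than a few-hundred-row database even contains rows (the 300-row figure is an assumed example):

```python
# Exhaustively crawling a 5-field form with 5 choices per field:
fields = 5
values_per_field = 5
queries = values_per_field ** fields   # one request per combination of selections
rows_in_database = 300                 # assumed size of a typical small catalogue
print(queries)                         # 3125
print(queries / rows_in_database)      # more than 10 requests per stored row
```

And the growth is exponential: one extra field multiplies the request count by another factor of five.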

    What could be even better would be for sites that don't want that huge load just to have their data searchable.
  • The problem with their searching is a form like this one: http://quaker.org/users.cgi [quaker.org] It's *meant* to keep people out unless they've entered into a legal agreement.
    • Re: (Score:3, Informative)

      by dave420 (699308)
      That is a POST form, which Google have said they will not mess with.
    1. Set up a shopping cart which is lax on security and uses GET forms instead of POST forms
    2. Put one item in the shopping cart, a used tic tac box for 1 million dollars (it's a collector's item)
    3. Wait for the google bot to buy the tic tacs with the corporate credit card
    4. Profit!!!!
  • Title Correction (Score:2, Insightful)

    by awyeah (70462) *
    "Technology: Google fills your backend database with garbage"
  • ... they hit the Solar Dynamics Observatory database next year. It'll be collecting several petabytes of images...
  • until the google trawler starts making its own first posts.
