
The Man Behind Google's Ranking Algorithm 115

nbauman writes "New York Times interview with Amit Singhal, who is in charge of Google's ranking algorithm. They use 200 "signals" and "classifiers," of which PageRank is only one. "Freshness" defines how many recently changed pages appear in a result. They assumed old pages were better, but when they first introduced Google Finance, the algorithm couldn't find it because it was too new. Some topics are "hot". "When there is a blackout in New York, the first articles appear in 15 minutes; we get queries in two seconds," said Singhal. Classifiers infer information about the type of search, whether it is a product to buy, a place, company or person. One classifier identifies people who aren't famous. Another identifies brand names. A final check encourages "diversity" in the results, for example, a manufacturer's page, a blog review, and a comparison shopping site."
  • by Anonymous Coward on Sunday June 03, 2007 @09:53AM (#19371217)
    Pigeon Rank?
  • apple vs Apple (Score:1, Informative)

    The formulas can tell that people who type "apples" are likely to be thinking about fruit, while those who type "Apple" are mulling computers or iPods.

    Well, the results for both "apple" and "Apple" are identical for me (Apple Computer dominated), with the exception of the text in the ads on the right-hand side (which are both for Apple computers). Maybe they are doing other stuff (Linux users prefer computers over fruit?).

    Does anyone see anything different when they search for "apple" versus "Apple"?

  • Amit Singhal ... (Score:5, Informative)

    by WrongSizeGlass ( 838941 ) on Sunday June 03, 2007 @09:55AM (#19371235)
    ... is not to be confused with Amit Singh [kernelthread.com], who also works at Google and has authored an excellent book on Mac OS X, Mac OS X Internals [osxbook.com].
  • by dwater ( 72834 ) on Sunday June 03, 2007 @09:58AM (#19371261)
    > They use 200 "signals" and "classifiers," of which PageRank is only one.

    How many did they expect PageRank to be? In the words of someone immortal, "There can be only one."
    • Now, what if they cut out PageRank completely? Would their search results still be just as good?
      • Re: (Score:3, Interesting)

        by rtb61 ( 674572 )
        From the results I've been getting lately, they seem to be dropping PageRank in favor of how many times the words 'google adwords' appear on the page, or more precisely the code for generating them. Totally worthless pages, but obviously not worthless for Google's bottom line. This story obviously reflects one thing and one thing only: the growing public perception of the deteriorating quality of Google's results. Hence yet another marketing fluff piece to try to convince them it just ain'
  • I wish I could give google.ca a signal to return pages from North America.

    I'll search for a product and the first page of results will all be *.co.uk results.

    Not much use in that. It makes me think about how to rephrase the search, which is good.
    • You could add "site:.com" to the query. That might help.
      • Re: (Score:2, Informative)

        Actually, using -site:.co.uk would yield much better results, since he will then get everything except .co.uk instead of just .com.
    • What's wrong with specifying "pages from Canada" or typing "stuff to search site:com site:ca" in the search bar? Not a perfect solution, but it takes away all the co.uk stuff. Or -site:co.uk if those are the only ones bothering you.
    • Re: (Score:3, Funny)

      by aldheorte ( 162967 )
      If the UK sites in particular are the ones you want out of your search results, compare these searches on Google:

      digestives london

      digestives london -inurl:.uk
  • Feature Request (Score:5, Insightful)

    by rueger ( 210566 ) * on Sunday June 03, 2007 @10:13AM (#19371383) Homepage
    My ongoing gripe with Google is the number of times the first page is filled with shopping sites, "review" pages, and click-through pages that exist only to grab you on the way to where you really want to go.

    I would love a switch, or even a subscription, that would allow me to filter these usually useless types of pages and instead show me pages with real content.

    • Re: (Score:3, Funny)

      by Fred_A ( 10934 )
      Haven't had much trouble with the click-through sites, but when looking for information on anything that can potentially be sold (or even, as I recently experienced, has been sold in the not-too-distant past but hasn't been in the last five years), the shopping sites are a real problem.

      This item you're searching for hasn't been in inventory for 6 years since nobody makes it anymore. Would you like to read a review? Be the first to write one!

      Yay.
    • Try a more specific query, or try a query that excludes "review", "sale", "price", or whatever you like.

      I find that most queries give me what I want right away (eg paris hilton), and those that don't (eg lindsay lohan) do give me what I want after narrowing down the sites returned (eg lindsay lohan drunk car -herbie -vomit -intitle:"fan site").
    • by 2short ( 466733 )
      I'm fairly confident that the feature you want is one Google is trying very hard to provide. I doubt adding a switch somewhere is the problem.
    • Re:Feature Request (Score:5, Informative)

      by SilentStrike ( 547628 ) on Sunday June 03, 2007 @01:42PM (#19372959) Homepage
      This probably does what you want.

      http://www.givemebackmygoogle.com/ [givemebackmygoogle.com]

      It just excludes a whole lot of affiliate sites.

      This is part of the query it feeds to Google.

      -inurl:(kelkoo|bizrate|pixmania|dealtime|pricerunner|dooyoo|pricegrabber|pricewatch|resellerratings|ebay|shopbot|comparestoreprices|ciao|unbeatable|shopping|epinions|nextag|buy|bestwebbuys)
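
      For the curious, here is a minimal sketch (an illustration, not givemebackmygoogle's actual code) of how a front end like this might bolt such an exclusion list onto a query before handing it to Google. The trimmed domain list and helper names are invented for the example:

      from urllib.parse import urlencode

      # Trimmed version of the exclusion list quoted above.
      EXCLUDED = ["kelkoo", "bizrate", "dealtime", "pricerunner",
                  "pricegrabber", "ebay", "shopping", "epinions", "nextag"]

      def filtered_query(user_query):
          # Append one big -inurl:(a|b|c) clause to whatever the user typed.
          return user_query + " -inurl:(" + "|".join(EXCLUDED) + ")"

      def search_url(user_query):
          # Standard Google search URL carrying the filtered query string.
          return "http://www.google.com/search?" + urlencode({"q": filtered_query(user_query)})

      print(search_url("mp3 player review"))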
      • This (http://www.myserp.com/) probably does it better.

        It does this:

        In November 2004 we built the first version of MySERP. Our aim was to help us find more interesting things via search by simply taking out the websites we already knew about or weren't interested in. Instead of opting in to a set of websites or web pages, our theory is that it will be easier to just opt out of them by default: no more shopping comparison sites, no more affiliate link sites, no more pay-per-click ad sites.

        Pretty cool huh?

        monk.

    • Re:Feature Request (Score:5, Informative)

      by quiddity ( 106640 ) on Sunday June 03, 2007 @01:51PM (#19373039)
      Firefox extension: http://www.customizegoogle.com/ [customizegoogle.com] lets you filter out URLs from the results (plus dozens of other useful things).

      You can filter out Wikipedia mirrors (using that extension) with the list here: http://meta.wikimedia.org/wiki/Mirror_filter [wikimedia.org]
    • I would love a switch, or even a subscription, that would allow me to filter these usually useless types of pages and instead show me pages with real content.

      Ditto for Google News. I'd love to click something and have all the worthless blogs trying to pass for journalism disappear from the results.

      Even worse is that Google News gives high rankings to some "news" web sites that merely steal the content of other sites and then re-publish it as their own. I'm not talking about link aggregators like Fark

    • The article summary even implies that the reasons for not providing a filter to remove shopping sites are not technical:

      A final check encourages "diversity" in the results, for example, a manufacturer's page, a blog review, and a comparison shopping site."

      So if they have an algorithm to ensure that the results contain a good mix including comparison shopping sites, doesn't that imply that they could technically provide exactly the kind of switch that the parent poster asked for, i.e. to exclude those comparison shopping sites?

    • by Tarqwak ( 599548 )

      My ongoing gripe with Google is the number of times when the first page is filled with shopping sites, "review" pages, and click through pages

      Then create your own Google Custom Search Engine [google.com] or use an existing one such as Google Search Excluding Shops [rapla.net], which excludes 700+ hand-picked shopping and spam sites and gives a ranking boost to 160+ websites of IT and other electronics companies.

    • by slacka ( 713188 )
      Agreed. I hate how these useless sites come up when I'm looking for computer hardware reviews.
    • Get Google Search History. It remembers your searches and what you've clicked on and will try to tailor your results to you. Now when I search for anything Java I get Sun's stuff coming up first; when Wikipedia has an article on anything I search for, it's in the first 5 results; if I search for a piece of hardware, I'll get pages on Linux support for said hardware first. It's not perfect (if you search for something you don't usually search for, you're back with all the junk), but it works quite well.

      The tinfoil hat
  • by Xoq jay ( 1110555 ) on Sunday June 03, 2007 @10:14AM (#19371387)
    Pagerank is the source of all wisdom in google... but there is so much more... Like string searching & matching algos, file searching.. you name it.. Just the other day I was searching for books about Google's algorithms... I found zero interesting stuff.. They keep their algorithms secret and out of the public domain... (like they should..). we praise Pagerank, but if we knew what other stuff is there, we would all be members of Church of Google (http://www.thechurchofgoogle.org/) :P
    • Why do so many people so strongly believe that Google needs to keep their page ranking algorithm secret? Couldn't the argument be made that keeping their algorithm secret is analogous to security through obscurity? I don't have a strong opinion one way or another, and maybe I'm missing some simple reason that invalidates this comparison. Perhaps people just feel that it's impossible to come up with a ranking algorithm that can't be cheated without using obfuscation?
    • How does it work (Score:5, Informative)

      by Anonymous Coward on Sunday June 03, 2007 @03:11PM (#19373743)
      It is rather simple (I am an insider).

      Google breaks pages into words. Then, for every word it keeps a set which contains all the pages (by hash ID) that contain that word. A set is a data structure with O(1) lookup.

      When you search for "linux+kernel", Google just does a set intersection on the two sets.

      Now a "word" is not just a word. In google sees that many people use the combination linux+kernel, a new word is created, the linux+kernel word and it has a set of all the pages that contain it. So when you search for linux+kernel+ppp we find the union of the linux+kernel set and the "ppp" set.

      So every time you search, you help Google create new words. And this is part of the power of this search engine. A new search engine would need some time to gather that empirical data.

      Of course, there are ranks of sets. For example, for the word "ppp" there are, say, two sets: the pages of high rank that contain the word ppp, and the pages of low rank. When you search for ppp+chap, first you get the intersection of the high-rank sets of the two words, etc.

      Now page rank has several criteria. Here are some:
      well-ranked site/domain, linked by a well-ranked page, document contains relevant words, search term is in the title or URL, page rank not lowered by a Google employee (level 1), page rank increased, etc.

      It is not very difficult actually.

      (posting AC for a reason).
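
      A minimal sketch of the scheme described above (a reconstruction for illustration, not Google code): an inverted index maps each word to the set of page IDs containing it, an AND query is a set intersection, and popular query combinations get materialized as "words" of their own. The names and the popularity threshold are invented:

      from collections import defaultdict

      index = defaultdict(set)        # word -> set of page IDs
      pair_counts = defaultdict(int)  # how often each merged query is seen

      def add_page(page_id, text):
          for word in text.lower().split():
              index[word].add(page_id)

      def search(*words):
          # If a merged token already exists for this query, use it directly.
          merged = "+".join(words)
          if merged in index:
              return index[merged]
          result = set.intersection(*(index[w] for w in words))
          # Record the combination; once popular, promote it to a "word".
          # (A real system would rebuild these offline to avoid staleness.)
          pair_counts[merged] += 1
          if pair_counts[merged] > 100:  # arbitrary popularity threshold
              index[merged] = result
          return result

      add_page(1, "linux kernel ppp howto")
      add_page(2, "linux desktop themes")
      print(search("linux", "kernel"))  # -> {1}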
      • That's just awesome.. I never read that anywhere

        that is cleverly simple actually!

        well explained

        Thank you!
  • by Timesprout ( 579035 ) on Sunday June 03, 2007 @10:15AM (#19371401)

    "Search over the last few years has moved from 'Give me what I typed' to 'Give me what I want,'" says Mr. Singhal.
    So this is why all my results are links to lesbian porn regardless of what I search for.
  • by Anonymous Coward on Sunday June 03, 2007 @10:24AM (#19371463)
    One of the most annoying things about Google for me is how it interprets queries with strange characters common to almost all programming languages. A Google search for "ruby <<" returns no results related to the Ruby append operator. A simple search for "<<" by itself returns ZERO results.
    • by Dun Malg ( 230075 ) on Sunday June 03, 2007 @11:05AM (#19371769) Homepage

      One of the most annoying things about Google for me is how it interprets queries with strange characters common to almost all programming languages. A Google search for "ruby <<" returns no results related to the Ruby append operator. A simple search for "<<" by itself returns ZERO results.
      Yes, well, you see, that's a problem common to most search systems. Non-alphanumeric characters tend to be reserved for search logic. It would indeed be nice if there were a way to force literals into the search terms, but for now we just have to make do the way we always have: search for ruby append [google.com] instead, or (if you don't know what it's called) search for ruby string operators [google.com] and find out.
      • by Animats ( 122034 ) on Sunday June 03, 2007 @11:11AM (#19371815) Homepage

        Yes. Try to find information on the web about the language "C+@". It's real, and it was developed at Bell Labs some years ago back in the Plan 9 era, but it's unsearchable.

        • So how does Google know to tailor its results for C [google.com], C++ [google.com], and C# [google.com], which all return results specific to the requested language, but not for C+@ [google.com]?
          • by Animats ( 122034 )

            So how does Google know to tailor its results for C, C++, and C#, which all return results specific to the requested language, but not for C+@?

            Manually implemented special cases, perhaps. Or Google may not consider the possibility that "@" can be part of a word, which is likely.

          • Re: (Score:2, Interesting)

            by Spy Hunter ( 317220 )
            This is an interesting question that I've often wondered about. It's possible that Google programmers simply went in and special-cased C++ and C#, but I personally think that Google has an automated process which notices that "C++" and "C#" occur frequently in both web pages and queries, and then automatically adds them to the list of "strange" tokens to index.
        • Try allintitle: worked for me! [louisiana.edu] It was on the first page. (Well, one link away; however, the text "C+@" [louisiana.edu] _was_ in the description text.)
          Also try calico (the language's other name).
      • Non-alphanumeric characters tend to be reserved for search logic.

        True, but I'd hope that at least using quotation marks to search for phrases would also include special characters.

        I mean, there can't be any search logic inside quotes anyway; that would be part of the phrase.
        E.g. "Apples or oranges" won't search for either apples or oranges, but for the actual phrase.
      • by zobier ( 585066 )
        Google Code Search lets you search using regular expressions -- but only within code, not the whole web, AFAIK.
    • Re: (Score:2, Insightful)

      by Blikkie ( 569039 )

      One of the most annoying things about google for me is how it interprets queries with strange characters common to almost all programming languages.

      You should try google code search [google.com].

    • Re: (Score:3, Insightful)

      by drix ( 4602 )
      I have the same problem. But if you're searching for actual code, you're better off using a code search engine [koders.com]. Or as others have pointed out, search "ruby append operator" if you're interested in the concept.
    • google code [google.com] doesn't discriminate against punctuation characters. (You can even do a regex search).
  • One search feature (Score:5, Interesting)

    by Z00L00K ( 682162 ) on Sunday June 03, 2007 @10:27AM (#19371493) Homepage Journal
    that has been lost was the "NEAR" keyword that AltaVista used to support. I found it rather useful.

    This could allow for better search results when using, for example, "APPLE NEAR MACINTOSH" or "APPLE NEAR BEATLES".

    Ho hum... Times change, and not always for the better...

    • Clusty [clusty.com] does something similar. Searching for "Apple" will show categories for OSX and fruit, for instance.
    • I think "NEAR" is implied with Google. That is to say, if you search for "apple macintosh", pages with those two terms in close proximity will rank higher than pages which simply contain the terms. Since Google's exact algorithms are proprietary, I cannot swear to this, but that seems to be the way it behaves in my own use.

      What I miss from AltaVista is the ability to use grouping to set precedence, i.e., parentheses. I don't have to do this very often, but when I do, I really miss it. The need generally
      • This is definitely not always the case. I've had this problem a few times recently: the first page or two of results is a mix of a few useful sites and a lot of sites that happen to contain the two words, but on unrelated parts of the page. I have to dig through the results to find what I need, especially if the useless sites are very popular ones and the ones I want are more obscure.
    • A way to get that (Score:3, Informative)

      by i kan reed ( 749298 )
      Wildcards in strings: "apple * macintosh" will return pages with the word macintosh shortly following apple. Not reversible, but still quite useful for that kind of search.
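
      To illustrate what a NEAR operator does under the hood, here is a toy sketch (assumptions mine; it has nothing to do with AltaVista's or Google's actual implementations) that matches when two terms occur within k words of each other:

      # Toy NEAR: true when terms a and b occur within k words of each other.
      def near(text, a, b, k=3):
          words = text.lower().split()
          pos_a = [i for i, w in enumerate(words) if w == a]
          pos_b = [i for i, w in enumerate(words) if w == b]
          return any(abs(i - j) <= k for i in pos_a for j in pos_b)

      print(near("the apple macintosh shipped in 1984", "apple", "macintosh"))            # True
      print(near("apple pie and other baked desserts macintosh", "apple", "macintosh"))  # False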
    • > "NEAR" keyword

      Isn't that what the single quote (') construct is for: 'widget offbeat'
  • by rbarreira ( 836272 ) on Sunday June 03, 2007 @10:38AM (#19371569) Homepage
    Does the algorithm account for the toilet seat's position?
  • by polarbeer ( 809243 ) on Sunday June 03, 2007 @11:27AM (#19371905)
    One interesting thing about the article was the down-to-earth lack of abstraction in the problems described, such as the teak patio palo alto problem. Other search engines brag about their web-filtered-by-humans approach, as opposed to the "cold" algorithmic approach of Google. But it turns out Google is pretty human too, only with higher ambitions of creating generalizations from the human observations.
  • If only they could solve googlebombing on news.google.com by bloggers with right-wing agendas. The left-wing agendas seem to be gone already, for some reason.
  • I find it extremely annoying that Google indexes blogs.
    Blogs are read only by bloggers and the press, and are of absolutely no interest to normal people (including me). Currently, because of Google's idiotic blog fetish, I have to eliminate 50% of the results just based on URLs, hoping that I won't stumble upon someone's personal ramblings. Blogs became popular only due to Google's absolutely unexplainable love of blog content and its sticking it into perfectly normal search results; it's like searching in a
      Blogs are read only by bloggers and the press, and are of absolutely no interest to normal people (including me).

      Considering that you're reading a blog, I think it's pretty clear that you're only counting web pages that you think suck as blogs... so of course you don't like the results. Amazingly, no one is willing to tag their blog as "Shohat will think this sucks, so please don't search me."
      • Re: (Score:2, Insightful)

        by Shohat ( 959481 )
        Slashdot is as much of a blog as I am an Egyptian gerbil. Slashdot links to stories that generate discussions. Slashdot is NOT about the people who create the posts, but about the people who comment here.
        • Slashdot is very much a blog, which is just a chronologically arranged web page. You're really bitching about personal home pages, which used to exist as regular ole' web pages but are now blogs, because blogs are easier to set up (no HTML required) and because their chronological nature works very well for journals.

          If blogs didn't exist we'd just have more geocities pages getting lots of links.
  • I'd like to know how they transform their queries before running them against the index. I.e. how they decide whether they should throw out the "stop" words (most prepositions, some verbs, some nouns) or keep them, whether they should throw in an alternative spelling or synonym, whether they should throw in a semantically related word or two to increase recall (this is evident when you search for something and get related words highlighted in the results), when to stem and when not to stem.

    Those are the thi
    • by martin-boundary ( 547041 ) on Sunday June 03, 2007 @09:42PM (#19376743)
      Read the article; it gives a pretty clear picture of what's going on if you're a little familiar with classification ideas, e.g. bagging, boosting, etc. Don't read further if you're familiar with those terms.

      A classifier is a black box which takes some data as input and computes one or more scores. The simplest example is a binary classifier, say for spam. You feed it some data (e.g. an email) and you get a score back. If it's a big score, the classifier thinks it's spam; if it's a small score, it's not spam. More generally, a classifier could give three scores to represent spam, work, and home, and you could pick the best score to get the best choice.

      So you should really think of a classifier as a little program that does one thing really well, and only one thing. For example, you can build a small classifier that checks whether the input text is English or Russian. That's all it does.

      Now imagine you have 100 engineers, and each engineer has a specialty, and each builds a really small classifier to do one thing well. The logic of each classifier is black boxed, so from the outside it's just a component, kind of like a lego brick. What happens when you feed the output of one lego brick to the input of another lego brick?

      Say you have three classifiers: an English spam recognizer, a Russian spam recognizer, and an English/Russian identifier. You build a harness which uses the English/Russian identifier first and then, depending on the output, connects the English spam recognizer or the Russian spam recognizer.

      Now imagine a huge network with some classifiers in parallel and some classifiers in series. At the top there's the query words, and they travel through the network. One of the classifiers might trigger word completion (ie bio -> biography as in the article), another might toggle the "fresh" flag, or the "wikipedia" flag etc. In the end, your output is a complicated query string which goes looking for the web pages.

      The key idea now is to tweak the choice thresholds. To do that, there's no theory. You have to have a set of standard queries with a list of the outputs the algorithm must show. Let's say you have 10,000 of these queries. You run each query through the machine, and you get a yes/no answer for each one, and you try to modify the weights so that you get a good number of correct queries.

      Of course you want to speed things up as much as possible: you can use mathematical tricks to find the best weights, and you don't need to go fetch the actual pages if your output is a query string (you just compare it with the expected query string), etc. But that would depend on your classifiers, the scheme used to evaluate the test results, and how good your engineers are.

      The point is that there's no magic ingredient; it's all ad hoc. Edison tried hundreds of different materials for the filament of his light bulb. Google is doing the same thing, according to the article. What matters for this kind of approach is a huge dataset (i.e. bigger than any competitor's) and a large number of engineers (not just to build enough components, but to deprive its competitors of manpower). The exact details of the classifier components aren't too important if you have a comprehensive way of combining them.
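
      A minimal sketch of the "lego brick" composition described above (an invented illustration; the classifiers, routing, and threshold sweep are stand-ins, not anything Google has published):

      def language_id(text):
          # Toy stand-in classifier: positive score means "looks English".
          return sum(ch.isascii() for ch in text) / max(len(text), 1) - 0.5

      def english_spam(text):
          # Each brick does exactly one thing.
          return 1.0 if "viagra" in text.lower() else 0.0

      def russian_spam(text):
          return 1.0 if "виагра" in text.lower() else 0.0

      def is_spam(text, threshold=0.5):
          # Series composition: route through the language identifier first.
          scorer = english_spam if language_id(text) > 0 else russian_spam
          return scorer(text) >= threshold

      # Tuning in the comment's sense: sweep the threshold against a small
      # labelled reference set and keep the value that gets the most right.
      reference = [("cheap viagra here", True),
                   ("kernel patch notes", False),
                   ("дешевая виагра", True)]
      best_correct, best_t = max(
          (sum(is_spam(q, t) == label for q, label in reference), t)
          for t in (0.25, 0.5, 0.75))
      print("best threshold:", best_t)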

      • And the thing that I want to know is how they evaluate the results. I actually do research in this space right now, and by far the most painful thing is evaluation of results. We have a system that automates most of the work, but there's still a lot of human involvement, and this limits the input dataset size and speed with which we can iterate the improvements.
        • by martin-boundary ( 547041 ) on Sunday June 03, 2007 @11:25PM (#19377367)
          Good question. I agree with you that the article doesn't say anything valuable in this respect :(

          When you say that your system is limited by human involvement, I presume you mean that implementing new features can have serious impact on the overall design (and therefore on testing procedures)? Feel free to not answer if you can't.

          One thing I found interesting in the article is that Google's system sounds like it scales well. It reminded me of antispam architectures like Brightmail's (if memory serves), which have large numbers of simple heuristics which are chosen by an evolutionary algorithm. The point is that new heuristics can be added trivially without changing the architecture. I think their system used 10,000 when they described it a few years ago at an MIT spam conference. Adjustments were done nightly by monitoring spam honeypots.

          I'd love to see better competition in the search engine space. I hope you succeed at improving your tech.

    • Dude, most of the things he talked about are taught in any decent web search or machine learning course. He is not disclosing any secrets, and PageRank is actually a 5-day homework assignment, not a life's work. Google has gone far beyond PageRank; PageRank is just the dummy Google likes to wave about so that people are busy trying to beat PageRank and not their real classifiers. And classifiers are a dime a dozen. Tying them together with efficient network and database resources is Google's key contribution.
  • From TFA:
    >>A search-engine tweak gave more weight to pages with phrases like "French Revolution" rather than pages that simply had both words.

    So, now search engines are giving more importance to connected words rather than scattered words. How refreshing!
  • Come now, everyone knows there's no man behind Google's page rank. It's handled entirely by an army of birds.

    http://www.google.com/technology/pigeonrank.html [google.com]
  • by aldheorte ( 162967 ) on Sunday June 03, 2007 @02:34PM (#19373403)
    Not sure about this:

    "Google rarely allows outsiders to visit the unit, and it has been cautious about allowing Mr. Singhal to speak with the news media about the magical, mathematical brew inside the millions of black boxes that power its search engine."

    I could see tens of thousands, maybe hundreds of thousands, but millions?
    • by mestar ( 121800 )
      I don't see any problem. Google's computers are powered by millions of tiny black rectangular box-shaped batteries.
    • I could see tens of thousands, maybe hundreds of thousands, but millions?

      It's in Google's interest to have competitors think of it as bigger than it is.

      So, if they count each IC on a mobo or drive controller, they probably do have millions of black boxes at Google, literally.

      Alternately, they could be talking about algorithms, instances thereof, etc., though I like the black IC's better.
    • Re: (Score:3, Informative)

      by asninn ( 1071320 )

      This [baselinemag.com] is from a year ago (July 2006):

      Google runs on hundreds of thousands of servers--by one estimate, in excess of 450,000--racked up in thousands of clusters in dozens of data centers around the world.

      If this figure is accurate, a million boxen nowadays doesn't seem out of reach.

  • "But last year, Mr. Singhal started to worry that Google's balance was off. When the company introduced its new stock quotation service, a search for "Google Finance" couldn't find it. After monitoring similar problems, he assembled a team of three engineers to figure out what to do about them."

    But then they changed the algorithm, and now the Google Finance site is at the top.

    • Are you saying you don't think the official website of a product should come back as the first result in a search for that product?
      • by mestar ( 121800 )
        I guess I'm saying that Google should not change its algorithm just to boost their own rankings.
        • by shird ( 566377 )
          Exactly... they wouldn't bend over backwards to change their algorithm when someone else's product doesn't rank first for a search, especially if it was just new. Only when it's their own do they think 'something needs to change'.

          If they just gave it a few months, people would link to it, it would get older, etc., and its ranking would improve over time. That is the stock response they would give to anyone else who complained. I don't know why they think their algorithm has to list their product first overnight.
    • by mwvdlee ( 775178 )
      Which is what you'd expect if your search query was "google finance", as the article states.
  • One of the New Yorkers munched on cake.
  • I find it frustrating when I am searching for free market data, often available in the form of press releases or summaries of whitepapers: things such as the size of a particular software or appliance market.

    When I search, Google usually gives me information from 2001, 2002, or 2003, and it is hard to tell it I want only data from 2006/2007. The problem is that the sites that end up in the search results constantly refresh the ads and links around their old stories, which makes Google think they're fresh.

    This was not

"I've finally learned what `upward compatible' means. It means we get to keep all our old mistakes." -- Dennie van Tassel

Working...