Search Engine Results Relatively Fair

perkr writes "The Economist and PhysicsWeb report on a study from Indiana University claiming that search engines have an egalitarian effect that gives new pages a greater chance to be discovered, compared to what would be the case in the absence of search engines. Based on an analysis of Web traffic and topology, this result contradicts the widely held 'Googlearchy' hypothesis according to which search engines amplify the rich-get-richer dynamics of the Web."
This discussion has been archived. No new comments can be posted.
  • google good (Score:5, Insightful)

    by seanadams.com ( 463190 ) * on Saturday November 19, 2005 @03:31AM (#14069557) Homepage
    First of all any time you want to analyze Google, you have to realize that they've had ten PhDs crunching the problem already for years. Google is designed to give the best results for whatever its users are searching, thus any apparent bent towards egalitarianism, monopolism, antidisestablishmentarianism, or what-have-you, is purely incidental.

    If you're searching for something obscure, Google will instantly tell you the one startup company building it. On the other hand, if you want something mainstream, they'll give you a prioritized list of the best sources. There's no ulterior motive, it seems - they just give you what you searched for... imagine that! I've seen a business through from obscure geek hack to the mainstream consumer, and Google has been there at every step of the way, working exactly as users expect. To accuse them of favoring any particular stratum of that chain is awfully unfounded IMHO unless there are some specific examples. Indeed, answering users' needs instead of pandering to the status quo seems to be the most valuable bit of what Google does.
    • Re:google good (Score:3, Insightful)

      by cperciva ( 102828 )
      any time you want to analyze Google, you have to realize that they've had ten PhDs crunching the problem already for years

      First, I'm not sure that this statement is even true; sure, Google has lots of PhDs, but whether as many as ten of them are actually doing research about searching is not so clear. Managing researchers is even harder than managing programmers.

      Second, not all PhDs are created equal. Some do brilliant research both as graduate students and thereafter; others barely manage to achieve a de
    • It makes no attempt to filter spam, which, like email, will soon account for about 80% of content.

      Try this search for Tartfuel, once a local band: http://www.google.co.uk/search?q=tartfuel [google.co.uk]. When Google claims to have 28,600 results, in fact there are only 36. Now that's a con. When I give search advice to look through all the results, people look at me and say "but there's millions". Never: if you're doing a specific search, Google won't even display a tenth page (which is the max).

      So, of the 36 results, how many are
      • by baadger ( 764884 ) on Saturday November 19, 2005 @05:41AM (#14069795)
        Too true. How about some serious search innovation from Google?

        - Effective (but switchable) web spam filtering, as parent mentions.
        - The ability to search for strings like "-x flags" (note the quotes) and actually get meaningful results.
        - More complex patterns (mathematical expressions, anyone?)
        - Sort search results by the date pages were modified or the date they were discovered by Google (useful when you're looking for the latest information on a topic).
        - Semantics-sensitive search bots.
        - Better results for the filetype: operator. Why can't Google index all major filetypes even if it can't make them searchable?

        Anyone got any others?

        Google could be working constantly behind the scenes on their engine but perhaps they should start making more noise about it. When was the last time Google's web search engine trod some new ground? Or any search engine for that matter (I refer to Google because they are 'innovating' so much).
        • I use it all the time and I'm constantly frustrated by clicking on an image or link and then getting a page that tells me I don't have permission to access it. Could they fix the search engine so such images are excluded? That's one area in which all search engines seem to be lacking, so far as I know.
      • It makes no attempt to filter spam, which, like email, will soon account for about 80% of content.

        Not true at all. Insider screenshots of Google's special internal interfaces show that they actually have a human-driven spam-filtering service. It basically displays a page at a time and has the rater score how likely they think it is to be spam. I can't remember where I saw the screenshots, so I can't find them.

        When Google claims to have 28,600 results, in fact there are only 36. Now that's a con.

        Di
      • Funny. I find only 14 results. What does this mean? Did Google suddenly weed out those 20 spam results? Or is someone over there monitoring this site and removing results especially for /. readers?
      • Nice try.... but trying the same search "tartfuel" in MSN gave me 4,000 entries and they DID show all of them. So I'm unsure of your position here. It appears you are unhappy that when you put in this mostly unknown search criterion, Google came back w/ 30 results, 1/3 of which you admit were good - and you believe there is a problem? Frankly, I'm impressed w/ how good the results were... you may consider choosing a clearer example next time - perhaps one that il
      • I suppose the future belongs to social search engines like the new Swicki project.
    • by Analogue Kid ( 54269 ) on Saturday November 19, 2005 @04:31AM (#14069679) Homepage
      I've made sites with fairly mainstream content before, which were totally ignored by Google. But then, I put an article on my blog [blogspot.com] about the history of a certain group of elite English schools in Taiwan. Previously, this information had not been on the internet anywhere. Now, if you type the name of the original school of that group (Modawei) into Google, my article [blogspot.com] comes up #1.
      • by DeadSea ( 69598 ) * on Saturday November 19, 2005 @06:34AM (#14069920) Homepage Journal

        I've made sites with fairly mainstream content before, which were totally ignored by Google

        That is precisely what the "rich get richer" effect is about. This study seems to be measuring the wrong thing. Of course your mainstream site is going to get a few hits from Google because your site mentions something in some quirky way that other sites don't. However, because there are already 10,000 sites about what you have written, you will never get into the top ten search results. Google puts sites near the front of the SERPs because they have lots of incoming links. Sites that are in the top will get a lot more traffic, and some percent of that traffic links to them. Sites at the bottom get few new incoming links.

        Yes, those few visitors that you are getting from Google are more visitors than you would get if Google did not exist, but that says nothing about the relative number of visitors that your competitors are getting.
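
        To see how strong that feedback loop is, here is a toy preferential-attachment simulation in Python (a sketch with made-up parameters, not anyone's actual ranking code): each new link attaches to a site with probability proportional to the links it already has.

            import random

            def simulate(n_sites=100, n_new_links=10_000, seed=42):
                """Toy rich-get-richer model: every new link attaches to a
                site with probability proportional to its current link count."""
                rng = random.Random(seed)
                links = [1] * n_sites  # every site starts with exactly one link
                for _ in range(n_new_links):
                    # weighted choice = preferential attachment
                    site = rng.choices(range(n_sites), weights=links, k=1)[0]
                    links[site] += 1
                return sorted(links, reverse=True)

            counts = simulate()
            print(f"top 5 of 100 sites hold {sum(counts[:5])} of {sum(counts)} links")

        Even though all 100 sites start identical, a handful end up with a large share of the links purely through compounding luck.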

        • by enjo13 ( 444114 ) on Saturday November 19, 2005 @09:35AM (#14070341) Homepage
          Yes, but doesn't that also give you a chance to 'build' that content into a front-page resource? That's the point... Google isn't making the rich richer, it's making (assuming the algorithm is sound) the most useful richer. It SHOULD be difficult to displace a highly useful site for a particular topic from the front page.

          The issue, of course, is how we measure how useful content is. Since computers currently aren't that good at analyzing the actual content, we have to rely instead on other metrics, such as popularity, number of links, referrals, and whatever other madness Google is currently using. It may not be optimal, but it's certainly much better than other systems we may have. Being on the front page of Google for a mainstream subject is certainly rewarding. However, it is still POSSIBLE to displace a page by increasing the visibility of your content organically (such as getting it into the blogosphere) and thus eventually moving yourself onto that highly valuable first page.

          For proof of the process you only need to look at the various lawsuits filed against Google by companies/individuals who saw their page moved from the front by other, more useful sites. I think that Google is a highly valuable tool that brings a lot of order to an otherwise chaotic web.
            Being on the front page of Google for a mainstream subject is certainly rewarding. However, it is still POSSIBLE to displace a page by increasing the visibility of your content organically (such as getting it into the blogosphere) and thus eventually moving yourself onto that highly valuable first page.

            Well said. There is no promise that your site will get to the top of the search pile because you made it. If you are basing yourself on popular things, all you really need to do is do something that is
              There is no promise that your site will get to the top of the search pile because you made it.

              More to the point, there is no promise that your site will get to the top of the search results if your site is the most useful. The reason for this is the absolute, incontrovertible truth that:

              most-linked != most-useful

              The newest, least-linked site Google has may, in fact, be the most useful result. Since Google does not (cannot, apparently... so much for those PhDs) evaluate the site for its actual i

          • The biggest problem as I see it is not in any particular search engine itself (although I personally think Google is best), but once again, human nature.

            There are simply an enormous number of websites that have no purpose other than to trick people into visiting (pay per click scams, etc) or trick search engines into making some other page sit higher in the rankings. This isn't a problem created by search engines. This is a problem created by assholes, while some companies like Amazon make the problem wor
      • Search for Ikioi on Google, MSN, Yahoo, take your pick. (Not yet an entry in H2G2.) I'm obscure and #1! Hmm, not sure if that was a fair trade off...

        I think there is a fairly straightforward relationship between rating and specialization, and it has everything to do with competition. And obscurity is the best way to avoid competition. For instance, the top results are still funny for something so utterly obscure as "French Military Victories".
    • First of all any time you want to analyze Google, you have to realize that they've had ten PhDs crunching the problem already for years.

      Sorry, but the next step is that you need to realize that there are MILLIONS of people with a vested interest in making money by trying to 'game' the system. Over the past year, I have found search engines less and less useful. More often, the top results for many items are 'proxy' sites that aim to make money on ads. Somehow through link sharing or manipulating

    • To quote you Sean,

      "To accuse them of favoring any particular stratum of that chain is awfully unfouned IMHO unless there are some specific examples."

      Yeah, unless you're one of the billion people in China.
  • I can't complain. (Score:3, Informative)

    by jessecurry ( 820286 ) <jesse@jessecurry.net> on Saturday November 19, 2005 @03:44AM (#14069591) Homepage Journal
    I've had worldwidewingtour.com live for about 3 days and I have a good google ranking. Even a search like "hooters wing tour" places me at number 7 on google.
    • google "ranks" on a 0-10 scale. you have a 0. anything under 4 is not a "good ranking". whatever searches you're getting good search placement on (which is different thank pagerank.. PR is only one of a few factor in a search) must not be highly contested searchs heh.
      • I'm 7 out of 260,000... it's a lot better than it could be.
        • Why is google serving up ads for concert tickets(U2/Aerosmith) on the site? Doesn't seem too relevant. Just hit me, must be the word "tour". Well I clicked a few ads, good luck on the tour.
          • thanks :)
            I tried to get a little more content on the container page, but it seems that the title holds more weight than the keywords; perhaps my frame settings should be different for the container.
            I hope to get a lot of people interested in what I plan to do, I know I'd read about it... I just hope others feel the same.
      • by arrrrg ( 902404 )
        Google only updates their publicly visible PageRank data every couple of weeks or so, to make it more difficult to game the system. New pages will show up as PR 0 until the next public update, but (of course) Google updates their private PageRank database more often. That being said, WTF?! World Wide Wing Tour?!
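
        (Back on that 0-10 scale: a common belief in the SEO community, never confirmed by Google, is that the visible toolbar number is just a logarithmic bucketing of the raw score. A purely hypothetical sketch in Python, with an invented floor and base:)

            import math

            def toolbar_pagerank(raw, floor=1e-9, base=8.0):
                """Hypothetical: map a raw PageRank probability onto the public
                0-10 scale by logarithmic bucketing. The floor and base values
                are invented here; Google has never published the real mapping."""
                if raw <= floor:
                    return 0
                return min(10, int(math.log(raw / floor, base)))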
        • That being said, WTF?! World Wide Wing Tour?!

          LOL.... did you actually take a look at the site? The idea sprang up one day and has just grown and grown. I ended up stuck in bed for 3 weeks, unable to use the computer or even stand up for more than two minutes at a time, and during that time got to think a lot about things that I wanted to do. One of those things was see the US and the rest of the world, so the wing tour idea came back to the forefront, then I ended up buying the domain and some hosting a

    • uhhh are you sure?
      Your search - "hooters wing tour" - did not match any documents.
  • by Anonymous Coward
    It's all in the rhythm of the algorithm.
  • More than fair to me (Score:3, Interesting)

    by JunkyardCat ( 795659 ) on Saturday November 19, 2005 @03:58AM (#14069612)
    I've established a number of websites primarily for small groups of users, and every one of them has been ranked, even one set up for a friend strictly to put up family pics for his brother to see. If it's out there it's googlable. And no, I don't care if it's not a word :)
  • Woot! (Score:1, Offtopic)

    by seebs ( 15766 )
    Check out the google results for "rebate lawsuit".

    As of this writing, they go to "some guy's blog", namely mine. No links to it that I know of, either, which is sorta weird.
    • It's linked off your cafe shop, which I imagine is listed and linked off of CafePress and whatever else. You're also running Movable Type, which can be set to ping Technorati and such whenever you post.
  • Re: (Score:1, Offtopic)

    Comment removed based on user account deletion
    • Then, when I search for "digital voice recorder", the first 2,000,000 results will be lame-ass "coming soon" pages, or pages that suck so much nobody has ever felt like linking to them. Useful sites, like this one [olympusamerica.com], would be near the bottom so as to ensure that "the rich are robbed".

      Hell, while we're at it, why not make roads that way too! Let's rob the "population rich" metropolitan areas and focus our road building on the isolated rocky passes which have been deprived of people and infrastructure
  • by NickFortune ( 613926 ) on Saturday November 19, 2005 @04:48AM (#14069711) Homepage Journal
    Suppose there were no search engines.

    Most web newbies would form their impressions of the web from their ISP's portal site. That would give a lot of power to corps like AOL, who for a long time tried to persuade their subscribers that there was no web outside of AOL hosted content.

    There might still be blogs and social networking sites, but the take-up would be slowed since fewer people would have heard of them, and both might have failed to ignite into the movement we see today.

    Which would probably mean that if you wanted something outside of the main ISP channels, you'd be reduced to digging through the spam on USENET to find it.

    Google as an egalitarian influence on the web? I think it's a bit of a no-brainer, personally.

    • " AOL, who for a long time tried to persuade their subscribers that there was no web outside of AOL hosted content." "Google as an egalitarian influence on the web? I think it's a bit of a no-brainer, personally."

      Just because the alternative to having search engines is much worse does not make Google an egalitarian influence by default. It is the least worst solution, definitely, and one I for one can happily live with, but we are still in a situation where if (when?) Google decides to jump ship and to s

      • What we need is what DNS should have been for domain names, but for webpages.
      • So, even if the answer to the question is "mmm... yes" today, it doesn't mean it has (or will) stay like that forever.

        By the same token, the fact that you are (I presume) a law abiding and well mannered member of society doesn't mean you won't suddenly be seduced by the Dark Side and become a serial killer. Should we all view you with fear and distrust based upon your possible future actions, or should we treat you as your actions to date warrant?

        Why then is everyone so keen to condemn google for crim

        • "Why then is everyone so keen to condemn google for crimes that remain hypothetical?"

          My main concern isn't really an ethical one, and I'm not one to judge Google on their spotless reputation so far. However, my point was to highlight the fact that search engines play a pivotal role in the way the Internet works, and what they choose to highlight (or not) has very deep implications. Google, in the end, is a corporation, not a religious/moral institution, and there currently is no reliable neutral third-party t

          • I'm just bothered by the fact that we should rely on Google's reputation to make sure that they won't do something stupid/illegal. In business, good faith just doesn't cut it anymore,

            We let Microsoft wield far more power when that particular corporation has a track record of corporate misbehaviour. If we decide that good faith isn't good enough, how about starting with those who have sinned in the past, rather than punishing the innocent? Just a thought.

            Incidentally, am I the only one who sees all sor

            • "So in effect, you're proposing regulating a corporation that has yet to do wrong, while far more potent opinion shapers...are let off scott free."

              It is BECAUSE we have let the "politically controlled media cartels" degenerate to this point that I believe we should be proactive in this case. I don't want search engines to end up as just another product placement/promotion tool with limited or no practical use.

              " how about start with those who have sinned in the past, rather than by punishing the innoc

              • Nobody actually said anything about "punishing". "Controlling", even passively, is more what I had in mind. And this control has to come from a neutral third party, not necessarily a government entity

                mmm... but unless you have a remedy that can be applied equally in all cases, then you still have the effect of penalising one company, or possibly one sector of industry. Calling it passive control is a bit like saying "it's for your own good" or "this hurts me much more than it does you".

                Of course, if all you wan

    • I don't need to suppose; I've been on the web since very early days and online prior to the web on networks such as CompuServe and BIX, bulletin boards, ham radio packet systems (AA7AS) and so forth. People linked from here to there anyway; and early on, there were far more technical types who were already in a "web" of email and forum communications to let each other know when something new started up. The web was not the first web, in other words.

      There is no question that google is actually useful to a

      • I couldn't afford CompuServe, way back when, but I remember the early web from the days when it was possible to have read it all. I don't think any informal group of techies could manage all that information these days - and your non-tech wouldn't have a hope.

        The day that Google figures out how to evaluate site content, instead of using indirect and gameable measures of site popularity, will be a wonderful day.

        I'm sure they're just as eager to bring about this day as you are to see it happen.

        So, what do

        • I don't know of a better engine. I didn't mean to imply that I did. I don't use search engines much, period. I prefer to follow links selected by people. To that end, I use del.icio.us, various blogs that follow issues of interest to me, mailing lists, RSS feeds from sites that specialize in areas of interest, and company web sites and message boards for the products I own. I pay attention to links I find on slashdot, too. :-)
          • I don't know of a better engine. I didn't mean to imply that I did.

            Fair enough. I'm doing a lot of research at the moment and I sometimes need to find information in areas I never thought of before. I'm always looking for another good engine :)

      • The day that Google figures out how to evaluate site content

        Natural language processing powerful enough for full content evaluation, fast enough to be useful for evaluating the entire Web, and unbiased and broad enough to be useful for a major fraction of the users of the Web . . .

        . . . If we had that, what would humans need the information for? The program that can do that will be able to outthink any human decision-maker.
        • If we had [AI]

          I don't think so. I think we already have grammar analyzers that can catch random strings of (key)words as opposed to well-formed sentences; we are already able to discern which words are general and which are topic-specific; we can determine if spelling is reasonable or 133t-5h173 or uneddikashunal; we can see if some rational proportion of links from a particular site go to sites that have something to do with what the original site had to do with; we can figure out if the site is full of
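
          As a toy illustration of the first of those checks (not any real engine's code; the stopword list and threshold are invented), here is a crude Python heuristic that flags keyword strings lacking the function words that well-formed sentences have:

              # Crude keyword-stuffing detector: real sentences contain a healthy
              # share of function words; stuffed keyword lists usually do not.
              STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in",
                           "and", "that", "it", "for", "on", "with", "as"}

              def looks_stuffed(text, min_stopword_ratio=0.15):
                  """Flag text whose stopword density is suspiciously low.
                  The threshold is made up, purely for illustration."""
                  words = [w.strip(".,!?;:").lower() for w in text.split()]
                  if not words:
                      return False
                  ratio = sum(w in STOPWORDS for w in words) / len(words)
                  return ratio < min_stopword_ratio

              print(looks_stuffed("cheap pills buy cheap pills online pills now"))   # True
              print(looks_stuffed("The study found that search engines are fair."))  # False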

  • by Omar El-Domeiri ( 10568 ) <slash@noSpaM.doesnotexist.com> on Saturday November 19, 2005 @04:53AM (#14069716)
    There is another paper out of UCLA that is similar to this one, except with somewhat opposing results. In it, the authors show analytically that the rich-get-richer phenomenon does exist. http://oak.cs.ucla.edu/~cho/papers/cho-bias.pdf [ucla.edu]

    It seems tough to reconcile these two sets of findings, and this new paper even makes mention of this:

    "The connection between the popularity of a page and its acquisition of new links has led to the well-known rich-get-richer growth paradigm that explains many of the observed topological features of the Web. The present findings, however, show that several non-linear mechanisms involving search engine algorithms and user behavior regulate the popularity of pages. This calls for a new theoretical framework that considers more of the various behavioral and semantic issues that shape the evolution of the Web. How such a framework may yield coherent models that still agree with the Web's observed topological properties is a difficult and important theoretical
    challenge."

    • The solution to this is simple. Encourage people not to link to popular pages. It still amazes me how some personal pages have a link to Google on a links page.
    • I noticed the "rich-get-richer" study didn't account for activity on the respective pages. A low activity page is unlikely to either attract links or make it into the 20% most popular web pages.
    • 1.) The two results are not mutually incompatible if you keep in mind that they measure different things. The UCLA paper measures popularity by the number of links to a site while the Indiana paper measures web traffic to a site. While they may be correlated, they are different quantities.

      Obscure websites may not have many other sites linking to them, but they still get more traffic than they otherwise would if search engines did not exist.

      2.) Web traffic is not zero-sum. By that I mean, it isn't
  • As is the case with many things the truth is somewhere in between these two. While I will quite commonly end up visiting an obscure site through google because it has high relevance, the larger sites will almost certainly be listed alongside these results.

    For instance, Slashdot is highly ranked and grows because it has high relevance to a wide selection of technical topics, and is also linked from a large number of sites because it is well known.
  • I'm quite certain that new domains get a boost by Google for a little while and then a push down. Or the other way around. Google does something special with new websites (or possibly only domains) anyway. I wish I could be more verbose, but I'm afraid I've forgotten all I read about it, and I'm too lazy to search for decent SEO websites.
  • compared to what would be the case in the absence of search engines.

    Well duh! Otherwise you'd have to browse. and browse. and browse some more, hoping to find a site with the info you wanted. And you'd probably only know about sites that had a big budget to advertise. Search engines are inevitable - some bright spark is always going to realise that there must be a better way to automate the process by having a computer browse for you, so you can ask it later if it found anything on your topic.

  • I didn't even have to read through all the details of the study to see how it was bunk. I was quite suspicious as to how someone could conduct a small study to determine this, especially considering the extremely large amount of sampling and data analysis required to do such a study. And then, it would rely on a bunch of assumptions about your relatively unknown and non-established testing procedures being accurate. Of course, reading the study, it appears they have used the notoriously unreliable Alexa ran
  • by Frankie70 ( 803801 ) on Saturday November 19, 2005 @05:28AM (#14069772)
    Search Engines give new pages a greater chance to be discovered

    This just in - Yellow Pages give new businesses a greater chance to be
    discovered.
  • by Morosoph ( 693565 ) on Saturday November 19, 2005 @05:40AM (#14069792) Homepage Journal
    Since pages are weighted by the ascribed importance of referencing pages, and so on in an endless mesh, it's clear that there is something non-egalitarian about Google, but this is not enough to make it hierarchical. It's more like capital flow.

    Here's how: the wealthy get to decide who receives their spending, and those people in turn decide how strongly to weight their suppliers' votes in the allocation of resources. This perpetuates through in a cycle that reaches a very rough, shifting equilibrium that very much resembles Google's "pagerank", IMO.

    Compared with outright hierarchy, this kind of inequality is still going to appear relatively fair, but it doesn't measure up to equally weighted votes. That is, it isn't democratically fair. However, this, or at least some, inequality appears to be essential to making useful discriminations, if you're going to use the "intelligence" of the web itself to do it. Ideally, the results would be based upon the quality of the content itself, no matter how obscure, but the artificial intelligence required to do that would be mind-boggling.

    Besides, people often want to find something that they were surfing the other day (i.e. relatively more likely to be strongly linked), or else read up on what others are talking about, so they need the same points of reference... An objectively better site might actually be inferior for socialising with one's peers, or engaging in political tribal virtual warfare: a third point of reference in such cases leaves you out of the discussion!
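
    The weighted-voting cycle described above is, in idealized form, just the power iteration behind PageRank. A minimal Python sketch over a toy link graph (ignoring dangling pages and personalization; not Google's actual implementation):

        def pagerank(links, damping=0.85, iterations=50):
            """Minimal PageRank power iteration over a dict
            {page: [pages it links to]}."""
            pages = list(links)
            n = len(pages)
            rank = {p: 1.0 / n for p in pages}
            for _ in range(iterations):
                # everyone gets the teleport share, then link shares flow in
                new = {p: (1.0 - damping) / n for p in pages}
                for p, outs in links.items():
                    share = damping * rank[p] / len(outs) if outs else 0.0
                    for q in outs:
                        new[q] += share
                rank = new
            return rank

        toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
            print(page, round(score, 3))

    Each page's "wealth" is redistributed along its out-links every round, exactly the capital-flow picture above; "c" ends up richest because the most weight flows into it.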

  • Blogs are an example of Google's support for new content. Its excellent indexing of blogs supported the popularity of the concept. And as a consequence of the importance given to new content, many keywords (esp. technical) list blogs right on top.

    Supporting new content is essential for the growth of the web. A web NOT woven around high-profile websites built by media monsters (CNN, BBC...). The new web is about independent content, free thought and free speech. Yours and mine.
  • The bit this ignores is that it's up to the search engines to decide who gets chosen. So for example, the google sandbox penalises most new sites for up to 18 months. If the best answer to your question is on a new site, sorry, but google probably won't find it. Whatever rules they make up determines what happens.
    • "The bit this ignores is that it's up to the search engines to decide who gets chosen. So for example, the google sandbox penalises most new sites for up to 18 months. If the best answer to your question is on a new site, sorry, but google probably won't find it. Whatever rules they make up determines what happens." My Doctor Who site is less than 18 months old, but it still gets good rankings in google for some relevant keywords, and has done for the vast majority of its lifetime. The problem is probably
      • The Google sandbox is a well-documented algorithm that Google uses. It works on the theory that most spam sites have a short life. Unfortunately they throw out the baby with the bathwater....
  • I don't buy it (Score:4, Insightful)

    by Jesus IS the Devil ( 317662 ) on Saturday November 19, 2005 @06:55AM (#14069960)
    If you RTFA you'll notice some of the arguments against it.

    But beyond that, common sense alone tells you winner takes all, and it continues to be that way, with google or with anyone else.

    The entire PageRank algorithm is there to point you to the most likely result you're looking for. They base that on popularity, number of links coming in, and the importance of the referring sites linking to you. The net effect is, the more popular you are, the more relevant you become and the higher ranked you are.

    Also, when you type in, say, "windows", Google automatically assumes you're talking about the Windows OS. What if you were looking for real windows? The search engines are always assuming based on popular demand. This steers people's thoughts and pushes them in a non-neutral direction. As a word's context changes to favor a certain direction, search engines rank that as more relevant, which leads to it being more favorable, etc. The cycle repeats.
  • google cannot be completely fair since people are aware of its methods and can design ways to negate them. i can't see how this test was carried out scientifically. the test should not be whether the results are mainstream or not, it should be whether the results are relevant. it appears that economically poor websites have as much ability to fix search results as rich ones do. great. the information content relevance is scored, not by an independent expert, but by the number of other websites that link
  • Umm... DUH! (Score:2, Informative)

    by jonadab ( 583620 )
    Of course. Doesn't anyone remember what the web was like before search engines became popular, when the main way to find pages was by following links there from other pages? If you could get someone to link to your page who in turn was listed prominently on the Humor, Jokes, and Fun page on akebono, then you were all set, but otherwise, it would take *months* for anyone to find out about your page, if they ever did.

    Don't even bother replying to this unless you know the significance of akebono.
    • After surfing the web... I can now safely say:

      The name of a Hawaiian sumo wrestler, after whom the server that first hosted Yahoo! was named.
      • akebono was a server at Stanford where graduate students in the computer science department could get projects hosted. And yes, the Yahoo! index originated there. It was rather smaller at the time; for instance, "Colleges and Universities" was originally a toplevel category, and did not have any subcategories at first, although it wasn't long before it was subdivided.

        This was fairly early in the history of the web, before Netscape, when Gopher was still in more widespread use than the web, although ISTR t
  • The end goal is to be able to return the single page a user wants, the one containing ALLLLL the information possible for him/her. Too bad for us these pages don't exist. So the logical thing to do is return all the pages that would satisfy this need, and just those pages - no more, no less. But what Google attempts to do is return ALL pages that it determines as relevant and rank them. This is why we get 100,000+ results for somewhat broad terms. It sucks and is a crappy way to do searching. Then on to
  • If anybody has used Freenet, then perhaps they've already encountered some interesting bookmarked pages. Then again, if I'm not mistaken, most of the content there in Freenet is most likely some sort of eye-twitching pr0n.

    Then again, this quote from their FAQ is interesting:

    Is Freenet searchable?
    No search mechanism has yet been implemented. One of the design goals was to make it impossible to locate the exact place where any piece of information is stored. Even a server operator cannot determine

  • Scrapers are all those pages that simply crawl SE results for expensive keywords, then put them onto a page with ads (AdSense, many times).

    They even include a search, polling via the Yahoo/Google API, HTML-stripping YOUR pages and presenting them without backlinks to YOUR site.

    With the recent Bourbon Dance (a reindex/algorithm change is called a Dance in the SEO world) it seems that some of these are gone.

    The sad part is that your site can be penalized for dupe content, and it has happened to me multiple times. That means from
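
    Dupe-content detection is usually described in terms of shingling: hash overlapping word windows and compare the sets. A rough Python sketch of the idea (Broder-style w-shingles with Jaccard similarity; the window size and threshold are illustrative, not what any engine actually uses):

        def shingles(text, w=4):
            """Set of overlapping w-word windows ("shingles") from text."""
            words = text.lower().split()
            return {tuple(words[i:i + w]) for i in range(max(1, len(words) - w + 1))}

        def jaccard(a, b):
            """Jaccard similarity of two shingle sets."""
            return len(a & b) / len(a | b) if (a | b) else 0.0

        original = "the quick brown fox jumps over the lazy dog near the river"
        scraped  = "the quick brown fox jumps over the lazy dog near the bank"
        sim = jaccard(shingles(original), shingles(scraped))
        print(sim, "-> likely duplicates" if sim > 0.5 else "-> distinct")

    A scraped copy scores near 1.0 against the source, which is also why the filter can hit the original site too if the engine can't tell which copy came first.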
