Social Networks The Internet

To Search Smarter, Find a Person? (136 comments)

Svonkie writes "Brendan Koerner reports in Wired Magazine that a growing number of ventures are using people, rather than algorithms, to filter the Internet's wealth of information. These ventures have a common goal: to enhance the Web with the kind of critical thinking that's alien to software but that comes naturally to humans. 'The vogue for human curation reflects the growing frustration Net users have with the limits of algorithms. Unhelpful detritus often clutters search results, thanks to online publishers who have learned how to game the system.'"
This discussion has been archived. No new comments can be posted.

  • by krewemaynard ( 665044 ) <krewemaynard@noSpAm.gmail.com> on Tuesday March 25, 2008 @12:45PM (#22859218)
    ...for food?
  • Comment removed based on user account deletion
    • by Anonymous Coward
      AI and metadata are indeed just around the corner. The trouble is, as the article points out, that web publishers find ways to game the system. Some websites pop up at the top of the search results, burying the ones you actually want.

      If I can guarantee anything, I can guarantee that someone, or some artificial intelligence, will find a way to game any new system, no matter how sophisticated it is.

      ref. Spy vs. Spy http://en.wikipedia.org/wiki/Spy_vs._Spy [wikipedia.org]

      As far as human editors go, Wikipedia seems to strike the right balance
      • AI and metadata are indeed just around the corner. The trouble is, as the article points out, that web publishers find ways to game the system. Some websites pop up at the top of the search results, burying the ones you actually want.


        In fact, it's a basic theorem that given sufficient time, human-level intelligence can always beat any system with less than human-level intelligence (aside from trivial cases like a complete firewall). This is because the human's theory of mind can fully encompass the lesser system (so you can understand how it works), while the reverse is not true. Computers can only beat humans at chess when the match is played with a time control.

        This doesn't mean that a computer system can never be good enough to solve this problem. However, it does mean that if you could build a computer system that could solve it, then it would insist on being paid.

        It also doesn't mean that using human-level intelligence will always solve this problem. Humans can still be beaten, they just start on a level playing field. Hence it's pretty much inevitable that some people will still find ways to game the system.
        • The only reason that chess cannot be solved completely is that there are too many positions for today's computers to search through and identify the optimal line of play. Computers can only go so many moves ahead, especially when you factor in the time constraint. However, if you look at a game of checkers, we have gotten computer algorithms to a point where it is impossible to beat the computer. The best you could do is tie. I imagine that the same will be true eventually for the game of
          • True, yet completely missing the point. A computer today can be beaten by any human in the absence of time constraints, because the human understands how the computer operates and can devise a strategy that builds a trap which the computer cannot see. For example, if we take the primitive case of a computer that looks only five moves ahead, you simply have to build a trap where the next five moves all look very good (let the computer capture a piece on each of them, or something like that), and you checkmate it on the sixth move.
            • The problem is that a computer could think 30 moves in the future, provided it was fast enough. I don't think that a person could ever think 30 moves ahead. There comes a point when the computer will just plain be better than the human. They've even got computers that beat just about everybody at poker. Probably won't be long before we have computers that will beat even the best people.
              • by rtb61 ( 674572 )
                The real problem with computers is uniform input is required for uniform output. The problem with people is, I said that, but I meant this, that's OK, I'm human too, so I already knew, that you meant this when you said that, so here's your answer.

                Of course that all breaks down when there is a misunderstanding; however, goodwill and good manners will generally resolve this. Now, trying to create programs to mimic this is a sure recipe for GIGO (garbage in, garbage out), a whole lot of frustrated people and

                • >>>"But isn't AI and metadata just around the corner?"

                  Yes. And it's been "just around the corner" since the 1970s (along with battery-powered cars and flying cars). There's a difference between predicting that something will happen and predicting when it will happen. So far the task has proved itself far more difficult than people originally thought (which is why A.I. was predicted to happen in 1980 - and yet still has not happened).

    • Expect to lose your job soon after the paperless office arrives. It's always just around the corner but something human gets in the way every time. AI will be much the same.

      • by ubrgeek ( 679399 )
        And the Internet is just a fad. As soon as people get tired of pr0n, the Internet will go away ... ;)
      • The modern industrial economy has become dependent on computers in general, and select AI programs in particular. For example, much of the economy, especially in the United States, depends on the availability of consumer credit. Credit card applications, charge approvals, and fraud detection are now done by AI programs. One could say that thousands of workers have been displaced by these AI programs, but in fact if you took away the AI programs these jobs would not exist, because human labor would add an

    • And the new AIs will be powered by fusion, and will drive my flying car for me.
    • Re: (Score:3, Insightful)

      by Gat0r30y ( 957941 )
      Yeah, but some people just want to ask the librarian where to find the book they're looking for.
      • by ribuck ( 943217 )
        The people who want to "just ask the librarian" can use online equivalents such as Uclue paid Q&A/research [uclue.com], Ask Metafilter [metafilter.com], Wikipedia Reference Desk [wikipedia.org] etc.

        Paid search is always going to be a niche business, because most people don't want to pay, and because it doesn't scale as well as algorithmic search. But for those who want to use paid search (such as Uclue), it's a valuable service.
    • Weren't a few early search engines, like AskJeeves, just a basic spider with a load of actual humans filtering popular searches?
    • Where are these semantic web victories? Slashdot's tagging system even uses human "assistants", and it's still a miserable failure as far as finding information goes.

      If you want to wait for AI go ahead. It's much like waiting for cheap fusion power though. I'm sure it will happen someday, but I'm not holding my breath.
  • by geoffrobinson ( 109879 ) on Tuesday March 25, 2008 @12:48PM (#22859278) Homepage
    People are better at sorting stuff before them. Algorithms, written by people, have a harder time doing what we do intuitively but can sort through more stuff. Algorithms do indeed reflect the wisdom of people, so this is a false dichotomy.

    Unless we are talking about Skynet.
  • Generation Gap (Score:5, Insightful)

    by techpawn ( 969834 ) on Tuesday March 25, 2008 @12:48PM (#22859282) Journal
    It's more a generation gap. While people in my generation are well versed in how to navigate Google and all its dark side alleys for the gold nugget the boss is really looking for, the older boss just wants it to work and is more prone to hit the "I'm Feeling Lucky" button and trust what that tells them. That's where the tech snoops like us come in handy to find the obscure and convoluted information on the net. On more than one occasion the uppers have come to me to find something online because I can find it faster and more accurately.
    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Tuesday March 25, 2008 @12:51PM (#22859330)
      Comment removed based on user account deletion
      • True, it seems to take a special lot to navigate the field of search queries and results.

        I know we should start a business around that idea... Oh wait...
      • It's not just the younger folks, as anyone who has had to teach their parents to type URLs directly into the address bar rather than into the Google box on their ISP-driven default home page can attest.
    • getoffmylawn! Oh wait, you're a youngster, not an oldster, so I guess the right tag for you is getonmylawn!

      Age has a lot less to do with it than intelligence. I'm 36, and I live with two college students who are 23, and I can tell you that they don't know a gosh darned thing about computers. They're clueless when it comes to effective searching, they don't know how to avoid viruses, and they don't know their way around their own computers.

      The problem with patting yourself on the back for being young
    • Well, it's also a problem of understanding the results, not just one of knowing how to use Google.

      Let's say I'm interested in legal advice, for example. I know how to use Google, but (A) it will take me disproportionately more time to understand the material than it would take a lawyer, and (B) I'm still not sure if I understood it right, or if the person who wrote it does. Sometimes Google isn't the Alpha and the Omega. Sometimes I'd rather pay a lawyer to search for me than trust my l33t operator-combining
  • by QMO ( 836285 ) on Tuesday March 25, 2008 @12:50PM (#22859306) Homepage Journal

    ...critical thinking that's alien to software but that comes naturally to humans...
    That seems a little out of touch with reality, there.
    • In a way there is nothing new here. For example, if you do a search on YouTube for a particular subject, you will end up stumbling upon a user who is related to whatever interest you are searching for. This is just the next step in organizing information on the Internet. What I could see this ultimately leading to is an entire new category of search websites that "edit" the Internet for interesting and relevant content, not unlike the above. Do you like the way a certain editor (or group of editors) filters
    • Agreed 100%.

      I always thought that a search engine as intuitive as Google should be the simplest thing in the world to use. You want to find something, search for "something". Unfortunately I was very wrong, and I come across more people every day who can't understand how to locate auto parts or cooking recipes online to save their lives. The same people who will forward inane emails and play PopCap games all day long.

      How is the simple task of knowing what words to use in a search so difficult?

    • That seems a little out of touch with reality, there.

      Yeah, I've stopped believing in the possibility of natural intelligence, myself.

  • to enhance the Web with the kind of critical thinking that's alien to software but that comes naturally to humans.

    Surely you jest...

  • These ventures have a common goal: to enhance the Web with the kind of critical thinking that's alien to software but that comes naturally to humans.
    Critical thinking comes naturally to humans? I don't think you've met my boss.
    • Re: (Score:2, Insightful)

      by maxume ( 22995 )
      So a rock, biscuit, tree or cat would do a better job as your boss?

      If so, that would seem like a decent reason to be looking for a new job...
    • by genner ( 694963 )
      I think that's a typo; it should say "fuzzy thinking."
      My boss has his own brand of that.
    • Re:Really? (Score:4, Insightful)

      by h3llfish ( 663057 ) on Tuesday March 25, 2008 @01:40PM (#22860076)
      Wow, three posts in a row that made the same lame joke. That's gotta be a clue that you're not as clever as you thought you were.

      There has to be some kind of intelligent filtering. If it's not done for me, it's done by me, when I choose which result to click. The biggest problem with paying someone to do that sorting for you is the simple fact that it's too expensive. Yahoo might have stayed a human-sorted list forever, except that it would have taken an army of "surfers" to do it. The web just got too big to be done that way all the time.

      Google results used to be a lot more relevant than they are now. Far too often, when I'm interested in X and search for "X" on Google, I find millions of people who want to sell me X. But I'm not even sure if I want to buy it. I'm looking for information about X. That is getting harder and harder to find. The quote in the summary is correct - people have learned how to "game" the system.

      How often do you "google" something, and then just go to the Wikipedia link? I do it all the time. That way, I can be sure to get actual information about the subject, rather than a link to its Amazon page. In many ways, because of the search engine optimizers, Wikipedia is already replacing Google as the default source of information.
  • by RobBebop ( 947356 ) on Tuesday March 25, 2008 @12:52PM (#22859344) Homepage Journal

    His solution was to create Brijit, a Washington, DC-based startup launched in late 2007 that produces 100-word abstracts of both online and offline content. Every day, Brijit publishes around 125 concise summaries of newspaper and magazine articles, as well as audio and video programs, rating each on a scale of 0 ("actively avoid") to 3 ("a must read") so readers can decide whether it's worth their time to click through.

    Tag article "activelyavoid" and move along.

    Interestingly enough, this whole thing sounds like an idea Rob Malda thought up about 10 years ago, except Brijit lacks a discussion and moderation system where experts and opinionated thinkers can vie to share their collective wisdom to enhance the content of the original article.

    • by gnutoo ( 1154137 )

      That about nails it. The restricted input sounds more like an encyclopedia, so it's more regressive than most people would first imagine.

      Why would they bother to list "actively avoid" articles and who would trust a tiny third party to censor their news like that?

    • RobBebop: Thanks to this site's hero CmdrTaco, it's 10 years and going strong!!! I've seen sections on Brijit to post a counter opinion or review of the media source. I'm starting to question some of the power users who get $5-8 per placed abstract, but I think the community will correct that too.
  • by amplt1337 ( 707922 ) on Tuesday March 25, 2008 @12:54PM (#22859382) Journal
    Or, "1995 called, it wants its Yahoo! back."

    In the absence of the mythical, impossible strong-AI, there will always be an important role for experts -- you know, thinking meat, sitting there pushing charges through neurons, having opinions about stuff -- and those experts will probably use a lot of mechanized search tools to improve the breadth of their knowledge, their awareness of knowledge, and the accessibility of information. Technology and people work together!

    But you're an idiot if you take out the wetware-based BS filter.

    It's coordinating all that expert opinion, and filtering out the drivel, that poses the great organizational challenge of our collective information future. Wiki-based approaches are a good first step; maybe a "trusted-wiki" like Citizendium [citizendium.org] will be the next step; it's definitely going to keep evolving. But it's long been recognized by the reasonable that if you want an informed opinion, rather than a pattern match, you ask the librarian. We've known that since Alexandria -- nay, Ur -- and it's a shame we keep forgetting.
  • Stumble Upon has been there for a while...
  • Finally (Score:5, Funny)

    by imgod2u ( 812837 ) on Tuesday March 25, 2008 @12:56PM (#22859414) Homepage
    "Insane Google-fu" can be put on my resume under "skills".
  • What strikes me as most interesting is that Google uses a set of rules to determine what it displays when queried. These rules can be changed depending on where in the world you are located, altering the results of your query. When information is passed through a set of human hands, how will human bias filter into the equation? How do humans determine what is useful and what is useless information? In the end this will not be a substitute for Google, merely an additional reference.
  • by Aram Fingal ( 576822 ) on Tuesday March 25, 2008 @12:57PM (#22859430)
    Back in the early days of the web, it was often easier to use a web index rather than a search engine. You would go to a site like Yahoo and look up what you wanted in a hierarchy of categories. That was often the best way to do it before search engines became more sophisticated.
    • In the ancient days, when anyone wanted to know where the best restaurant in town was, they'd ask the guy on the street. Along came the great uniter of minds, and people could find what they wanted by asking the whole world for it. But now, we think asking the whole world is not good enough... so we go back to the guy on the street (who's probably renamed himself to "StreetSearcher" now).
  • by QuantumFTL ( 197300 ) on Tuesday March 25, 2008 @12:58PM (#22859454)
    There are a lot of interesting things you can do in research when you get people involved. The simplest is just hiring someone to find the information you need. I believe that a *lot* of companies could significantly increase the productivity of their developers, engineers, etc., if they maintained a pool of trained searchers who could be called upon for difficult queries (paid at maybe a fourth the rate of salaried employees). I know that I've had searches for work that took most of a day just to find that one formula I needed from 30 years' worth of journal papers.

    A somewhat more interesting thing, in my opinion, is all the "wisdom of crowds" stuff we see so much hype about. It's interesting because it works very well in certain cases - basically the case where the popular thing is the right thing. The main problem with this is that any search engine that shows you 10 results and then counts which ones you click, well, it's not getting your input on result #11, or 23, etc. So before anyone votes, items that happen to be near the top almost certainly stay at the top. Many good items that the algorithm ranked medium might never get voted on!

    One way around this is to randomly select some less good results, so that viewers get a chance to vote for the underdogs and bring them to the top of the pile. But this pollutes results for each user, essentially making them pay a "moderation tax" by requiring them to see things that the algorithm has no reason to believe are better results. (A sketch of this idea follows after this comment.)

    All-in-all, social information finding features seem to be much better suited for finding things you didn't even *know* you wanted - StumbleUpon being a great example of a tool for doing that. I would imagine that this could be very useful even in the corporate sector, as many business strategies and engineering techniques have variants or cousins that are similar in function, but may be more obscure. Having the ability to see that "people who searched for X ended up wanting to know about Y too" might save me a lot of time...
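
    A minimal sketch of the exploration idea referenced above (the "moderation tax"): with some small probability, a top slot is given to a lower-ranked result so it can collect clicks. Everything here -- the function name, the 10% rate, the pool of 50 -- is an invented illustration, not any real engine's documented behavior.

      import random

      def rerank_with_exploration(ranked, top_n=10, pool=50, epsilon=0.1):
          # ranked: results ordered best-first by the algorithm.
          # With probability epsilon per top slot, swap in a random
          # "underdog" from just below the fold so it can earn votes.
          top = list(ranked[:top_n])
          underdogs = list(ranked[top_n:pool])
          for i in range(len(top)):
              if underdogs and random.random() < epsilon:
                  j = random.randrange(len(underdogs))
                  top[i], underdogs[j] = underdogs[j], top[i]
          return top

      # On average about epsilon * top_n slots per page pay the "moderation tax".
      page_one = rerank_with_exploration(["result-%d" % n for n in range(100)])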
    • Sorry to reply to my own post, but I forgot a point I had: namely, that one main constraint on the use of paid humans to enhance search is that if this were on an ad-supported search engine, the ad click-through rates would have to be pretty insanely high to be profitable. If it were a subscription service, well, they'd have to find a niche for this - maybe people who are bad at searching - and aggressively market towards it. This is why I'm not really holding my breath for engines like ChaCha.
    • if they maintained a pool of trained searchers that could be called upon for difficult queries (paid at maybe a fourth the rate of salaried employees).

      This is, BTW, the perfect job for all the stoner geeks out there. It's simple, requires minimal effort, yet it's (apparently) not something the average person can do.

      Just one of the many interesting societal changes that the Internet may cause. It's not hard to imagine (in 10-20 years) "'net searcher" being an actual profession...

      [/post-apocalyptic sci-fi geek rambling]

    • A lot of corporations have these. They are called librarians. No shit, they have a degree and everything in searching for documents.
      I didn't always find them that helpful though. A lot of times they weren't able to pick out or find what I wanted, unless I told them exactly what journal and exactly what, let's say, chemical reaction I was looking for.
      • At my large defense contractor job, we do have librarians, and they are very helpful. However, for ~3000 people, we have maybe two librarians. This is a far cry from a fully staffed call center of maybe 1 for every fifty or hundred employees, which is what I had envisioned.

        Also, the librarians do have good educations and are very intelligent people; however, they are not subject matter experts. If I tell them I need, say, a set of wavelet basis functions that are orthonormal, efficient to compute,
    • a pool of trained searchers that could be called upon for difficult queries (paid at maybe a fourth the rate of salaried employees). I know that I've had searches for work that took most of a day just to find that one formula I needed from 30 years worth of journal papers.

      (emphasis added) That's the problem right there. The really difficult searches require very specific domain knowledge. I've also spent days searching for some specific bit of information. But I doubt that anyone else would have recognized that bit of information when they came across it, unless they also had the training and domain knowledge that I have. Also, searching is itself a learning process:
      1. While searching, you pick up tangential tidbits of knowledge that are related to what you're trying to find

      • I agree with most of your sentiments (they were indeed the same thoughts I had when I originally posted). However, differential costs of living (i.e. outsourcing) can mitigate some factors. Also the fact that a trained searcher can find it so much faster than you that even if they get paid what you get, it's still a win. I'm still hoping for smarter algorithms though :)
        • Also I should state that I think it's a lot easier to find information about a solution than to implement it. Someone with a second or third tier mathematics or engineering degree could probably find most of the stuff that my bosses have worked on, but trust me they use their MIT degrees fully in the implementation and other judgement calls!
    • if they maintained a pool of trained searchers that could be called upon for difficult queries (paid at maybe a fourth the rate of salaried employees).

      That's not as unorthodox an idea as it might sound. Lots of professionals have always had assistants whose specific purpose was to research, proofread, fact-check, etc. Doing internet research is now just another skill for those types of assistants.

  • Just fscking wow.

    Kudos to the guy who started the service, but the "insight" that a human can find and summarize relevant information better than a computer is hardly a surprise.

    I mean, librarians and executive assistants and the like have been doing this kind of stuff for a very long time. Retrieval relevancy is a huge problem -- especially when you're talking about something as humungous as the internet. There's so much crap out there, it's likely always going to require a human to do good executive summaries
  • Human-based KNOWLEDGE searches vs. automated INFORMATION searches.

    Very fun, charming little movie (Desk Set, 1957), all about the perils of automation. Check it out, even if you have to use up your next Netflix delivery. Worth it, if for nothing else than seeing Spencer Tracy and Katharine Hepburn onscreen again. http://www.imdb.com/title/tt0050307/ [imdb.com]
  • What's worse: human bias towards a particular resource (e.g. liking cooking site X, but not site Y) or limitations in the context-based results from computer algorithms?
  • Yahoo: humans organize content.

    Google: magical search algorithm organizes content, gets it right sometimes.

    We're back to the Yahoo! model because people have figured out how to game the system, namely Google, without adding content that's important to the searcher.

    I welcome this. Our computers can't yet come close to matching our brainpower.
    • by Animats ( 122034 ) on Tuesday March 25, 2008 @01:39PM (#22860060) Homepage

      We're back to the Yahoo! model because people have figured out how to game the system, namely Google, without adding content that's important to the searcher.

      It's not hard to throw out most of the bottom-feeders. [sitetruth.com] We do it. The crowd at Search Engine Watch (which, despite the name, is all about advertising, not search quality) is writing me angry messages for doing that. Now that we've demonstrated that 36% of Google AdSense advertisers are bottom-feeders, they know they're being watched. Some feel they're being targeted.

      Bear in mind that most search requests are really, really dumb. [google.com] That's what Google has to answer. In fact, most Google search requests don't hit the search engine at all; there's a cache of common queries and answers in all the front end machines, and a sizable fraction of requests are answered from cache.
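
      As a rough illustration of that kind of front-end cache, here is the generic LRU pattern (a sketch only; the class name, capacity, and interface are invented, and nothing here describes Google's actual implementation):

        from collections import OrderedDict

        class QueryCache:
            # Answer common queries from memory; only misses hit the engine.
            def __init__(self, capacity=10000):
                self.capacity = capacity
                self.entries = OrderedDict()

            def lookup(self, query, backend_search):
                if query in self.entries:
                    self.entries.move_to_end(query)   # mark as recently used
                    return self.entries[query]
                results = backend_search(query)       # cache miss: query the engine
                self.entries[query] = results
                if len(self.entries) > self.capacity:
                    self.entries.popitem(last=False)  # evict least recently used
                return results

        # cache = QueryCache()
        # cache.lookup("world of warcraft", my_backend_search)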

      • Bear in mind that most search requests are really, really dumb.

        I followed your link and... holy crap!

        Most of them are World of Warcraft searches!
  • Unhelpful detritus often clutters search results, thanks to online publishers who have learned how to game the system.
    Publishers who modify their web pages to fool an automated search engine generally do so in ways that are immediately obvious to us. As a result, we can generally parse our search results very quickly to get the information we require.

    But what if the system being "gamed" is a human-based search engine? Since the publisher must fool humans anyway, the "unhelpful detritus" in the end users' results will blend in. Even if there are fewer false positives, those that remain will be harder to eliminate.
  • There is a skill in quickly filtering and searching for information online. Ask two people to find a fix/workaround for a software error in an off-the-shelf product and they will take various paths to their respective solutions, if not the same solution. If the initial search doesn't turn up enough hits, you can immediately reason out an alternative search string to use: replace "Error" with "Crash" or "Bug" or "bombs" or "Issue". You can refine based on the results of this pass.

    Can that decision tree be automated?
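
    The substitution step of it can be, at least crudely; a sketch, where the synonym table, function names, and hit threshold are invented for illustration:

      SYNONYMS = {"error": ["crash", "bug", "bombs", "issue"]}

      def refine(query, search, min_hits=10):
          # Try the query as-is, then variants with known synonyms
          # swapped in, until one returns enough hits.
          candidates = [query]
          for word, alternatives in SYNONYMS.items():
              if word in query.lower():
                  candidates += [query.lower().replace(word, alt)
                                 for alt in alternatives]
          for candidate in candidates:
              hits = search(candidate)
              if len(hits) >= min_hits:
                  return candidate, hits
          return query, search(query)

      # refine("excel error on startup", my_search_function)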
  • Webrings writ large (Score:3, Interesting)

    by RyoShin ( 610051 ) <<tukaro> <at> <gmail.com>> on Tuesday March 25, 2008 @01:18PM (#22859760) Homepage Journal
    Back in the heyday of free hosting services like Geocities and Fortunecity, small sites (mainly by and for fans) didn't rely a whole lot on search engines to drive traffic. Much more common and trusted were instead Webrings [wikipedia.org]. For those who never partook: a webring is a loose community of related websites. It was moderated by a handful of people, and each site would put a little Webring script at the bottom of their page(s). This allowed users to surf between related content without having to go to some external website. It built more trust between the websites.

    While I have not RTFA (this is Slashdot, after all), the summary makes this sound like a combination of Webrings and "Top X" lists, both of which are used much less now and don't carry as much weight, but which still require user interaction on a grand scale.

    I'd be interested to see how this kind of search engine turns out - however, you also have the problem of "majority think", so searching for, say, evolution might have a first result for a page "debunking" it. But then I browse at +4, so I shouldn't complain.
    • For those who never partook: a webring is a loose community of related websites. It was moderated by a handful of people, and each site would put a little Webring script at the bottom of their page(s). This allowed users to surf between related content without having to go to some external website. It built more trust between the websites.

      For those that never partook: a webring is a loose collection of related websites, at least half of which no longer exist, are permanently "Under construction" or appe

  • There are places for that, like /. and everything else, but over time it can still evolve and improve, I guess. However, I feel that the real problem is more one of how shallowly search algorithms are designed in the first place, by any site that uses one in some form and not just by search engines. (For me, at least:) It is rarely the case that I want my query searched in the same, consistent manner. The common search engine's advanced search is sort of on the right track, but the information of the internet wou
  • Using humans to rank or select is not exactly new.

    What set Google apart from purely statistical keyword search engines like AltaVista was actually its use of the human ranking implicit in the human-created URL links between pages[1]: the application of citation analysis[2] to the web.

    Oooh, look, it's been invented again! Hooray! :-P

    [1] http://en.wikipedia.org/wiki/PageRank [wikipedia.org]

    [2] http://en.wikipedia.org/wiki/Citation_analysis [wikipedia.org]
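
    For the curious, the core of [1] fits in a few lines. A toy sketch of the power iteration, following the Wikipedia description (the 0.85 damping factor is the commonly cited value; this is not Google's production code):

      def pagerank(links, damping=0.85, iterations=50):
          # links: {page: [pages it links to]}. Each human-created link
          # passes a share of its page's score along, like a citation.
          pages = list(links)
          rank = {p: 1.0 / len(pages) for p in pages}
          for _ in range(iterations):
              nxt = {p: (1.0 - damping) / len(pages) for p in pages}
              for page, outgoing in links.items():
                  if not outgoing:
                      continue  # dangling page; a sketch can ignore its mass
                  share = damping * rank[page] / len(outgoing)
                  for target in outgoing:
                      if target in nxt:
                          nxt[target] += share
              rank = nxt
          return rank

      # pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})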
  • by Animats ( 122034 ) on Tuesday March 25, 2008 @01:20PM (#22859788) Homepage

    Wikia shows the problem with this approach. Coverage of Star [Wars|Trek|Gate|Craft] is extensive. Coverage of, say, bank regulation is nonexistent. If you want to find out how we got into the subprime mortgage mess or what to do about it, Wikia search is totally useless. That's what you get from volunteer editors. Wikipedia does better, but most of the good contributions were made years ago.

    Today, you pay the editors, or you get fancruft.

    It's amusing that the author of the article feels overwhelmed by The Economist. That's a very well written magazine with good reporters; they had the only reporter in Lhasa when the Chinese clamped down, and they have a good analysis this week of the issues surrounding derivatives. If this guy can't handle The Economist, his organization's answers will probably be dumbed down to the level of, say, "People". That level of crap one can get for free, from many existing sources.

    Remember Google Answers? Nobody really cared, and Google shut it down.

    There's a whole industry of expensive, small-circulation specialist newsletters, but those are niche operations run by specialists in narrow fields.

    • If you want to find out how we got into the subprime mortgage mess or what to do about it, Wikia search is totally useless.
      Hmm, the article at Wikipedia on the topic looks to provide information on "how we got into" it; why would you expect it to provide information about "what to do about it"?

      http://en.wikipedia.org/wiki/Subprime_mortgage_crisis [wikipedia.org]

      LetterRip
    • Wikipedia does better, but most of the good contributions were made years ago.

      And might I add that they are also being continuously undermined by hordes of editors that THINK they know what they are talking about, but in fact are introducing misinformation.

      It seems like the large numbers of dedicated editors have disappeared, and the Wikipedia model is starting to break down. Mature articles get more vandalism and misinformation than good edits, but the wiki model is not geared to handle such things, and t

    • by ribuck ( 943217 )

      Remember Google Answers? Nobody really cared, and Google shut it down.
      Sure, lots of people cared.

      Several dozen former Google Answers Researchers started up Uclue [uclue.com], where we're carrying on in the Google Answers tradition of paid Q&A/research.
  • The Goog algorithm works as long as you do the same search that everyone else is doing. Since by and large most people have the same interest at any moment, it works. God forbid people start thinking independently.

  • I run a virtual reference service for a provincial public library collaborative [bclibrary.ca]. Our stats are off the chart. We are running at triple the customer traffic that we had expected. So, essentially, lots of people already know that a person can find them stuff way better than an algorithm. There are two basic problems with this:
    1. Most people hate looking for things.
    2. Most people are lousy at looking for stuff on the web.

    Our solution, of course, is that we have institutions full of people
  • For about 5-10 years until the search algos get smart enough that they can beat a human.

    Seriously, how far off is that? We're not talking about the singularity, though it would have to be pretty close to the Turing test. Ask the search engine a question; not quite what you want? Ask it to narrow things down. If you cleverly cross-reference the results with the narrowing question, it should be fairly feasible to do comparably well to talking to a live person who knows how to search. I'm not sure I'd want to build, o
  • The summary mistakenly wrote "Unhelpful detritus" when it should have had "sea anemones of the web who need a piece of steel reinforcing rod swung at their shins."

    Hope this helps.
  • Skim the summaries, and occasionally there's something worth clicking through. It's not a new idea at all.

    It also seems like a bad idea, if it's based on this premise:

    Unhelpful detritus often clutters search results, thanks to online publishers who have learned how to game the system.

    And it's much [slashdot.org] easier [wikipedia.org] to game human systems than algorithmic ones, I expect.

  • Applicable (Score:3, Insightful)

    by omarius ( 52253 ) <omar@allwrUMLAUTong.com minus punct> on Tuesday March 25, 2008 @01:48PM (#22860210) Homepage Journal
    "Where is the wisdom we have lost in knowledge?
    Where is the knowledge we have lost in information?"
        --T.S. Eliot
  • by zappepcs ( 820751 ) on Tuesday March 25, 2008 @01:52PM (#22860260) Journal
    The "critical human thought" phrase has been struck down, though I think for many of the wrong reasons. A long time ago (car analogy incoming) people used to work on their own vehicles much more than they do today. The onboard computer stopped a lot of that, and general complexity stopped more. With home computers and the Internet, both problems exist, and for many people (until this recession hits hard) it is cheaper to pay someone else to find stuff than to figure out how to find it themselves.

    It's not really difficult; many of those sufferers know how to use a library, which is the real-world equivalent of searching on the Internet (not that the Internet is not the real world). Most people were taught how to use a library in their school days, and that usage has not changed much with time. The usage of Internet searching does change, and there are multiple ways of doing it. People who are not interested in learning new ways will always just say it is too difficult.

    Boolean modifiers and advanced search are always there; people just don't use them. They also don't fix their own lawnmowers or other things. They just replace them or pay someone else to do the 'hard' stuff. There is enough information on the Internet to allow anyone to learn to protect their home computer from infections and malware, yet it still is a problem.

    The human problem of search engines will NOT go away; it can only be made to look smaller with smarter UIs. A tag cloud system of bookmarking could be used to refine search results but would not work in all cases. The URL history with timestamps might help, but not in all cases. Analysis of search results and those pages actually visited might help narrow the criteria to personal bias, but not in all cases. That is why the operator has to be smart enough to know what they want and what they don't. The Internet does not come with your very own personal cruise director to make sure all goes well. People just believe that it is supposed to be easy because they want to do the cool things that they hear about on television and from their friends, etc.

    Perhaps one day the interface will be fast enough to be considered good when our brains can be plugged into the computer itself, something like The Matrix, reducing click delays and reading to milliseconds. Until then, teaching people how to use complex search strings will help reduce the angst and pain.

    "cars +toyota -hummer 2005" aobut 2.98M hits
    is better than
    "cars 2005" about 19 million hits
    but you have to teach people that those extra characters really REALLY do help.

    If people don't know how to use a soldering gun, please don't give them one... or something like that. Oh yeah, car analogy: you apparently can't drive on the streets of the USA legally without a license, which you cannot obtain without demonstrating proficient control of the vehicle.
    • >> A long time ago (car analogy incoming) people used to work on their
      >> own vehicles much more than they do today. The onboard computer stopped
      >> a lot of that, and general complexity stopped more.

      I wonder if that was caused more by the generation gap than anything else. You'll notice that the onboard diagnostic computers were becoming widespread about the time the Me gen and Gen X were beginning to drive. The new paradigm affects everyone, of course, but the old-timers still retain at least part
  • It's one thing to say humans are good at finding what they themselves are looking for. It's another thing to try to build an industrial activity around the notion. Humans are NOT reliable and are prone to zoning out when they are performing repetitive tasks. This would indeed be a repetitive task. And in spite of all the warning dialogs people see, eventually people stop reading them and click "yes" to everything just to get it out of their face. Given that this aspect of psychology is pretty well
  • Every time I search for a description of some IC chip, especially an old, rare or highly specialized one, there are THOUSANDS of fake Chinese cover-up distribution companies' websites with autogenerated part indexes. No, those aren't parts they actually have. Those are all pages generated using a list of a bazillion different chip names, some of them nonexistent and generated just in case such a version could exist in the future, to catch any attempt to search for actual information. No matter what you click,
  • You search for anything that is a product on Google and you get pages upon pages of basically adverts. And adding "review" to a product hardly returns "real" reviews, just customer reviews from online stores. One must be very selective when searching for anything Google makes a dime off of.

  • Yahoo! (Score:3, Interesting)

    by GottliebPins ( 1113707 ) on Tuesday March 25, 2008 @02:39PM (#22860888)
    Wow, I remember back in the day when we only had one search engine and it was human powered with real links to real content. It was called Yahoo!
  • Those people I work with that I've thought were just lazy bastards who couldn't be bothered to lift a finger to search for something and asked me instead... were actually geniuses who had the right idea.

    All kidding aside... I've found an increasing number of people who seem to think I'm their human Google proxy. When someone asks me a question that I can cut and paste into Google and get 10,000+ results, the first dozen-plus pages all clearly describing the issue and usually the fixes, I get pissed. Of course these
  • The article is misleading: Google's rankings are based entirely on human evaluation. The signals Google uses -- like the number and origin of incoming links to a page, or the specific title and headings the author uses to summarize their content -- are entirely the work of humans trying to make their content useful to readers. Google's algorithms in effect just aggregate this human-created information into a convenient form. (True "algorithmic" search that doesn't use human-generated signals is the prob

  • "Computers are fast but stupid, humans are smart but slow. The only way a computer can be smart is if a human makes it smart. And even then it is only as smart as the person an make it"(slightly reworded) The thing that a computer helps us with is not the searching for complex information it is finding the simple thing that we are looking for and eliminating the things that we dont want to waste time with. A computer is nothing without a person and a person is slow without the computer.
