

IBM Software

IBM's Question-Answering System "Watson" Revisited 170

religious freak writes "IBM has made its question-answering system, Watson, available online. Watson has competed in, and won a majority of, mock Jeopardy! matches against humans. Watson does not connect to the Internet to answer its clues; instead it generates candidate answers using many different algorithms and then employs a ranking algorithm to choose the best one." We mentioned Watson last year as well.
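As a rough, purely illustrative sketch of the generate-then-rank approach the summary describes (the candidates, scorers, and weights below are invented for illustration, not Watson's actual components):

```python
# Generate-then-rank sketch: several independent scorers each rate every
# candidate answer, and a weighted sum of their scores ranks the candidates.
# All names and numbers here are invented for illustration.

def rank_answers(candidates, scorers, weights):
    """Return candidates sorted best-first by a weighted sum of scorer outputs."""
    def combined(answer):
        return sum(w * s(answer) for s, w in zip(scorers, weights))
    return sorted(candidates, key=combined, reverse=True)

# Two toy "evidence" scorers: one favors short answers, one favors answers
# containing a keyword supposedly drawn from the clue.
scorers = [
    lambda a: 1.0 / len(a.split()),
    lambda a: 1.0 if "London" in a else 0.0,
]
best = rank_answers(["Paris", "London", "The city of London"], scorers, [0.3, 0.7])[0]
print(best)  # "London": it gets both the brevity score and the keyword score
```

The point of the sketch is only that no single algorithm decides; the final answer emerges from combining many weak signals.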
This discussion has been archived. No new comments can be posted.


  • by LostCluster ( 625375 ) * on Friday June 18, 2010 @12:05AM (#32609984)

    Part of the deal with Jeopardy! is that the 2010-2011 season will include a televised episode in which record-breaking champ Ken Jennings plays against Watson, with a to-be-named-later champion in the third slot. This has been in the works since 2009, but the show's staff finally thinks the system is ready for its televised match.

    One key factor is how human behavior will change when prize money is at stake. Jennings has proven in numerous appearances on GSN that he's willing to play in any test of knowledge, and knowing he was Jeopardy!'s first regular-season millionaire didn't stop his long Jeopardy! run. He also studied for the show, particularly alcoholic beverages (which he doesn't drink), because he had seen the Potent Potables category on TV.

    But what about that player to be named later? Will they know more than the grad students... and play the game not as if it's for points, but for real dollars?

  • by Quackers_McDuck ( 1367183 ) on Friday June 18, 2010 @12:39AM (#32610136)

    Chess has finally been solved to the point that there's now unbeatable AIs available to the average user (assuming it gets to move first)

    There are no unbeatable AIs for chess yet; that would imply chess is a solved game (http://en.wikipedia.org/wiki/Solving_chess). Nor does it make much difference who moves first.

  • by Animats ( 122034 ) on Friday June 18, 2010 @12:49AM (#32610184) Homepage

    Chess has finally been solved to the point that there's now unbeatable AIs available to the average user (assuming it gets to move first)...

    No, checkers has been solved [sciencemag.org] to that point. The solution is available online. [ualberta.ca] Perfect play leads to a draw.

    Computer chess is merely at the point that if you haven't been on the cover of Chess Life, you're going to get trounced. Even if you have, you're going to lose more than you win. The current situation is that Deep Rybka 2010 [chesscentral.com] has an Elo rating around 3150, running on a 4-core AMD64 desktop machine. The all-time human record is 2851, which Garry Kasparov held in 1999-2000.
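A rating gap translates into an expected score via the standard Elo formula; plugging in the two figures above (an illustrative calculation, not a prediction about any specific match):

```python
# Standard Elo expected-score formula: the average score (win = 1,
# draw = 0.5, loss = 0) a player rated r_a should achieve against r_b.

def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A 3150-rated engine against Kasparov's peak rating of 2851:
print(round(expected_score(3150, 2851), 2))  # ~0.85, i.e. about 85%
```

A 299-point gap thus implies the engine scores roughly 85% against even the best human rating ever recorded.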

  • by Quackers_McDuck ( 1367183 ) on Friday June 18, 2010 @01:47AM (#32610356)

    As humans, we do exactly what physics mandates we do. Unless you're claiming that the human brain uses some sort of hypercomputation, or that there's something special (i.e., outside our current understanding of physics) about what neurons do, you're not being consistent.

    That said, I understand where you're coming from; most AI research targets very narrow domains and has no intention or hope of solving the problem of achieving human-level intelligence (Watson is an example of narrow AI, as it clearly lacks a genuine understanding of the question or of the English language). But the fact remains that this is how the term AI is used.

    There's a growing separation between this "narrow AI" and the kind of AI you seem to be hoping for, Artificial General Intelligence (AGI). There are some AGI projects out there, such as the open-source OpenCog [opencog.org]. Since there's no hope that people will stop calling things like computer chess AI, I prefer to use the term AGI when referring to "real" AI.

  • by Anonymous Coward on Friday June 18, 2010 @02:07AM (#32610418)

    This answer is wrong. The correct solution would have been:
    What is the answer to life, the universe and everything?

    wrong again:

    What is the answer to the meaning of life, the universe and everything?

  • by Anonymous Coward on Friday June 18, 2010 @07:45AM (#32611732)

    Neurons do execute an absolutely simple instruction over and over again until they die. The human brain is the perfect practical proof that trivial components in sufficiently large quantities can make a qualitative difference: human and rat brains differ by just an order of magnitude in size, yet achieve qualitatively far superior results.

    We haven't yet built a computer nearly large enough to do the same things on the scale of a human brain, though. And even when we do build a fully human-level general intelligence, it will need (judging by how Homo sapiens develops) a couple of years of practice before it can understand simple words, a dozen years before it starts really understanding complex problems, and twenty-one years before it can decide whether it should be drinking alcohol. Well, kidding on the last part, but pretty much so.

  • by shadow_slicer ( 607649 ) on Friday June 18, 2010 @07:50AM (#32611784)

    Computers are not intelligent because they are unable to reason. They iterate until they achieve an optimal solution to a specific set of rules.

    Could you define "reason"? The AI field worked for quite a while (until the 80s) to build machines that reason. There were some successes with expert systems and the like (which, given sufficient data, could "reason" by the standard definition). The problem is that they need complete information, so they never really made it out of the lab. Current work has turned from "reasoning" systems to Bayesian inference engines, which use complicated statistical methods and approximations to find the most likely answer. They build estimates of probability distributions from training (the equivalent of experience) and then use them to make decisions or predictions. They are much more flexible than the reasoning machines built before them, and they handle incomplete data appropriately.

    These probably don't "reason" by your definition, but then again, neither does the human brain: the current understanding of cognition suggests that it, too, is an inference engine, making probabilistic judgments based on experience in spite of limited information.
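A minimal sketch of the Bayesian update such engines perform: combine a prior belief over hypotheses with the likelihood of the observed evidence to get a posterior. The scenario and all the numbers below are invented for illustration.

```python
# Minimal Bayesian update: posterior(h) is proportional to P(h) * P(evidence|h).
# The hypotheses and probabilities here are invented for illustration.

def posterior(prior, likelihood):
    """prior: {hypothesis: P(h)}; likelihood: {hypothesis: P(evidence|h)}."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Did the speaker say "there" or "their"? A prior from word frequency is
# combined with how well each word fits the (noisy) acoustic evidence.
prior = {"there": 0.6, "their": 0.4}
likelihood = {"there": 0.2, "their": 0.5}
post = posterior(prior, likelihood)
print(max(post, key=post.get))  # "their": the evidence outweighs the prior
```

Notice that incomplete data is no obstacle: the update simply yields a less confident posterior instead of failing outright, which is the flexibility the expert systems lacked.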

    If these methods count as AI, then AI already exists. It is used in everything from handwriting and speech recognition to ranking players on XBox Live. If you look at the tasks that AI researchers hoped to solve when research in AI began, a large number of them have been solved. So if a computer can solve a problem that was previously considered an AI problem, wouldn't it be "moving the goal posts" to say that we don't have any AI today?

  • But it may not be that necessary if the statistics are large enough.

    It's possible, though, that the statistics can never be "large enough". I remember seeing an article here about natural language speech recognition (oh, here [posterous.com] it is) and about how many companies had hoped to continually feed more and more examples of language use into a computer and, through statistical analysis, be able to develop human-level speech recognition. The article indicated that these companies found a point after which additional examples didn't help. The statistical analysis (at least the methods being used) leveled off around 40% while human recognition was up around 95%.
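The "feed in more examples" strategy amounts to building models like the following toy bigram counter (a deliberately simplified sketch; real systems use far richer statistics):

```python
from collections import Counter, defaultdict

# Toy bigram language model of the kind those statistical approaches build:
# count adjacent word pairs in training text, then predict the most likely
# next word. More data sharpens the common patterns, but counting alone
# cannot capture sarcasm, puns, or speaker context.

def train_bigrams(corpus):
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for w1, w2 in zip(words, words[1:]):
            counts[w1][w2] += 1
    return counts

corpus = ["the cat sat on the mat", "the cat ran", "the dog sat on the rug"]
model = train_bigrams(corpus)
print(model["the"].most_common(1)[0][0])  # "cat": seen twice after "the"
```

Adding more sentences refines these counts, but a model that has only ever counted words has no way to represent the speaker-level context the rest of this comment describes, which is one plausible reading of why the accuracy curve leveled off.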

    Even when the statistical models included searching the rest of the sentence for context and calculating likely words, the recognition still failed. Part of the problem is wordplay: sarcasm, puns, and unusual word usage. We use all kinds of contextual cues, and not just the word's context in the sentence, but things like facial expressions, tone of voice, and even an understanding of the speaker's personality. That's a lot of context for a computer to pick up on.

    What's more, when people listen to another person talking, we basically try to draw out "what the other person is saying" and then use that knowledge to fill in any blanks. So if I use a really strange word choice when talking about my wife, another married guy might understand more quickly what I'm saying by relating to his own feelings about his own wife. Until a computer has a wife, that's a level of context which will be inaccessible.

    (not everything I'm talking about in this post is in the article I cited)
