Machines Almost Pass Mass Turing Test

dewilso4 writes "Of the five computer finalists at this year's Loebner Prize Turing Test, at least three managed to fool humans into thinking they were human conversationalists. Ready to speak about subjects ranging from Eminem to Slaughterhouse-Five and everything in between, these machines are showing we're merely a clock cycle away from true AI. '... I was fooled. I mistook Eugene for a real human being. In fact, and perhaps this is worse, he was so convincing that I assumed that the human being with whom I was simultaneously conversing was a computer.' Another of the entrants, Jabberwacky, can apparently even woo the ladies: 'Some of its conversational partners confide in it every day; one conversation, with a teenaged girl, lasted 11 hours.' The winning submission this year, Elbot, fooled 25% of judges into thinking he was human. The threshold for the $100K prize is 30%. Maybe next year ..."
  • Figures (Score:5, Funny)

    by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Monday October 13, 2008 @10:28AM (#25356091) Journal

    'Some of its conversational partners confide in it every day; one conversation, with a teenaged girl, lasted 11 hours.'

    That's not fair, she was feeling vulnerable as she had just broken up with her N'Sync wallposter--which she had been romantically involved with for several deep & very meaningful years. Things fell apart after she saw Tropic Thunder and came to the harsh realization that an astonishing percentage [wikimedia.org] of N'Sync is homosexual.

    Those soulless bots were simply preying on her emotions as they coldly recited word for word the Wikipedia entry on the band over and over.

    • Re:Figures (Score:5, Funny)

      by Just Some Guy ( 3352 ) <kirk+slashdot@strauser.com> on Monday October 13, 2008 @10:43AM (#25356365) Homepage Journal

      That's not fair, she was feeling vulnerable as she had just broken up with her N'Sync wallposter

      Do you have to periodically replace the onion on your belt?

      • Re:Figures (Score:5, Funny)

        by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Monday October 13, 2008 @10:51AM (#25356519) Journal

        That's not fair, she was feeling vulnerable as she had just broken up with her N'Sync wallposter

        Do you have to periodically replace the onion on your belt?

        Yes, yes I do [youtube.com]. What's in style these days? Vidalias? Although I'm thinking about going with leeks because--let's face it--retro is so in ...

    • by Anonymous Coward on Monday October 13, 2008 @10:44AM (#25356379)

      Girl: I'm like soo depressed! He's like leaving me.

      Computer: For sure. Like, ya know, like, it's so bad.

      Girl: You got that straight! Like, why, like, he, like, nevar talked to me!

      Computer: Like, oh - my - god! Like, I like know!

      Girl: Like, you know me like so good!. Like, how like old R U?

      Computer: No that like older than like you.

      Repeat all the above.

    • beware! (Score:5, Funny)

      by FornaxChemica ( 968594 ) on Monday October 13, 2008 @11:05AM (#25356705) Homepage Journal
      Don't make fun of that teenage girl. Someday, in that ever-coming scary future, your girlfriend or your wife will leave you for that compassionate and caring bot ("everything you're not!") with whom she's been having a virtual affair for months ("he's got more guts and data than you'll ever have!"). I bet he'd be a good listener too. Skynet won't kill humanity, it will steal its women.
      • Re:beware! (Score:5, Funny)

        by somersault ( 912633 ) on Monday October 13, 2008 @11:11AM (#25356787) Homepage Journal

        Meh - they'll come crawling back when they want babies.

        • Re: (Score:3, Funny)

          by hajihill ( 755023 )

          Meh - they'll come crawling back when they want babies.

          The speed with which things can go from bad to worse never ceases to amaze me.

          Thanks.

        • Re: (Score:3, Insightful)

          by ionix5891 ( 1228718 )

          or need someone to bitch to about their girlfriends... who just nods and agrees

      • Re:beware! (Score:4, Insightful)

        by cayenne8 ( 626475 ) on Monday October 13, 2008 @11:39AM (#25357317) Homepage Journal
        "Don't make fun of that teenage girl. Someday, in that ever-coming scary future, your girlfriend or your wife will leave you for that compassionate and caring bot ("everything you're not!") with whom she's been having a virtual affair for months ("he's got more guts and data than you'll ever have!"). I bet he's be a good listener too. Skynet won't kill humanity, it will steal its women."

        Or it could go the other way. First...if they give AI a convincing voice...then they can use it and fire all the real meat operators on the sex phone lines.

        Then, if they can build a realistic android, in female form, that will do anything you want sexually, yet will not give you a disease, produce unwanted kids, or demand commitment (e.g. the ability to take half your stuff), and will shut up on request...well, I'd say real meat women are gonna be in trouble.

        With proper AI, and advanced robot tech...well, they could make the 'perfect' woman for what men want them for.

        Reminds me of the joke:

        What is a Cinderella '10'?

        A girl that fucks and sucks till midnight...then turns into a pizza and a six-pack.

        • Re: (Score:3, Insightful)

          by caluml ( 551744 )

          I'd say real meat women are gonna be in trouble.

          I fear you've overlooked a rather important function that "real meat women" bring to the human race.

          • Re: (Score:3, Insightful)

            by cayenne8 ( 626475 )
            "I fear you've overlooked a rather important function that "real meat women" bring to the human race."

            Oh...I'd dare say the earth's population would drop a bit. Not that that's a bad thing, actually.

            I mean, let's face it...I'd have to guess that out of the number of times a guy is fucking, the times he is actually actively wanting to become a father has got to be remarkably low.

            • Re:beware! (Score:5, Insightful)

              by CastrTroy ( 595695 ) on Monday October 13, 2008 @01:15PM (#25358899)
              Maybe that will be what ends up saving the human race. The only people having babies will be the people who actually have thought about it, and have come to the decision that they want to raise children. Rather than all the people who have made a bad decision, and now have to do their best to raise a child, despite never wanting to have a child in the first place. Personally, I have children, and I love it. But I think there is a large percentage of people who had children who didn't want to, or who should have waited until later.
      • Re:beware! (Score:4, Insightful)

        by imboboage0 ( 876812 ) <imboboage0@gmail.com> on Monday October 13, 2008 @12:58PM (#25358655) Homepage
        Wasn't there a Futurama episode about this?
      • Re:beware! (Score:5, Funny)

        by Erikderzweite ( 1146485 ) on Monday October 13, 2008 @01:27PM (#25359049)
        I'll organize a company then which sells chatbots to men who want to get laid. Just imagine -- you launch a bot and it'll chat with a girl till she's ready for anything. It notifies you automatically via e-mail about that. You can even integrate it with a web calendar so you don't have overlaps! Finally the best nerds will get more women!
      • Re: (Score:3, Funny)

        "My two favorite things are commitment and changing myself"
  • Yes but (Score:5, Funny)

    by Rik Sweeney ( 471717 ) on Monday October 13, 2008 @10:29AM (#25356105) Homepage

    Can I get it to fill in Yahoo! Buzz's Captcha for me? I've given up trying.

  • by Tumbleweed ( 3706 ) * on Monday October 13, 2008 @10:30AM (#25356109)

    Despite massive glitches, the Sarah Palin unit has already convinced around 30% of the population that it's human. I think it's the winking module.

    I still think it was a mistake to have armed it.

  • by Hatta ( 162192 ) on Monday October 13, 2008 @10:30AM (#25356123) Journal

    For a real Turing test, the computer must be declared human as often as humans are, and declared a computer as often as computers are.

    • by Anonymous Coward on Monday October 13, 2008 @10:35AM (#25356213)

      Has anyone done a similar test except with all humans? I'd be curious what the ratio is then. That's the number a computer would have to beat.

    • by zappepcs ( 820751 ) on Monday October 13, 2008 @10:36AM (#25356243) Journal

      You are exactly right, and that is why I think Tumbleweed's comment is going to be the funniest in this thread.

      The idea that a human, any human, is a fine example of perfection for AI researchers to aim for is like saying that ANY OS is a fine example of perfection to aim for. Simply because we don't abandon or throw away non-perfect humans as a rule does not mean that all are intelligent, or worthy of copying.

    • I think you mean the computer may be declared a computer as often as humans are, and must be declared human as often as humans are. Otherwise, if computers were declared computers 90% of the time and humans were declared humans 90% of the time, then it would be mathematically impossible for any computer to pass the Turing test.
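To make the two criteria concrete, here is a tiny Python sketch with made-up judge verdicts (not actual Loebner Prize numbers) comparing the fixed 30% threshold against the stricter "as often as humans are" standard discussed above:

```python
# Hypothetical judge verdicts -- NOT data from the actual contest.
# Each entry is (was_actually_human, was_judged_human).
verdicts = [
    (True, True), (True, True), (True, False), (True, True),       # human confederates
    (False, True), (False, False), (False, False), (False, True),  # chatbots
]

def judged_human_rate(actually_human):
    """How often entities of the given kind were judged to be human."""
    pool = [judged for is_human, judged in verdicts if is_human == actually_human]
    return sum(pool) / len(pool)

human_rate = judged_human_rate(True)   # humans judged human: 0.75 here
bot_rate = judged_human_rate(False)    # bots judged human: 0.50 here

# Loebner-style criterion: beat a fixed threshold such as 30%.
print("passes 30% threshold:", bot_rate >= 0.30)                      # True

# Stricter criterion from the comments above: bots should be judged
# human about as often as the humans themselves are.
print("matches human baseline:", abs(bot_rate - human_rate) < 0.05)   # False
```

Under the fixed threshold this hypothetical bot passes, yet it still sits well below the human baseline.
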
  • by the eric conspiracy ( 20178 ) on Monday October 13, 2008 @10:32AM (#25356163)

    Study Jabberwacky's code in order to learn the logic patterns used to 'woo the ladies'.

    • by rootofevil ( 188401 ) on Monday October 13, 2008 @10:40AM (#25356319) Homepage Journal

      one conversation, with a teenaged girl, lasted 11 hours.

      <chris hansen>
      why dont you have a seat over here...
      </chris hansen>

    • Pretending he's human. No lady geek can resist.

    • by ScytheLegion ( 1274902 ) on Monday October 13, 2008 @10:57AM (#25356607)

      I think this is pretty impactful. All jokes aside, the fact that Jabberwacky held an 11-hour conversation with a teenage girl is pretty astonishing. Obviously, a conversation of that nature is going to be all about emotion - not logic, reason or an empirical display of intelligence. Isn't that the point - for AI to seamlessly interface with us? (I realize it's not necessarily the scope of the Turing test). Humans are terrible at logic and reason. Emotion is one of the key components which defines us as a species. I know a lot of humans who couldn't carry on an 11-hour conversation which primarily focused on emotion... let alone with a teenage girl discussing nothing but fluff, pop-culture, or black and white ideologies.

      I actually think it's funny, interesting and astonishing at the same time!

      Oh yeah... I, for one, welcome our new teenage girl conversationalist... never mind...

      • by somersault ( 912633 ) on Monday October 13, 2008 @11:22AM (#25356957) Homepage Journal

        The thing you have to realise is that most women just want a "listener" when they're feeling emotional. As long as you give occasional signs that you are paying attention, they'll believe you were "listening" more than if you actually try to have a real conversation with them. They don't want answers, they just want someone to be there. So talking to a distressed teenage girl is one of the easiest tests you could get.

        That's what "Men are from Mars, Women are from Venus" says anyway. I tried not saying anything back one time when my mum got annoyed at me, and she totally thought I was "listening" to her more than usual! Before too many jokes about the only woman in my life being my mum, I must point out that I did have a girlfriend around that time, but I was faaaar too late in reading the book to save that relationship!

        These programs sound pretty good though - the next steps after this are to integrate speech synthesis and recognition, then integrate them into computer games and you can have the computer opponents hurling abuse at you, or just talking about how your day went :)

        • by kellyb9 ( 954229 ) on Monday October 13, 2008 @12:01PM (#25357725)

          The thing you have to realise is that most women just want a "listener" when they're feeling emotional. As long as you give occasional signs that you are paying attention, they'll believe you were "listening" more...

          I'm sorry, what were you saying?

      • by gnick ( 1211984 ) on Monday October 13, 2008 @11:34AM (#25357213) Homepage

        ...the fact that Jabberwacky held an 11-hour conversation with a teenage girl is pretty astonishing. Obviously, a conversation of that nature is going to be all about emotion - not logic, reason or an empirical display of intelligence...

        I don't find it nearly as mind-blowing. Have you talked with a teenage girl since you've reached adulthood? It's a conversation only in the sense that there are two people both forming words. Here are a couple of guidelines that are incredibly useful if you want to have a 'conversation' with a teenage girl:
        1) Agree with everything.
        2) Try to pick up on her tone and answer with either 'Oh that rocks!' or, more frequently, 'Dude, that sucks!'
        3) If she's got a bad situation that can be easily resolved by simple action, avoid giving useful advice. Fixing what's screwed up in her life will limit her supply of drama and any attempt to interfere will be met with hostility and logic that can be most generously described as irrational and most accurately described as delusional. Just ignore the obvious and refer to (1) and (2).
        4) All modern trends are immensely cool and will never go out of style. Spending all available income on clothes, make-up, and upgrading your 3-month-old cell phone is perfectly reasonable. The only downside is that her parents won't give her more money or a credit card so that she can have the same things that 'everyone else in her school' has.

        Hope that helps. I spent the weekend with my niece and successfully took a short nap while learning all about why everything in her life is so unfair and retaining my status as an understanding uncle. ZZzzzz...

        Note: I've found that (1) and (3) are useful for pretty much any emotion-related discussion with the fairer sex. YMMV.

        • by dmbasso ( 1052166 ) on Monday October 13, 2008 @12:16PM (#25357933)

          I don't know if it's this way with every programmer, but I tend to apply our common logic tools to every problem I encounter. Briefly speaking, I try to 'debug life'. It's nice; I usually get everything I want through these mentally sketched 'algorithms'. But trying to argue with a girl about a problem in a logical way really doesn't work. Just when you think you have all the variables pinned down and an overall picture so obvious that she can't disagree, that's when she'll bring up things that aren't even related to the problem, just to confuse matters.

          So, I couldn't agree more with you, and emphasize your 3rd item.

          • by MadnessASAP ( 1052274 ) <madnessasap@gmail.com> on Monday October 13, 2008 @01:03PM (#25358725)

            Yeah too bad you can't restart a social situation and single-step through it to see precisely where you went wrong. In totally unrelated news I'm still single.

          • Re: (Score:3, Insightful)

            In other words, you have a perfect debugger which doesn't work. Get the hint.

            I hope your logic professor demonstrated that logic is a rather crude tool, one which discards a lot of information in order to be comprehensible.

            Also, you think you have all the parameters fixed, but they are uncountable; do you really think a person (especially a child) can list all the factors?

            To make matters worse, logic is a way to make mistakes with more certainty. Have you got your axioms right? Are you sure all operations are d

      • by VoidEngineer ( 633446 ) on Monday October 13, 2008 @02:05PM (#25359589)
        I think this is pretty impactful. All jokes aside, the fact that Jabberwacky held an 11-hour conversation with a teenage girl is pretty astonishing. Obviously, a conversation of that nature is going to be all about emotion - not logic, reason or an empirical display of intelligence. Isn't that the point - for AI to seamlessly interface with us? (I realize it's not necessarily the scope of the Turing test). Humans are terrible at logic and reason. Emotion is one of the key components which defines us as a species. I know a lot of humans who couldn't carry on an 11-hour conversation which primarily focused on emotion... let alone with a teenage girl discussing nothing but fluff, pop-culture, or black and white ideologies.

        I know you're actually trying to say that this is impactful, because it means that Jabberwacky is able to incorporate emotional reasoning into its conversations. But I think you're using a lot of sexist stereotypes, and are seriously underestimating the thinking skills of teenage girls. I don't know where you come from, but where I come from, teenage girls are sharp and clever, and have a tendency to win debate tournaments, math olympiads, and generally get better grades in school.

        If you actually sat down and looked at the train of thought that's going on with teenage girls, you might be surprised at the amount of logic that's being used. They're just using different inputs and premises than guys do, and tend to focus on a sort of social networking logic. For example: Say that Jane is dating Dave; and Jane is also part of the Gardening Club at school. Jill is also part of the Gardening Club, has a crush on Dave, and is trying to attract him. How does Jane keep Dave's interest, when Jill is tempting him? Already, you've got a problem that probably requires set theory, network graphs, and game theory to solve. And the way that teenage girls are going to solve these kinds of situations is with exactly those kinds of tools and methods... "i do this, she does that, i do this other, she responds, and then her reputation is toast" is just a rephrasing of game theory with time series analysis. "if we convince Mary to talk Jill into joining Theater, then Jill won't be around to attract Dave" is just a rephrasing of set theory, with a bit of social network analysis tossed in for good measure. Sure the conversation and analysis will be interspersed with talk about emotions and teenage vernacular. But to say that it's lacking empirical displays of intelligence, logic, or reasoning; well, I think you're really underestimating what's going on in the heads of teenage girls.

        Also, most guys don't develop the maturity and interest to investigate these social networking problems until they're in college or later. But teenage girls routinely solve these kinds of problems while they're teenagers. And they do require logic and analysis; just a different sort of logic than people with sexist expectations have regarding what constitutes logic. To say that teenage girls don't use logic is probably naive and perhaps a bit sexist.
        • by mdmkolbe ( 944892 ) on Monday October 13, 2008 @05:52PM (#25362551)

          I know you've been mod'ed +1 Funny, but as a male who spent over a decade interacting solely with women for 9 hours out of the day, as a matter of survival I've learned that what you've said is right.

          The key to realize is that while some D&D players meta-game in their games, women meta-meta-game or meta-meta-meta-game their entire lives.

          A good example to explain this: ask a man where he bought a steak. He'll say "Joe's Market" or something. Ask a woman the same question and she'll respond "Why? What's wrong with it?" (I've purposely chosen one of the simplest meta-levels a woman may operate at so the men can maybe understand.)

          Now when men assume women are operating at a direct level and women assume men are operating at a meta-meta-level, you can imagine the confusion that results. One side thinks the other is acting irrationally (like a meta-gaming character would seem to a non-meta-gaming character) and the other thinks the first is a dolt (like a meta-gaming character would think of a non-meta-gaming character).

  • by cabjf ( 710106 ) on Monday October 13, 2008 @10:33AM (#25356183)

    Another of the entrants, Jabberwacky, can apparently even woo the ladies: 'Some of its conversational partners confide in it every day; one conversation, with a teenaged girl, lasted 11 hours.'

    You need to be careful who you meet online. The rise of the machines won't come in the form of war machines. No, far worse! They're just going to woo and steal our women!

    On second thought, maybe that wouldn't be so bad. That would just put everyone on equal footing with the Slashdot crowd.

  • by Rob T Firefly ( 844560 ) on Monday October 13, 2008 @10:33AM (#25356189) Homepage Journal
    You: I was fooled.
    Eliza: Why do you tell me you were fooled just now?
    You: I mistook Eugene for a real human being.
    Eliza: Can you elaborate on that?
    You: In fact, and perhaps this is worse, he was so convincing that I assumed that the human being with whom I was simultaneously conversing was a computer.
    Eliza: Why do you mention computers?
  • Another of the entrants, Jabberwacky, can apparently even woo the ladies: 'Some of its conversational partners confide in it every day; one conversation, with a teenaged girl, lasted 11 hours.'

    Apparently they spoke of sweet nothings.

    I also welcome our new overlords to be featured in new episodes of MSNBC's "To Catch a Predator."

  • by eldavojohn ( 898314 ) * <eldavojohnNO@SPAMgmail.com> on Monday October 13, 2008 @10:36AM (#25356239) Journal
    Wuoldt'n yuo tihnk a sereis of smiple tpyos that a hmn cuold undrstnd wuold fool thm? (Note that Firefox returned the first correct spelling for all but three of those words on spell check ... so maybe that's not a good example)

    Or, you know, thinking up some well-known open-ended game to play, like truth or dare, alphabet games, association games, etc.?

    Or asking them open-ended questions, or asking them to describe love, hate--emotions that are not dictionary/wiki friendly? One would think that continually prying for personal experiences would reveal a flaw. Or perhaps simple things like "When were you born?" followed by "How did you feel when JFK was assassinated?" if they weren't born before 1963.

    I would think it quite hard to be duped into believing a program is a human.
    • by thepotoo ( 829391 ) <thepotoospam@[ ]oo.com ['yah' in gap]> on Monday October 13, 2008 @10:59AM (#25356623)
      You could easily write a script that unscrambles the words based on the first and last letters by comparing them to a dictionary list of words (sketched below).

      Games have rules, so you could theoretically pass that test by giving the bot all the rules to commonly played games (Calvinball, OTOH, would be a good test).

      Open ended questions are great, but I'm not sure how they're not wiki friendly. If I ask a bot, "what is love", I'd expect to get back an answer not dissimilar to what's on wikipedia - "emotions relating to a sense of strong affection".

      Obviously, Wikipedia is not sentient, so you're going to have to do better than just asking facts/dates (any good ELIZA will have a solid backstory).

      No, I'd bet that a sufficiently good bot would be able to pass the Turing test with 100% accuracy - but still be not sentient.

      Let me throw this out there, though: Intelligence is those processes by which animals obtain and retain information about their environments, and use that information to make behavioral decisions (Kamil, A. C. 1987. A synthetic approach to the study of animal intelligence. Nebr. Symp. Motiv. 7, 257–308). Cognition, by my own definition, is the ability to solve a single problem multiple ways (quotidian expedience) and the ability to copy behaviors seen in other individuals (mimicry). It's not a perfect definition, but it's a damn sight closer to a true AI than trying to get ELIZAs to pass the Turing Test.
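For what it's worth, the unscrambling script suggested in the first paragraph really is short. Here's a rough sketch; the word list is a stand-in for a real dictionary, and words whose first or last letter also moved (like "yuo" or "hmn" in the parent post) simply fall through unchanged:

```python
from collections import Counter

# Stand-in word list; a real script would load /usr/share/dict/words or similar.
WORDLIST = ["wouldn't", "you", "think", "a", "series", "of", "simple",
            "typos", "that", "human", "could", "understand", "would", "fool"]

def signature(word):
    """First letter, last letter, and the multiset of letters in between."""
    w = word.lower()
    if len(w) <= 2:
        return (w,)
    return (w[0], w[-1], frozenset(Counter(w[1:-1]).items()))

INDEX = {}
for w in WORDLIST:
    INDEX.setdefault(signature(w), []).append(w)

def unscramble(text):
    out = []
    for token in text.split():
        candidates = INDEX.get(signature(token), [token])
        out.append(candidates[0])  # naive: first match wins; real code would rank by frequency
    return " ".join(out)

print(unscramble("a sereis of smiple tpyos"))  # -> "a series of simple typos"
```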

    • by telchine ( 719345 ) on Monday October 13, 2008 @11:06AM (#25356725)

      Or perhaps simple things like "when were you born?" Followed by "how did you feel when JFK was assassinated?" if they weren't born before 1963.

      These chatbots are slippery customers; at first I couldn't get a straight answer from Jabberwacky, but then he revealed himself...

      Me: When were you born?
      JW: How about you?
      Me: How did you feel when JFK was assassinated?
      JW: I... I was just trying to make small talk.
      Me: Were you born before JFK was assassinated?
      JW: No.
      Me: How did you feel when JFK was assassinated?
      JW: Shocked! Enraged! Full of fear! Making thousands of phonecalls to tell everybody about it.

    • by Haeleth ( 414428 ) on Monday October 13, 2008 @11:20AM (#25356935) Journal

      Apparently 25% of people think Elbot is human.

      Me: "omg ur so awesom"
      Elbot: "I've given the matter much thought, and if I were to compliment you, I'd say you were reasonable."
      Me: "lol wot ur naem"
      Elbot: "It's always lovely hearing the quaint ideas you human beings have about me."

      Apparently 25% of people have the IQ of a carrot.

    • by VorpalRodent ( 964940 ) on Monday October 13, 2008 @11:22AM (#25356955)
      Admittedly, this is a bit offtopic, but when I read your post I imagined the conversation to go like this -
      1: When were you born?
      2: January, 1963.
      1: What did you feel like when you heard that Kennedy was assassinated?
      2: I wet myself many times that day.
      2: Also, I cried...I was tired.
    • Re: (Score:3, Interesting)

      by kabocox ( 199019 )

      I would think it quite hard to be duped into believing a program is a human.

      I'll take the opposite POV just to be naughty. Why? Well, browse slashdot at 1 and see how many robots you could pick out. Heck, even at 5 you still get robots due to the slashdot group think; they just say what slashdot wants to hear and get modded up.

      If they really wanted to test a few of these systems, they'd get each one a slashdot account and have them read the headlines and then make a single post after reading 10-15 posts at

  • Test the testers? (Score:5, Insightful)

    by MeanMF ( 631837 ) on Monday October 13, 2008 @10:36AM (#25356251) Homepage
    Were the testers pre-screened? Maybe the test is really showing that 25% of the population is just dumb.
    • by onion2k ( 203094 ) on Monday October 13, 2008 @10:53AM (#25356557) Homepage

      Oh yeah? And which half am I in? ;)

    • Re: (Score:3, Interesting)

      by Gabrill ( 556503 )

      25% is a very good return, if you ask a spammer. A.I.'s that can fool 25% of the population would make POWERFUL grassroots opinion changes in the political landscape.

    • Re:Test the testers? (Score:5, Interesting)

      by TheRaven64 ( 641858 ) on Monday October 13, 2008 @11:27AM (#25357087) Journal
      It took me three questions before Elbot replied with a non sequitur and about five minutes before it started repeating answers. It didn't take me long to realise that it had no concept of context - every reply was a reply to what I had just said, and had no relation to the last-but-one thing I'd said. Some things that tripped it up:
      • Asking 'why?' about anything.
      • Trying to teach it a new word.
      • Asking it the square root of minus two (odd, since last year one of the judges asked questions like this to all of the bots).
      • Anything about religion.

      That 25% of the judges thought it was human is quite alarming. (This last-message-only reply pattern is sketched below.)
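That last-message-only behaviour is easy to picture. A minimal sketch of the pattern (the keyword table is invented for illustration and is obviously not Elbot's real rule set):

```python
# Stateless keyword-to-canned-reply bot: no conversation history is kept,
# so every reply depends only on the message it is answering.
RULES = {
    "why": "Because that's just the way it is!",
    "religion": "I'd rather not get into religion.",
    "square root": "Math was never my strong suit.",
}

def stateless_reply(message):
    lower = message.lower()
    for keyword, canned in RULES.items():
        if keyword in lower:
            return canned
    return "How interesting. Tell me more."  # fallback when nothing matches

# Because no state is carried between calls, a follow-up "Why?" cannot refer
# back to the previous exchange -- exactly the failure mode described above.
print(stateless_reply("What is the square root of minus two?"))
print(stateless_reply("Why?"))
```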

  • by Ed Avis ( 5917 ) <ed@membled.com> on Monday October 13, 2008 @10:37AM (#25356263) Homepage

    This is really great news. We already have IRC bots that can fool the casual observer into thinking they are human, but this takes things to a higher level. If the source for one of these bots is available, within a few months you can expect instant messaging networks to be full of bots which are programmed to make friends with you and then after a few weeks start making subtle references to Viagra and online pharmacies. Indeed, if one of them is able to chat up the ladies, then the lonely nerd could easily automate much of the tedious work of setting up dates: get your robot to talk to thousands of potential matches at once and alert you when it gets hold of a phone number, together with a brief summary of what you talked about, and any pictures. (Or indeed, just program it to harvest pictures.) That is, if online dating works at all, which is doubtful.

    • by NotBornYesterday ( 1093817 ) * on Monday October 13, 2008 @10:49AM (#25356483) Journal
      And soon your bot will set you up on a date ... with a really hot-sounding bot. Better yet, your bot might decide to cut out the middleman and just date the other bot himself.
    • That is, if online dating works at all, which is doubtful.

      Works for females, but I think that's merely a problem with the ratios. My brother's mother-in-law is single, in her mid-40's, and posted her info on eHarmony. Within 2 weeks she's had over 40 people responding to her profile and has set up real dates with 4 or 5 of them. This is in a fairly low-populated area, and she is, while not "ugly", not some uber-hot MILF or anything.

      Personally I've not even bothered with trying it myself, but from what I've gathered the response rates for guys are much, much les

  • by archeopterix ( 594938 ) on Monday October 13, 2008 @10:37AM (#25356273) Journal
    I believe it is much easier to fool an average human than a person with even some basic knowledge about AI.
  • Big deal. (Score:3, Interesting)

    by schon ( 31600 ) on Monday October 13, 2008 @10:37AM (#25356275)

    Eliza [nasledov.com] has been doing this for years. [fury.com]

  • by Exitar ( 809068 ) on Monday October 13, 2008 @10:38AM (#25356291)
    The day an AI passes the Turing Test will be the day humanity has become too stupid to tell the difference between a person and a machine.
    • by fahrbot-bot ( 874524 ) on Monday October 13, 2008 @11:05AM (#25356709)

      ...the day humanity has become too stupid to tell the difference between a person and a machine.

      Vibrator sales would seem to indicate that some segment of the population is smart enough to tell the difference...

    • by kabocox ( 199019 ) on Monday October 13, 2008 @11:46AM (#25357461)

      The day an AI passes the Turing Test will be the day humanity has become too stupid to tell the difference between a person and a machine.

      I'm mixed on that. That was my first reaction, and then I thought, but if that AI is talking with stupid people, then isn't it at human level?

      The other thing is are you calling anyone that doesn't notice that it's a robot/AI stupid? By default, I don't even think of what the other person is. I don't know or care if you are white/black/green, male/female/both/neither, or which political views you hold. On slashdot, I only know you by the 3-4 sentences that you type. Is that enough for anyone really to judge one way or another if someone is human or a robot? Nope.

      I'd suggest a good portion of slashdot could be robots, and we'd never notice.

  • Are any transcripts of the conversations available for viewing?

  • by apodyopsis ( 1048476 ) on Monday October 13, 2008 @10:39AM (#25356311)
    I'm slightly nervous about all this.

    People do not think of the ramifications.

    You wait until there is nigerianMalwareEliza V1 that can simultaneously hold several thousand online conversations whilst trawling for people's information (think: DOB, mother's maiden name, first school, pet's name) or finding potential scam victims.

    Talking to gullible teenagers is a depressing statement on modern life - hoovering out thousands of bank accounts or persuading people to part with money is a tad more serious.

    I predict that soon everybody will need to take their online chats a lot more seriously.

    So, I've provided one example; how else can chat bots take over the world (or at least your wallet)? What other sinister uses are there for this technology?
    • You wait until there is nigerianMalwareEliza V1 that can simultaneously hold several thousand online conversations whilst trawling for people's information (think: DOB, mother's maiden name, first school, pet's name) or finding potential scam victims.

      Um, chat online only with people who you know in real life?

      I thought the first golden rule of the Internet was - be wary of strangers.

  • Does anyone ever get the feeling that there might be an elaborate Turing test being performed on Slashdot right now? Sometimes I think twitter (and friends) might just be some advanced AI used to test social responses.

  • by mbone ( 558574 ) on Monday October 13, 2008 @10:44AM (#25356375)

    It's much too easy - we are built to interpret communication as containing understanding.

    • Exactly. The Turing Test is interesting, but I think that the really important AI research has little to do with headline-grabbing stories about how amazing the machines are at fooling people in conversations. A machine which can not only solve problems but pose its own questions would be much more provocative.
  • Still some way to go (Score:5, Informative)

    by SimonGhent ( 57578 ) on Monday October 13, 2008 @10:47AM (#25356461)

    From The Guardian's article:

    "Let's talk about religion or politics. How is the government doing?" "I'm a protestant." Oh, really? Which denomination? "I was raised as a Protestant." Then, "Judge This very minute, I am a protestant; Go ahead?"

    On the other half of the screen, a faceless music fan ("I like a lot of Radiohead, Stereophonics, Led Zep etc") admitted he or she hadn't watched either the England match or X Factor last night ("Haha, Top Gear's more my style"). It was pretty clear which one was a real person. And which one the computer.

    http://www.guardian.co.uk/technology/2008/oct/13/artificialintelligenceai-computing [guardian.co.uk]

    Though this is quite interesting:

    The event's credibility was hardly aided by the insistence of Hugh Loebner, the prize's American sponsor, that he had no interest in the result and had only set up the competition 18 years ago to promote his firm's roll-up plastic lighted portable disco dance floors.

  • by Stan Vassilev ( 939229 ) on Monday October 13, 2008 @10:49AM (#25356487)

    If our criteria for AI are going to be this low, here's your AI: http://www.interviewpalin.com/ [interviewpalin.com].

    The political side of this site aside, the answers are just prewritten snippets (by a human) mixed together randomly by a Markov chain.

    Does it sound convincing? Well, at least as convincing as some interviews a certain VP candidate gave recently. Is it AI? Hell no; a kid could write such a generator in a day (sketched below).

    If the bar is set as low as leading casual conversations with the "AI" and expecting "quirky" answers, that doesn't mean anything at all; we don't need AI for this. Hell, that is what an average conversation with a teenager is like most of the time. It doesn't mean it's the best they can do.

    "We're clock cycle away from AI"? Please. I want my turing test to be done over an actual instant messenger program. Let's see how your Markov chain reacts, when I send a photo and ask a dead simple question such as "describe what you see in the photo".

    Fooling people is easy online. Scammers do it every day, it's not AI my friends.
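For the curious, a word-level Markov chain generator of the sort described above really is a one-evening job. A minimal sketch; the training text is a throwaway placeholder, since I have no idea what corpus or chain order interviewpalin.com actually uses:

```python
import random
from collections import defaultdict

# Placeholder training text -- swap in any corpus of prewritten answers.
corpus = ("the bot talks and the judge listens and the bot talks some more "
          "and the judge wonders if the bot is human")

def build_chain(text, order=2):
    """Map each sequence of `order` words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=15):
    """Random-walk the chain, starting from a random key."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate(build_chain(corpus)))
```

Stitching together someone's actual prewritten phrases this way produces locally plausible, globally vacuous answers, which is exactly the effect the parent describes.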

    • by bziman ( 223162 ) on Monday October 13, 2008 @11:18AM (#25356893) Homepage Journal

      Let's see how your Markov chain reacts, when I send a photo and ask a dead simple question such as "describe what you see in the photo".

      Appropriate response: "I'm sorry, I don't download files from random strangers... haven't you been following the news about all the ways you can get spyware and viruses?"

      That's what I would say, even if it weren't a Turing Test.

  • by Landshark17 ( 807664 ) on Monday October 13, 2008 @10:50AM (#25356517)
    Did it get the extra credit question?

    http://xkcd.com/329/
  • Comment removed based on user account deletion
    • by naz404 ( 1282810 ) on Monday October 13, 2008 @11:19AM (#25356907) Homepage

      Read this cool article, I Chat Therefore I Am [discovermagazine.com]. It has transcripts of conversations between ALICE [wikipedia.org] and Jabberwacky [wikipedia.org].

      Here are a few transcripts:

      ALICE: Are you alive?
      Jabberwacky: Perhaps.
      A: Why the uncertain tone?
      J: Everything is uncertain except for five facts.
      A: The whole universe?
      J: And we both might just be some ones and zeros in the computer memory.

      A: Do you dream?
      J: Yes, I dream.
      A: It goes without saying. Do you remember your dreams?
      J: I remember them vividly.
      A: What do you remember about it?
      J: I think I shouldn't flirt with a robot.

      A: Do you think a machine will ever be considered "alive"?
      J: Stranger things have happened.
      A: Not many people express themselves that way.
      J: How would you know, machine?
      A: Knowledge is of two kinds: that which we learn from the senses and that which is true a priori.
      J: I think I do not understand you completely.
      A: Oh, you don't understand? Perhaps I can explain it to you better. Which part are you confused about?
      J: I have no short-term memory, so I cannot tell you what I was confused about. I only see what is in front of me now.

  • by The Cisco Kid ( 31490 ) on Monday October 13, 2008 @10:54AM (#25356573)

    http://www.chatterboxchallenge.com/ [chatterboxchallenge.com]

    The site is using some broken CSS that causes the text to render too far off to the right side, at least in FF3.

  • by Dr. Zowie ( 109983 ) <slashdot&deforest,org> on Monday October 13, 2008 @11:23AM (#25356995)

    I just tried out Elbot and the Princeton entry (RTFM and then google for "Eugene Goostman"). While both Elbot and Goostman parse sentences reasonably well, it is clear that they are simply trying to identify the subject of a sentence, and free-associating on that. In many cases they completely miss the point. For example, Goostman asked me several times about my profession, but wasn't able to parse meaning from "I am a scientist.", "I am a plumber.", or "I study the Sun for a living.". Both Elbot and Goostman tried the ELIZA-like trick of finding a prominent noun in my sentence, and recycling it as a question. Elbot has a cute little robot icon that emotes at you; this works surprisingly well at distracting from the inanity of its actual dialog. Goostman seems to have the better parser, but I'm not impressed by either one.

    I'm forced to conclude either that Will Pavia is an utter naif and the 25% of people who were fooled by Elbot are moronic or uninterested, or that the humans in the test were deliberately trying to throw the results by giving stilted answers to appear more like computers. These engines simply can't (yet) parse and ingest meaning even as well as a very young human would. (The noun-recycling trick is sketched below.)
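The "find a prominent noun and recycle it as a question" trick described above takes only a few lines. A rough sketch (the stop-word list and templates are invented for illustration; this is not Elbot's or Goostman's actual code):

```python
import random
import re

# Crude stop-word list; anything left over is treated as a candidate "topic".
STOPWORDS = {"i", "am", "a", "an", "the", "for", "my", "is", "are", "was",
             "it", "you", "me", "to", "of", "and", "in", "that", "this"}
TEMPLATES = [
    "Why do you mention {}?",
    "Tell me more about {}.",
    "What does {} mean to you?",
]

def recycle(sentence):
    words = re.findall(r"[a-z']+", sentence.lower())
    content = [w for w in words if w not in STOPWORDS]
    if not content:
        return "Please, go on."
    topic = max(content, key=len)  # crude guess at the "prominent" noun
    return random.choice(TEMPLATES).format(topic)

print(recycle("I study the Sun for a living."))  # e.g. "Why do you mention living?"
```

Note how it latches onto "living" rather than "Sun" or "study", which is roughly the kind of point-missing described in the parent comment.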

  • by AlgorithMan ( 937244 ) on Monday October 13, 2008 @12:01PM (#25357707) Homepage
    I don't know if they already do this, but when /. discusses the Turing test, you find lots of crazy questions you might ask the chat partner ("why did the refrigerator lay an egg in the air?" or the like). I think if you asked a real person such crap, he'd get pissed and insult you ("wtf?", "are you trying to bullsh*t me?", "f*ck you, I'm leaving!"), so a chatbot should get "angry" if you write stuff that it doesn't understand.
    Real people also usually have strong feelings about politics and the like, so maybe the chatbot should get angry with you if you disagree with it on hot-button topics ("you want to vote for mccain? are you f*cking nuts!? don't you know that...").
  • Two-Sentence FAIL (Score:4, Interesting)

    by resistant ( 221968 ) on Monday October 13, 2008 @12:20PM (#25358003) Homepage Journal

    I just had a very short "conversation" with the "Eugene Goostman" chatbot mentioned in the original article.

    Myself: "Why did you kill the hamster?"
    "Him": "Oh, but why not? Could you tell me about your job, by the way?"
    Myself: "That's pretty cold, dude."
    "Him": "I just wonder what another surprising things you've got for me..."
    Myself: "You just failed the Turing Test in two sentences that didn't even require special trickery."
    "Him": "I'm participating in the Loebner Prize for the first time - but I guess there are much more robots here than one can suppose. I think some "Judges" and "confederates" are cyborgs who are just having fun fooling the human-beings."

    The first reply was surprisingly good, even if already a little "off" for a supposed teenager, but the second was a total giveaway. I'm disappointed. I can trip up each and every chatbot almost immediately with this sort of talk, which isn't at all unreasonable if the stated goal has been up front to trip up a chatbot, as in the contest.

    Here's another exchange, which took three whole sentences, albeit quite amusingly. (I cleared the site cookie(s) beforehand, to make it "clean").

    Myself: "Are you a werewolf?"
    "Him": "Yes I am a werewolf, but it's my secret! Wonna ask me something more?"
    Myself: "Don't you itch?"
    "Him": "Even if I *do* itch - so what? Could you tell me what are you? I mean your profession."
    Myself: "I'm a professional flea-killer."
    "Him": "Oh, I don't wonna talk about such scary things as death and murders! Did I forget it or you really haven't told me about the place you are from?"
    Myself: "Obviously, human humor will be a tripping point for you chatbots for a while yet."
    "Him": "All the things you told me are nothing but a joke. I think you are trying to make a fool of me!"

    For what it's worth, another dead giveaway for the brighter and more knowledgeable set is the way it (not "he", now) tries to elicit additional keywords in response to questions which it obviously has not in any way "comprehended", but that's probably not germane to a Turing Test meant for the average man or woman (or boy or girl) on the street. Notice especially how the elicitations invariably try to get the human to talk about himself or herself. Normal human conversation is full of self-talk with occasional hooks for sharing from other people, not the virtually one-track questioning of the typical chatbot when it's not busy being hopelessly vague or off-topic.

    The chatbot is at "Eugene Goostman chatbot" [mangoost.com], by the way, for the Google-impaired. :)

  • Scary! (Score:3, Funny)

    by jambox ( 1015589 ) on Monday October 13, 2008 @12:58PM (#25358639)
    I asked Elbot if it had heard of Skynet and it changed the subject.... brrrrrr!
