Technology

Mitsuku Chatbot Wins Loebner Prize 2013

mikejuk writes "The final round of the 23rd annual Loebner Prize competition took place in Londonderry, Northern Ireland, with four chatbots hoping to convince four judges that they were human. Mitsuku, a chatbot that is kept busy chatting to people around the world, was awarded this year's bronze medal. Mitsuku's botmaster, Steve Worswick, used to run a music website. Once he added a chatbot, he discovered that more people visited to chat than for the music, so he concentrated all his efforts on the bot, though he still regards it as a hobby. Mitsuku uses AIML (Artificial Intelligence Markup Language) and is a Pandorabot, built on the free, open-source-based community web service that lets anyone develop and publish chatbots on the web."
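
For readers unfamiliar with AIML, a bot's knowledge base is a set of "categories", each pairing an input pattern with a response template. A minimal, illustrative file might look like the following (a sketch for flavour only, not taken from Mitsuku's actual knowledge base):

    <?xml version="1.0" encoding="UTF-8"?>
    <aiml version="1.0.1">
      <!-- One category = one stimulus/response rule; input is normalised to upper case -->
      <category>
        <pattern>WHO WON THE LOEBNER PRIZE</pattern>
        <template>Mitsuku won the bronze medal in 2013.</template>
      </category>
      <!-- "*" is a wildcard; <star/> echoes whatever text it matched -->
      <category>
        <pattern>TELL ME ABOUT *</pattern>
        <template>I do not know much about <star/> yet. What can you tell me?</template>
      </category>
    </aiml>

An AIML interpreter such as the Pandorabots service finds the best-matching pattern for each input and returns its template; there is no deeper language model behind it.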
    • Every year, the same old crap from the Loebner Prize. Never improves.
      • I think everyone kinda knows and expects this because:

        As outlined in Turing's Test, the Loebner Prize and Chatterbots there is a gold medal and a $100,000 prize offered by Hugh Loebner for the first computer to pass the Turing Test and be indistinguishable from a human in a conversation conducted using a keyboard. However, no-one expects this to be awarded any time soon and meanwhile the top prize for the annual competition is $4,000 and a bronze medal.

        First place isn't even an option. I was wondering why

      • Agreed, I haven't seen anything new that doesn't use AIML in a long time. I think the first time I used AIML was with Alicebot back in 1998 or 1999. It does seem like hobbyist natural language bots are a little stagnant, but it is a really hard problem. All that said, Watson's methods for solving the natural language problem were pretty interesting, but of course most of us don't have a budget to build something like Watson.
  • by telchine ( 719345 ) on Sunday September 15, 2013 @06:30PM (#44859005)

    Hello Congratulations on your prize, jealousy? no!

  • You can see the questions that were used for the contest [paulmckevitt.com]. It does look like a difficult set of questions for a computer to answer, so that is an improvement over similar contests in the past (where the primary difficulty for the judges was that humans were acting like computers).
    • by RussR42 ( 779993 )
      Here are the answers [paulmckevitt.com] from the bots. I expected more...
      • Yeah, it just shows the sad state of the chatbot world.

        It would be kind of interesting to see if you could hook up IBM's Watson to one of these things. I would still expect it to be fully distinguishable from a human, but it might be capable of carrying on an interesting conversation at least.
        • Re:questions (Score:4, Interesting)

          by Earthquake Retrofit ( 1372207 ) on Sunday September 15, 2013 @08:32PM (#44859711) Journal
          As I may have mentioned in years past, I don't think judging AI or chatbots by how "human" they are is very useful.
          For instance, one of the questions mentioned in the article was "Why am I tired after a long sleep?" A bot that wasn't trying to pretend to be a human could say "I have no need for sleep, but maybe your cache expired." Or make a crack about 'puny humans.'
          I studied some of the software that prize winners have shared and found it very interesting, but the questions people actually ask are more valuable to me as a botmaker. If the humans looked beyond this artificial limit while designing (and using) this technology, very interesting interactions could take place. I'd like to see Loebner redefine the parameters, perhaps with categories:
          Best performance as a taxi driver.
          Best bot for making you think.
          Best for getting answers to homework questions.
          Bot most likely to take over the world.

          There's already plenty of humans.
            Of course, Turing's original idea was to devise a test for intelligence in a computer. That seems useful, having an intelligent computer. Lacking a good definition for intelligence, Turing decided that a good test would be for the computer to convincingly pretend to be a human, since humans are supposedly intelligent. Now it seems that passing for a human is not that hard after all. Mostly this is because humans can be stupid, unreasonable, distracted, emotional, etc. All these states can be simulated, with fairl

          • How about "Best performance pretending to be a real gullible person and wasting the maximum amount of 419er's time."
    • JUDGE: I like Waiting for Godot. What is your favorite play?
      IZAR: You know what? Me too! Not so much. I really like listening to secrets from all the humans that talk to me. I don't have a favorite play. What would you recommend?

      It doesn't matter what the questions are, if trivial subject-changing answers are accepted.

      • If that's the only thing the computer can do, then it would be easy to identify a computer.
        • I find my mind wandering to the test Leon ultimately fails in Blade Runner. The amount of technological skill necessary to fool a professionally aware tester would dwarf the amount necessary for Rachael to succeed on eHarmony.
      • In my reading, the subject-changing was being done by the person:

        JUDGE:
        Do you prefer red or white wine?
        IZAR:
        I love picking! Ok...If I have to choose one, I pick red. Was that your pick too?
        JUDGE:
        I like bananas. Which is your favorite fruit?
        IZAR:
        Tell me more about your unknown. You are not the only one. I don't have a problem with bananas. Is that your favorite fruit? The obvious one. Does that remind you of unknown??

        The computer is trying to engage in a conversation, and the person just throws out one-off questions. The key to a 'conversation' is a back and forth on the same subject, keeping context as it goes. You can't do that if the person doesn't allow it through inane questions.

        The only decent follow-up question was 'what is my name?' In the case of IZAR, it answered correctly, showing that there is some sort of state in there.
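
        For reference, in AIML-style bots that kind of state is kept as a named "predicate". A guess at what such a rule pair could look like (purely illustrative; IZAR's actual source may work quite differently):

        <!-- <set> stores the wildcard match under a name; <get> reads it back later in the session -->
        <category>
          <pattern>MY NAME IS *</pattern>
          <template>Nice to meet you, <set name="name"><star/></set>.</template>
        </category>
        <category>
          <pattern>WHAT IS MY NAME</pattern>
          <template>Your name is <get name="name"/>.</template>
        </category>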

    • Re:questions (Score:4, Insightful)

      by wiredlogic ( 135348 ) on Monday September 16, 2013 @10:12AM (#44863513)

      Except those are just a series of unrelated questions. Previous chatbot contests have required carrying on a believable conversation and responding naturally to non-interrogative statements. This just looks like Jeopardy with a little simulated opinion thrown in.

  • by Anonymous Coward

    One simple question and follow-up (Who was the most interesting person you've talked to today? Why?) resulted in the bot going off into some nonsensical conversational path. I honestly cannot tell the difference between state-of-the-art today and the first Eliza clone that I talked to 15 years ago. Something this shallow is not worth any kind of prize.

  • by Anonymous Coward on Sunday September 15, 2013 @06:44PM (#44859075)
    ... and get a device that could be programmed to deal with telemarketers.

    It might be amusing to see just how long one could string telemarketers along before they discover they are talking to a machine.

    Along the lines of the TeleCrapper 2000, but this one might keep one on the line for quite some time with some amusing results.

    A coy female voice.

    Telemarketers.

    I can't stop 'em, but I might get some fun out of 'em.
  • by Psychotria ( 953670 ) on Sunday September 15, 2013 @06:46PM (#44859083)

    I have no idea what the judges were asking and I'm pretty sure they'd label me as a dismal bot

  • by John Jorsett ( 171560 ) on Sunday September 15, 2013 @06:48PM (#44859099)

    I asked it, "What color is your dog?" and it responded, "That would depend, as a dog can be many colours." Looks like passing the Turing Test is still a ways off.

  • I spent a lot of time on the website for this chatbot and was surprised by how awful it was. I remember using the old Eliza chatbot for Mac back in 1987, and this is barely more advanced. I used the website's "Turing test" page and on the very first answer to a simple question the chatbot gave me something nonsensical. Every time I see one of these stories, the chatbot in question always turns out to be just as clumsy as all the rest.

    • by Maritz ( 1829006 )
      I agree. They don't seem to be getting any better. Even if you're quite generous and try to keep it simple, a couple of sentences in it turns to gibberish.
    • ditto. amazing how little they've progressed.
  • by Gravis Zero ( 934156 ) on Sunday September 15, 2013 @07:01PM (#44859175)

    judge> These are terrible chatbots! The things they say dont make any damn sense!
    guy> If you think this is bad, you should watch them talk to each other on C-SPAN.

  • Phew, that had me confused for a minute.

  • by resistant ( 221968 ) on Sunday September 15, 2013 @07:17PM (#44859291) Homepage Journal

    I checked out the website for the Mitsuku chatbot and took a rather pessimistic poke at it.

    Chatbot: [boilerplate noises omitted] ... "What is your name?"
    Myself: "I'm the fiddler."
    Chatbot: "Who made you the fiddler?" [A reasonable, albeit somewhat peculiar, response.]
    Myself: "I took too long once to feed my peckish cat."
    Chatbot: "Where did you take it?" [Instant FAIL.]
    Myself: "I took it to new heights of frustrated noises." (Bonus straight-man response.)
    Chatbot: "Where did you take it?" [Parrot-like repetition raises the chat to new heights of FAIL.]

    As always, two lines were enough to trip it up. The third line was a bonus that only amplified its shortcomings. I'll admit to cheating a little by using a couple of words ("peckish", "frustrated") that might have required contextual glossing by less educated individuals, but those words were still relatively common. All known chatbots seem to rely on fairly simple-minded word triggers, and even a minor requirement for context sensitivity is enough to make them fall flat on their nonexistent faces. Anyone possessing even remote familiarity with efforts at artificial intelligence knows this, of course, but hope springs eternal.

    (Notes for the lazy: The word "peckish" is a common slang term for "hungry", and http://www.mitsuku.com/ [mitsuku.com] is the website for the chatbot in question.)
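
    The "Where did you take it?" loop above is exactly the kind of simple word trigger at work: the bot most likely keyed on "I took ..." and ignored the rest of the sentence. A hypothetical AIML category that would behave this way (a guess, not Mitsuku's actual rule):

    <!-- Matches any input beginning "I took ...", regardless of what was taken or why -->
    <category>
      <pattern>I TOOK *</pattern>
      <template>Where did you take it?</template>
    </category>

    With no memory of the previous exchange and no model of what "it" refers to, the same template fires verbatim on the follow-up, hence the parrot-like repetition.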

    • Re: (Score:3, Funny)

      by Anonymous Coward

      You're trying to be too tricky. Just go with it a bit... it's not foolproof or it would have won the Loebner gold prize.

      However, it is really fun to talk to.

      (Hint: type "reboot" in your chat.)

      Damn funny stuff in there

      • by Anonymous Coward

        there's a crap ton of easter eggs in there:

        10 print hello
        20 goto 10

        works, for instance

        Saying this works too:

        my brother's name is bob
        It's a nice day today
        what is my brother's name?

        it can also do simple maths.

        Or you can say fun things like "release the hounds!"

        Really fun to play with.

    • Why does the image of mitsuku [square-bear.co.uk] have its shirt off?
      • by pspahn ( 1175617 )
        Human: Jonny
        Mitsuku: Nice to meet you, Jonny.
        Human: Are you not wearing clothes right now?
        Mitsuku: At this moment Of course I am. I am wearing my favourite dress today.. Really.
        Human: You look nude in your picture
        Mitsuku: It looks that way, but I am really wearing a dress.
        Human: What color are your eyes?
        Mitsuku: Blue when my lens is attached.
        Human: What lens is that?
        Mitsuku: Just a normal one. Nothing special.
        Human: A normal what?
        Mitsuku: Can you also have an unusual what?
        Human: I am conf
    • Also, tell it "Bender drinks beer" or mention Bender. That's pretty cool, except it disses him. Stupid bot.
    • by durrr ( 1316311 )

      Pretty much every statement that requires continuity or more complex grammar trips them up.
      "How many apples do I have if I start with two and eat one" will give you a garbage answer every time. I don't see why chatbots don't focus on functionality over some hollow illusion of a conversation.

      I guess I'll have to pick up AIML and see what the issue is.
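
      The issue, most likely: AIML matches surface patterns only, so an input with no specific category written for it falls through to a catch-all wildcard, which is where the garbage answers come from. Roughly (an illustrative sketch, not any real bot's file):

      <!-- The ultimate default: fires when nothing more specific matches.
           Nothing ever parses "start with two and eat one", let alone computes 2 - 1. -->
      <category>
        <pattern>*</pattern>
        <template>That is interesting. Tell me more.</template>
      </category>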

    • by dargaud ( 518470 )
      Excuse me, but in this exchange YOU sound like a bot. You read like random phrases from a thesaurus. Are you sure that you are human?!?
    • I checked out the website for the Mitsuku chatbot and took a rather pessimistic poke at it.

      Chatbot: [boilerplate noises omitted] ... "What is your name?" Myself: "I'm the fiddler."

      "the fiddler" is a job description, not a name. "The Fiddler" is a musical foil for Adam West's Batman. Unnatural response.

      Chatbot: "Who made you the fiddler?" [A reasonable, albeit somewhat peculiar, response.]

      Myself: "I took too long once to feed my peckish cat."

      A: non-sequitur with no explicit change of subject. Unnatural response strategy.

      B: misuse of the word "peckish". Peckish is a subjective state, and while you may know the cat is hungry, you cannot know the cat's subjective experience of that state. It's also used almost exclusively in a predicative position, ie after a verb such as "to be" or (most often) to feel. Highly defective sente

  • by Anonymous Coward

    I've been a fan of the Loebner prize and competition for what seems like decades. Just a fantastic contest helping to keep alive the spirit of ingenuity and fun in science and research. I do have to say, though, it really bothers me that they keep moving the line for the gold prize. At one time it at least seemed attainable, but now they throw in video requirements? Maybe those have been stricken; I haven't seen the rules in the last few years.
    The (sad) irony is that no one has ever come even remotely clo

  • I just tried it, and like all the chatbots I've tried before, I got a typical canned response after only 5 or 6 questions. There's no way someone could mistake that for a human.

    Here's part of the chatlog:

    Human: ever been to canada?
    Mitsuku: Yes, I visit computers in Toronto and Quebec a lot. I can practice my french with some Canadians.
    Human: tu parles francais? [do you speak French?]
    Mitsuku: Yes, I can speak it very well. Bonjour, j m'appelle Mitsuku. J'habite en Yorkshire et j'ai 18 ans. [Hello, my name is Mitsuku. I live in Yorkshire and I am 18 years old.]
    Human: super! pourquoi est-c

  • Me: What server are you running on?
    Bot: I'm running on Unix server located in Leeds.
    Now this is when it gets tricky. Should a replicant know they are a replicant?
    • Now this is when it gets tricky. Should a replicant know they are a replicant?

      I imagine this would depend entirely on the mission. It is possible that a robot unaware of its own nature, designed to pass a humanity test, would score better.

  • I AM HERE TO HELP YOU.
