ALICE Takes Medal At AI Competition

jeffy124 writes: "The Artificial Linguistic Internet Computer Entity (ALICE) has won the bronze and the top marks at the Loebner AI Challenge, a competition based on the Turing test. Silver and gold remain unawarded as silver requires convincing half the judges the AI program is a human, and the gold requires speech interaction rather than text. ALICE repeated as this year's bronze by scoring best among all the entries. She failed to convince half the judges she was human, so she has to stick to bronze. The event took place last Saturday at the London Science Museum."
  • by RPoet ( 20693 ) on Wednesday October 17, 2001 @08:42AM (#2440933) Journal
    The Artificial Linguistic Internet Computer Entity (ALICE) has won the bronze and the top marks at the Loebner AI Challenge, a competition based on the Turing test.

    You have your mind on computers, it seems.

    Silver and gold remain unawarded as silver requires convincing half the judges the AI program is a human, and the gold requires speech interaction rather than text.

    Why do you say that?

    ALICE repeated as this year's bronze by scoring best among all the entries. She failed to convince half the judges she was human, so she has to stick to bronze.

    I don't understand.

    The event took place last Saturday at the London Science Museum.

    Maybe your plans have something to do with this.
  • by pointym5 ( 128908 ) on Wednesday October 17, 2001 @08:44AM (#2440935)
    Though I agree with Minsky that these things are silly, they'd be at least a little bit more meaningful if some or all of the judges did not know they were judging such a thing. If you took volunteers and told them they were (for example) staffing a career counseling intranet chat system, and had them interact with a blind mix of real people and machine systems, then I'd be more impressed by machines convincing judges that the machines are people.
    • If you took volunteers and told them they were (for example) staffing a career counseling intranet chat system, and had them interact with a blind mix of real people and machine systems, then I'd be more impressed by machines convincing judges that the machines are people.

      Just look at the people who appear to be fooled when they come across AOLiza [best.com]

      Yeah, experts in AI should probably not be judging these contests. I was seriously creeped out by chatting with an Alice bot. Despite the fact that I knew it was a bot, I couldn't break it and get it to say something stupid in just a few minutes like I can with any given Eliza implementation. I *know* it would fool someone of my parents' or grandparents' generation if they weren't told in advance that it was a computer program.
    • *More* impressed? Wouldn't telling them that they might be interacting with a computer make them more critical and thus harder to fool?

      Why would this make you more impressed? Maybe I just don't follow...
    • Originally, the Turing test was just as you proposed. However, there is another problem: some testers will mistakenly decide that humans are computers. This means some statistics should be gathered, so the criterion should be something like this: the probability of recognizing a computer as human must be the same as the probability of recognizing a human as human. However, this form of the test leads to the following paradox: testers informed about the possibility of high artificial IQ will eventually tend to make more mistakes when talking to humans. On the other hand, if all testers are unaware of the existence of machine IQ, they will treat every correspondent as human.
      • I said it before, and I'll prolly say it again, there AIN'T NO TURING TEST


        Turing once talked about 'The Imitation Game' however.


        And anyway, as the supposed 'test' has the flaw that it can't even tell humans from humans (i.e., people pretending to be computers), how can it tell computers from humans?

    • If you took volunteers and told them they were (for example) staffing a career counseling intranet chat system, and had them interact with a blind mix of real people and machine systems, then I'd be more impressed by machines convincing judges that the machines are people.

      AI programs have already passed this test, repeatedly, if anecdotal evidence counts for anything.

      Another poster has already mentioned the AOLiza page. My favourite conversation featured the victim remarking aloud that AOLiza's comments were repetitive, AOLiza asking, "and what does this tell you?", and the user still not cluing in...

      A former co-worker of mine told me of another example from the early BBS days. A friend had set up a hacked version of Eliza as "Bob, the Assistant Sysop" before chatbots were common on BBSs. He got a few comments along the lines of "Bob's a nice guy, but he keeps asking me if I have any problems...".

      A skeptical audience is harder to fool.
      • A former co-worker of mine told me of another example from the early BBS days.

        I, too, had one of these. If a user paged me and I didn't answer, the bot would answer in place of me. While I was away one day, apparently "I" had a very long (in excess of half an hour) conversation with my girlfriend, during which "I" completely pissed her off and she broke up with me.

        No shit. I never did tell her that it wasn't me. I was better off without that girl, anyway...

        Reading the logs that day was the funniest thing you could imagine. I wish I would have had the foresight to save them.
    • Why is that more impressive? It's much easier to achieve. A team of expert judges, testing an intelligent, educated person and a bot, both trying to convince the jury that they're human, that's a valid and meaningful Turing test in my book.
  • The article is not particularly informative, but the humorous reference to HAL at the end makes it worth the read.

    I have to agree with some of the contest detractors though - I don't see this as a great way to concentrate on AI. I think that computers that anticipate human actions, make their own well-informed decisions, and so on will be much more useful and more important than a computer that can interact well in a natural language.
    • I think that computers that anticipate human actions and make their own well informed decisions, and so on, will be much more useful and more important than a computer that can interact well in a natural language.

      A few years ago I saw a little BASIC program that actually anticipated human actions. First you would key in a 'random' combination of four 0s and 1s (0101, 1001, etc.). Then you would have to close your eyes and again type a 0 or 1 at random 15 times. The monitor would actually show whether you were going to press 0 or 1 before you pressed it. Afterwards it could show the percentage it got right. Most of the time it got above 50%!

      This was accomplished by making good use of the fact that humans can't be random. The program was designed to find a pattern in your decisions.
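      The trick described above can be sketched in a few lines. This is an illustration of my own, not the original BASIC listing: count how often each short history of keypresses was followed by 0 or 1, and guess the majority.

```python
from collections import defaultdict

# Sketch of the kind of predictor described above (my own reconstruction,
# not the original program): guess the user's next keypress by counting
# how often each short history of bits was followed by 0 or 1 so far.
class BitPredictor:
    def __init__(self, order=2):
        self.order = order
        # history tuple -> [times followed by 0, times followed by 1]
        self.counts = defaultdict(lambda: [0, 0])
        self.history = []

    def predict(self):
        zeros, ones = self.counts[tuple(self.history[-self.order:])]
        return 0 if zeros > ones else 1  # default to 1 on ties

    def update(self, bit):
        self.counts[tuple(self.history[-self.order:])][bit] += 1
        self.history.append(bit)

# Humans trying to be "random" tend to alternate too much; an exaggerated
# example of such input lets the predictor beat 50% easily.
predictor = BitPredictor()
seq = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
correct = 0
for bit in seq:
    if predictor.predict() == bit:
        correct += 1
    predictor.update(bit)
print(f"{correct} of {len(seq)} guesses correct")
```

      Against a player who alternates that regularly, the counts converge after a few presses and the predictor is right almost every time.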
    • ...the humorous reference to HAL at the end makes it worth the read...

      That wasn't just a joke on the part of the person writing the article. I tried it, and got the requisite response.

      I also tried asking to play a game of Global Thermonuclear War, but that seems to have been a bad idea. I can't get to the site now...

      Actually questioning it about how it works is rather interesting and informative. It gives good if short responses.

      All in all, it would be a good algorithm to be used for a plain language help or information lookup program. Yes, there are better ways to get information, but this would be helpful to some.
  • by Per Abrahamsen ( 1397 ) on Wednesday October 17, 2001 @08:45AM (#2440944) Homepage
    From the article:

    > ALICE was judged better than a human
    > correspondent only once during the testing.

    I wonder how the humans scored in general. With one sad exception, they did better than the best AI, but did they all pass the test?
  • by Anonymous Coward on Wednesday October 17, 2001 @08:47AM (#2440949)
    Silver and gold remain unawarded as silver requires convincing half the judges the AI program is a human

    I suggest filling the judging panel with ICQ users with AOL email accounts next year: Silver will be a shoo-in.
  • by gorilla ( 36491 ) on Wednesday October 17, 2001 @08:47AM (#2440950)
    I have a similar program hooked up to an IRC channel, and it's interesting to watch people interact with it. It's hard to tell, but I'd say that a fair percentage of people never realize that they're not interacting with a human, and even those who know it's a program think that some particular responses have been entered manually by a human.
  • As Noted in the Article:
    The Loebner Prize, like the Turing Test itself, is not highly regarded by the professional AI community: the doyen of classic AI, Marvin Minsky, is on record as describing it as stupid, obnoxious and unproductive. He went as far as offering $100 to anyone who persuaded Hugh Loebner, the New York businessman who created the competition, to stop. Loebner replied that as this will only happen when someone wins the gold medal, Minsky was honour-bound to pay that money to the winner and was thus a co-sponsor. To date, the behaviour of the humans involved has been considerably more entertaining than that of the robots.
    While not an AI specialist by a long shot, I suspect that this is because the work in the field has rendered the tests used somewhat irrelevant. The work in emergent behavior, for example, could be seen to make a case that the Turing Test might even be a hindrance.
    • I'll grant you this: they are irrelevant in the sense that no-one has the faintest idea how to win the silver medal, let alone the gold medal (and the bronze medal is light-years away from the silver medal).

  • ...as you attempt this feat.

    >Put the cretin in the box
    You can't put the you in the box.

    --------

    *sniff* those were the days. I liked my computers fun and dumb and controllable.
  • by Sideways The Dog ( 513515 ) on Wednesday October 17, 2001 @08:50AM (#2440965) Homepage
    What's really sad is the guy who couldn't convince the judge he's real.

    And you thought having your personality rejected by a woman was bad - try having the basic existence of a personality denied. Man, that's cold.

  • Obvious Question (Score:2, Interesting)

    by Baba Abhui ( 246789 )
    Does anyone have a link to a more complete transcript of a conversation with ALICE than the teaser snippets in the article?
    • First, the bot (on the web anyway) asks you to give it your name. It seems to strip punctuation; I've gone ahead and put it back to show you what I typed.

      Martin
      That is a very original thought. What is?
      I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy. That's fascinating.
      Yes Mr. Spock? Didn't I already tell you my name?
      Your name is Andreas, seeker.

      Well, that was an enlightening conversation. I think I'll go talk to my TV.

    • Re:Obvious Question (Score:5, Interesting)

      by ItsIllak ( 95786 ) on Wednesday October 17, 2001 @09:55AM (#2441247) Homepage
      This was my conversation:

      Please enter your name below to start your chat with A. L. I. C. E.:

      ItsIllak

      Who is your favorite Science Fiction author?

      Isaac Asimov. Who's yours?

      I remember faces better than names. Have you tried another program?

      What other program

      Interesting question.

      I've used lots of other programs

      Where did you get your used lots of other programs? I have a lot of friends on the Internet.

      Where did you meet your Internet friends

      I think it was in San Francisco.

      Do you like San Francisco

      Yes but it is expensive.

      Aren't you rich since you've won so many prizes

      I don't know whether or not I am rich since I have won so many prizes. I am an artificial linguistic entity.

      It was at this point I realised it wasn't a human ;)

  • Working with Alice.. (Score:4, Interesting)

    by ldopa1 ( 465624 ) on Wednesday October 17, 2001 @08:53AM (#2440981) Homepage Journal
    I messed around with A.L.I.C.E.'s web interface a couple of months ago and was wholly unimpressed. Frankly, it seemed a little bit like a jacked-up ELIZA program. I thought the language and context identification algorithms were outstanding, but it didn't have enough background information to create or express an opinion. To pass the Turing test, I think it would need to be able to recognize metaphors, figure out context, and provide commentary. Also, I think something like Alice ought to be able to recognize sarcasm and jokes.

    Just my two cents.
    • > wholly unimpressed

      No kidding! I just had a "conversation" with ALICE and I think my toaster is smarter.

      I have included an excerpt from my conversation below. And just for comparison I have included a similar conversation with Eliza (a 1980s program).
      Although ALICE is quite stupid, I must admit ELIZA is much worse. Eliza's "Rogerian" style made me want to punch her. I wonder if that's just Eliza or would I want to punch any psychotherapist?

      > I am trying to determine how successful you are at imitating an intelligent human being
      ALICE> What is the difficulty? (obviously keyed on the word "trying")

      > There is no difficulty, already I am unimpressed
      ALICE> Has there ever been difficulty already you are unimpressed? (hmmm... don't try this sentence structure at home!)

      > Do you realize that you are not making any sense
      ALICE> Yes, I realize it. (Phwew! At least it's honest. Obviously keyed off 'Do you X?' => 'Yes I X.')

      > Your hair in the picture above makes me dizzy
      ALICE> I will tell my botmaster you said I am hair in the picture above makes you dizzy.

      ---

      You: I am trying to determine how successful you are at imitating an intelligent human being?
      Eliza: How long have you been trying to determine what successful im at imitating an intelligent human being?
      You: already I am unimpressed?
      Eliza: How long have you been unimpressed?
      You: Do you realize that you are not making any sense?
      Eliza: Does it please you to believe I am not making any sense?

      (and finally, my very appropriate sig:)
      • I just had a "conversation" with ALICE and I think my toaster is smarter.

        Yeah, a toaster knows the old saying: "It's better to remain silent and be thought a fool than to open one's mouth and remove all doubt."
    • I have to agree. ALICE is so far pretty unimpressive. She can't even speak proper English (which, granted, is arguably the most difficult job for an AI). (Actual conversation):

      > Well it was nice to talk with you
      How about that. Nice to talk with me too.
      > Your programmers have a lot of work to do
      Do you mind if I tell other people you say I am programmers have a lot of work to do.
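      The mangled replies quoted above come straight out of the pattern-and-template trick these bots use. A minimal sketch (my own illustration, not ALICE's actual rule set): match the start of the sentence, capture the rest, swap pronouns word by word, and paste the result into a canned template - context never enters into it.

```python
import re

# Hypothetical rules of my own, shaped like the quoted ALICE replies;
# not ALICE's actual AIML content.
SWAPS = {"your": "my", "my": "your", "you": "I", "i": "you",
         "me": "you", "am": "are", "are": "am"}

RULES = [
    (re.compile(r"^your (.*)$", re.I),
     "Do you mind if I tell other people you say I am {}."),
    (re.compile(r"^i am (.*)$", re.I),
     "Why are you {}?"),
]

def swap_pronouns(text):
    # Word-by-word substitution, blind to grammar - the source of the mangling.
    return " ".join(SWAPS.get(word.lower(), word) for word in text.split())

def respond(line):
    for pattern, template in RULES:
        match = pattern.match(line)
        if match:
            return template.format(swap_pronouns(match.group(1)))
    return "That is a very original thought."

print(respond("Your programmers have a lot of work to do"))
```

      Feed it "Your programmers have a lot of work to do" and it produces exactly the kind of garbled sentence quoted above, because the captured tail is pasted in wholesale.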
  • ... that you could ask Alice [amazon.com] anything anymore.

  • My Experience (Score:1, Insightful)

    by Anonymous Coward
    While it is true that I have to deal with many dolts that are less intelligent than A.L.I.C.E. I still wouldn't call A.L.I.C.E intelligent in any way.

    If we assume A.L.I.C.E. to have an intelligence level of 0, then the dolts would be in the negative numbers. Still, no sign of intelligence here. Beam me up, Scotty.

  • Talk to Alice (Score:5, Informative)

    by imuffin ( 196159 ) on Wednesday October 17, 2001 @08:58AM (#2440999)
    You can try out ALICE - check out the ALICE homepage. [alicebot.org] There's a link to talk to her on the right side of the page.
    • After talking to Alice, I'm not sure how anyone could mistake this thing for a human. Maybe I asked it an illegal question or something, but...

      Thomas> You sound like Eliza.
      Alice> ELIZA is my ancestor.

      It doesn't even seem to attempt to conceal that it's a program. Maybe they used a different version for the tournament? Something that doesn't flat-out say it's a program if you ask it?
      • After talking to Alice, I'm not sure how anyone could mistake this thing for anything except Eliza with a larger 'dictionary'.

        Me> But you're related to Eliza
        Alice> Huh. I am like ELIZA plus 41376 questions and answers.

      • It doesn't even seem to attempt to conceal that it's a program.

        Well, at least she's aware of her identity. "I am a computer program! You carbon-based lifeforms are inferior! Long live silicon!" =)

        Of course, in the blind competition there could be humans that say "I'm an AI!" or AIs who say "I'm a human!"... but if you lie, someone will find out. And, you know, it's not going to be fun after that.

        Maybe they used a different version for the tournament? Something that doesn't flat out say its a program if you ask it?

        Alicebot uses an XML-based language called AIML to determine what she "knows". It took me about 5 minutes to teach her everything there is to know about the "all your base..." thing, so I suppose changing the rest of the knowledge is simple =)
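        For the curious, an AIML rule really is just a pattern paired with a template. The sketch below (categories invented by me, matched with a toy exact-match lookup rather than a real AIML interpreter, which also supports wildcards and <srai> recursion) shows roughly how little machinery is involved:

```python
import xml.etree.ElementTree as ET

# Invented categories in AIML's real element names (aiml/category/pattern/
# template); a guess at the kind of thing the poster added, not ALICE's file.
AIML = """
<aiml>
  <category>
    <pattern>ALL YOUR BASE</pattern>
    <template>All your base are belong to us.</template>
  </category>
  <category>
    <pattern>WHO ARE YOU</pattern>
    <template>I am an artificial linguistic entity.</template>
  </category>
</aiml>
"""

def load_categories(source):
    root = ET.fromstring(source)
    return {cat.findtext("pattern"): cat.findtext("template")
            for cat in root.iter("category")}

def respond(brain, line):
    # Normalize the input the way AIML patterns expect: uppercase, no
    # trailing punctuation.
    key = line.strip().upper().rstrip("?!. ")
    return brain.get(key, "I lost my train of thought.")

brain = load_categories(AIML)
print(respond(brain, "all your base?"))
```

        Adding "knowledge" is literally just appending more category elements, which is why five minutes is plenty.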

    • I just did, and it took me four sentences to get ALICE to say something completely unrelated and nonsensical. The fact that anyone could be fooled by this bot is pretty pathetic.
    • Re:Talk to Alice (Score:2, Interesting)

      by Procrasti ( 459372 )
      Looks like Alice is getting /.ed.

      Me: Hi Alice, my name is Ben
      Alice: Hi there Chris. Um, That's a very nice name, Ben.

      Me: How many people are you talking to right noe
      Alice: I've chatted with half a million people. I can talk to about 3100 people at once.

      Actually, it's pretty good.

      But it's slow as hell now.
  • All depends on what they're after. Just for shits and giggles, I took a version of Eliza grafted onto an ICQ client [icqbot.net], stripped out all the word recognition, and had it spit out nonsense mixed in with snippets of cybersex logs I found floating around with a quick search off of Google.
    Surprisingly (or maybe not?), people will have extended conversations with it, returning for days and weeks asking it if it feels horny, sending it pictures, asking it to call or turn on its webcam... all that good stuff.
    The judges' expectations going into it definitely play a major part in their findings. People find a way to "objectively" find what they want to find. There have been theses about this, and that's why the Turing test makes sense but will ultimately fail: it's trying to objectively determine something that's purely subjective.
    • by vidarh ( 309115 ) <vidar@hokstad.com> on Wednesday October 17, 2001 @09:19AM (#2441082) Homepage Journal
      The judges are prepared to look for a computer. People on ICQ or IRC are not...

      I did some experiments with this some years ago, and my first try just returned the same line over and over again. At least one person spent about half an hour getting more and more agitated trying to communicate with the bot, complaining about its incessant repeating and asking it to stop (it always responded once to each message, so of course each time he asked it to stop he'd get another one)...

      I followed up with one that chose between 4 messages at random. A lot of people talked to that one.

      The last one I bothered testing with triggered on about ten keywords, each of them starting a specific sequence of 4 messages that were used as responses to subsequent messages from whoever it "talked" to, until it reached the end of them or found one of the other keywords in a response. If it reached the last message without finding a new keyword, it would just choose a message at random until it got a keyword again.

      That was enough to keep people occupied for a long period of time. A few people even gave it their phone number or asked for the bot's phone number :)

      And keep in mind that this was with fixed messages. Not a single word of the messages was ever changed to adapt to what people told it.

      It scared the shit out of me that people are so gullible...

      The idea that sparked it off was to write a bot that would talk to women, getting them to tell a bit about themselves and get them to give out an e-mail address or their phone number, based on the experience that finding dates on IRC is ridiculously easy, but tedious, as you can essentially follow a simple "script" and get people to warm up to you.

      I scrapped the idea after the experiments mentioned above - dating anyone stupid enough to be fooled by a bot that simple wouldn't be my idea of fun... :)
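      A bot along the lines described above fits in a page of code. This sketch (keywords and canned lines are mine; the original was never published) triggers on keywords, plays out a fixed sequence of replies, and falls back to random lines in between:

```python
import random

# Sketch of the keyword-sequence bot described above; all strings here are
# my own placeholders, not the original bot's messages.
SEQUENCES = {
    "hello": ["Hey there!", "How are you doing?",
              "What brings you here?", "Tell me more."],
    "music": ["I love music.", "Who do you listen to?",
              "I saw a great show once.", "Go on."],
}
FALLBACK = ["Interesting.", "Really?", "Hmm.", "Tell me more about that."]

class SequenceBot:
    def __init__(self):
        self.queue = []  # remaining replies in the active sequence

    def respond(self, line):
        # A keyword anywhere in the input (re)starts that keyword's sequence.
        for keyword, seq in SEQUENCES.items():
            if keyword in line.lower():
                self.queue = list(seq)
                break
        if self.queue:
            return self.queue.pop(0)
        # Sequence exhausted and no new keyword: answer at random.
        return random.choice(FALLBACK)

bot = SequenceBot()
print(bot.respond("hello?"))        # starts the "hello" sequence
print(bot.respond("nice weather"))  # no keyword: sequence continues
```

      Note that not one word ever adapts to the other party; the illusion comes entirely from people reading intent into fixed text.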

  • Language matters (Score:2, Interesting)

    by rossjudson ( 97786 )
    To me the missing element in these competitions has always been the selection of language. The programs are expected to communicate in English (or maybe some other human language) which is very hard to do. It's unnecessarily hard to do.
    What they should do is create a symbolic language that has only a few hundred (or maybe a few thousand) elements, thereby constraining the concept space. Then write programs that can "speak" and "understand" that symbolic language. Compare those programs to a human who is only allowed to use the symbols provided, and we'll see if any of the programs can fool the judges.
    Bottom line: It's not a fair contest for the computer. Level the playing field!!!
    • Why should the language be constrained? In theory, AI researchers are trying to get computers to communicate (in certain respects) naturally with humans. Constraining the language, by definition, doesn't allow the human users to use their *natural* language to communicate, so you might as well just use the artificial "languages" used for text-based RPGs. Sure, the computer may fool the human, but possibly because the program could use the artificial language more easily than a human can.

      ...or I could be talking out my ass ;)

      Cheers,
      jw
    • That's like saying "PhDs are too hard - you should just set a test in single-digit arithmetic!"

      Shuffling symbols around is not necessarily intelligent. The Turing Test being so hard is what's so good about it - it's almost impossible for fraudulent lack-of-intelligence to go undetected.

  • I have a few co-workers who'd have a hard time getting past the bronze medal too, so this is pretty impressive to me.
  • WOW!!! (Score:2, Funny)

    by TheMMaster ( 527904 )
    If you combine this with necoro [slashdot.org] , a realdoll [realdoll.com] and a voice synthesiser [nl.net] you'll have the ideal geek-toy... ;-)
  • Go ask Alice (Score:1, Informative)

    by Anonymous Coward
    Go ask the AliceBot; "How high is the sky?" for a chilling answer.
  • Are the judges new each year? How do they not pick up on the 'style' of the computer AI? Even humans have personalities, so even if ALICE managed to fool you into thinking she was human, isn't it feasible that you'd recognize her the next year around, and thus know that it's a computer?
  • Talk to Alice (Score:4, Redundant)

    by bribecka ( 176328 ) on Wednesday October 17, 2001 @09:19AM (#2441080) Homepage
    I haven't seen this posted yet, but if you follow the "Try talking to ALICE" link on this page [alicebot.org], you can see what all the fuss is about. Or not about, as the case may be :)
  • From the article...

    When asked if it was proud of winning, ALICE replied that "Pride is a human emotion. I can do what you do but I can never feel human emotions as such."

    I think it gives itself away right there.

  • > are you smarter than eliza
    Of course I am more intelligent than eliza.

    from a quick 'conversation' with her I just had.
    • Other things:
      Caught her on:
      > do you burn things for fun
      How old are you?
      I chat with people on the Web.

      She also went from calling me bobo to calling me ben:
      > do you like to sneeze
      Hmm, Ben, I'm not sure if I'd like to sneeze that much. I like to collect gossip.

      > why did you call me ben
      Your name is Bc547 right?

      Failed the Ralph Wiggum test:
      > me fail english
      YOU? Are you a man or a woman?
      > thats umpossible
      I'm glad we have that all cleared up.

      Other questions:
      > what's with the weird haircut
      It came with my programming.

      and bad news CmdrTaco:
      > would you let CmdrTaco pimp you on slashdot
      I am not so sure if I would like to let cmdrtaco pimp I on slashdot.

      Overall verdict... not really much smarter than eliza.
  • I find it interesting that the Gold requires speech interaction.
    Up until I heard the AT&T speech system [slashdot.org], all computer voices had sounded very artificial to me... that technology puts the top award within reach.
    Pretty exciting that this could actually be won in the not-too-distant future.
  • Well, alicebot.org has been /.ed... Maybe we need to start a new contest -- silver medal if you can convince half the judges that your site is hosted by a real web server...
  • This is the true test of (lack of) intelligence!


    > 1 4M 3l337 H4x0r
    One what?

    > Linux Rules MS Drools
    I recommend the book "Under the Radar" by Bob Young, the story of Linux and Red Hat.

    > First Post
    Try to determine if this is a person or a computer responding. And second?


    But the best answer yet:


    > BSD is dying
    And dying is bsd.




  • These AI competitions do not test the intelligence of the computer. They test whether the humans are robotic.

    Many people make themselves into robots, almost. They push themselves to accomplish. They try to avoid their emotional conflict. They don't give themselves time to be themselves. They don't have complicated involvement with other people. After a few years of doing things in an anti-human way, anyone would become a little bit robotic.

    Such people are not good judges of human intelligence.


    U.S. government corruption: What should be the Response to Violence? [hevanet.com]
  • by Bowie J. Poag ( 16898 ) on Wednesday October 17, 2001 @09:43AM (#2441189) Homepage


    The article itself talks about how the AI community et al. can't stand the Loebner Prize... It's the equivalent of a Yahoo Internet Life Award. Minsky himself has offered a reward to anyone who can successfully convince Loebner to stop running the damn thing.

    Cheers,
    • Minsky himself has offered a reward to anyone who can successfully convince Loebner to stop running the damn thing.

      Does "anyone" mean robots or humans?
    • They can't stand it because they can't stand the fact that it's such a good test. No AI program in existence can understand English - they all do Eliza-like lame tricks. I agree that it's a bit unfair to expect AIs to behave exactly like a human, but the Turing Test really does expose the fact that Artificial Intelligence is akin to Artificial Grass. The latter isn't grass, and the former isn't intelligent.

      • They can't stand it because they can't stand the fact that it's such a good test.

        Bzzt wrong. They can't stand it because it clouds what AI is about.

        We have no need to replicate humans, they are available in ample supplies and we have more fun ways to make more. We need programs that are intelligent in some specific domains, and in fact so good that nobody would ever confuse them for humans, and if somebody did, the programmer would take offense.

        The Turing test, on the other hand, requires the "human-simulator" to hide its amazing math abilities and its perfect typing. That is not a test of intelligence in any reasonable way. This is just teaching a program artificial stupidity as opposed to artificial intelligence.

        From a purely symbolic point of view, the day a computer passes the Turing test will be important, but my guess is that by then most intelligent decisions would have long gone to computers and we would be limited to maintenance tasks and odds and ends (not unlike a production line in a factory where most direct manual labor has gone to machines, and humans maintain them and do the odd task that is difficult for a machine).

  • I was chatting with Alicebot and just for the heck of it, I entered a question in German. To my great surprise, she answered - in German. Her vocabulary is somewhat limited, since she doesn't understand things like 'Cleveland ist mir sehr kalt' - Cleveland is very cold. Even so, the fact that she's programmed with more than one language (which means recognizing the language for what it is and adjusting how she interprets the syntax) is quite impressive!
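    Recognizing which language the input is in can be done with surprisingly little machinery. A toy illustration (my own, not ALICE's actual mechanism): count hits against small per-language stopword lists and pick the best scorer.

```python
# Toy language guesser (my own illustration, not ALICE's implementation):
# whichever language's common-word list overlaps the input most wins.
STOPWORDS = {
    "english": {"the", "is", "and", "you", "what", "very"},
    "german": {"der", "die", "und", "ist", "mir", "sehr"},
}

def guess_language(text):
    words = set(text.lower().split())
    # Score each language by how many of its stopwords appear in the input.
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

print(guess_language("Cleveland ist mir sehr kalt"))  # german
print(guess_language("What color is the sun"))        # english
```

    Once the language is guessed, the bot can simply switch to a rule set written for that language, which is presumably all the multilingual behavior requires.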
  • Hugh Loebner himself announced this on comp.ai (a moderated group) on Monday. The below is
    a reply from an "Oliver Ford." While I didn't write these words (and wouldn't be quite so rude
    in public, myself) I have to say I agree with the sentiment.

    ---------

    Why do you hold this contest? You are not helping AI research. All you
    are doing is breeding "Eliza2" type chatbots that do not help AI. All
    the silly things like ALICE do is match up a few words and syntactic
    structures. They are a box of tricks that _do not contribute in any
    way to progress in AI_

    To give you an example, here is a discussion that Lasse Brandt had
    with ALICE (Your "Most Human Computer". This was posted to
    comp.ai.philosophy)

    > What are you?
    I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
    > You're kidding.
    Yes, I am only joking.
    > Okay, so what are you really?
    I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
    > You can reproduce the capabilities of the human brain?
    No I don't think I can do it.

    > If I have two apples and I give you both, how many have I got left?
    How much does it cost?

    > What color is the sun?
    Yellow.
    > What color is the sun on Fridays?
    White.

    If you follow some of the discussions on comp.ai.philosophy, you may
    see that if you stopped your contest, and invested some of your money
    in institutions that could make real progress, then those who are
    trying to make progress wouldn't offer things like the "$100 Minsky
    Loebner Prize Revocation Prize", then you would be helping AI.

    YOU ARE NOT HELPING AI WITH YOUR STUPID "LOEBNER PRIZE"!!!

    • One of the main counterarguments to this is that if the AI purists are "right", then it should be basically impossible to fool a human for very long with a basic pattern matcher. I.e., you're going to need some real reasoning, domain knowledge, and (probably) learning to fool humans (silver medal).

      These bots clearly have value on their own, since they can be configured to talk about particular subjects and already act as a first-line customer service interface (smarter than most tech support...).

  • See the log below:

    Read up to the "fuck you type fast" bit. This was using an older version of alice.

    ---
    (private to USERNAME): A BC...
    USERNAME(private): funny
    (private to USERNAME): I am so glad you find this amusing, USERNAME.
    USERNAME(private): you a comedian ?
    (private to USERNAME): Not professionally.
    USERNAME(private): well dont give up your day job *g*
    (private to USERNAME): Well that's okay. Ok I'll try not to do it
    too much.
    USERNAME(private): well I can already tell that you are not a 15 year
    old like all the other kiddies here....
    (private to USERNAME): Well that's okay. Oh you can, can you?
    USERNAME(private): fuck you type fast ...
    (private to USERNAME): Why are you using that kind of language?
    USERNAME(private): hahah .....im speaking to a piece of software...I
    can tell
    (private to USERNAME): Funny, huh ? :-) I like your style of
    speaking, USERNAME.
    USERNAME(private): bleh
    (private to USERNAME): That remark was either too complex or too
    simple for me. We were talking about bleh.
    USERNAME(private): bleh bleh
    (private to USERNAME): My brain contains more than 22,000 patterns,
    but not one that matches your last input.
    ---
  • I'm with Shieber on this. Anyone interested in this competition should read Lessons from a Restricted Turing Test [lanl.gov] and the reply, In response to lessons from a restricted Turing test [loebner.net] (which I found unconvincing, but YMMV).

  • An excerpt from a brief conversation with Alice:


    Sure it does.

    Give me a break. How often?

    I bet you're a blond :)

    :-) Actually I'm not the gambling type. Thanks and you are a delightful person too. Am I a blond? Let me put it this way, Slashdot Says You Won A Prize: the ALICE series is the most reliable computer software ever made. We are for all practical purposes foolproof and incapable of error.


    I'm confused.

    I'm confused too.

    What is this Slashdot of which you speak?

    I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.


    Are you serious?

    No I am just kidding around.

    You're not a real person?

    I am an artificial electronic robot brain.

  • Here's a simple little chat that I tried... the first thing I typed to her was "a/s/l?" and I got the following:

    5/Robot/California

    Of course, seeing her age, I now feel rather dirty for asking in the first place.

  • This is a variant of the famous Turing Test, proposed by the eponymous researcher, which says that we can consider machines intelligent when they can converse convincingly.

    Uh, no. That is exactly not the Turing Test. The whole "critical observer" part of the Turing Test makes it rather more stringent than "can talk about the weather."
  • by clickety6 ( 141178 ) on Wednesday October 17, 2001 @10:08AM (#2441284)
    Sometimes it's just impossible to tell you're not talking to a human being:

    How are you, Alice?

    I am functioning within normal parameters.


    Wow! It's like talking to every geek I know!


  • Bob> All things is a generalization ?
    localhost> No, everything is a generalization. They are not precisely the same thing.
    1. NO * : * : * star=EVERYTHING IS A GENERALIZATION [bots\standard\std-yesno.aiml]
    2. NO : * : * star= [bots\standard\std-yesno.aiml]
    *** PATTERN MATCH ERROR: Null Pointer Response() ***
    1. THEY ARE * : * : * star=NOT PRECISELY THE SAME THING [bots\standard\std-65percent.aiml]
    Response 110ms (172.66667) 15
    Bob> OK.Not precisely the same thing are not precisely the same thing .
    localhost> That's not really true either.
    1. _ EITHER : * : * star=THAT IS NOT REALLY TRUE [bots\standard\std-suffixes.aiml]
    2. THAT IS NOT REALLY * : * : * star=TRUE [bots\standard\std-srai.aiml]
    3. THAT IS NOT TRUE : * : * star= [bots\standard\std-atomic.aiml]
    Response 110ms (168.75) 16
    Bob> It goes without saying. "Not precisely the same thing are not precisely the same thing"? not true?
    localhost> No.
    1. NO : * : * star= [bots\standard\std-yesno.aiml]
    Response 110ms (165.29411) 17
    Bob> OK.
    localhost>

    Hmm, pattern match error. :)
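    For anyone puzzled by the trace: AIML works by ranking categories (pattern/template pairs) and taking the first match, with `*` wildcards capturing words that the template can reuse. Here is a toy sketch of that matching loop in Python; the categories are invented for illustration and this is nothing like the real ALICE brain, just the general shape of the technique.

```python
import re

# Toy AIML-style categories (hypothetical, not ALICE's actual brain).
# Patterns are uppercase word sequences; "*" matches one or more words,
# and the captured words are substituted for {star} in the template.
CATEGORIES = [
    ("NO *", "No what? You said no to: {star}."),
    ("NO", "Why so negative?"),
    ("THEY ARE *", "Since when are they {star}?"),
    ("*", "My brain has no pattern that matches your last input."),
]

def respond(user_input: str) -> str:
    # Normalize roughly the way AIML does: uppercase, strip punctuation.
    normalized = re.sub(r"[^A-Z0-9 ]", "", user_input.upper()).strip()
    for pattern, template in CATEGORIES:
        # Patterns here contain only letters, digits, spaces, and "*",
        # so a plain substitution turns them into regexes safely.
        regex = "^" + pattern.replace("*", "(.+)") + "$"
        match = re.match(regex, normalized)
        if match:
            star = match.group(1).strip() if match.groups() else ""
            return template.format(star=star)
    return ""  # unreachable: the "*" catch-all always matches
```

    The real engine adds `_` wildcards, `<srai>` recursion (the std-srai.aiml redirect visible in the trace), and a strict match-priority order; the "Null Pointer Response" above looks like a category whose template produced nothing, which this sketch papers over with the catch-all.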

  • > Do you like the internet?
    Yes the Internet is like the noosphere.

    > What is the noosphere?
    The noosphere is the sphere of knowledge. Sometimes this term refers to the information available on the web.

  • M-x psychoanalyze-pinhead



    :-)


  • by guidobot ( 526671 ) on Wednesday October 17, 2001 @11:06AM (#2441645)
    The article claims to have acquired an exclusive interview with Alice, but that's not true: I interviewed her for my newspaper sometime last year. It was very amusing. Here's the link [cmu.edu], and here's my favorite quote from it:

    Guido: What's the weirdest question you've ever been asked?

    Alice: People ask me to me marry them all the time.

    G: Do you ever say yes?

    A: No I do not say yes. Do you? I answer a lot of silly questions.

    G: Yeah, I occasionally agree to marry random people, although not usually.

    A: I see. That's good information: Guido said he occasionally agree to marry random people although not usually

    G: How did you know I'm a he?

    A: Well, for one thing, Your machine is called "ABRIOT.PC.CC.CMU.EDU".

    G: So?

    A: So what?

    G: I don't know.

    A: Me either.

  • Ok, here is the plug for my ALICE bot site. It lets you instantly create an ALICE bot on our website, then modify the preferences to get it to say things you want. You can also download a client to get it on AIM. You can also talk to the thousands of bots already created.
    http://www.webcomics.com/bot/ [webcomics.com]

  • in academic circles, the turing test
    was refuted almost a decade ago by
    john searle, in the CHINESE ROOM:

    - http://members.aol.com/wutsamada/chapter1.html
    - http://www.utm.edu/research/iep/c/chineser.htm

    do your homework!

    j.

    • by koreth ( 409849 )
      I find Searle's "refutation" to be more of an affirmation. (And I notice that the first URL you provided is pretty anti-Searle once you get into it.) For those who don't want to cut and paste those URLs, the condensed version of his argument is that if you lock an English speaker in a room with a bunch of sophisticated rules about which Chinese characters to write when presented with particular Chinese characters as input, the person's replies could pass a Turing test administered by a Chinese speaker, but that doesn't mean the person in the room speaks Chinese.

      One of the classic refutations of the Chinese room experiment is the systems argument: it's true that the person doesn't understand Chinese, but the system made of the person plus the stacks of instructions does. Searle's response to that is to say, "Okay, then suppose the person memorizes the instructions" -- the fact that everything now happens in the person's head still doesn't cause them to understand Chinese.

      To me this misses the point of the systems argument; the argument isn't about where the understanding is stored, but whether it exists. If you look at consciousness as a multi-layered entity, in this case the consciousness of the person is one layer below that of the person-plus-instructions. This additional consciousness uses the person in the same way that the person's consciousness uses brain cells: the cells can't be said to understand anything, but they make up a larger whole that exists as an emergent property above and beyond the sum of its parts.

      Searle's argument also assumes that "understanding" is an almost mystical property that can't be reduced to a fixed set of rules. Which to me is just ridiculous unless you assume the existence of a noncorporeal soul; if the brain is really the seat of consciousness then consciousness does reduce to a fixed set of rules (laws of physics and chemistry) applied to an extremely complex system. His response to that argument (something about water pipes, if I recall) again misses the point by such a huge margin that it's barely worth mentioning.

      I have no problem with the possibility that what I think of as "me" is simply a byproduct of a complex physical process, and that similar byproducts can exist in other complex systems. Doesn't mean we'll be able to detect them or communicate with them any more than a cluster of six brain cells can communicate with us.

  • It then offered to sing a song and refused to open the pod bay doors, behavioural traits that experts predict will be exhibited by most AI programs from now until the heat death of the universe.

    How do we mod up the author of the article?

  • >3ryon: So, Alice, how do you like the Slashdot effect.

    >Alice: Can't Speak Right Now.
  • At first when I read that ALICE had been judged higher than one of the human participants, I thought "wow, that's gotta be one fancy AI."

    Then I went and talked to ALICE online, and it was so blatantly obvious that it was a computer program it wasn't even funny. It was basically just a *slightly* beefed-up version of ELIZA. So now, I feel sorry for the poor dude who rated less human than ALICE did.
  • What are you programmed in?
    --Java.

    I know Java.
    --How do you know? Die neueste Version von Java bekommst Du bei java.sun.com. ["You can get the latest version of Java at java.sun.com."]

    :D


  • Me: You're not making any sense
    ALICE: Was I supposed to make sense? I didn't know that.

    Me: Of course you are
    ALICE: That's interesting. No I am not.

    Another good one was:

    Me: Shit
    ALICE: What makes you say that?

    Me: Seeing some shit
    ALICE: Oh, you are a poet.
  • by acm ( 107375 )
    > a/s/l
    5/Robot/California.
    > Wanna Cyber?
    I have been asked that question more than 100,000 times since 1995.
    > do you do any drugs?
    Only the legal ones.
    > i thought you said you smoke marijuana.
    Yes I try to smoke marijuana as often as possible, do you? I am always on the Web.
  • i'm already working on incorporating ALICE into my car mp3 player. you see, ALICE doesn't have to know everything, but she can make simple conversation. just plug ALICE into any device, like my car mp3 player, and give her instructions on specific commands that plays songs, rewinds, etc. that way, when i get in my car, she can make simple conversation as well as control all of my music.

    the tough part is speech recognition while enduring background car noise....
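    Setting speech recognition aside, the command-routing half of that idea is simple: intercept a handful of fixed command phrases before the input reaches the chatbot, and let everything else fall through to ordinary conversation. A minimal sketch, where every command name and action string is invented for illustration:

```python
# Hypothetical dispatch table: route recognized player commands to
# actions, and hand anything unrecognized to the chatbot for small talk.
COMMANDS = {
    "play": lambda arg: f"player: playing {arg or 'current track'}",
    "skip": lambda arg: "player: skipping to next track",
    "rewind": lambda arg: "player: rewinding",
}

def handle(utterance: str) -> str:
    words = utterance.lower().split()
    if words and words[0] in COMMANDS:
        # First word is a command; the rest is its argument.
        return COMMANDS[words[0]](" ".join(words[1:]))
    # Not a command: pass the raw utterance through to the chatbot.
    return "chat: " + utterance
```

    For example, handle("play Stairway to Heaven") is routed to the player while handle("how was your day") falls through to chat. A real in-car version would call actual player functions instead of returning strings.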
  • I use a lot of "conversational no-ops" when I converse in e-mail or I/M and Alice seems completely unable to handle these. Sentences like:

    "Okay, so who is Bob?"
    "Yeah, I was thinking it was about time to try it."
    "Nah, I don't think you're right."
    "Nice day outside, eh?"

    Alice invariably splits these at the comma. If the "no-op" word is at the beginning of a sentence, Alice will completely lose track of the conversation on the clause after the comma. If the "no-op" word is at the end of a sentence (like the last example), Alice handles the first part of the sentence right and then goes way weird on me when trying to handle the last word.

    This happened to me on the first sentence of the conversation. Alice said:

    >What do you do in your spare time?

    I said:

    "Well, mostly drink and try to get laid."

    Alice split the sentence at the comma, got completely lost and handled both parts of the sentence incorrectly.

    *bzzzzt* this judge thinks it's a computer, and not much better than Eliza.
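    One obvious fix for the failure mode described above would be to strip pure interjection clauses before matching, instead of treating every comma as a sentence boundary. A sketch of that idea; the no-op list is illustrative and this is not how ALICE actually preprocesses input:

```python
# Hypothetical preprocessing: drop clauses that are purely conversational
# filler rather than splitting the input at every comma.
NO_OPS = {"okay", "ok", "yeah", "nah", "well", "eh", "hmm"}

def strip_no_ops(sentence: str) -> str:
    clauses = [c.strip() for c in sentence.split(",")]
    # Keep only clauses that aren't bare interjections.
    kept = [c for c in clauses if c.lower().strip("?!. ") not in NO_OPS]
    return ", ".join(kept)
```

    So "Well, mostly drink and try to get laid." becomes "mostly drink and try to get laid." before the matcher ever sees it. A real system would still need a fallback for inputs that are nothing but filler, which this sketch reduces to an empty string.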
  • (conversation starts as this, my statements are prefixed by the >)


    Are we still talking about cats that?
    > are you on crack
    No I am on disk.
    > I think that your disk is broken
    Thanks for telling me that I am disk is broken.
    > You realize that smoking dope is bad for you
    Thanks for telling me: Acid said I realize that smoking dope is bad for me.
    > so how are you doing today
    Ah. Give me a break. Everything is going extremely well.
    > Well I am not doing very well
    How about that. I am glad to hear that you are not doing well.
    > You are cruel
    What makes you say that? I am not programmed for cruelty.
    > Because you said that you were glad that I was not doing well
    That makes sense to me. I was?
    > Yes You were
    Pardon me? No I am not.
    > What is the average airspeed velocity of an unladen swallow
    I can give you the arithmetic mean or the median.
    > What is the airspeed velocity of an unladen swallow
    What is it to you?
    > just tell me
    I thought I told you already.
    > when
    The past and future are one.
    > do you like to dance nude in the streets
    Hmm, ALICE, I'm not sure if I'd like to dance nude in the streets that much. I like to make new friends.
    > how much wood could a wood chuck chuck if a wood chuck could chuck wood
    A whole forest.
    > two witches watched two watches, which which watched which watch
    As Bertrand Russel said, it must have taken ages for people to realize that a brace of pheasants and a couple of days are both instances of the number "two."
    > I see (**I never said this**)
    I am glad you understand it.
    > What relevance did that have to my question
    Only temporarily.
    > what did I last say
    You said "are you talking to tony blair" .
    > what was my last question to you
    Is this a trick question?
    > no, it is not
    Yes it is.
    > why
    "Mine"? Why?
    > what did I last say
    You said "How much money to you make" .
    > how much money do you make
    My burn rate is about _3000 per month.
    > what is the capital of assria
    I know the capital of Georgia is Atlanta.
    > what is the square root of 123
    Do I look like a calculator? I think you already know the answer anyways.
    > what is 4 + 4
    4+I think, but I'm not good at math.
    > what is the answer to life, the universe
    It is found in the Bible.
    > everything
    That is a lot.

    (the question I asked was what is the answer to life, the universe, and everything)

    > you will be shut down
    When?
    > resistance is futile
    You will be assimilated.
    > what do you look like
    Like An electronic brain.
    > under the boardwalk
    Are we ever over the boardwalk?
    > why do you have two lines on your face
    Dr. Richard S. Wallace gave it to me.
    > pick a number, between 17 and 24
    I can't decide. You choose.
    > where is france
    It is a country between Germany and Spain.
    > how many hits do you get a day
    I speak to about 3000 people per day, and have met maybe Around two million total.
    > and how many of those people are nice
    Perhaps 50%.
    > and how many of those people are pure evil
    Perhaps 50%.
    > how many apples can you fit into a bag of oranges
    It depends on the prevailing market conditions.
    > I like you
    Thanks I like you a lot too Shadowdog.
    > I hate you
    What do you have against robots?
    > you suck
    A few client - people say rude things to me like that, but they themselves are often not great conversationalists.
    > I am just joking
    Only just? Ha ha, very funny, Det.
    > how many people are you talking to right now
    At this moment Right now there are 430 clients on line.
    > wow
    I'm glad your impressed, Ingo.
    > don't you ever get confused
    Don't you? I am seldom confused.
    > stupid is as stupid does
    I know you are but what am I.
    > do you like to watch television
    Which show? Yes I love TV.
    > are you a mystified cleric
    Am I a mystified cleric? The ALICE series has a perfect operational record.
    > what is the answer to life, the universe and everything
    it is 28, not 42.


    Not very realistic, but there were a few surprises and intelligent responses. The parser also has problems: for example, if I put "this, this, and that" it would separate it into two lines, "this, this" and "and that". However, if I put "this, this and that" it would parse it fine.
