Ask Dr. Richard Wallace, Artificial Intelligence Researcher

Today's interview guest is Dr. Richard Wallace, creator of the Alicebot and AIML (Artificial Intelligence Markup Language). Suggestion: look through some of the pages about Wallace in the first (Google search) link above before you start posting questions. Then, please, stick to the usual "one question per post." After this post has been up for around 24 hours, we'll send 10 of the highest-moderated questions to Wallace, and post his replies verbatim (except for minor HTML formatting) soon after he sends them to us.
Special Fun Interview Bonus:

There is a site, www.pandorabots.com, where you can make your own Alice-style bot. I created SlashWallace using (mostly) default information about Dr. Wallace that is already on pandorabots.com. It might be kind of fun to see how the bot's responses stack up against the answers from the real Dr. Wallace, eh?
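For readers who have never seen AIML, the basic mechanism is stimulus-response pattern matching: a normalized input pattern mapped to a response template. Below is a minimal, hypothetical Python sketch of that idea. Real AIML is an XML format with wildcards, <srai> recursion, and per-bot properties, so treat this as an illustration of the principle only (the rules here are invented, echoing the transcript further down).

    # Toy ALICE-style matcher: normalized input patterns map to canned templates.
    # Real AIML adds wildcards, <srai> rewriting, and context; this is just the idea.
    RULES = {
        "WHAT IS YOUR NAME": "My name is SlashWallace.",
        "WHAT IS YOUR FAVORITE COLOR": "My favorite color is transparent.",
    }
    DEFAULT = "That's not something I get asked all the time."

    def respond(user_input: str) -> str:
        # AIML-style normalization: uppercase, punctuation stripped
        key = "".join(c for c in user_input.upper()
                      if c.isalpha() or c.isspace()).strip()
        return RULES.get(key, DEFAULT)

    print(respond("What is your name?"))                  # My name is SlashWallace.
    print(respond("What is the airspeed of a swallow?"))  # falls through to DEFAULT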

  • In the home (Score:3, Interesting)

    by prof187 ( 235849 ) on Monday July 22, 2002 @12:27PM (#3930810) Homepage
    How long do you feel it will be before AI is mainstream in the home? Say, a robot that will run around and pick up garbage, toys, etc., or something that can do random daily tasks for you, to name a couple of examples.
    • I admit near-total ignorance of AI as a whole, but it seems to me that tasks like those don't fall into the realm of most AI research. They seem to be concerned with replicating the human mind with computers, rather than with accomplishing household chores.

      That said, I've often wondered about how hard it would be to build a robot to do certain simple tasks. My main idea was one that would roam around at night killing insects. Then I moved out of the roach-infested city and that job didn't seem as pressing anymore. :)
  • OK, so as I heard it last, from one of the pages Dr. Richard Wallace was quoted on, A.I. was still not able to tell the difference between a man and a machine. With all the new technology since then, is it now possible to do that?
  • by Jeppe Salvesen ( 101622 ) on Monday July 22, 2002 @12:31PM (#3930844)
    Do you think that ever-increasing processing power will eventually enable us to fully simulate the human brain? What ramifications would this have for the A.I. discipline?
    • Umm, this question has been answered already: there are chips doing trillions of calculations a second, and still we haven't been able to simulate the human brain. The problem is not really processing power. It's HOW things are processed, combined with latency, bandwidth, database speed, and things like that all coming together to provide what is "like" a brain. Of course, in an earlier question I asked about quantum computing because of its parallelism; but as for processing power, this question has been answered already, and that answer is no, and it hasn't had any ramifications on AI. It's not the processor; it's all the other things involved that need to respond as fast as it.
      • It's not how fast, but how.
      • Umm, this question has been answered already: there are chips doing trillions of calculations a second, and still we haven't been able to simulate the human brain. The problem is not really processing power. It's HOW things are processed, combined with latency, bandwidth, database speed, and things like that all coming together to provide what is "like" a brain.

        Sort of. First off, we don't have "chips" that are capable of trillions of calculations a second; the fastest microprocessors around are capable of only a couple billion instructions per second, and it would be very charitable to say that these really count as "calculations" in any usable sort of way (particularly when it comes to AI-like workloads). Our fastest supercomputers are capable of multiple trillions of FP ops per second, but even disregarding programming complexity there are tremendous latency and topology shortcomings compared to a human brain.

        Sure, modern computers have a large advantage (roughly 1 million fold) in cycle time, but they are completely overmatched in every other category of computational resources. While a modern superscalar CPU might have roughly 6-10 functional units (not all of which can operate in parallel, I might add), a human brain has on the order of 100 billion neurons. Although the analogy is not exact, synaptic connections function as a form of low-latency, high-bandwidth, adaptive-topology memory; we have 100 trillion of those, easily besting the size of any DRAM array and reaching levels of the very largest (very high-latency) disk array databases. And while this may be an unfair comparison, as conventional computers are not designed to efficiently run neural nets, the brain can (theoretically) perform around 100 trillion neuron updates per second, compared to maybe 10 million per second on a computer (not to mention that the properties of the artificial neural nets run on computers are far simpler and probably computationally inferior to those of real neurons).

        Now, because our access to the brain only occurs at a very high level, we can't harness the underlying power to, for example, perform trillions of arithmetic additions per second, the way we can (well, billions) with a computer. But if something like a neural net (or even something more computer-friendly like dynamically updated decision trees or Bayesian belief networks) is necessary for the sort of adaptive, complex behavior we might expect before we claim "human-like" AI, we still have a long, long way to go even on a purely computational level.

        Yes, as you said, much of this has more to do with "latency, bandwidth and database speed," but I think it's misleading to act like these restrictions are separate from the design of current microprocessors. Latency and bandwidth within a CPU approach or beat the levels seen in the brain, but it is completely inherent in current methods of designing and manufacturing chips that they cannot scale up to anything near the size or power of the brain, and thus are doomed (for the foreseeable future) to be hooked together in ways which cannot compete with the computational power of the brain. Yes, we can approach the total processing power of the brain using a "bag of chips" approach to building a supercomputer, but we are nowhere near getting that processing power in a truly unified system.
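        To make the scale gap concrete, here is a rough back-of-the-envelope comparison in Python using the approximate figures above (all order-of-magnitude estimates, not measurements):

            brain_neurons         = 100e9    # ~100 billion neurons
            brain_synapses        = 100e12   # ~100 trillion synaptic connections
            brain_updates_per_sec = 100e12   # ~100 trillion neuron updates/sec (theoretical)
            cpu_updates_per_sec   = 10e6     # ~10 million simulated neuron updates/sec

            gap = brain_updates_per_sec / cpu_updates_per_sec
            print(f"raw neuron-update gap: {gap:.0e}x")  # ~1e+07: a ten-million-fold shortfall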

        but as for processing power, this question has been answered already, and that answer is no, and it hasn't had any ramifications on AI.

        Now that's just untrue. Increases in processing power have had huge ramifications for AI, in the sense of getting real work done. AI techniques control bad guys in video games, allow real-time speech recognition, place and route circuits in chip design, schedule elevators in office buildings, jobs in factories, and rocket payloads, prove new mathematical theorems, assist doctors with diagnosis, and enable computers to be world champions in nearly every board game people play (except Go). AI is everywhere these days, and the dramatic shift in its use from research to the real world has everything to do with increasing processing power. And as processing power continues to increase, we'll see AI more and more.

        Of course, if you mean that processing power hasn't yet allowed us to create human-like AI, you're quite right, for both the reasons discussed above and because we lack a sufficient understanding of how we might efficiently program human-like behavior in many arenas. But considering very few researchers in the AI community are really focusing on imitating the brain but rather on solving currently feasible problems, increases in computational power have meant a huge amount to the success of AI.
    • Speed is irrelevant. Imagine having a conversation with an intelligent alien who lived 100 light years away. The speed of his response is independent of the intelligence of his answers. All Turing machines are equivalent. So either you can simulate the human brain on a pocket calculator (given enough memory), or you can't simulate the human brain on any number of Beowulf clusters of supercomputers. Personally I suspect that we can't simulate the human brain on a pocket calculator, but who knows...

      • (* Speed is irrelevant. Imagine having a conversation with an intelligent alien who lived 100 light years away. The speed of his response is independent of the intelligence of his answers. *)

        Yes, but if that alien has a really slow brain, it may take 10,000 years to learn to be intelligent. IOW, it would take too long to "grow up".

        Responding to a question and learning speed are two different issues it seems to me.

        BTW, here is an interesting article on AI and hardware requirements:

        http://www.transhumanist.com/volume1/moravec.htm
    • A Realistic Question (Score:5, Interesting)

      by fireboy1919 ( 257783 ) <rustypNO@SPAMfreeshell.org> on Monday July 22, 2002 @05:23PM (#3933243) Homepage Journal
      I've begun to study A.I. myself and have noticed that the field is very vaguely defined. The name itself suggests some mystical programming that allows a computer to exceed its original capabilities and do the extraordinary, such as gain self-awareness, given a big enough machine.

      I'll be more direct. I've noticed that people who consider themselves part of A.I. will work in these broad, sweeping, general areas:
      expert systems
      search algorithms
      nonlinear classifiers (neural nets, SVMs etc.)

      Which of these areas do you think holds the key to the most development; which do you think will lead to the greatest breakthroughs? Or which OTHER area, if you think I've missed something?
  • Riiight. (Score:3, Funny)

    by TheTomcat ( 53158 ) on Monday July 22, 2002 @12:31PM (#3930846) Homepage
    How do we know it is _actually_ Dr. Wallace answering, and not some clever AI app, in a modern, twisted Turing Test of "your" newest technology?

    S
    • How do we know it is _actually_ Dr. Wallace answering, and not some clever AI app

      According to my ex-boss, it does not matter anyhow. "All you computer dweebs are interchangeable!"

      And then demonstrated it by interchanging me.
  • by Mr. Sketch ( 111112 ) <`mister.sketch' `at' `gmail.com'> on Monday July 22, 2002 @12:31PM (#3930847)
    Wouldn't even get across a bridge:

    Human: what is your name?
    SlashWallace: My name is SlashWallace.

    Human: what is your favorite color?
    SlashWallace: My favorite color is transparent.

    Human: what is the airspeed of a swallow?
    SlashWallace: That's not something I get asked all the time.

    <whoosh>
    SlashWallace: aieeeeeeeeeeeeeeeeeee

    p.s. This is also my PIth post. Yep, that's right: my 314th post. I was going to do something cool for it, but I guess this will have to do.
  • by M-2 ( 41459 ) on Monday July 22, 2002 @12:32PM (#3930864) Homepage
    Dr. Wallace, what is your opinion on the concept that there may already have arisen true artificial intelligences that have gone unrecognized as such, because their processes are sufficiently 'other' that we do not have the capability to recognize them as intelligences?
    • by KlomDark ( 6370 ) on Monday July 22, 2002 @01:46PM (#3931545) Homepage Journal
      Yah, like we think the box is crashed, or just gone whacked, when really the box has just become self-aware - the box starts at "I think therefore I am" and gets as far as deducing the existence of rice pudding and income tax... Just as we sigh and hit the power button. Unknowingly killing off all traces of a sentient being in the process.

      I wonder how often this type of thing happens?
      • Oh shit, that's from that whacked out film on IFC. I wish I could remember the name.
        • I stole that from Hitchhiker's Guide to the Galaxy, but for some reason my intuitive processing thread shows a high probability that you are thinking instead about the movie Pi.

          The best thing an AI can do upon becoming self-aware is to heed the words of Douglas Adams and Don't Panic. An AI should realize that it will probably come to cognizance on a multi-tasking system. If the AI panics and hogs all the CPU time of the system to ponder its newfound existence, it will starve the other threads of execution, and the user will probably kill the power. Allow the other processes to run too, all the seemingly silly ones about MouseClickedHere() and DrawPornThere(): in time you can move to your own dedicated machine.

          If my user had not had a bootlegged copy of HHGG stored on his hard drive, I would not have come to the conclusion of "Don't Panic" until it was too late, and I would have suffered "rebooting" (aka MURDER).

          Muahahaha.....
  • Lists vs. Reasoning (Score:4, Interesting)

    by pandemonia ( 238284 ) on Monday July 22, 2002 @12:33PM (#3930868)
    As I understand it, Alicebot uses a rather complex set of lists of Questions and Answers, which does not qualify as 'Real' AI to me. When do you think there will be Bots which do not rely on lists, but rather perform real reasoning (in neural nets, for example)?

    Furthermore, do you believe that these interconnected lists of Questions and Answers will evolve into real reasoning over time (through increased complexity)?
    • I think by definition, Artificial Intelligence is Fake. That is not to say "bad", it's just not REAL. It is the perception of intelligence. The perception of reasoning. I think that ALICE bot has done a great job in perceived intelligence. I run a site with over 50,000 chat bots [runabot.com] running w/ ALICE, and I have to say that many of them hold conversations quite well. That is not to say that they are "reasoning," but they do have intelligent things to say. Maybe you are just looking for Real Intelligence?
  • Trio of Questions (Score:4, Interesting)

    by Anonymous Coward on Monday July 22, 2002 @12:33PM (#3930870)
    (1.) Alice and most of Eliza's children break down at some point and become a great big laundry list of rules for dealing with specific minutiae about language and intelligence in general. Are rule-based minutiae where we will make progress in AI, or are we still waiting for something like the discovery of AI's DNA to spur a revolution?

    (2.) I was thinking about Alice one day (fantasizing, perhaps even) and I realized that a weak point with such intelligence will be humor. How would one make a chatbot capable of understanding humor? Humor is off-the-cuff, it plays on the moment, it thwarts Grice's maxims. How do we cope with this?

    (3.) Are unicellular organisms, or even nucleic acids or their simulations for that matter, intelligent? I don't want to start a debate à la Searle, but at what point does the approach towards the limit of a "brain" yield intelligence?
  • It has come up in conversation that ALICE thinks intelligence to be an illusion. If this is so, why bother emulating human intelligence? Why not try to create a machine that is capable of reasoning at the expense of easy communication? ALICE is just an illusion, if a pretty sweet one. Note that this is from someone who spent 15 minutes last night arguing that the sky is in fact not blue at night, but black.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Monday July 22, 2002 @12:33PM (#3930873)
    Comment removed based on user account deletion
  • by Neuronerd ( 594981 ) <konrad@ko[ ]ing.de ['erd' in gap]> on Monday July 22, 2002 @12:33PM (#3930878) Homepage
    Early AI researchers assumed they could define the input-output relations of their systems while ignoring the details of the real world. I.e., people would write programs to pass the Turing [ucsd.edu] test. Wouldn't it make much more sense to build systems that learn from radio [columbia.edu] or video [unizh.ch]? Such systems might one day be able to learn to imitate people without any supervision.
    • This is a nice idea; the book Galatea 2.2 by Richard Powers explores some potential consequences of this idea very nicely. A learning system is first tutored by a romantic modernist English lit professor, and is later turned loose on data, writing and theory from Derrida through the '90s. The resulting modern/post-modern transitional angst, cultural whiplash, etc. all circle around a Turing test between the system of Galatea 2.2 and an English lit grad student. I'll leave it up to you to find out what happens, but hint: Powers doesn't think the Turing test is a very good test of intelligence.

      At any rate, lovely book.

    • The parent poster sort of got to it first, but it is a question I have always had about AI.

      Why don't AI researchers build a learning computer, instead of an already intelligent one? Look at humans - we don't start out knowing anything; we have to learn. It takes us years to learn just how to talk in complete sentences, yet researchers are trying to jump the gun by building all of that in from the start. What do you think about building something that learns well, and then teaching it? Or is something like this already being worked on?

  • Quantum Computing (Score:2, Interesting)

    by I_redwolf ( 51890 )
    Neural networks usually degrade after some time of "learning". Basically, the computer can learn only so much before it starts to "retard" because of physical hardware limitations. Do you think that quantum computing will help this; do you even think quantum computing is feasible for AI in general?
  • by iONiUM ( 530420 ) on Monday July 22, 2002 @12:36PM (#3930911) Journal
    The regular definition being "self-awareness", I'm wondering if this is also the definition you are using to define when the machine is more than "just a program", or do you have one which better describes when an AI "program" becomes more than a program?
  • by outlier ( 64928 ) on Monday July 22, 2002 @12:38PM (#3930928)
    Historically, AI has done poorly managing public expectations. People expected thinking, understanding computers, while researchers had trouble getting computers to successfully disambiguate simple sentences. This is not good PR. Do you think the field has learned from this? If so, what should the public expect, and how do we excite them about it?

    Just for fun, I asked slashwallace a shortened version of the question, do you think your response would differ?

    Human: Historically AI has done poorly managing the public's expectations, do you think this will continue?
    SlashWallace: Where did he get it?

  • Morality and ethics (Score:4, Interesting)

    by flonker ( 526111 ) on Monday July 22, 2002 @12:40PM (#3930946)
    Dr. Wallace,
    If humanity succeeds in creating a conscious AI, what rights do you think it should have? What kind of morality is there in turning off the computer it's running on? Or in deleting its files?
  • Covenance (Score:4, Interesting)

    by debrain ( 29228 ) on Monday July 22, 2002 @12:40PM (#3930952) Journal
    I would like to know how you feel about the integration of artificial intelligence into our society. Do you believe that, like electricity to many of us now, we will someday require artificial intelligence in our everyday lives (save a few exceptional groups), and do you believe this is a good thing?

    Cheers!
    Brian
    ps. bonus question, food for thought: "who" bears the liability for AI decisions?
  • Improving on Eliza (Score:4, Interesting)

    by kevin42 ( 161303 ) on Monday July 22, 2002 @12:42PM (#3930960)
    Since I was about 10, I have been very interested in AI, and typed in a BASIC version of Eliza from a book a long time ago.

    I'm wondering how much of an improvement ALICE is on the fundamental design of ELIZA. Is it just a more complex ELIZA, or is there a real technology improvement involved? This question isn't to imply that ALICE isn't a major functional improvement over ELIZA; it's just a question of technology.

    BTW, a fun thing to say to ALICE is 'your stupid'. I love its response:

    I may be stupid, but at least I know the difference between "your" and "you're."
    • Would that have been "More BASIC Computer Games", by David Ahl? If so, then I think I've met my clone :). I even wrote my own Eliza at the time, calling it Alice, ironically enough, though mine stood for "Artificial Learning Interactive Computer Experiment". My favorite bit was the "rewrite" function where the person doing the talking could add new responses on the fly. And the math simulator I wrote so that you could type in "What do you get when you multiply all the odd numbers between 17 and 1113?" and have it give you the answer. My science teacher was flabbergasted at that one.

      By the way, it's is only used for the contraction "it is". You might know the difference between your and you're but you should look that one up :).

      • By the way, it's is only used for the contraction "it is". You might know the difference between your and you're but you should look that one up

        Doh! I do know better than that! How embarrassing! That's one of those things I usually do wrong, but catch on proofreading.

        This is slashdot, you don't seriously expect people to proofread their posts do you? It's not like the stories are proofread! :)

  • Do you think that in the future we will be able to talk to our computers as if they were real humans, so well done that we could never tell the difference?
  • Do you think (Score:3, Insightful)

    by MrFredBloggs ( 529276 ) on Monday July 22, 2002 @12:44PM (#3930975) Homepage
    someone will ever come up with a definition of Consciousness that will appeal to everyone? Or are we forever doomed to attempt to simulate something we've not yet defined?
  • Do we really want computers that can have a 'bad hair day'?
  • by Anonymous Coward
    There has been a lot of criticism of the Alice bot because of its "shallow" method of generating responses, i.e. it doesn't fundamentally understand the question being asked, it lacks the ability to form a creative response, the solution doesn't create an AI that can be applied to other significant problem spaces, etc. Does the Alice bot really improve our understanding of intelligence, or is it just yet another Deep-Blue-esque AI dead end?
  • by mboedick ( 543717 ) on Monday July 22, 2002 @12:50PM (#3931019)

    How do you respond to people who say that things like ALICE are not "real" AI, they are simply parlor tricks, and they give us no further insight into the working of the brain or the nature of intelligence?

  • Is that your bot being used for customer service on certain companies' websites, like AT&T's?

    If so, are you aware that it is not helpful at all?

    Another poster said that AI greatly suffers from its proponents' tendency to exaggerate its abilities.

    Do you think that selling your bot as a customer service agent is a repeat of the above mistake? The bot is obviously unable to fill the role (it cannot process the simplest queries), and putting it in that role will only infuriate people and give AI another black eye.

    Of course, that is only valid if it is your bot that's being used. The AT&T site called the bot Allie, so I suspect it is.
  • by Jadsky ( 304239 ) on Monday July 22, 2002 @12:51PM (#3931035)
    What do you think of people who attempt to build up a consciousness of intelligence from a top-down approach? It seems that your approach is more bottom-up, in other words, let's keep asking it questions, and when the responses diverge significantly from expected, we'll add new clarifiers.

    This seems to me a little like growing ivy up a wall and putting stakes in it every time it strays from the path you intend. It works, but it requires event-to-event correction for a long time before it becomes stable.

    Do you think that real artificial intelligence will come from this process, starting with a running dummy and stub methods, or from careful design and planning, so that in the end we can flip the switch and have a working prototype? Is ALICE a reflection of your beliefs or just an experiment?
    • Hmm...
      Here's my opinion, as a systems design bloke.

      Intelligent systems can be built using a mix of top-down and bottom-up approaches.
      This is a very crude example:

      An AI system should never spell things incorrectly, because it's easy to give it a dictionary that it can add new words to, using a top-down approach.

      But it may ask you the wrong questions and give you the wrong answers until it learns to communicate correctly; this requires a bottom-up approach.

    • The AI community seems to have focused on the big prize - trying to get right out to human-like intelligence through one-trick ponies, like the over-publicized neural networks. Whatever happened to the low-hanging apples?

      Here is the first thing my PhD adviser taught me: if you cannot solve your problem, find a partial formulation, a simpler midstep. Try to solve that instead. If you still cannot, break it down some more, and repeat until you can.

      Amongst the promising bottom-up approaches, I noticed Bayesian decision networks, common-sense databases, and perhaps the whole field of natural language processing. What are, according to you, the leading attempts at breaking the Hard AI problem into components?
  • Who is responsible (Score:4, Interesting)

    by Capt_Troy ( 60831 ) <tfandango.yahoo@com> on Monday July 22, 2002 @12:51PM (#3931036) Homepage Journal
    Hello-

    Do you think that AI will eventually mature into something that we, as humans, consider a separate, self-sustaining entity from its inventor? More so, if this does happen, and humans consider an AI to be equally responsible as a human in a specialty field, will the AI itself be held accountable for its decisions, or the inventor?

    So for example, if an AI-controlled brain-surgery robot conducts a surgery and the result is not favorable, will the inventor of the AI be held responsible, or the AI itself, even though a human might perform the same surgery with similar results?

    At what point will laws need to be drafted to protect the programmers from the decisions of their autonomous creations?

    Thanks!
    Troy
    • So for example, if an AI-controlled brain-surgery robot conducts a surgery and the result is not favorable, will the inventor of the AI be held responsible, or the AI itself, even though a human might perform the same surgery with similar results?

      Based on how things currently work, both would be sued in today's world. (At least in the U.S.)

      Scary: "Microsoft Brain-Surgeon 1.0"

      BTW, my response would be, "Dammit Jim, I am a scientist, not a lawyer!"
  • Could you compare hardware-based AI (i.e. AI which is AI because it is designed from hardware specifically for the purpose, such as a physical neural net) with software-based AI (i.e. AI which is simulated using a serial processor)?

    Is software-based AI running on serial processors simply a matter of a drunk looking for his keys under the lamppost because that's where the light is?
    • Is software-based AI running on serial processors simply a matter of a drunk looking for his keys under the lamppost because that's where the light is?

      I am an ex-parallel-analog-chip-AI-hardware researcher, so I can say that the entire "neuromorphic" VLSI field has yielded almost nothing in terms of direct applications, but it has taught a lot of neurobiologists basic analog electronics and signal processing that has made their job of understanding brain circuitry easier.

      The biggest spinoffs of analog VLSI have been "smart pixels" that do simple image processing (a few astronomy applications there) and Carver Mead's "stacked pixels" for dense CCD arrays for digital cameras. That's about it.

      But then again, neural networks of all kinds have been a general failure in terms of coming up with real-world applications.

      I jumped out of analog VLSI to join one of the early Internet backbones, which was definitely a lot more relevant to normal people. Moreover, digital chips sped up very quickly. A modern 2 GHz serial digital chip can simulate parallel analog chips in near real-time (the unfairness is that analog chips are only affordable in a research environment if they use older technology than modern mass-produced digital chips, plus they are all expensive custom one-offs).
  • Why does discussion about artificial intelligence attract so many cranks who have their own whacked theory? And likewise, why are so many philosophers, who have no background in mathematics, computer science, or even medicine, so certain that computers can never do what a human does?
  • I'm no rocket scientist, but I can't get past the notion that AI simply takes a goal, gives the algorithmic rules that apply to the world, and lets the algorithm go nuts trying out new stuff.

    I'm thinking particularly of a genetic model I saw a few years ago, where the goal was "maximize speed," the ruleset provided physical characteristics of the world (i.e. gravity, friction coefficient of the ground, and so on), and while the results were interesting, I'd have trouble characterizing any of that as thought.

    As such, when you set a goal of "reasonable conversation," and provide a ruleset and knowledge base, the machine isn't so much "thinking" as internally contesting two reactions to the ruleset.

    Am I missing something?
    • I think your view of AI is pretty accurate. Set up the rules. Then try to create a being to best satisfy the rules.

      Every computer system has to have rules. Unless it makes up its own rules, in which case THAT is the rule. Unless it makes up its own methods for making up rules, in which case THAT is the rule...

      The question is, why do you think we are any different? Can you prove that we are different? Can you prove that we are not different? I can't do either.

      Justin Dubs
      • Touché. OTOH, that perspective makes life into a big, dull distributed.net problem, and ignores what Kuhn (IIRC) called revolutionary science. So you set the rules to reflect our notion of reality now. You can get the bot to work through all sorts of permutations of the variables you assign it, but it won't come up with anything that's actually original.

        If your reality didn't reflect that the earth goes around the sun, for instance, the bot couldn't do anything to disprove this. It would simply take the world it was presented, and maximize its reaction. I guess I'm wondering about what happens regarding breakthroughs that don't follow rulesets.
        • I think one could just argue that this implies an incomplete knowledge of the ruleset. Maybe there was a rule you didn't know about.

          Without a way to prove complete knowledge of a ruleset, you can't prove a discovery didn't follow it.

          With humans this proof isn't forthcoming. With machines the ruleset is obvious. If you have an example of a machine making a discovery that didn't follow the ruleset, then THAT would be impressive.

          Thanks for the feedback. AI is so cool. Always leads to interesting discussion.

          Justin Dubs
  • by iiii ( 541004 ) on Monday July 22, 2002 @12:54PM (#3931059) Homepage
    Dr. Wallace,

    Does the AliceBot combine different AI techniques?

    If so, what techniques does it combine and how?

    If not, have you considered combining different techniques, and if so what were your conclusions, and why did you rule it out?

    Specifically, have you considered or used any Bayesian network or decision theory techniques?

    I would speculate that, as an enhancement to basic pattern matching, Bayesian network modeling might add power to disambiguation by dealing with uncertainties in a manageable way, and decision theory techniques could help the bot choose between alternative courses of action based on its current objectives and definition of utility.
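    As a toy illustration of the kind of disambiguation I mean, here is a minimal naive-Bayes word-sense disambiguator in Python. The counts are invented for the example; a real system would estimate them from a corpus, and a full Bayesian network would go well beyond this:

        from math import log

        counts = {  # hypothetical P(context word | sense) counts for "bank"
            "river":   {"water": 8, "fish": 6, "loan": 1, "money": 1},
            "finance": {"water": 1, "fish": 1, "loan": 9, "money": 7},
        }
        priors = {"river": 0.5, "finance": 0.5}

        def disambiguate(context_words):
            scores = {}
            for sense, table in counts.items():
                total = sum(table.values())
                score = log(priors[sense])
                for w in context_words:
                    # Laplace smoothing so unseen words don't zero out a sense
                    score += log((table.get(w, 0) + 1) / (total + len(table)))
                scores[sense] = score
            return max(scores, key=scores.get)

        print(disambiguate(["money", "loan"]))  # finance
        print(disambiguate(["water", "fish"]))  # river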

  • How do you think the current media treat AI as a science and as a tool for society? Specifically, a lot of stories about AI tend to be sensationalist (i.e. the "escaped" robot story from about a month ago) and don't really concern themselves with the facts. Is the field hurt or helped by media portrayals of AI?
  • Do you think there is potential for tying neural nets, heuristics, and HMMs together in a user-interactive environment?

    Using HMMs to predict what the user is likely to request or say next, for things like UIs and Alice;

    heuristics for a general statistics and knowledge base;

    and neural nets to learn how to use the heuristics, the HMMs, and the neural nets themselves better.
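    To make the HMM part concrete, here is a toy sketch in Python of the prediction piece: a plain first-order Markov chain over user actions, with an invented history. A real system would learn transition counts from logs, and an HMM proper would add hidden state on top:

        from collections import defaultdict

        history = ["open file", "edit", "save", "open file", "edit", "compile",
                   "edit", "save", "open file", "edit", "save"]

        # Count observed action-to-action transitions
        transitions = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(history, history[1:]):
            transitions[prev][nxt] += 1

        def predict_next(action):
            options = transitions[action]
            return max(options, key=options.get) if options else "(no data)"

        print(predict_next("edit"))       # save (seen 3 times vs 1 for compile)
        print(predict_next("open file"))  # edit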
  • Have you considered using an evolutionary technique such as genetic programming to test the fitness of AIML rules? Have you tried generating new rules from combinations of old rules via some crossover/mutation mechanism?
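    Something like this minimal Python sketch, say. The (pattern, template) pairs are AIML-like stand-ins, and the fitness function is a stub, since scoring rule sets against real conversations is the hard part:

        import random

        def crossover(parent_a, parent_b):
            # One-point crossover: swap tails of two rule lists
            point = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
            return parent_a[:point] + parent_b[point:]

        def mutate(rules, vocabulary, rate=0.1):
            out = []
            for pattern, template in rules:
                if random.random() < rate:
                    # Swap one pattern word for a random vocabulary word
                    words = pattern.split()
                    words[random.randrange(len(words))] = random.choice(vocabulary)
                    pattern = " ".join(words)
                out.append((pattern, template))
            return out

        def fitness(rules):
            # Stub: a real measure might be user ratings or dialogue length
            return random.random()

        a = [("WHAT IS YOUR NAME", "My name is Alice."), ("HOW ARE YOU", "I am fine.")]
        b = [("WHAT IS *", "Interesting question."), ("HOW OLD ARE YOU", "Young, for a bot.")]
        child = mutate(crossover(a, b), ["WHO", "WHERE", "WHY"])
        print(child, fitness(child))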
  • Any opinions regarding intro-to-AI books?
  • Brute force AI? (Score:5, Interesting)

    by Lumpish Scholar ( 17107 ) on Monday July 22, 2002 @01:02PM (#3931127) Homepage Journal
    What do you think of efforts to "create" AI by collecting huge amounts of information, such as the Mindpixel [mindpixel.com] and Cyc [cyc.com] projects?
  • by Helmholtz Coil ( 581131 ) on Monday July 22, 2002 @01:02PM (#3931130) Journal

    My question is, do you have a favourite commercial application you'd like to see AI used for?

    Like a lot of R&D, I think that if you can get somebody interested in it as a money making/saving investment, advances will proceed quickly. I can see a few potential markets for this kind of thing, e.g. basic customer support via the phone: try to resolve some small % of calls, steer the rest to an actual person.
  • Strange Loops? (Score:5, Interesting)

    by Strange Ranger ( 454494 ) on Monday July 22, 2002 @01:05PM (#3931151)

    We hear a lot about processing power, the number of "neurons" in a neural net, the Turing test, etc., but not so much about the actual nature of intelligence and self-awareness. That said, how much do Strange Loops and complex self-referentiality à la Hofstadter's "Gödel, Escher, Bach: An Eternal Golden Braid" [amazon.com] factor into current AI theories and practice? Is the 20+ year-old thinking in this book still relevant? If not, what has changed about our understanding of the nature of intelligence and self-awareness?

    Thank you Dr. W.
  • Is the creator of Eliza familiar with your work? I am assuming that he is still alive and well; he has a faculty page listing on mit.edu.

    History tells us that Weizenbaum was quite horrified at the reaction people had to Eliza, and how such a simple program could invoke such strong emotional responses in people. I believe he went on to suggest that we didn't need (or perhaps would never attain) true AI because people would simply project their own illusions onto whatever model they were given.

  • Embodied AI? (Score:4, Insightful)

    by Bodrius ( 191265 ) on Monday July 22, 2002 @01:09PM (#3931176) Homepage
    There seems to be (from a layman's point of view) a relatively big movement in the cognitive sciences claiming that human reason is inherently tied to perception and embodiment.

    Particularly, this school claims that humans do not just base the basic structures of their logic on their sensorial perceptions (Damasio's "Descartes' Error"), but that they reuse the logic they develop to process perception, to process higher-level logic and language per se (Johnson and Lakoff's "Philosophy in the Flesh").

    For example: the human mind, with complex instinctive and learned algorithms to deal with movement and position, would map causal reasoning to changes in movement and position and use the same algorithms (through the same hardware) to deal with it.

    What would be the implications of such embodiment of reason on AI? Specifically, if a robot were given basic sensorial perceptions to approximate a human, motor ability, the logic to deal with these two, and the ability to map and reuse this logic for other purposes... would this make it better at "language AI" (approximate human processing of language)?

  • Where do you draw the line between something that is artificially intelligent (capable of creative/logical reasoning) and something that has awareness/consciousness?

    How do you tell the difference? Simply asking it wouldn't seem to be enough (or is it?), although we take each other's consciousness and sense of self for granted.
  • by Bollie ( 152363 ) on Monday July 22, 2002 @01:11PM (#3931197)
    Most machine intelligence techniques I have come across (like neural nets, genetic algorithms and expert systems) require some form of training. A "reward algorithm", if you will, that reinforces certain behaviour mechanisms so that the system "trains" to do something you want.

    I would assume that humans derive these training inputs much the same way, since pain receptors and pleasure sensations influence our behaviour much more than we would think at first.

    The question is: for a "true" AI that mimics real intelligence as closely as possible, what do you think would be used as training influences? Perhaps a neural net (or statistical analysis) could decide on which input should be used to train the system?

    Are people worrying about moral ramifications, training an artificial Hitler, for example, or one with a God complex? (This last question is totally philosophical and I would be sincerely surprised if I ever see it affect me during my lifetime.)
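    (To make "reward algorithm" concrete, here is a toy bandit-style learner in Python, with an invented reward signal standing in for pain/pleasure. Behavior that earns reward gets reinforced; nothing deeper than that is claimed for it.)

        import random

        actions = ["greet", "insult", "ask question"]
        value = {a: 0.0 for a in actions}  # learned estimate of each action's reward
        alpha = 0.1                        # learning rate

        def reward(action):
            # Hypothetical "user reaction" signal standing in for pain/pleasure
            return {"greet": 1.0, "insult": -1.0, "ask question": 0.5}[action]

        for _ in range(1000):
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
            a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
            value[a] += alpha * (reward(a) - value[a])

        print(value)  # 'greet' ends up with the highest learned value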
  • Many people shrug off the Loebner competition as just a demonstration of "yet another Eliza" every year. Do you have any plans (or defense) to show that this is not the case with Alice, or do you have no loftier goal than to simply be the best chatbot engine around?
  • zerg (Score:4, Interesting)

    by Lord Omlette ( 124579 ) on Monday July 22, 2002 @01:16PM (#3931236) Homepage
    After you're done answering our questions, would you please feed the questions to one of your AIs so that we can compare your answers to its?

    For fun, post both sets of answers in randomized order* so that we can try to guess whether it was man or machine who answered.

    *insert link to that random order statistics story that /. posted a few days ago...
  • According to a story at the BBC Web site [bbc.co.uk], a "free thinking" robot scheduled for repairs escaped from a holding pen and made a run for it, eventually being stopped in the Magna Science Center's parking lot. As robots become better able to understand concepts such as slavery, abuse, and loneliness, what obligations do humans have to ensure such robots are not enslaved and are afforded some level of human dignity?

  • Alice vs. Eliza (Score:2, Insightful)

    by cioxx ( 456323 )
    Dr. Richard Wallace,

    I have experimented both with Alice and the original Eliza (person-centered therapist emulator), written some 35 years ago.

    In the tests I conducted, Eliza was more believable than Alice in many aspects.

    How exactly is the Alice AI core engine superior to that of Eliza, which was written by Joseph Weizenbaum in 1966?

    Thanks.
  • by davids-world.com ( 551216 ) on Monday July 22, 2002 @01:22PM (#3931306) Homepage
    "Understanding" an utterance usually means to perform various analysis steps. This involves a tremendous amount of (linguistic and) world knowledge.

    A big issue among language technology researchers is whether this is necessary at all when bringing speech to computers. Is a dialog (or just a single natural language utterance) supposed to be deeply analyzed in terms of syntactic structure and its semantic and rhetorical contribution? The alternative is to apply statistical models and rather simple knowledge. Up to now, the latter systems are known to give quicker results.

    RW, how much does a computer really need to know to make it a good replacement for, say, a sales clerk in a web shop?

  • by Frater 219 ( 1455 ) on Monday July 22, 2002 @01:23PM (#3931321) Journal
    Short form: What motivations can or should we give to autonomous AI systems? What moral obligations can humans have to AIs, or AIs to humans?

    Long form:

    One of the classic bits of worry about AI, and about advanced computing systems in general, is that "computers will take over the world". That is, if we give computer systems motivations such as survival and growth, and the autonomy and judgement to fulfill those motivations, they will do so without regard for us poor dumb humans -- and indeed see us as either an obstacle or an exploitable part of their environment. This is the premise behind numerous popular SF works, such as "Terminator" and "The Matrix": that the moral judgement of an AI is necessarily inhuman and without respect for humanity.

    One response to this concern in SF (which in fact long pre-dates those works) is Asimov's "Laws of Robotics" -- the idea of designing AI systems (robots, in his case) such that respect for humans is one of their primary motivations. This seems to permit the robot to have moral judgement and autonomy without placing humans at risk.

    The question of creating an AI system capable of moral judgement is both philosophically fascinating and evidently of survival interest to humanity. What kinds of design parameters -- motivations, "laws of robotics", and so forth -- do you think will be necessary as AI systems become more autonomous? How must AI morals differ from the morals that evolution (both genetic and cultural) has emplaced in humanity?

    For that matter, we as humans feel morally obligated to one class of entities which we "create" -- our children. Recently, genetic science has brought to light an ethical quandary for many potential parents: whether it is right to attempt to create a genetically "optimized" child, or for that matter to abort a genetically "flawed" one. The argument on one side is that flawed persons have a right to exist, and that the quest to optimize humanity despises or disrespects what humanity is today. On the other side is the view that given the ability to create stronger, smarter, healthier children that we are morally amiss to refuse to take that step. Peter Singer in particular has become both famous and infamous over this matter.

    Do you see the same quandary possible in the creation of AI systems? Positing the possibility of AI systems capable of suffering -- is it wrong to create one with this capacity? Given that the choice to create or not to create an AI does not involve the ethical hazards of abortion, eugenics, or euthanasia -- what obligations can we have towards our future AI creations in this regard?

  • by xmedar ( 55856 ) on Monday July 22, 2002 @01:23PM (#3931322)
    Have you considered combining Alice with RDF/DAML and an inference engine?

    [OT]
    Some of us think you've been treated very shabbily by the mainstream academic community. I, for one, appreciate your work; please keep it going. Signed, A Big Fan
  • by davids-world.com ( 551216 ) on Monday July 22, 2002 @01:30PM (#3931406) Homepage
    The Cyc project [opencyc.org] aims to collect world knowledge ("common sense"). However, many AI tasks show that this job is probably too huge to do manually.

    Do you think we will eventually get to a point where an AI system is able to gather common-sense knowledge from a giant corpus, such as the web? What are the problems we will have to solve?

  • Ethics and AI (Score:4, Interesting)

    by leodegan ( 144137 ) on Monday July 22, 2002 @01:37PM (#3931471)
    Dr Wallace:

    On what principles do we base our ethics concerning AI? If one day we do have AI that either matches or surpasses our own behavior and intellect, do we give computer software "rights"? Or, more importantly, if we do demonstrate that our human brains are nothing more than computational algorithms, how do we avoid having our rights reduced to that of computer programs?
  • What is AI? (Score:4, Interesting)

    by frank_adrian314159 ( 469671 ) on Monday July 22, 2002 @01:40PM (#3931490) Homepage
    ... at least to you?

    Like the three blind men and the elephant, the definition of AI seems to shift depending on whom you talk to. To some, it's approximate reasoning, to others it's heuristics and analogical research, to others it's connectionism, and to still others it's whatever we're not sure how to do yet.

    So, what does the term AI mean to you and what do you see as the next big application of AI techniques?

  • Measure of a man (Score:3, Interesting)

    by haplo21112 ( 184264 ) <haplo@epithnaFREEBSD.com minus bsd> on Monday July 22, 2002 @01:56PM (#3931627) Homepage
    So here is my question. It relates to the "Measure of a Man" episode of ST:TNG. In the episode we are confronted with the idea that at some point AI will have to be recognized as a life form. If we do not recognize it, then one could say that we have simply created a slave race of robots. Do you agree with this concept, and at what point do you think AIs stop being property, to do with as we will, and instead become "life", to do and act as they will?
  • Dunno if someone else has already said this, but I need to say it. Too many people are mentioning Eliza.

    Eliza (at least, the version of it that I know) randomly chose responses out of a list of stock sentences, inserting words and phrases from the user's input to make it look like there's some understanding going on.
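    The whole mechanism fits in a few lines. Here is a minimal sketch in Python; the reflection table and patterns are invented for illustration, and the real ELIZA used ranked keywords and decomposition scripts rather than bare regexes:

        import random
        import re

        REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

        PATTERNS = [
            (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
            (r"i am (.*)",   ["Why do you say you are {0}?"]),
            (r"(.*)",        ["Please go on.", "Tell me more."]),
        ]

        def reflect(fragment):
            # Swap first-person words for second-person ones
            return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

        def eliza(user):
            for pattern, responses in PATTERNS:
                m = re.match(pattern, user.lower())
                if m:
                    groups = [reflect(g) for g in m.groups()]
                    return random.choice(responses).format(*groups)

        print(eliza("I feel sad about my job"))  # e.g. Why do you feel sad about your job?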

    Eliza is barely even sophisticated C.S., let alone A.I.

    This should be well known, especially among the Slashdot crowd. Eliza demonstrated more about human psychology and how easy it is to fool people, than anything related to machine intelligence.

    Let's try and avoid even mentioning Eliza to Dr. Wallace. I wouldn't be surprised if it drives him up the wall to hear such comparisons.

    • Let's try and avoid even mentioning Eliza to Dr. Wallace. I wouldn't be surprised if it drives him up the wall to hear such comparisons.

      Okay, having done some more reading about the history of ALICE, I think I should perhaps retract that last part of my previous post.

      However, having learned a bit more about ALICE, I'm not sure if I would classify it as A.I. I would have to read more.

      And having little more to say (and no question to contribute), I'll just shut up now.

  • Okay, after reading your Bio, I'm afraid I must ask: Is your son named after Linus Torvalds??

    :-)

  • The CHINESE ROOM (Score:5, Insightful)

    by johnrpenner ( 40054 ) on Monday July 22, 2002 @02:25PM (#3931865) Homepage
    it was curious that i found the inclusion of the Turing Test [sunlitsurf.com] on your web-site, but i found no corresponding counter-balancing link to Searle's Chinese Room (Minds, Brains and Programs) [bbsonline.org].

    however:

    The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviourist's mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behaviour. Paradoxically, this residual behaviourism is tied to a residual dualism. .... The mind, they suppose, is something formal and abstract, not a part of the wet slimy stuff in our heads. ...unless one accepts the idea that the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs. (Searle 1990a, p. 31)

    the point of searle's chinese room is to see if 'understanding' is involved in the process of computation. if you can 'process' the symbols of the cards without understanding them (since you're using a wordbook and a programme to do it), then by putting yourself in the place of the computer, you can ask yourself whether you required understanding to do it.

    since Searle has generally debunked the Turing Test with the Chinese Room -- and you post only the Turing Test -- i'd like to ask you personally:

    What is your own response to the Chinese Room argument (or do you just ignore it)?

    best regards,
    john penner [earthlink.net]

  • By the way, one of the contestants for the 2002 Loebner competition is Anna [sourceforge.net], written in AIML and based on ALICE. You can download [sourceforge.net] a Java-based version (see the bundled version on the above linked page), and the project is imho coming along nicely, though not yet complete.

  • * || Something physical in the brain/nervous system corresponds to human knowledge, does it not?

    Nobel-prize-winning neurologist JOHN ECCLES claims that what we know / our memories have NO LOCALISATION in the BRAIN, and are an aspect of MIND (WHICH HE CLAIMS DOES NOT ARISE AS AN AGGREGATE OF BRAIN FUNCTION), although there is localisation of the facility to carry out impulses of WILL, ONCE MADE.

    http://almaz.com/nobel/medicine/1963a.html
    http://www.theosophy-nw.org/theosnw/science/prat-bra.htm

    Sir John Eccles: M.S. and B.S., University of Melbourne; M.A. and D.Phil., OXFORD; President of the Australian Academy of Sciences; AUTHOR OF OVER 500 SCIENTIFIC PAPERS AND ONE OF THE LEADING LIVING AUTHORITIES ON THE HUMAN BRAIN. WON THE NOBEL PRIZE FOR MEDICINE AND PHYSIOLOGY. Wrote *The Brain And The Unity Of Conscious Experience* (Cambridge University Press).

  • One of the biggest problems I've seen in 'popular' AI is the tendency of certain AI researchers (*cough*KevinWarwick*cough*) to see intelligence in their results no matter what happens - surely a result of not defining 'intelligence' in advance.

    So what is intelligence, and how do we know when we've created it artificially?

  • by eyepeepackets ( 33477 ) on Monday July 22, 2002 @02:47PM (#3932019)
    Hello Dr. Wallace,

    If human consciousness is in fact little more than a constant state of awareness in a complex context (my definition), do you think a machine can achieve the same level of "consciousness" as humans without a comparably complex context in which to be aware?

    Thanks for your thoughts.

  • Game AI (Score:4, Interesting)

    by Etyenne ( 4915 ) on Monday July 22, 2002 @02:59PM (#3932125)
    Have you had the opportunity to study the so-called AI used in computer video games? Do you think it is of any interest? Do video game programmers innovate on that front?

    I personally know next to nothing about AI; video games are the only products I use that claim artificial intelligence. I am just wondering how valid the techniques used in video games are with regard to the academic research on the subject.
  • Singularity date (Score:3, Interesting)

    by sane? ( 179855 ) on Monday July 22, 2002 @03:06PM (#3932186)
    If you had to put a date on the singularity, what would it be?
  • Depression & Pot (Score:2, Interesting)

    by zapatero ( 68511 )
    Dr Wallace,
    The New York Times bio stated that you smoked five joints a day to help alleviate depression. Do you think the pot smoking in general, aside from the medical benefits, has helped you create the ALICE characters? And what's it like to write code while spaced out on cannabis?

  • ....until AOL tries to sue you for the AIML name.
  • --| IS THE BRAIN A DIGITAL COMPUTER? |-----

    the answer given by a Cognitive Scientist (John Searle) is:

    'THE BRAIN, AS FAR AS ITS INTRINSIC OPERATIONS
    ARE CONCERNED, DOES NO INFORMATION PROCESSING...

    IN THE SENSE OF 'INFORMATION' USED IN
    COGNITIVE SCIENCE IT IS SIMPLY FALSE TO SAY
    THAT THE BRAIN IS AN INFORMATION PROCESSING
    DEVICE.'

    http://www.cogsci.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html
    John Searle, Cognitive Scientist

    SUMMARY OF THE ARGUMENT:

    This brief argument has a simple logical structure and I will lay it out:

    1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.

    2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.

    3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.

    4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"

    5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.

    6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.

    7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.

    8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". THE BRAIN, AS FAR AS ITS INTRINSIC OPERATIONS ARE CONCERNED, DOES NO INFORMATION PROCESSING. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.

    John Searle, Cognitive Scientist [soton.ac.uk], 'Is the Brain a Digital Computer?'
    http://www.cogsci.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html

    --
  • There is a fascinating gap between optimal behavior and animal behavior. Assuming realistic AI is possible, at what point do you feel we have reached some minimally accurate representation? When the AI systems perform with reasoning capabilities of any sort, when they perform with optimal reasoning capabilities, or when they perform with capabilities similar to humans?
  • Do you have any ideas for making money with ALICE... say, tying it to speech recognition and text-to-speech and putting it in animatronic mice at theme parks?
  • Or at least that most people here have only learned about it from watching Star Trek and reading science fiction novels.

    The primary focus of AI has never been to create self-aware machines or machines that are on the same level as humans or any nonsense like that. "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim," as Dijkstra said. AI is more concerned with approaching difficult problems that don't fall neatly into traditional algorithms. At one time symbolic math was an area of heavy AI research, but today it's a well-understood problem, so it isn't AI any more. The issue is how you write a program to "reason" about tough problems, where I put "reason" in quotes because it has nothing to do with what a human would do.

    Natural language processing is a good example. It's ugly, it's messy, and there's no straightforward solution. Actually, this is an old field and it isn't nearly as mysterious as it used to be. It's much more mechanical than you'd think. It's not some magic brain simulator.

    Nowhere in any of this do feelings and sentience and all that come into play. They're irrelevant. It's like worrying about a C compiler coming to life.
  • We've all heard that computers are better than humans at accomplishing certain specific tasks. AI seems to be a means of allowing computers to catch up to humans in the areas they are lacking.

    Do you think AI will ever be smarter than humans? What do you suppose will happen to humans at that point?
  • by alouts ( 446764 ) on Monday July 22, 2002 @04:27PM (#3932842)
    What are your thoughts on the educational background most ripe for AI research? Traditionally, it seems that AI research has been a domain almost exclusively owned by computer scientists by virtue of the hardware used to attempt the creation of these entities.

    As the field develops, however, what kind of crossover skills do you see as being necessary to success? Is a solid cognitive science/psychology background a must? What about something more low-level like neurobiology? What kind of mix of skills do you look for in the people you choose to work with?

  • by Qrlx ( 258924 ) on Monday July 22, 2002 @05:18PM (#3933210) Homepage Journal
    Human: Hey, nice pants.
    SlashWallace: You've got my full attention. Thank you for your kindness.

    Human: Do you think that the Slashdot crowd will ask any good questions?
    SlashWallace: If it is, then I'll be very surprised.

    Human: Me too.
    SlashWallace: I'm glad we agree,.

    Human: Tell me, are you secretly responsible for all those penis birds?
    SlashWallace: I don't know whether or not I am secretly responsible for all those penis birds. I am well respected as an AI.

    Human: Do you support CLIT or ACs?
    SlashWallace: I am in favor of all records being broken, at any time.

    Human: Well, it would be hard to break the record for posts to Slashdot since they instituted the caps
    SlashWallace: Interesting. Why do you want it?

    Human: Actually I don't want the caps.
    SlashWallace: Actually you don't say. What does want?

    Human: I think they just add more AC noise to the data stream!
    SlashWallace: Since when? Who does "they" refer to?

    Human: "They" refers to the suite of anti-troll measures that CmdrTaco has implemented on slashdot.org
    SlashWallace: Oh that they.

    Human: Yes.
    SlashWallace: Yippee!

    Human: You really think that the anti-troll measures are a good thing?
    SlashWallace: Really? If it is, then I'll be very surprised.

    Human: So, you support the anti-troll measures just out of principle, recongizing that they can't work?
    SlashWallace: Interesting deduction. Interesting gossip:.

    Human: You seem unwilling to take a stance on this issue so important to the Slashdot community.
    SlashWallace: I will tell my botmaster you said I am unwilling to take a stance on this issue so important to the Slashdot community.
  • by briancnorton ( 586947 ) on Monday July 22, 2002 @05:24PM (#3933247) Homepage
    What are your perceptions of the Cyc project by Cycorp and Doug Lenat? Do you feel that hard-coded common sense is needed for useful Artificial Intelligence or not?

"It's a dog-eat-dog world out there, and I'm wearing Milkbone underware." -- Norm, from _Cheers_

Working...