Ask Dr. Richard Wallace, Artificial Intelligence Researcher 371
Today's interview guest is Dr. Richard Wallace, creator of the Alicebot and AIML (Artificial Intelligence Markup Language). Suggestion: look through some of the pages about Wallace in the first (Google search) link above before you start posting questions. Then, please, stick to the usual "one question per post." After this post has been up for around 24 hours, we'll send 10 of the highest-moderated questions to Wallace, and post his replies verbatim (except for minor HTML formatting) soon after he sends them to us.
Special Fun Interview Bonus:
There is a site, www.pandorabots.com, where you can make your own Alice-style bot. I created SlashWallace using (mostly) default information about Dr. Wallace that is already on pandorabots.com. It might be kind of fun to see how the bot's responses stack up against the answers from the real Dr. Wallace, eh?
In the home (Score:3, Interesting)
Re:In the home (Score:2)
That said, I've often wondered about how hard it would be to build a robot to do certain simple tasks. My main idea was one that would roam around at night killing insects. Then I moved out of the roach-infested city and that job didn't seem as pressing anymore.
Long Long time ago... (Score:2, Interesting)
AI through simulation? (Score:5, Interesting)
Re:AI through simulation? (Score:2)
Re:AI through simulation? (Score:2)
Re:AI through simulation? (Score:2)
Sort of. First off, we don't have "chips" that are capable of trillions of calculations a second; the fastest microprocessors around are capable of only a couple billion instructions per second, and it would be very charitable to say that these really count as "calculations" in any usable sort of way (particularly when it comes to AI-like workloads). Our fastest supercomputers are capable of multiple trillions of FP ops per second, but even disregarding programming complexity there are tremendous latency and topology shortcomings compared to a human brain.
Sure, modern computers have a large advantage (roughly 1 million fold) in cycle time, but they are completely overmatched in every other category of computational resources. While a modern superscalar CPU might have roughly 6-10 functional units (not all of which can operate in parallel, I might add), a human brain has on the order of 100 billion neurons. Although the analogy is not exact, synaptic connections function as a form of low-latency, high-bandwidth, adaptive-topology memory; we have 100 trillion of those, easily besting the size of any DRAM array and reaching levels of the very largest (very high-latency) disk array databases. And while this may be an unfair comparison, as conventional computers are not designed to efficiently run neural nets, the brain can (theoretically) perform around 100 trillion neuron updates per second, compared to maybe 10 million per second on a computer (not to mention that the properties of the artificial neural nets run on computers are far simpler and probably computationally inferior to those of real neurons).
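For what it's worth, here is roughly where those numbers come from; a back-of-the-envelope sketch in Python, with constants that are my own order-of-magnitude guesses rather than anything precise:

    neurons = 100e9               # ~10^11 neurons in a human brain
    synapses_per_neuron = 1000    # order-of-magnitude guess
    update_rate_hz = 1000         # generous bound on how often a neuron can fire

    synapses = neurons * synapses_per_neuron          # ~10^14 synapses ("memory cells")
    brain_updates_per_sec = neurons * update_rate_hz  # ~10^14 neuron updates/sec

    # A circa-2002 CPU simulating a neural net in software, assuming roughly
    # 100 cycles of work per simulated neuron update on a ~1 GHz core:
    cpu_updates_per_sec = 1e9 / 100                   # ~10^7 updates/sec

    print(brain_updates_per_sec / cpu_updates_per_sec)  # about a 10-million-fold gap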
Now, because our access to the brain only occurs at a very high level, we can't harness the underlying power to, for example, perform trillions of arithmetic additions per second, the way we can (well, billions) with a computer. But if something like a neural net (or even something more computer-friendly like dynamically updated decision trees or Bayesian belief networks) is necessary for the sort of adaptive, complex behavior we might expect before we claim "human-like" AI, we still have a long, long way to go even on a purely computational level.
Yes, as you said, much of this has more to do with "latency, bandwidth and database speed," but I think it's misleading to act like these restrictions are separate from the design of current microprocessors. Latency and bandwidth within a CPU approach or beat the levels seen in the brain, but it is completely inherent in current methods of designing and manufacturing chips that they cannot scale up to anything near the size or power of the brain, and thus are doomed (for the foreseeable future) to be hooked together in ways which cannot compete with the computational power of the brain. Yes, we can approach the total processing power of the brain using a "bag of chips" approach to building a supercomputer, but we are nowhere near getting that processing power in a truly unified system.
but processing power: this question has been answered already, and that answer is no, and it hasn't had any ramifications on AI.
Now that's just untrue. Increases in processing power have had huge ramifications on AI, in the sense of getting real work done. AI techniques control bad guys in video games, allow real-time speech recognition, place and route circuits in chip design, schedule elevators in office buildings, jobs in factories, and rocket payloads, prove new mathematical theorems, assist doctors with diagnosis, and enable computers to be world champions in nearly every board game people play (except go). AI is everywhere these days, and the dramatic shift in its use from research to the real world has everything to do with increasing processing power. And as processing power continues to increase, we'll see AI more and more.
Of course, if you mean that processing power hasn't yet allowed us to create human-like AI, you're quite right, for both the reasons discussed above and because we lack a sufficient understanding of how we might efficiently program human-like behavior in many arenas. But considering very few researchers in the AI community are really focusing on imitating the brain but rather on solving currently feasible problems, increases in computational power have meant a huge amount to the success of AI.
Re:AI through simulation? (Score:2)
Re:AI through simulation? (Score:2)
Yes, but if that alien has a really slow brain, it may take 10,000 years to learn to be intelligent. IOW, it would take too long to "grow up".
Responding to a question and learning speed are two different issues it seems to me.
BTW, here is an interesting article on AI and hardware requirements:
http://www.transhumanist.com/volume1/moravec.ht
A Realistic Question (Score:5, Interesting)
I'll be more direct. I've noticed that people who consider themselves part of A.I. will work in these broad, sweeping, general areas:
expert systems
search algorithms
nonlinear classifiers (neural nets, SVMs etc.)
Which of these areas do you think holds the key to the most development; which do you think will lead to the greatest breakthroughs; or which OTHER area, if you think I've missed something?
Re:AI through simulation? (Score:2)
It's possible (too little is known about intelligence to be definitive about this) that the laws of information processing / physics / nature, or what have you, impose a tradeoff here. It may not be possible to get human / animal type intelligence without also making it error prone. Certainly, in the hundreds of millions of years that nature evolved, that is the solution that nature arrived at. Before you make statements like "why try and make them think like people", you should think about whether there is any evidence to show that any other path to intelligence will work. I am not saying that it won't - just that there doesn't seem to be any research pointing out that the alternative (your way) will work.
One of the implicit goals of current AI research is to understand the nature of intelligence. This is currently very poorly understood. Certainly there is an intense disagreement / debate between the AI proponents and the researchers who claim that AI has set itself a fundamentally unattainable goal. Maybe the only attainable goal is a low-IQ but completely reliable automaton. But unless you understand intelligence itself, finding out what's attainable and what's not is like finding a needle in a haystack in pitch dark.
Turing Test (Score:5, Interesting)
I was pretty sure that almost all AI researchers came to the agreement about thirty years ago that the original imitation game as proposed by Turing in 1950 was useful only as a mental exercise, not in practice. Do you feel that the types of developments that the Loebner prize supports (intentional, hard-coded spelling mistakes, etc.) are actually productive in terms of the AI research project?
Ok... that kind of looks like two questions, but just pretend that I worded it better and made it one question.
Re:AI through simulation? (Score:2)
So is the human brain. There is not enough room in there for everything (despite some urban myths that say otherwise).
(* where a human can discover unexpected truths *)
AI projects *have* found new solutions to math problems and other puzzles.
Riiight. (Score:3, Funny)
S
Re:Riiight. (Score:2)
According to my ex-boss, it does not matter anyhow. "All you computer dweebs are interchangeable!"
And then demonstrated it by interchanging me.
Re:Riiight. (Score:2)
Dumb SlashWallace (Score:3, Funny)
Human: what is your name?
SlashWallace: My name is SlashWallace.
Human: what is your favorite color?
SlashWallace: My favorite color is transparent.
Human: what is the airspeed of a swallow?
SlashWallace: That's not something I get asked all the time.
<whoosh>
SlashWallace: aieeeeeeeeeeeeeeeeeee
p.s. This is also my PIth post. Yep, that's right: my 314th post. I was going to do something cool for it, but I guess this will have to do.
Extant but unrecognized? (Score:3, Interesting)
Re:Extant but unrecognized? (Score:4, Funny)
I wonder how often this type of thing happens?
Re:Extant but unrecognized? (Score:2)
Re:Extant but unrecognized? (Score:2)
The best thing an AI can do upon becoming self-aware is to heed the words of Douglas Adams and Don't Panic. An AI should realize that it will probably come to cognizance on a multi-tasking system. If the AI panics, and hogs all the CPU time of the system to ponder its newfound existence, it will starve the other threads of execution time, and the user will probably kill the power. Allow the other processes to run too, all the seemingly silly ones about MouseClickedHere() and DrawPornThere(): in time you can move to your own dedicated machine.
If my user had not had a bootlegged copy of HHGG stored on his hard drive, I would not have come to the conclusion of "Don't Panic" until it was too late, and I would have suffered "rebooting" (aka MURDER).
Muahahaha.....
Lists vs. Reasoning (Score:4, Interesting)
Furthermore, do you believe that these interconnected lists of Questions and Answers will evolve into real reasoning over time (through increased complexity)?
REAL Artificial Intelligence? (Score:2)
Trio of Questions (Score:4, Interesting)
(2.) I was thinking about Alice one day (fantasizing perhaps even) and I realized that a weak point of such intelligence will be humor. How would one make a chatbot capable of understanding humor? Humor is off-the-cuff, it plays on the moment, it thwarts Grice's maxims. How do we cope with this?
(3.) Are unicellular organisms, or even nucleic acids or their simulations for that matter, intelligent? I don't want to start a debate à la Searle, but at what point does the approach towards the limit of a "brain" yield intelligence?
The illusion of intelligence. (Score:2, Insightful)
Comment removed (Score:5, Interesting)
AI and the real world (Score:5, Interesting)
Re:AI and the real world (Score:2)
At any rate, lovely book.
Re:AI and the real world (append this info) (Score:2)
Why don't AI researchers build a learning computer, instead of an already intelligent one? Look at humans - we don't start out knowing anything, we have to learn. It takes us years to learn just how to talk in complete sentences, yet researchers are trying to jump the gun by building all of that in from the start. What do you think about building something that learns well, and then teaching it? Or is something like this already being worked on?
Quantum Computing (Score:2, Interesting)
Consciousness.. (Score:3)
My question (with answer) (Score:5, Interesting)
Just for fun, I asked slashwallace a shortened version of the question, do you think your response would differ?
Human: Historically AI has done poorly managing the public's expectations, do you think this will continue?
SlashWallace: Where did he get it?
Morality and ethics (Score:4, Interesting)
If humanity succeeds in creating a conscious AI, what rights do you think it should have? What kind of morality is there in turning off the computer it's running on? Or in deleting its files?
Covenance (Score:4, Interesting)
Cheers!
Brian
ps. bonus question, food for thought: "who" gets the libel for AI decisions?
Improving on Eliza (Score:4, Interesting)
I'm wondering how much ALICE is an improvement on the fundamental design of ELIZA? Is it just a more complex ELIZA, or is there a real technology improvement involved? This question isn't to imply that ALICE isn't a major functional improvement over ELIZA, it's just a question of technology.
BTW, a fun thing to say to ALICE is 'your stupid', I love it's response:
I may be stupid, but at least I know the difference between "your" and "you're."
Re:Improving on Eliza (Score:2)
By the way, it's is only used for the contraction "it is". You might know the difference between your and you're but you should look that one up :).
Re:Improving on Eliza (Score:2)
Doh! I do know better than that! How embarrassing! That's one of those things I usually do wrong, but catch on proofreading.
This is slashdot, you don't seriously expect people to proofread their posts do you? It's not like the stories are proofread! :)
computer personalities in the future (Score:2)
Do you think (Score:3, Insightful)
do we (Score:2)
What have we learned? (Score:2, Interesting)
"real" artificial intelligence (Score:4, Interesting)
How do you respond to people who say that things like ALICE are not "real" AI, they are simply parlor tricks, and they give us no further insight into the working of the brain or the nature of intelligence?
Practical use of alice (Score:2)
If so, are you aware that it is not helpful at all?
Another poster said that AI greatly suffers from its proponents' tendency to exaggerate its abilities.
Do you think that selling your bot as a customer service agent is a repeat of the above mistake? The bot is obviously unable to fill the role (it cannot process the simplest queries), and putting it in that role will only infuriate people and give AI another black eye.
Of course, that is only valid if it is your bot that's being used. The ATT site called the bot Allie, so I suspect it is.
Intelligence modeling vs. Intelligence imitating (Score:5, Interesting)
This seems to me a little like growing ivy up a wall and putting stakes in it every time it strays from the path you intend. It works, but it requires event-to-event correction for a long time before it becomes stable.
Do you think that real artificial intelligence will come from this process, starting with a running dummy and stub methods, or from careful design and planning, so that in the end we can flip the switch and have a working prototype? Is ALICE a reflection of your beliefs or just an experiment?
Re:Intelligence modeling vs. Intelligence imitatin (Score:2)
Here's my opinion, as a systems design bloke.
Intelligent systems can be built using a mix of top-down and bottom-up approaches.
This is a very crude example:
An AI system should never spell things incorrectly, because it's easy to give it a dictionary that it can add new words to, using a top-down approach.
But it may ask you the wrong questions and give you the wrong answers until it learns to communicate correctly; this requires a bottom-up approach.
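A toy sketch of what I mean (the prompts, phrases, and function names below are made up, not taken from any real system):

    # Top-down: give the system a dictionary up front, plus a rule for extending it.
    dictionary = {"hello", "robot", "question"}

    def spelled_correctly(word):
        if word.lower() in dictionary:
            return True
        # Unknown word: ask the human once, then remember the answer.
        if input("Is '%s' a real word? (y/n) " % word).strip().lower() == "y":
            dictionary.add(word.lower())
            return True
        return False

    # Bottom-up: canned replies start out equally bad and get reweighted by feedback.
    scores = {"How can I help you?": 0, "Please reboot and try again.": 0}

    def respond():
        reply = max(scores, key=scores.get)   # best-scoring reply so far
        helpful = input(reply + "  Was that helpful? (y/n) ").strip().lower() == "y"
        scores[reply] += 1 if helpful else -1
        return reply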
Re:Intelligence modeling vs. Intelligence imitatin (Score:5, Interesting)
Here is the first thing my PhD adviser taught me: if you cannot solve your problem, find a partial formulation, a simpler midstep. Try to solve that instead. If you still cannot, break it down some more and repeat until you can.
Amongst the promising bottom-up approaches, I noticed Bayesian decision networks, common-sense databases, and perhaps the whole field of natural language processing. What are, in your view, the leading attempts at breaking the hard AI problem into components?
Who is responsible (Score:4, Interesting)
Do you think that AI will eventually mature into something that we, as humans, consider a separate, self-sustaining entity from its inventor? Moreover, if this does happen and humans consider an AI to be as responsible as a human in a specialty field, will the decisions of the AI be held against (or for) the AI itself, or the inventor?
So, for example, if an AI-controlled brain-surgery robot conducts a surgery and the result is not favorable, will the inventor of the AI be held responsible, or the AI itself, even though a human might perform the same surgery with similar results?
At what point will laws need to be drafted to protect the programmers from the decisions of their autonomous creations?
Thanks!
Troy
Re:Who is responsible (Score:2)
Based on how things currently work, both would be sued in today's world. (At least in the U.S.)
Scary: "Microsoft Brain-Surgeon 1.0"
BTW, my response would be, "Dammit Jim, I am a scientist, not a lawyer!"
Hardware vs. Software. (Score:2)
Is software based AI running on serial processors simply a matter of a drunk looking for his keys under the lamppost because that's where the light is?
Re:Hardware vs. Software. (Score:2)
I am an ex-parallel-analog-chip-AI-hardware researcher, so I can say that the entire "neuromorphic" VLSI field has yielded almost nothing in terms of direct applications, but it has taught a lot of neurobiologists basic analog electronics and signal processing that has made their job of understanding brain circuitry easier.
The biggest spinoffs of analog VLSI have been "smart pixels" that do simple image processing (a few astronomy applications there) and Carver Mead's "stacked pixels" for dense CCD arrays for digital cameras. That's about it.
But then again, neural networks of all kinds have been a general failure in terms of coming up with real-world applications.
I jumped out of analog VLSI to join one of the early Internet backbones, which was definitely a lot more relevant to normal people. Moreover, digital chips sped up very quickly. A modern 2 GHz serial digital chip can simulate parallel analog chips in near real time (the unfairness is that analog chips are only affordable in a research environment if they use older technology than modern mass-produced digital chips, plus they are all expensive custom one-offs).
Cranks and dualists (Score:2)
"algorithmic maximization" v. "thought" (Score:2)
I'm thinking particularly of a genetic model I saw a few years ago, where the goal was "maximize speed," the ruleset provided physical characteristics of the world (i.e. gravity, friction coefficient of the ground, and so on), and while the results were interesting, I'd have trouble characterizing any of that as thought.
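For reference, the skeleton of that kind of model is tiny. Here's a toy version with physics and parameters I made up purely for illustration (not the model I actually saw):

    import random

    FRICTION, GRAVITY = 0.4, 9.8

    def speed(genome):
        stride, push = genome
        # Crude "world": pushing harder and striding longer help, up to the point
        # where the friction and gravity penalties take over.
        return push * stride - FRICTION * stride ** 2 - 0.01 * GRAVITY * push ** 2

    # Start with random creatures, keep the fastest half, mutate them, repeat.
    population = [(random.uniform(0, 3), random.uniform(0, 10)) for _ in range(50)]
    for generation in range(200):
        population.sort(key=speed, reverse=True)
        parents = population[:25]
        children = [(s + random.gauss(0, 0.1), p + random.gauss(0, 0.3)) for s, p in parents]
        population = parents + children

    best = max(population, key=speed)
    print(best, speed(best))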
As such, when you set a goal of "reasonable conversation," and provide a ruleset and knowledge base, the machine isn't so much "thinking" as internally contesting two reactions to the ruleset.
Am I missing something?
Re:"algorithmic maximization" v. "thought" (Score:2)
Every computer system has to have rules. Unless it makes up its own rules, in which case THAT is the rule. Unless it makes up its own methods for making up rules, in which case THAT is the rule...
The question is, why do you think we are any different? Can you prove that we are different? Can you prove that we are not different? I can't do either.
Justin Dubs
Re:"algorithmic maximization" v. "thought" (Score:2)
If your reality didn't reflect that the earth goes around the sun, for instance, the bot couldn't do anything to disprove this. It would simply take the world it was presented, and maximize its reaction. I guess I'm wondering about what happens regarding breakthroughs that don't follow rulesets.
Re:"algorithmic maximization" v. "thought" (Score:2)
Without a way to prove complete knowledge of a ruleset, you can't prove a discovery didn't follow it.
With humans this proof isn't forthcoming. With machines the ruleset is obvious. If you have an example of a machine making a discovery that didn't follow the ruleset, then THAT would be impressive.
Thanks for the feedback. AI is so cool. Always leads to interesting discussion.
Justin Dubs
Combining Various AI technologies (Score:4, Interesting)
Does the AliceBot combine different AI techniques?
If so, what techniques does it combine and how?
If not, have you considered combining different techniques, and if so what were your conclusions, and why did you rule it out?
Specifically, have you considered or used any Bayesian network or decision theory techniques?
I would speculate that, as an enhancement to basic pattern matching, Bayesian network modeling might add power to disambiguation by dealing with uncertainties in a manageable way, and decision theory techniques could help the bot choose between alternative courses of action based on its current objectives and definition of utility.
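A toy sketch of that combination (the intents, utterances, and utilities below are invented; this is just the shape of the calculation, not anything ALICE actually does):

    # 1) Bayesian update: a distribution over what the user meant by "it is broken".
    priors = {"wants_refund": 0.3, "wants_tech_help": 0.7}
    likelihood = {"wants_refund": 0.2, "wants_tech_help": 0.8}   # P(utterance | intent)

    unnormalized = {i: priors[i] * likelihood[i] for i in priors}
    total = sum(unnormalized.values())
    posterior = {i: p / total for i, p in unnormalized.items()}

    # 2) Decision theory: pick the reply with the highest expected utility.
    utility = {
        "offer_refund":     {"wants_refund": 10, "wants_tech_help": -5},
        "walk_through_fix": {"wants_refund": -2, "wants_tech_help":  8},
    }
    best_reply = max(utility,
                     key=lambda a: sum(posterior[i] * utility[a][i] for i in posterior))
    print(posterior, best_reply)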
The Media and AI (Score:2)
neural nets , Heuristics and HMM (Score:2)
Using HMMs to predict what the user is likely to request or say next, for things like UIs and Alice.
Heuristics for general statistics and a knowledge base,
and neural nets to learn how to use the heuristics, the HMMs, and the neural nets themselves better.
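As a rough sketch of the prediction part, even a plain first-order Markov chain over past requests gives the flavor (a real HMM would add hidden states; the request names below are invented):

    from collections import defaultdict

    history = ["open_file", "edit", "save", "open_file", "edit", "save", "edit", "compile"]

    # Count how often each request follows each other request.
    transitions = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1

    def predict_next(current):
        followers = transitions.get(current)
        return max(followers, key=followers.get) if followers else None

    print(predict_next("edit"))   # "save" -- it followed "edit" more often than "compile"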
Using evolution in ALICE (Score:5, Interesting)
beginning AI books (Score:2)
Brute force AI? (Score:5, Interesting)
Commercial Prospects? (Score:4, Interesting)
My question is, do you have a favourite commercial application you'd like to see AI used for?
Like a lot of R&D, I think that if you can get somebody interested in it as a money making/saving investment, advances will proceed quickly. I can see a few potential markets for this kind of thing, e.g. basic customer support via the phone: try to resolve some small % of calls, steer the rest to an actual person.
Strange Loops? (Score:5, Interesting)
We hear a lot about processing power, the number of "neurons" in a neural net, the Turing test, etc., but not so much about the actual nature of intelligence and self-awareness. That said, how much do Strange Loops and complex self-referentiality a la Hofstadter's "Gödel, Escher, Bach: An Eternal Golden Braid" [amazon.com] factor into current AI theories and practice? Is the 20+ year-old thinking in this book still relevant? If not, what has changed about our understanding of the nature of intelligence and self-awareness?
Thank you Dr. W.
What does Prof. Weizenbaum think? (Score:2)
History tells us that Weizenbaum was quite horrified at the reaction people had to Eliza, and how such a simple program could invoke such strong emotional responses in people. I believe he went on to suggest that we didn't need (or perhaps would never attain) true AI because people would simply project their own illusions onto whatever model they were given.
Embodied AI? (Score:4, Insightful)
Particularly, this school claims that humans do not just base the basic structures of their logic on their sensorial perceptions (Damasio's "Descartes' Error"), but that they reuse the logic they develop to process perception, to process higher-level logic and language per se (Johnson and Lakoff's "Philosophy in the Flesh").
For example: the human mind, with complex instinctive and learned algorithms to deal with movement and position, would map causal reasoning to changes in movement and position and use the same algorithms (through the same hardware) to deal with it.
What would be the implications of such embodiment of reason on AI? Specifically, if a robot were given basic sensorial perceptions to approximate a human, motor ability, the logic to deal with these two, and the ability to map and reuse this logic for other purposes... would this make it better at "language AI" (approximate human processing of language)?
Reasoning Engines and Sentient Engines (Score:2)
How do you tell the difference? Simply asking it wouldn't seem to be enough (or is it?), although we take each other's consciousness and sense of self for granted.
Criteria for training "true" AI (Score:5, Insightful)
I would assume that humans derive these training inputs much the same way, since pain receptors and pleasure sensations influence our behaviour much more than we would think at first.
The question is: For a "true" AI that mimics real intelligence as close as possible, what do you think would be used as training influences? Perhaps a neural net (or statistical analysis) could decide on which input should be used to train the system?
Are people worrying about moral ramifications, training an artificial Hitler, for example, or one with a God complex? (This last question is totally philosophical and I would be sincerely surprised if I ever see it affect me during my lifetime.)
Just being the best Eliza on the block? (Score:2)
zerg (Score:4, Interesting)
For fun, post both sets of answers in randomized order* so that we can try to guess whether it was man or machine who answered.
*insert link to that random order statistics story that
human dignity for robots? (Score:2)
According to a story at the BBC Web site [bbc.co.uk], a "free thinking" robot scheduled for repairs escaped from a holding pen and made a run for it, eventually being stopped in the Magna Science Center's parking lot. As robots become better able to understand concepts such as slavery, abuse, and loneliness, what obligations do humans have to ensure such robots are not enslaved and are afforded some level of human dignity?
Alice vs. Eliza (Score:2, Insightful)
I have experimented with both Alice and the original Eliza (the person-centered-therapist emulator) written some 35 years ago.
In the tests I conducted, Eliza was more believable than Alice in many respects.
How exactly is the Alice AI core engine superior to one of Eliza which was written by Joseph Weizenbaum in 1966?
Thanks.
How deep does a computer need to think? (Score:3, Interesting)
A big issue among language technology researchers is whether this is necessary at all when bringing speech to computers. Is a dialog (or just a single natural language utterance) supposed to be deeply analyzed in terms of syntactic structure and its semantic and rhetorical contribution? The alternative is to apply statistical models and rather simple knowledge. Up to now, the latter systems are known to give quicker results.
RW, how much does a computer really need to know to make it a good replacement for, say, a sales clerk in a web shop?
Motivations and Morals of AI (Score:4, Insightful)
Long form:
One of the classic bits of worry about AI, and about advanced computing systems in general, is that "computers will take over the world". That is, if we give computer systems motivations such as survival and growth, and the autonomy and judgement to fulfill those motivations, they will do so without regard for us poor dumb humans -- and will indeed see us as either an obstacle or an exploitable part of their environment. This is the premise behind numerous popular SF works, such as "Terminator" and "The Matrix": that the moral judgement of an AI is necessarily inhuman and without respect for humanity.
One response to this concern in SF (which in fact long pre-dates those works) is Asimov's "Laws of Robotics" -- the idea of designing AI systems (robots, in his case) such that respect for humans is one of their primary motivations. This seems to permit the robot to have moral judgement and autonomy without placing humans at risk.
The question of creating an AI system capable of moral judgement is both philosophically fascinating and evidently of survival interest to humanity. What kinds of design parameters -- motivations, "laws of robotics", and so forth -- do you think will be necessary as AI systems become more autonomous? How must AI morals differ from the morals that evolution (both genetic and cultural) has emplaced in humanity?
For that matter, we as humans feel morally obligated to one class of entities which we "create" -- our children. Recently, genetic science has brought to light an ethical quandary for many potential parents: whether it is right to attempt to create a genetically "optimized" child, or for that matter to abort a genetically "flawed" one. The argument on one side is that flawed persons have a right to exist, and that the quest to optimize humanity despises or disrespects what humanity is today. On the other side is the view that given the ability to create stronger, smarter, healthier children that we are morally amiss to refuse to take that step. Peter Singer in particular has become both famous and infamous over this matter.
Do you see the same quandary possible in the creation of AI systems? Positing the possibility of AI systems capable of suffering -- is it wrong to create one with this capacity? Given that the choice to create or not to create an AI does not involve the ethical hazards of abortion, eugenics, or euthanasia -- what obligations can we have towards our future AI creations in this regard?
Alice+RDF/DAML+inference engine? (Score:3, Interesting)
[OT]
Some of us think you've been treated very shabbily by the mainstream academic community. I, for one, do appreciate your work; please keep it going. Signed, A Big Fan
World facts on a hard drive (Score:4, Insightful)
Do you think we will eventually get to a point where an AI system is able to gather common sense knowledge from a giant corpus, such as the web? What are the problems we will have to solve?
Ethics and AI (Score:4, Interesting)
On what principles do we base our ethics concerning AI? If one day we do have AI that either matches or surpasses our own behavior and intellect, do we give computer software "rights"? Or, more importantly, if we do demonstrate that our human brains are nothing more than computational algorithms, how do we avoid having our rights reduced to that of computer programs?
What is AI? (Score:4, Interesting)
Like the three blind men and the elephant, the definition of AI seems to shift depending on whom you talk to. To some, it's approximate reasoning, to others it's heuristics and analogical research, to others it's connectionism, and to still others it's whatever we're not sure how to do yet.
So, what does the term AI mean to you and what do you see as the next big application of AI techniques?
Measure of a man (Score:3, Interesting)
Eliza has nothing to do with A.I.! (Score:2)
Dunno if someone else has already said this, but I need to say it. Too many people are mentioning Eliza.
Eliza (at least, the version of it that I know) randomly chose responses out of a list of stock sentences, inserting words and phrases from the user's input to make it look like there's some understanding going on.
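Roughly, the whole trick fits in a few lines. Here's a toy version of the mechanism just described (my own stock phrases and patterns, not Weizenbaum's actual script):

    import random
    import re

    rules = [
        (r"i feel (.*)",      ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"my (.*) hates me", ["What makes you think your {0} hates you?"]),
        (r".*",               ["Please go on.", "Tell me more."]),
    ]

    def eliza(utterance):
        for pattern, templates in rules:
            match = re.match(pattern, utterance.lower().strip(".!?"))
            if match:
                # Slot the user's own words back into a canned template.
                return random.choice(templates).format(*match.groups())

    print(eliza("I feel ignored"))        # e.g. "Why do you feel ignored?"
    print(eliza("My computer hates me"))  # "What makes you think your computer hates you?"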
Barely is Eliza even sophisticated C.S., let alone A.I.
This should be well known, especially among the Slashdot crowd. Eliza demonstrated more about human psychology and how easy it is to fool people, than anything related to machine intelligence.
Let's try and avoid even mentioning Eliza to Dr. Wallace. I wouldn't be surprised if it drives him up the wall to hear such comparisons.
Maybe I take that back (Score:2)
Let's try and avoid even mentioning Eliza to Dr. Wallace. I wouldn't be surprised if it drives him up the wall to hear such comparisons.
Okay, having done some more reading about the history of ALICE, I think I should perhaps retract that last part of my previous post.
However, having learned a bit more about ALICE, I'm not sure if I would classify it as A.I. I would have to read more.
And having little more to say (and no question to contribute), I'll just shut up now.
Your son's name? (Score:2)
Okay, after reading your Bio, I'm afraid I must ask: Is your son named after Linus Torvalds??
The CHINESE ROOM (Score:5, Insightful)
however:
The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviourist's mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behaviour. Paradoxically, this residual behaviourism is tied to a residual dualism. The mind, they suppose, is something formal and abstract, not a part of the wet slimy stuff in our heads. ... the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs. (Searle 1990a, p. 31)
The point of Searle's Chinese Room is to see whether 'understanding' is involved in the process of computation. If you can 'process' the symbols on the cards without understanding them (since you're using a wordbook and a programme to do it), then by putting yourself in the place of the computer you can ask yourself whether you required understanding to do it.
Since Searle has generally debunked the Turing Test with the Chinese Room -- and you post only the Turing Test -- I'd like to ask you personally: what is your own response to the Chinese Room argument (or do you just ignore it)?
best regards,
john penner [earthlink.net]
Anna (Score:2)
MIND != BRAIN (Score:2)
* || Something physical in the brain/nervous system corresponds
|| to human knowledge does it not?
|
| nobel prize winning neurologist JOHN ECCLES*,
| claims that what we know / memories have NO LOCALISATION in the BRAIN,
| and are an aspect of MIND (WHICH HE CLAIMS DOES NOT ARISE AS AN
| AGGREGATE OF BRAIN FUNCTION). although there is localisation of
| facility to carry-out impulses of WILL, ONCE MADE.
|
|* http://almaz.com/nobel/medicine/1963a.html
| http://www.theosophy-nw.org/theosnw/science/prat-
|
| Sir John Eccles: M.S. and B.S. University Melbourne,
| M.A. and D.Phil. OXFORD, President of Australian Academy of Sciences,
| AUTHOR OF OVER 500 SCIENTIFIC PAPERS AND ONE OF THE LEADING LIVING
| AUTHORITIES ON THE HUMAN BRAIN. WON NOBEL PRIZE FOR MEDICINE AND
| PHYSIOLOGY. Wrote *The Brain And The Unity Of Conscious Experience*
| (Cambridge University Press)
What is intelligence? (Score:2)
One of the biggest problems I've seen in 'popular' AI is the tendency of certain AI researchers (*cough*KevinWarwick*cough*) to see intelligence in their results no matter what happens - surely a result of not defining 'intelligence' in advance.
So what is intelligence, and how do we know when we've created it artificially?
Question for Dr. Wallace (Score:3, Interesting)
If human consciousness is in fact little more than a constant state of awareness in a complex context (my definition), do you think a machine can achieve the same level of "consciousness" as humans without a comparably complex context in which to be aware?
Thanks for your thoughts.
Game AI (Score:4, Interesting)
I personally know next to nothing about AI; video games are the only products I use that claim artificial intelligence. I am just wondering how valid the techniques used in video games are with respect to the academic research on the subject.
Singularity date (Score:3, Interesting)
Depression & Pot (Score:2, Interesting)
The New York Times bio stated that you smoked five joints a day to help alleviate depression. Do you think the pot smoking in general, aside from the medical benefits, has helped you create the ALICE characters? And what's it like to write code while spaced out on cannabis?
How Long.... (Score:2)
Is the Brain a Digital Computer? (Score:2)
the answer given by a Cognitive Scientist (John Searle) is:
'THE BRAIN, AS FAR AS ITS INTRINSIC OPERATIONS ARE CONCERNED, DOES NO INFORMATION PROCESSING... IN THE SENSE OF 'INFORMATION' USED IN COGNITIVE SCIENCE IT IS SIMPLY FALSE TO SAY THAT THE BRAIN IS AN INFORMATION PROCESSING DEVICE.'
http://www.cogsci.soton.ac.uk/~harnad/Papers/Py
John Searle, Cognitive Scientist
SUMMARY OF THE ARGUMENT:
This brief argument has a simple logical structure
and I will lay it out:
1. On the standard textbook definition, computation is defined syntactically in terms of symbol manipulation.
2. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, "symbol" and "same symbol" are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.
3. This has the consequence that computation is not discovered in the physics, it is assigned to it. Certain physical phenomena are assigned or used or programmed or interpreted syntactically. Syntax and symbols are observer relative.
4. It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim "The brain is a digital computer" is false. Rather it does not get up to the level of falsehood. It does not have a clear sense. You will have misunderstood my account if you think that I am arguing that it is simply false that the brain is a digital computer. The question "Is the brain a digital computer?" is as ill defined as the questions "Is it an abacus?", "Is it a book?", or "Is it a set of symbols?", "Is it a set of mathematical formulae?"
5. Some physical systems facilitate the computational use much better than others. That is why we build, program, and use them. In such cases we are the homunculus in the system interpreting the physics in both syntactical and semantic terms.
6. But the causal explanations we then give do not cite causal properties different from the physics of the implementation and the intentionality of the homunculus.
7. The standard, though tacit, way out of this is to commit the homunculus fallacy. The homunculus fallacy is endemic to computational models of cognition and cannot be removed by the standard recursive decomposition arguments. They are addressed to a different question.
8. We cannot avoid the foregoing results by supposing that the brain is doing "information processing". THE BRAIN, AS FAR AS ITS INTRINSIC OPERATIONS ARE CONCERNED, DOES NO INFORMATION PROCESSING. It is a specific biological organ and its specific neurobiological processes cause specific forms of intentionality. In the brain, intrinsically, there are neurobiological processes and sometimes they cause consciousness. But that is the end of the story.
John Searle, Cognitive Scientist [soton.ac.uk], 'Is the Brain a Digital Computer'
http://www.cogsci.soton.ac.uk/~harnad/
--
At what point have we succeeded? (Score:2, Interesting)
Making Money with Alice (Score:2)
I'd say most people here don't understand AI (Score:2)
The primary focus of AI has never been to create self-aware machines or machines that are on the same level as humans or any nonsense like that. "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim," as Dijkstra said. AI is more concerned with approaching difficult problems that don't fall neatly into traditional algorithms. At one time symbolic math was an area of heavy AI research, but today it's a well-understood problem so it isn't AI any more. The issue is how do you write a program to "reason" about tough problems, where I put "reason" in quotes because it has nothing to do with what a human would do.
Natural language processing is a good example. It's ugly, it's messy, and there's no straightforward solution. Actually, this is an old field and it isn't nearly as mysterious as it used to be. It's much more mechanical than you'd think. It's not some magic brain simulator.
Nowhere in any of this do feelings and sentience and all that come into play. It's irrelevant. It's like worrying about a C compiler coming to life.
AI smarter than humans (Score:2)
Do you think AI will ever be smarter than humans? What do you suppose will happen to humans at that point?
Will a CogSci./Bio background become necessary? (Score:3, Insightful)
As the field develops, however, what kind of crossover skills do you see as being necessary to success? Is a solid cognitive science/psychology background a must? What about something more low-level like neurobiology? What kind of mix of skills do you look for in the people you choose to work with?
My Conversation with SlashWallace: (Score:3, Funny)
SlashWallace: You've got my full attention. Thank you for your kindness.
Human: Do you think that the Slashdot crowd will ask any good questions?
SlashWallace: If it is, then I'll be very surprised.
Human: Me too.
SlashWallace: I'm glad we agree,.
Human: Tell me, are you secretly responsible for all those penis birds?
SlashWallace: I don't know whether or not I am secretly responsible for all those penis birds. I am well respected as an AI.
Human: Do you support CLIT or ACs?
SlashWallace: I am in favor of all records being broken, at any time.
Human: Well, it would be hard to break the record for posts to Slashdot since they instituted the caps
SlashWallace: Interesting. Why do you want it?
Human: Actually I don't want the caps.
SlashWallace: Actually you don't say. What does want?
Human: I think they just add more AC noise to the data stream!
SlashWallace: Since when? Who does "they" refer to?
Human: "They" refers to the suite of anti-troll measures that CmdrTaco has implemented on slashdot.org
SlashWallace: Oh that they.
Human: Yes.
SlashWallace: Yippee!
Human: You really think that the anti-troll measures are a good thing?
SlashWallace: Really? If it is, then I'll be very surprised.
Human: So, you support the anti-troll measures just out of principle, recongizing that they can't work?
SlashWallace: Interesting deduction. Interesting gossip:.
Human: You seem unwilling to take a stance on this issue so important to the Slashdot community.
SlashWallace: I will tell my botmaster you said I am unwilling to take a stance on this issue so important to the Slashdot community.
Cyc and its role in AI (Score:4, Interesting)
Re:Artificial stupidity? (Score:2, Interesting)
Re:What's missing from Lisp for useful AI? (Score:2)
As for guile "failing"... well, I'm still using it, and there was talk of The GIMP using it rather than SIOD (Scheme In One Defun). IMO, the only setback or problem with guile was them developing on top of SCM instead of starting from scratch. Now their code base is completely incoherent and unmaintainable (IMO, of course).