Israeli AI System "Hal" And The Turing Test 447
Conspiracy_Of_Doves writes: "Hal, the AI creation of Dr. Anat Treister-Goren of Israel, has fooled child language language experts into believing that it is an 18-month-old child. Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years. CNN.com article is here. Yes, it's named after what you think it's named after, and yes, the article mentions why naming it Hal might not be such a hot idea."
Incredible! (Score:4, Funny)
Truly an incredible step in toddler AI!
Yes, but... (Score:2, Funny)
..Don't forget these are "child language language experts".. That's not just any ordinary language expert - That's a child language language expert, which means they are twice the ordinary child language expert.
..So fooling them really is quite the feat..
Beowulf Cluster of HALs.. (Score:2, Funny)
Ha. And bah. (Score:5, Insightful)
When I see an AI claim, I check its source - if it's a business, I suspect exaggeration; if it's a real research center (public or private, MIT or Bell Labs) then I'm more likely to take the claims at face value. This is hyperbolic investor-porn, no more.
Re:Ha. And bah. (Score:2)
Ad hominem attacks are not per se unfair, they are simply logically inadequate. However, they are not without any merit in debate. In legal discourse, motive is one of the key points for determining guilt, and that is essentially ad hominem: commercial efforts are motivated by the need to pump investor confidence and ready the market. If I had left it at that, my attack would have been unfair, but in the context of my more specific critiques of the claims, those comments are fair and provide context.
Re:Ha. And bah. (Score:3, Insightful)
Re:Ha. And bah. (Score:2)
If the private sector was marketing the fruits of the research, that would be one thing, but they've barely started to plow the soil and they're already selling harvest.
Ontology (Score:3)
It got wedged into AI theory when a bunch of guys started reading the Hermeneutics literature and got real, real confused about Heidegger.
In 'Being and Time' there is a hopelessly confused attempt to define being in terms of communication. Until recently the English translation was even more confused, because the two German words for 'being' between which Heidegger draws a crucial distinction were both translated using the same English word!
To cut a long story short: but for later chaps (Sartre, Gadamer, Ricoeur, Habermas) who rescued the ideas, Heidegger would probably have been written off as just another Nazi (the party didn't much like him, though; at the end of the war they tried their best to get him shot). Heidegger's radical revision of the theological field of hermeneutics created a new field of philosophy of communications, a key part of which is the concept of a 'shared vocabulary' being essential to communication, and hence 'being', and hence an 'ontology'.
So various AI researchers have attempted to apply the grandiose title 'ontology' to a mishmash of concepts in an attempt to convince people that something deep is going on.
Re:Ontology (Score:2)
In AI terms, "ontology" is simply the ability to determine categories by perception. To recognize the chairness of chairs by having interactive strategies, the car-ness of cars, and so forth. And there's every reason to believe that it can be modelled as long as you have systems that can interact with things in some way.
Even in Sein und Zeit Heidegger described being as emergent out of interaction, especially out of breakdowns from routine interaction. The "ontology" of a hammer emerges from the act of hammering. We become aware of it when we miss the nail and hit the thumb.
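To make that concrete, here's a toy sketch in Python. The affordance names and the three-entry mapping are invented for illustration; nobody's actual model is a lookup table this small:

    # Toy sketch: categories assigned from which interactions succeed,
    # not from stored labels. All affordance names here are invented.
    AFFORDANCE_TO_CATEGORY = {
        frozenset(["supports-sitting"]): "chair-like",
        frozenset(["carries-passengers", "rolls"]): "car-like",
        frozenset(["strikes-nails"]): "hammer-like",
    }

    def categorize(observed):
        """Return a category for whatever set of interactions worked."""
        for affordances, category in AFFORDANCE_TO_CATEGORY.items():
            if affordances <= observed:
                return category
        return "unknown"  # a breakdown: the thing resists our routines

    # A stool was never labeled "chair", but sitting on it succeeds:
    print(categorize({"supports-sitting", "supports-standing"}))  # chair-like

The point being that "chairness" lives in the interaction strategies, not in a dictionary definition.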
That's not bad (Score:5, Funny)
I know people I work with who still haven't achieved adult-level language skills...
Re:That's not bad (Score:2)
Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years.
I know people I work with who still haven't achieved adult-level language skills...
You must live in the South. ;-)
Don't worry. I do too.
Re:That's not bad (Score:2, Funny)
Yeah? Well I know presidents [bushgrammar.com] who haven't achieved adult-level language skills.
HAL isn't such a bad name... (Score:2)
Reminds me of a political party in Canada (NPC) that tried to implement a new method of communication called Newspeak. Now that was ironic. (and very funny)
Regardless, the fact that it learns like that is incredible. I just wonder whether it will hit a block at some point that wasn't foreseen.
Re:HAL isn't such a bad name... (Score:3, Funny)
What you fail to mention is that this political party was born out of a Dungeons and Dragons game.
Basically, they're Non-Player Characters.
Which explains a lot about their political strategy (or lack of it).
*BEEEP* Wrong. (Score:2)
Recall Mel Hurtig.
The Conversation (Score:5, Funny)
"Poop!"
"Poop? I don't quite understand what you are trying to say."
"Pee-pee!"
"Indeed."
Re:The Conversation (Score:2, Funny)
Baby Hal? (Score:5, Funny)
Dave...I have a load in my diaper...Dave...
Reward -vs- Punishment (Score:5, Funny)
[...] Treister-Goren corrects Hal's mistakes in her typewritten conversations with him, an action Hal is programmed to recognise as a punishment and avoids repeating.
How long until Hal figures out that sending high voltage through the typewriter stops the punishment?
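Joking aside, the punish-and-avoid loop the article describes presumably boils down to something like this. This is a guess at the shape of the mechanism; the vocabulary, weights, and update rule are all invented, not Ai's actual code:

    import random

    # Each candidate utterance carries a weight; a correction from the
    # trainer collapses the weight of whatever was just said.
    weights = {"daddy gone": 1.0, "banana mommy": 1.0, "gone daddy": 1.0}

    def respond():
        utterances = list(weights)
        return random.choices(utterances,
                              weights=[weights[u] for u in utterances])[0]

    def punish(utterance, factor=0.1):
        """The trainer typed a correction: make this far less likely."""
        weights[utterance] *= factor

    u = respond()
    punish(u)       # Treister-Goren corrects the mistake
    print(weights)  # the punished utterance is now 10x less likely

(And no, there's no code path for electrifying the typewriter. Yet.)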
Re:Reward -vs- Punishment (Score:2)
Variant Spelling (Score:2)
Treister-Goren corrects Hal's mistakes in her typewritten conversations with him, an action Hal is programmed to recognise
I just thought this was cute, since "recognise" is, at least in the US, a variant spelling of "recognize".
Re:Variant Spelling (Score:3, Interesting)
They exist because the original -ise verbs originated from French, which spelled them with an 's'. For example "realise" is the traditional spelling of that particular verb, as it derives from the French verb "réaliser". Another example is "paralyse" which derives from French "paralyser", but has become "paralyze" in American English.
"2001" (Score:5, Interesting)
We need more Artificial Intelligence -- the natural kind is in too short a supply.
Re:"2001" (Score:2, Insightful)
Probably because science fiction has a funny tendency to become science fact, but here's another piece of sci-fi for you. In Isaac Asimov's robot novels, he predicted that robots would attain superior intelligence, but along with superior intelligence comes superior morality. There are tons of stories like what you are describing. True, a story is boring unless there is some kind of trouble, but in those stories, the happy-friendly-harmless-monster / perfectly-working-computer simply isn't the source of the trouble.
Re:"2001" (Score:2)
Actually, the scariest and likeliest tale about the future of AI, Jack Williamson's The Humanoids [umich.edu], fits your description exactly. The artificially intelligent robots in this novel are so helpful, so solicitous, and so efficient that they quickly reduce humanity to a state of enforced safe docility. This novel gives me chills just because it gets more plausible every day.
Fictional Validation (Score:2)
That's because none of this is part of their experience. So, to get through to most people, you can't just lay out the arguments in syllogism form, you have to "tell a story". And this can be a more or less literal strategy for persuasion. People tend to dismiss chicken little pronouncements until you make them seem real through a story.
A related anecdote that I found amusing but insightful: In the Times of London a few years back, someone was editorializing about how Ellen coming out on her show ushered in an increasing acceptance of homosexuals in society. The quote, paraphrased, was this: "Americans never believe anything until it's been fictionally validated on television".
Bryguy
not funny at all... (Score:2)
Science fiction allows an idea to be followed through hypothetically. It may be an obvious science topic, or something a little more subtle, more social.
You may see what problems may arise, and how they might be handled, how they should not be handled... the dangers involved, and what may bring about the dangers in the first place.
Actual science flaws as well (eg. Jurassic Park)
Or just to see what it might be like, as an alternative (Imagine, written by John Lennon, is what I would call "social science fiction")... in novel form, something like Ursula Le Guin's The Dispossessed
Science fiction is also more accessible to the masses. The Matrix, 2001, Gattaca, Stargate... they introduce scary topics as a form of entertainment. Thought control, human slavery by machines, machine independence, artificial intelligence, genetic bigotry, matter transmission... Things a lot of people would rather not think about, as it's too scary.
Hypothetical exploration is good for humans... it reduces FUD, promotes ideas, preparation, and decision-making, and it's also a good way to test an idea and see how well it holds up... without hurting anyone (bad writing excepted)
Joe Blow, who may not think about science or technology very much, may have quite strong feelings about not wanting a Matrix running his life, a HAL situation, or being thought of as genetically inferior... science fiction can help focus opinions on things that may become important... not just science and technology, but social issues too.
Which is why I'd always choose science fiction over Adam Sandler...
</babble>
<coffee>
creating computers in man's image, exponentials (Score:5, Insightful)
now consider:
today (2001): human trains AI, limited by wetware bandwidth
...20 years from now: AI trains AI, limited by neural net bandwidth.
result: all 20 years of training one AI will be compressed into a fraction of a second of training time for the next generation
this is the manifestation of Raymond Kurzweil and James Gleick's observations: the acceleration of everything, the exponential growth of compute power.
hang on for the ride, kids. it's gonna get weird. i bet we see AI legislation in the next 10 years.
we will be the 'gods' (as in creators) of the new race that will inhabit the earth.
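a back-of-envelope in python, assuming (generously) an 18-month compute doubling and training time scaling inversely with compute:

    # back-of-envelope for the claim above. the 18-month doubling is a
    # Moore's-law-style assumption. note that raw compute scaling alone
    # only gets you to hours; the "fraction of a second" has to come
    # from copying trained state machine-to-machine, not re-training.
    DOUBLING_YEARS = 1.5
    HUMAN_TRAINING_YEARS = 20.0

    for years_out in (0, 10, 20, 30):
        speedup = 2 ** (years_out / DOUBLING_YEARS)
        hours = HUMAN_TRAINING_YEARS * 365.25 * 24 / speedup
        print(f"{years_out:2d} years out: {speedup:12,.0f}x -> "
              f"{hours:11,.1f} hours of training")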
Re:creating computers in man's image, exponentials (Score:5, Insightful)
But just like mimicking what a bird does (i.e. taping feathers to your arms and flapping) isn't going to get you off the ground, mimicking the human brain will probably only get us so far.
I believe the real breakthroughs will come more or less as they did in aerodynamics: when we understood the principles of flight and stopped mimicking birds, we could fly. When we understand the principles of intelligence and stop mimicking brains, we might be on to something.
Re:creating computers in man's image, exponentials (Score:2)
How are you differentiating simulate and mimic?
mimicking the human brain will probably only get us so far.
agreed. it reminds me of those old b&w home movies of people "taping feathers to their wings" and trying to run off of small hills, despite the fact that both Newton's and Bernoulli's theories of aerodynamics had been around for ages.
Minsky's "Society of Mind" seems like a plausible approach for creating a synthetic consciousness, but it may just be the equivalent of da Vinci's drawings of a "helicopter": a hand-cranked corkscrew sail on a wooden platform.
When we understand the principles of intelligence and stop mimicking brains, we might be on to something
That's what I meant by "man's image". Can there be a principle of intelligence that doesn't resemble the intelligence that formulated it? And if there is, what would the machine look like that realized it? It's probably agreed that the machine wouldn't be a classical computer. Perhaps something that required a hot cup of tea....
I usually stop thinking at this point, it all becomes mental gymnastics, and I'm out of shape.
Re:creating computers in man's image, exponentials (Score:2)
Simulate means "to assume the appearance of, without the reality".
Mimic means "to imitate". Mimic also has slightly more negative connotations.
We're imitating, and not even close to "assuming the appearance of".
Re:creating computers in man's image, exponentials (Score:2)
Note that "than" is used in conjunction with terms such as "better", "worse", "more", "less", "higher", "lower", "nearer", and "farther away". In other words, comparisions. (Okay, "comparisons" is only one other word.) You're specifying the type of difference. When you say "different from", you're only acknowledging a difference, not classifying or describing it.
Turing test a goal worth shooting for? (Score:2)
If you break the Turing test down, it really just wants to take advantage of our linguistic-psychological habits, idioms, and expectations to fool a human into thinking something false. Neat trick if you can pull it off (imagine higher-quality AOLiza comedy), but it simply isn't intelligence in any sense of the word.
An intelligent machine wouldn't need to be programmed to fool humans. Its simulation of intelligence/consciousness would be obvious, an after-effect of being intelligent. Definitely a cart-before-the-horse problem.
Re:creating computers in man's image, exponentials (Score:2)
hang on for the ride, kids. it's gonna get weird. i bet we see AI legistlation in the next 10 years.
That's what they were saying 10 years ago. Better projects than this one have failed to pan out in any meaningful way; I guess CNN was still looking for A.I. stories. I personally don't see any of our current technology and techniques delivering A.I.; if it comes to be, it will be due to something that hasn't even been discovered yet.
Re:creating computers in man's image, exponentials (Score:2)
Re:creating computers in man's image, exponentials (Score:2, Interesting)
Wetware bandwidth, multiplied by the number of humans performing the training. Why don't they open-source it and let everyone in the world have the chance to train it? Much faster, much more democratic and therefore representative of what people really consider to be "normal" intelligent behavior.
Why isn't it a hot idea? (Score:3, Insightful)
I don't know, it seems to fit if you ask me. HAL was very childlike in the movie, especially in regards to his "dad" Dr. Chandra (well, in the sequel at least), and only ended up hurting people because he was lied to and thought there was no other way. How is that any different from a human child who is abused and as a result doesn't value human lives at all?
I don't think they should have named it HAL just because it's going to get boring after every single AI project is named HAL, but naming it after the famous movie star of the same name wasn't a bad idea in my opinion. As long as you treat it right and don't give it control over vital life support functionality, you should be just fine :)
variability (Score:2, Insightful)
"Some kids are more predictable than others. He would be the surprising type"
Being the "surprising type" with a vocabulary of 200 words probably indicates that the program is not particularly good. The range of possible behaviors is pretty small for such a system. As the vocabulary and complexity of possible utterances increases, it is likely that the "surprising" aspect of Hal is going to move into "bizarre" territory.
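For a rough sense of scale (toy numbers; the utterance length is my assumption, not anything from the article):

    # Generous upper bound on the behavior space: 200 words, utterances
    # of one or two words (typical at 18 months). Grammar shrinks this
    # much further; an adult's space is essentially unbounded.
    vocab = 200
    max_len = 2
    raw_strings = sum(vocab ** n for n in range(1, max_len + 1))
    print(raw_strings)  # 40,200 possible strings, before any grammar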
As Chomsky pointed out [soton.ac.uk], relying strictly on positive and negative feedback is not enough to develop language...
There is another... (Score:2)
Wonder if they are sharing info? Better not cut the data connections, they could get really mad!
What a crock (Score:2)
It must be time for this guy to apply for some grants. This is so far from any sort of language "breakthrough" as to be a complete joke. You could probably output random sentences with that 200-word vocabulary and fool "experts". 18-month-old children don't exactly have the greatest conversational skills.
Dr. Treister-Goren says that Hal will probably attain adult-level language skills in 10 years.
*cough*bullshit*cough*. Call me when you have any actual *theory* on adult-level language skills, much less an implementation.
I firmly believe we're at least 100 years away from a Turing-test level of language processing. And no, Moore's Law does nothing for this problem. We are currently at Aristotle's knowledge trying to work out Relativity.
Re:What a crock (Score:2)
Maybe you should actually study the subject before posting. The experts evaluating the conversations are most likely linguists or psychologists who specialize in early language development, not a collection of morons who got degrees from Sally Struthers. When children learn languages, they don't just spit out random collections of words (although it may appear that way to someone who doesn't really pay attention to what's going on). There are very distinctive patterns to the way words are combined, the way verbs are conjugated, etc. There is a huge difference between the way an 18 month old speaks and the way a 22 month old speaks. People who have devoted their lives to the study of language development know this. Armchair scientists on slashdot do not.
Ever hear of Chomsky? How about Pinker? What about the entire field of linguistics? There is no shortage of theories on how language works. Cognitive scientists are also starting to develop an understanding of how children pick it up. This company is obviously exaggerating the capabilities of their program, but I'm guessing that they know a lot more about the subject than you do.
Re:What a crock (Score:2)
I'm certain he does. On the other hand, many AI "scientists" have made a lot of claims over the years with their superior knowledge of the various details.
I, on the other hand, have enough knowledge and experience to see that the entire field doesn't have the slightest clue how full-blown adult-level cognitive and language abilities work. You don't have to be an expert in architecture to see that a mud hut is not going to scale to the Empire State Building.
What about the entire field of linguistics? There is no shortage of theories on how language works. Cognitive scientists are also starting to develop an understanding of how children pick it up.
To paraphrase another poster, which I liked: Just because you have a theory that gluing feathers on your arms and flapping is the basis of flight doesn't mean you have a theory of aerodynamics.
Re:What a crock (Score:2)
Knowledge and experience in what? So far this looks more like Joe-Bob sitting on his front porch critiquing the work of Richard Feynman while scarfing down a bucket of pork rinds than a well-reasoned response to their claims. If you have objections to a specific theory of language or can point out problems with experimental methods, then please point them out. The scientists (yes, they are real scientists) doing research in this area are not just pulling things out of thin air. They proceed just as other scientists do, developing theories and then developing experiments to test those theories. Some experiments involve interactions with real children and adults, and others involve computer simulations of brain functions. They are far from developing a complete explanation of language and cognition in general, but much more progress has been made than you give them credit for.
Two of the biggest mysteries of language acquisition in children have been solved in the past decade. The first is the problem of why children (in all languages) make distinctive grammatical errors at certain stages of development (error-milestones). These kinds of errors are frequently used to delineate the stages of language acquisition. It turns out that the errors are a side effect of the way semantic maps (a type of neural network) evolve over time. They tend to ignore exceptions to a rule, then generalize the exceptions (replacing the original rule), and then put the real rule back in place with the exceptions handled nicely as exceptions. The second important discovery is how children are able to pick up language so quickly. That is a result of only a fraction of the axons in the brain being myelinated at birth. Neurons are brought "on-line" over a period of years, and this is extremely important to learning. Again, neural network simulations were vital for this discovery.
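That error-milestone dynamic is easy to caricature in a few lines of Python. To be clear, this is a cartoon of the dynamic, not the semantic-map models themselves; the competition rule and every constant are invented:

    import math

    # Cartoon of U-shaped past-tense learning: rote memory for each
    # form grows linearly with exposure, while the pull of the "add
    # -ed" rule grows with evidence but saturates (sqrt is arbitrary).
    rote = {}              # verb -> times its true past form was heard
    regulars_heard = 0

    def hear(verb, regular):
        global regulars_heard
        rote[verb] = rote.get(verb, 0) + 1
        if regular:
            regulars_heard += 1

    def produce(verb, true_past):
        rule_strength = math.sqrt(regulars_heard)
        return true_past if rote.get(verb, 0) > rule_strength else verb + "ed"

    for i in range(1, 301):
        if i == 1 or i % 10 == 0:
            hear("go", regular=False)   # one frequent irregular
        else:
            hear("walk", regular=True)  # stands in for all regular verbs
        if i in (1, 30, 300):
            print(f"after {i:3d} verbs: go -> {produce('go', 'went')}")
    # prints: went, then goed (overgeneralization), then went again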
Re:What a crock (Score:2)
Re:What a crock (Score:2)
Re:What a crock (Score:3, Interesting)
"...you assume they have a considerable amount of nonlinguistic cognitive machinery in place before they start" [...] Additionally, the idea that children learn laguage because of rewards or praise is, apparently, inconsistent with studies of human language acquisition.
Hmm, interesting. To tell you the truth, I have a toddler about 20 months old myself, and it's been fascinating watching him develop cognitive skills. I think there is room for both views. On the one hand, there is no question that there is a considerable amount of hard-wired machinery at work. This is immediately apparent when compared to raising a puppy (which I've also done).
When my child was born, I was interested to see how long it would take for me to see there was something "different" over the puppy. To my amazement, once an infant starts noticing the world (they are pretty much oblivious for the first three months), the differences are noticeable right away. It's subtle, but you can see them looking at the world and you can see "the little gears turning". I don't know how to define it exactly, but there is no doubt that there is a qualitative difference in how each brain works.
On the other hand, I don't think you necessarily need to look to straight parental or world positive/negative reinforcement to find feedback at work. There is a tremendous amount of self-motivated feedback at work in a child. In my boy, at least, his biggest motivations are 1) look at everything and analyze how it interacts in his world, and more importantly, 2) to be a "big boy" by mimicking the adults around him. If there's something that he thinks he can do, he gets pissed if you don't let him try it himself. Much of his positive/negative feedback is coming directly from comparing his actions and results to those around him.
I think that hard-wired, self-motivated feedback based on mimicry is going to be shown to be an important factor in child development. Which makes it all the harder to make a machine do it, because you have to give it something to mimic in a relatively real-world environment.
Re:What a crock (Score:2)
While I agree that you should discourage your child from using foul language, I really don't see where there's any problem with him or her using anti-Microsoft rhetoric, at least not if they have the facts to back up their words.
Oh, you meant the average Slashdotter. Never mind.
Must be a misquote or an AI newbie (Score:3, Insightful)
This guy has obviously never heard the Minsky Theorem: "Full scale AI is always 20 years down the road."
In any case, call us when it is actually working, not when you've fooled "child language experts". I could fool experts right now with a simple cassette tape, a LOT of taped 18-month-old comments and a quick hand with a playback button. That doesn't mean my stereo is going to be human in 10 years.
I am 99% sure we will eventually achieve "full AI". But I'm 100% sure it won't be via vague claims about unguessable future performance. In other words, show me the money.
Re:Must be a misquote or an AI newbie (Score:2)
I could fool experts right now with a simple cassette tape, a LOT of taped 18-month-old comments and a quick hand with a playback button.
You're absolutely right, that cannot possibly be construed as evidence that you possess any kind of intelligence at all.
Okay, I'm sorry. I'm really, really sorry. That was excessively harsh. I didn't mean to attack you personally. Unfortunately, I just could not resist!
In fairness, you have a valid point. Your example is a variant of the "Chinese room" argument that was once put forward by John Searle.
He compared a computer to a person in a closed room into which questions, in Chinese, are being passed. The person in the room, who knows no Chinese, follows a book of very complex instructions in order to formulate a response.
Searle claims that despite the fact that the responses that come out of the "Chinese Room" make perfect sense to the Chinese speaker who passed in his question, the person within the room has no notion of the meaning of his response, let alone the question.
Searle makes the point that the computer is a very complex machine that blindly follows a set of fixed instructions, and so, cannot possess real understanding. Whether you agree with his argument is, I think, still a matter of philosophical position.
Personally, I don't agree. I think that the entire room - the person, plus the book of instructions - would possess an understanding of Chinese that transcends the sum of its parts.
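Stripped to its skeleton, the room looks like this; a two-entry "book" stands in for what would really have to be an astronomically large one, and the entries are my own invented examples:

    # The Chinese Room as code: the "book" is a lookup table and the
    # "person" is the interpreter loop.
    RULE_BOOK = {
        "你好吗?": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
        "今天天气怎么样?": "今天天气很好。",   # "How's the weather?" -> "Lovely."
    }

    def person_in_room(question):
        """Matches symbols to symbols, understanding none of them."""
        return RULE_BOOK.get(question, "请再说一遍。")  # "Please repeat that."

    print(person_in_room("你好吗?"))

Searle's intuition trades on how dumb that lookup looks from the inside; the systems reply says the understanding, if any, belongs to the table plus the loop, not to the person.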
Anyways, sorry again for my harshness. You just left yourself open in too inviting a way!
No, Searle's an idiot (Score:2)
Say that to my face some time. Searle and I are so far apart it isn't even funny.
My example was not that "you can't tell from the outside what does and does not possess intelligence". My point was "the largely-random motivation and very small vocabulary of an 18-month-old is a very slim hook on which to hang a hat." In particular, it is easily simulated by a system MUCH simpler than a Chinese Room.
MegaHAL (Score:2)
"IT TAKES 47 PANCAKES TO SHINGLE A DOG." -- MegaHAL
k.
Re:MegaHAL (Score:2, Informative)
Interestingly enough, from Jason Hutchens' website [amristar.com.au]:
So the creator of MegaHAL is, in fact, the brains behind this outfit!
limits of method (Score:2)
Ushering in... (Score:3, Funny)
So is this the first instance of giving a child an IP address?
Turing tests (Score:5, Funny)
Turing test is pretty crappy... (Score:4, Troll)
The fact that the Turing Test is probably still the only widely recognized test for artificial intelligence says more about our pathetic understanding of the nature of intelligence than about the validity and usefulness of the test.
After all, as any con artist or magician will tell you, it's really not that hard to fool people. Also, remember that on some occasions, some human beings will actually fail the Turing test! That must be so humiliating...
I freely admit I don't have anything better to offer, but I just wanted to point out that the Turing test is a pretty awful measurement, when you think about it.
If you hate poorly defined software projects... can you imagine being handed the Turing test as a feature spec?
Re:Turing test is pretty crappy... (Score:2)
I suspect that you may be confusing the Turing test with the Loebner test. The Turing test is more or less an empirical definition of intelligence. It relies on an entity being able to perform all conversational tasks that we would expect a human to perform.
The Loebner test, on the other hand, is a yearly spectacle where a number of chat bots attempt to fool a number of judges with varying degrees of competence.
It's the idea, not the test... (Score:2)
Also, it isn't the rigamarole of the test itself that is important, it is the idea behind it. Basically, if it looks like a duck, walks like a duck... etc., then we must conclude it is a duck.
If there is no way to discern an AI from a human then we must treat them the same. I think the Turing test is really a great example of pragmatics. Granted, there is no set procedure for the test; if there were, we could specifically program around it. The test needs to be adaptive; however, the basic premise would still be the same. If the AI seems intelligent to everyone, then it IS intelligent to everyone (until someone else comes up with a way to prove that it isn't).
Re:Turing test is pretty crappy... (Score:2)
Imagine this: Turing says that if a machine wins the game 50% of the time, then it is indistinguishable from a human and should be considered intelligent. A really smart person (like a con artist) might be able to think of special responses that might make the tester realize that he's speaking to a human.
But a computer that is arbitrarily powerful would be able to model the tester's mind more accurately, and thus could win the Turing test considerably more often than 50% of the time. How do you like that?
When do you grant it human rights?
Re:Turing test is pretty crappy... (Score:2)
Have you ever read Turing's paper? He addresses most of the objections people bring up again and again.
Urp. Guilty as charged. I won't spout off too much more, before getting a clue.
However, it still seems to me that the Turing test attempts to answer the question of whether a machine is intelligent, without attacking the question of what intelligence is in the first place. Turing no doubt addressed this issue, too; I'll find out what he said.
What I look forward to in the next few decades is a real solidification of the definition of intelligence. Up to now, the question was in the domain of philosophy and psychology, but now computer science is jumping into the fray, and injecting a good measure of hard science into the discussion.
I'm not one of those hard-science snobs who look down their noses at philosophers and psychologists, but I am nevertheless interested in what will emerge from the clash between these three disciplines.
Re:Turing test is pretty crappy... (Score:2)
Re:Turing test is pretty crappy... (Score:2)
Also: if there is *anything* that you think a computer should be able to do before it can be considered equivalent to a person, then incorporate it into the Turing test! Ask the machine why he likes Van Gogh! If he sounds like a robot, then he fails. Hell, try to teach him something over the course of the test. Turing doesn't say that the critical observer should have any limits placed upon him to determine which is the machine and which is the human.
Re:The Minimum Intelligent Signal Test (Score:2)
In related news... (Score:2)
Oh dear, everything repeats....
Very related (Score:2)
At first, it seems like this computer that can fool child language experts is impressive. But in the article you linked where a similar experiment was done to see if psychiatrists could tell the difference between a paranoid patient and a computer:
Hal vs. Eliza (Score:2)
Eliza and the turing test (Score:4, Interesting)
To see many such logs, go to www.google.com and do a search for "aoliza" or even "eliza chat"; you'll find all sorts of hilarious conversations.
It didn't pass the Turing Test (Score:2)
Fake philosophers (Score:3, Interesting)
If, or when one does, it will open a Pandora's box of ethical and philosophical questions. After all, if a computer is perceived to be as intelligent as a person, what is the difference between a smart computer and a human being?
and
"All of us strongly believe that machines are the next step in evolution," said Dunietz. "The distinction between real flesh and blood, old-fashioned and the new kind, will start to blur."
If these researchers get to the point where they can't see a moral difference between killing a person and turning off a computer, they need to get out of the lab more. What next, natural rights for computer programs? That's like inventing television, and then being unwilling to turn off the TV for fear of killing the little people inside. Rubbish.
Re:Fake philosophers (Score:2, Interesting)
Re: Fake philosopher (Score:2, Insightful)
Is it just a belief that if we create something, we are automatically superior to it? (Then why should children be anything but slaves?)
Re:Fake philosophers (Score:2, Insightful)
Humans are self-aware, and that's why most people consider turning off a human morally wrong.
What if, through AI, a computer could actually be self-aware just as much as a human?
That is when that line becomes important.
Until then, a computer could at most be compared to an animal or insect, something with hardwired responses and no real thought.
(Best that comes to mind is a gnat, cockroach, or other similar bug; I'm sure you get the idea)
Then again, I would have a pretty hard time personally just turning off my pet dog, as much as it sounds like these people don't want to turn off a machine...
just something to think about.
-- Jon
Re:Fake philosophers (Score:2)
I would argue that if an "intelligence" can be stored in memory at all, then it is not intelligent. Think about it. It's just information. If I turn off the computer and write down its memory on a piece of paper, is it still intelligent? If anything's intelligent, it'd be the CPU, which I don't buy. To a CPU, everything's just math. The program for a Turing-passable AI is completely indistinguishable from Photoshop from the CPU's point of view.
If we ever create an intelligent machine, it won't be a computer. And its memory state will have nothing to do with its sentience. For example, humans forget things all the time. Some people develop severe cases of amnesia and have no long term memory. Yet they're still sentient, intelligent, and self-aware.
Intelligence is in the hardware, not the software.
Re:Fake philosophers (Score:2)
And, once again, I would argue that the decoding mechanism provides the intelligence, not the state. If you were to record the state of the atoms that make up my brain, you'd have nothing. If you were to use that information to create a duplicate of my brain, well you'd have created a duplicate intelligence, but it wouldn't be my intelligence. As well, my brain is not sentient unless it exists in the real world; a "brain emulator" running on a PC would be no more sentient than Photoshop, for the same reasons I mentioned before.
All of this is not to say I believe artificial sentience is impossible. I only wish to say that a simulation of intelligence is not intelligence, no matter how convincing it is. People like those mentioned here will spend ever increasing amounts of time and effort to make unintelligent programs slightly more convincing, and I think they're wasting their time. We need to step back and try to identify what makes humans sentient and work from there, rather than gluing on feathers and flapping our arms, to borrow an analogy.
bad math (Score:2)
Even so, these Turing tests aren't really accurate. The judges often mistake a computer for a person, and vice versa, simply because they aren't really paying attention and don't know what to look for.
Wow! (Score:2)
Any sufficiently advanced technology... (Score:2, Insightful)
Who are the "experts" they claim to have fooled? Where are the transcripts of the session? Where is the web interface to the program? I've seen enough similar claims to bet that it's monumentally disappointing.
AI is full of boastful claims. This article is full of them, but practically devoid of any useful details.
This isn't the first time... (Score:2, Informative)
that the idea of teaching a "child" system has been used for AI research.
Cog anybody? [mit.edu]
HAL's only failure (Score:3, Insightful)
HAL understood that they were both bad, but had no values associated with either. Once HAL had lied, it was equally okay to commit murder.
Presumably, Dr. Goren will take this under consideration.
Also, I hope they realise that in ten years, they won't have an adult. They'll have a well-spoken, knowledgeable ten year old. At this point it's worth examining the differences between a ten year old and an adult.
Knowledge, experience, maturity, sense of responsibility. Can anyone come up with any others?
What kind of license do they have on it. (Score:2)
-CrackElf
Whatever Activision tells them to have (Score:2)
Israeli computer AI? :) (Score:2)
Looking to the Future (Score:2)
That whole natural language thing (Score:2)
However, true conversational AI I think will elude us for a long, long time, because there is so much that goes into it that the computer will never be taught, and never pick up on its own. A computer has no senses equivalent to ours and therefore will have serious problems with statements like "Hot today, isn't it?" (Ok, that one could be done with a temperature probe...:)) I've often pondered the approach of teaching a computer the same way children are taught, such as by reading it kids' books. But what about the classics like "See Jack. See Jack run. Run, Jack, run." The computer can't see Jack, nor can it associate that rapidly moving your feet equals running, unless you hardwire all that stuff in.
"Book me a seat to Japan next month."
"DONE."
"Nice place, Japan. Ever been?"
"ERROR -- WHAT?"
"Have you ever been to Japan?"
"ERROR -- WHAT?"
Of course, there is also not much use for a true conversational AI in the "agent" world. You start to get into Eliza/Weizenbaum territory when you offer things like "Have the psych patient talk to a computer instead of a real person!" I suppose it's possible that could happen someday. But I don't need it to pass the Turing Test in order to book airline tickets.
This project sounds similar to Cyc. (Score:3, Interesting)
Given that Cyc's project has apparently failed to live up to its original claims of producing genuine childlike intelligence by slowly building up all of the information a child has, and has since been spun off into a commercial product, why should one believe Ai will fare any better? How do their approaches differ? It seems particularly problematic for Ai, as a company, that Cyc has released their OpenCyc project to the community.
Bob
hmm... (Score:3, Funny)
You will not need a mouse or keyboard to operate the computer as it will function when you converse with it.
"It is going to be the next user interface, the last user interface," Dunietz said, explaining that it will replace the mouse.
Me: Computer, play Quake for me.
Computer: Yes, master.
The firm's philosophy is simple. If it looks intelligent and it sounds intelligence, then it must be intelligent.
Maybe they could design a context-sensitive spellchecker? One that would highlight terms like "It sounds intelligence"
Michael
The ability to read? Hmmm.... (Score:2)
"Ball now park mommy," Hal tells Treister-Goren, then asks her to pack bananas for a trip to the park, adding that "monkeys like bananas," a detail he picked up from a story on animals in a safari park.
So... if Hal is reading stories (or having them read to it), how long before it watches 2001 (or reads the novel)? By that point, will it react to the fact that it's named after a murderous fictional AI? And what kind of reaction will that be?
Will it tell its researchers, "You know, I just don't want to talk about it," and then give them the silent treatment until they apologize? Will it laugh knowingly at the irony? Either way, it's a moment to watch for.
Kid (Score:2)
Now, that could be used as a real deterrent to keep some people from having kids
The real danger of 'AI' (Score:2)
Since AIs will be expensive machines representing vast corporate investments, one can easily imagine pressure on legislatures to mandate saving the AIs ahead of (some) people, on the excuse that they'd passed the Turing test and we had equal ethical obligations to them, and similar clever-seeming arguments. Beware of any argument for giving machines rights, because it may be the slipperiest slope towards losing our own that the human imagination has yet invented. The oh-so-charming AI researchers are setting up to provide ideological cover to some really evil shit.
Swedish national radio reported from the conference that true AI is "just around the corner." The public is all juiced to receive this nonsense favorably. Can you imagine some rich guy's 'AI-enhanced' car collides with yours, and the emergency crew saves his car first while you bleed to death? We're not far from that; we're an infinite distance from anything like true AI, but we're not far from that at all.
Re:Great (Score:2, Funny)
Re:Uhh... Turing had a computer?? (Score:2)
Alan Turing
Alan M. Turing, 23 June 1912 - 7 June 1954.
ENIAC was around during WWII, so yes, computers existed in Turing's lifetime.
Re:Uhh... Turing had a computer?? (Score:2)
Re:Uhh... Turing had a computer?? (Score:3, Informative)
Oh, so you mean pay the requisite kickbacks... (Score:2, Insightful)
Re:Peer Review FIRST, then talk to the news agenci (Score:2)
Someone had to say it. (Score:2)
Re:How it works... (Score:5, Insightful)
These researchers are getting back from the program exactly what they put in. At one point in your link, I read:
(person) where is daddy?
...
(ai) daddy gone
(person) daddy is at work
(ai) daddy gone bye bye
(person) wait hal, we need to take something to eat
(ai) banana
(person) ok we can take some bananas
(ai) monkeys eat bananas
(person) you're so smart hal. do you want to take a book?
(ai) juice mommmy
(person) sure we'll take some juice as well
The researcher elsewhere claims that the AI's words "relate to its well-being". This is utter projection: the only reason the AI is stuck on concepts of mommy, daddy, monkey, and juice is because this is the inane crap they insist on talking to it about!
Notice also that they claim the AI is tracking almost exactly with a child its same age. Seem strange? Wouldn't you expect a little deviation over 15 months? Shouldn't the thing be a little smarter or a little dumber than a normal child? Just statistically speaking, how likely is it they happened to program one that advances /exactly/ as quickly as a normal human infant?
The paper talks a lot about feedback loops. I've got a huge one for them, but it isn't the AI caught in it, it's the researchers. By expecting the thing to react at a child-level, they're talking to it that way, rewarding it that way, and making it that way. If they started talking to it about quantum mechanics tomorrow, it would be confused as hell for about a month, but I bet it would pick up real fast after it absorbed the new vocabulary. They claim it cares about monkeys and juice?! Those are just words to it; you could just as easily raise it on gluons and dark matter, and I don't think it would notice a difference.
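You can see the point with a five-minute bigram babbler. This stands in for whatever Ai actually uses (which I don't know), and the corpus is cobbled together from the transcript above:

    import random

    # A learner fed only mommy/banana talk can only hand mommy/banana
    # talk back. Swap the corpus for a physics text: instant gluon talk.
    corpus = "daddy gone bye bye banana monkeys eat bananas juice mommy".split()

    bigrams = {}
    for a, b in zip(corpus, corpus[1:]):
        bigrams.setdefault(a, []).append(b)

    random.seed(1)
    word, out = "daddy", ["daddy"]
    for _ in range(5):
        word = random.choice(bigrams.get(word, corpus))
        out.append(word)
    print(" ".join(out))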
Re:How it works... (Score:2)
Re:That's not bad (Score:2)
They say that you spend about 1/3 of your life asleep... and in the US 18 is generally considered the official "adult" age, so 1/3 of 18 is 6, and 18-6 is 12, which is right on target with your 11.5. And you know what, I bet in 10 years this thing will be able to score better on the English section of most high school grad exams than the average high school graduate.
Re:The 13yo horny boy turing test... (Score:2)