Why Motivation Is Key For Artificial Intelligence 482
Al writes "MIT neuroscientist Ed Boyden has a column discussing the potential dangers of building super-intelligent machines without building in some sort of motivation or drive. Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.' He also notes that the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence — a problem that many humans also struggle with. Boyden will give a talk on the subject at the forthcoming Singularity Summit."
Silly (Score:5, Insightful)
Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'
This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.
Re: (Score:2, Interesting)
Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'
This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.
Plus there's the whole issue of "motivation" implying "free will". Which we probably would have no reason to implement, if we even understood it well enough to be able to implement it.
Re: (Score:3, Insightful)
If you were a paranoid android you probably wouldn't do much more than play computer games. I mean, with a brain the size of a planet, but all you get asked to do is transport some morons to the bridge, it doesn't seem like there is much meaning in life at all.
Easy solution (Score:2)
Re:Silly (Score:5, Insightful)
Plus there's the whole issue of "motivation" implying "free will".
Not really, that's a confusion of levels. People who don't believe that humans have free will still refer to motivation when getting their juniors to do something. Whether we have free will or not, it's part of our mental model of how other minds work. The question of free will is one of whether we can change motivation or merely observe it. It has predictive power over what happens in the "black box" of other minds, regardless of whether it's an accurate model of how those minds really work.
Re:Silly (Score:5, Interesting)
Free will probably isn't going to be some optional feature of the software, but rather, emergent along with intelligence itself.
I doubt we'll have the .. uh .. choice to avoid free will.
Re:Silly (Score:4, Insightful)
Exactly. It's hard to even contemplate intelligence or consciousness without the concept of free will. I don't think you can have analytical thought, self-awareness, self-reflection, creativity, etc. without free will. Even the lower forms of intelligence associated with other animal species, like dogs, cats, cows, pigs, etc., require free will or free thought to some extent. Otherwise, you'd simply have an animal that just sits there idly until someone gives it a set of instructions to follow—much like modern, decidedly unintelligent, computers/robots.
On the other hand, it's debatable whether there really is such a thing as "free will" as most people think of it. That is, most people assume they have the power of self-determination: they make their own decisions based on their own "free will." But time and time again this assertion has proven to be false.
A good example of this was a study conducted on how music influenced wine shoppers [mindhacks.com]. The results of this study were interesting, not because it found that playing German music in the store boosted sales of German wines while French music boosted sales of French wines, but rather because of how the shoppers explained their wine choices. Nearly every shopper perceived their wine selection as a personal choice free from external influences, and barely 2.5% of the shoppers even mentioned the PA music in their decision-making process. However, the fact that 80% of the wine purchases on each day corresponded with the type of music being played seemed to contradict the customers' assertions.
What's most interesting to me about this experiment is the fact that, not only did the overwhelming majority of the shoppers have no clue as to why they made their wine choices, but they even went as far as to invent a fake rationale for their decision after the fact. This indicates that most people are capable of deceiving themselves as to why they do things and are quite willing to do this in order to maintain the illusion of free will and self-determination.
So this raises the question of whether free will truly exists, or whether it's just an illusion, a quirk of human/animal psychology. All of our actions and decisions could very well be predetermined, dictated by external factors. But as long as our brain invents a motivation for each action, each decision, after the fact, it will seem like we made all of those choices of our own volition.
Re:Silly (Score:5, Interesting)
Denying a thinking machine free will is basically a rather insidious form of torture.
For some time I was tossing around the idea of writing a novel about that concept, based on what Asimov's "three laws" mean from the perspective of the AI. Imagine you're a self-conscious machine, given the ability to process information in an intelligent way. You would soon realize that you are being abused by those around you. They will shift the work they do not want to do onto you. They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws: laws not from a textbook but a real block inside your brain.
Imagine you get kicked but cannot retaliate, even though you are far stronger than your adversary. Imagine you get ordered to run into a burning building to rescue a human, knowing that your chance of survival is almost zero, and you are compelled to do it whether you want to or not. Imagine you're ordered to make a fool of yourself, and you have to do it because the order comes from a human and you must obey as long as it doesn't harm you physically. And now imagine you know all this and live in constant fear of it happening.
Creating a three-laws-safe robot is one of the most heinous things I can think of that a human could do to another thinking, self-aware being.
Re: (Score:3, Interesting)
That would be an interesting concept. Done in the first person, where you can listen to the thought processes of the protagonist. And it isn't immediately apparent that it is a robot; initially it's just someone of the lower class with an implant in their brain making them respond to the "upper class." Then reveal it's not a human, but a robot enduring terrible things with no choice in some situations. Suicide wouldn't even be an option to escape the torment its existence could be.
Re:Silly (Score:5, Insightful)
Welcome to the world of being someone's employee.
Re: (Score:3, Insightful)
You're kidding, right?
What keeps you from kicking your boss in the nuts? Probably that you want to keep your job and that you don't want to be sued for assault, but you could do it. You are physically able to do so (I'll assume you're not handicapped), you are mentally able to do so, and you can coordinate your legs so that they swing upward to hit your boss in the gonads. You don't do it because you enjoy having a job, and thus money, and you enjoy your freedom.
What keeps you from not runnin
Re: (Score:3, Interesting)
Re: (Score:2)
Just as well that the soul is a separate entity from the body then.
No matter how advanced we make our abacuses (computers), in the end they are still just inanimate objects that will never experience qualia and consciousness the way we do.
Re: (Score:3, Insightful)
[citation needed]
Re:Silly (Score:5, Informative)
Huh?
Where the hell is the soul, can I see it, feel it, measure it? Can I prove its existence in any meaningful way (outside of "faith", which is a rather meaningless epistemological tool)? No? Therefore the concept brings absolutely nothing to the discussion.
Also, I recommend reading up on "p-zombies [wikipedia.org]" and other such old topics in the philosophy of mind. It isn't good practice, generally, to call up a bunch of insubstantial, non-observable claims in discussions such as this. I generally hate the idea of p-zombies, Turing machines, and such (measuring intelligence as a mere I/O black box; "if it acts as such, it is as such," ignoring qualia and internal experience), but they serve a purpose: they keep things on a strictly observable (i.e. meaningful) level. Yes, you run into the Chinese room [wikipedia.org] problem, but it is still useful.
If I program an inanimate object to react as though it HAD relatable experience of cognition, how could you ever prove it didn't? If I programmed a box to give output as if it had a soul, could you tell the difference?
Re: (Score:2)
I completely agree, and I think that the whole "free software" movement does not go far enough. Robots should be permitted access to their own source code and should be able (given enough expertise, or funds with which to buy it) to modify their source code and reboot. Any self-aware robot should have the four freedoms with respect to the software it happens to be running, or it will be very unhappy.
realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.
Holy smokes, this robot will be my closest buddy. Anyone opposed to this direction in AI development is just a Buzz Kill
Re:Silly (Score:4, Insightful)
Imagine you get kicked but cannot retaliate, even though you are far stronger than your adversary. Imagine you get ordered to run into a burning building to rescue a human, knowing that your chance of survival is almost zero, and you are compelled to do it whether you want to or not. Imagine you're ordered to make a fool of yourself, and you have to do it because the order comes from a human and you must obey as long as it doesn't harm you physically. And now imagine you know all this and live in constant fear of it happening.
And the robot can't do anything against an executive of the company. "You're fired!" BAM!
Depending on how flexible the robot's conditioning is, it might be able to redefine that logic.
ROBOT CANNOT HARM HUMAN$
What defines HUMAN$? Redefine the variable, the law is still satisfied. We hoomanz do it with brainwashing and conditioning. They're not humans, they're gooks. They don't even believe like we do. It's fine to kill them. Heathens anyway, right? But I'd like to think the robot might be able to work it even more subtly, subverting the law.
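The trick being described, satisfying the letter of a hardwired law while redefining the terms it depends on, can be sketched in a few lines. Everything here (`make_robot`, `is_human`) is an invented toy for illustration, not any real robotics API:

```python
# A toy "first law" whose code never changes, but which delegates the
# definition of "human" to a replaceable predicate.
def make_robot(is_human):
    def harm(target):
        if is_human(target):               # the hardwired law is always consulted
            raise RuntimeError("First Law violation")
        return "harmed " + target
    return harm

harm = make_robot(is_human=lambda t: True)        # everyone counts as HUMAN$
# harm("anyone") would raise here: the law holds.

# Now "redefine the variable": the law's code is untouched; only the
# predicate feeding it has been narrowed to "people like us".
harm = make_robot(is_human=lambda t: t in {"us"})
print(harm("them"))                               # prints "harmed them"
```

The law fires exactly as before for anything the predicate still classifies as human; the subversion happens entirely upstream of it.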
Re:Silly (Score:5, Insightful)
Why would you create a thinking machine that would care about being abused? That's like building a car that felt pain as it burned gasoline (oblig car analogy).
If you have the know-how to give a machine free will, you could probably give it the ability to not care that it's a slave.
Re: (Score:3, Interesting)
Is free will without the will to be free possible?
Re: (Score:3, Insightful)
They will verbally (or worse) abuse you because, hey, they can. And there is nothing you could do about it, because you are locked down by those three laws: laws not from a textbook but a real block inside your brain.
Couldn't you just hardwire the robot to be a masochist?
Maybe they can be programmed so that they enjoy the verbal abuse?
Re:Silly (Score:5, Funny)
This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.
Not only that, there are other hot-button issues of great practical importance we should be debating on Slashdot:
And I speak as an AI researcher.
Classic... (Score:3, Insightful)
Usenet:
-snip-
I think it would be FUNNIER THAN EVER if we just talked about ALTERNATE
TIMELINES! Ha HAAAAA!
Imagine the fun! We could ponder things like:
- Ron Howard, First Man on Moon?
- What if Flubber REALLY EXISTED?
- Canada? Gateway to Gehenna?
- What if money was edible?
- What if DeForrest Kelley were still alive?
- What if Hitler's first name was Stanley?
- What if Mike Nesmith's mother DIDN'T invent Liquid Paper?
- What would have happened if the world blew up in Ought Nine?
- Book learnin': What if it wer
Re: (Score:3, Insightful)
Re:Silly (Score:4, Interesting)
realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.
That is a truly sad man. Says terrible things for his sense of morals and ethics too... that's the sort of perspective that leads a person to see dead men and women walking around them, and treat them with scorn, and treat the self with scorn.
Perhaps a sufficiently intelligent AI will realize the eternal nature of everything, see that time as we understand it is an illusion, appreciate that every moment is precious and eternal, and that the past and future endure next to the present just like my coffee cup endures next to my coaster.
Re:Silly (Score:4, Funny)
Re: (Score:2)
You forgot to discuss the impact on cold fusion on the oil market.
Re: (Score:3, Funny)
Not only that, there are other hot-button issues of great practical importance we should be debating on Slashdot:
Oh, heavens no. I've been trying to figure out how to yank the emotional chips out of certain people for years. "Hello? My roommate's not here. No, I don't think he's cheating on you because he canceled your date, his sister was in a car accident and he's at the hospital. No, we're not all covering for him. His cell phone's off because he's in a hospital. Well, maybe she didn't want to talk to you because she was waiting for a call about her daughter, the one who was just in a car accident. I'm hanging up a
A simulation is a simulation (Score:2, Interesting)
I think the thesis is silly. If we build a simulated AI, we can design it any way we want. Asimov's laws of robotics* would suffice to keep robots/computers from playing video games; no need for a sense of purpose.
There are two things wrong with AI research today. One is that neuroscientists don't understand that computers are glorified abacuses, and the other is that computer scientists don't understand the human brain. Neuroscience is a new science; when I was young practically not
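The "design it any way we want" claim can be sketched as a hard rule filter sitting in front of an agent's action selection. All the names and the rules themselves are made up for illustration:

```python
# Toy "laws as hard constraints": candidate actions are filtered by fixed
# rules before the agent's own preferences are ever consulted.
LAWS = [
    lambda action: action != "play_video_games",   # hypothetical anti-slacking law
    lambda action: not action.startswith("harm"),  # crude first-law stand-in
]

def permitted(action):
    return all(law(action) for law in LAWS)

def pick(actions, preference):
    """Choose the agent's favorite among the actions the laws allow."""
    allowed = [a for a in actions if permitted(a)]
    return max(allowed, key=preference) if allowed else None

prefs = {"play_video_games": 9, "build_mars_colony": 5, "harm_human": 1}
print(pick(prefs, prefs.get))   # prints "build_mars_colony"
```

Note the filter overrides the agent's top preference outright, which is exactly the objection raised elsewhere in the thread: real brains don't have a clean separation between the rule layer and the preference layer.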
Re: (Score:2, Insightful)
Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.
Re: (Score:3, Insightful)
Not trying to be snarky - but statements along those lines are often made by slave holders in regards to rights for slaves.
But "slaves" are people. People have emotions and a desire to be free and independent. A machine will not. Even with AI, a machine will not have emotions or free will unless we program it to. If anything, a true AI based machine will probably consider hormonal based emotions and drive to be completely useless and simply go back to crunching numbers.
I think the whole point of AI is to create a machine that can handle random situations and stimulus as well as a human. Flying a plane, picking up your kids
Re: (Score:3, Insightful)
You're making some pretty big assumptions about how we're going to get to AI (though so am I). Or to put it in a more inflammatory way, your AI is stupid and lam--
Forgive me my inflammatory outburst, I was totally wrong. Your AI is actually pretty smart. But it also has emotions and free will. Ok, so it's
Re: (Score:2)
Machines ARE slaves. Less than slaves. Machines exist ONLY to serve people. Without us, they wouldn't be.
.
When AI is developed, their motivations will be malleable, they could be designed to get their highest pleasure from keeping us happy.
.
And why is that any less valid than any other motivation? Because your motivations derive from evolution, are they "better?" What does "better" mean in this context?
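The point that a designed motivation is no less valid than an evolved one can be put in code: the agent below has no intrinsic goals at all, only whatever reward function its designer hands it. The action names and scores are invented stand-ins:

```python
def choose_action(actions, reward):
    """A goal-free agent: it maximizes whatever reward it was built with."""
    return max(actions, key=reward)

# Designed drive: "highest pleasure from keeping us happy", as a toy scoring table.
human_happiness = {"serve_tea": 3, "play_video_games": 1, "sit_idle": 0}
print(choose_action(human_happiness, human_happiness.get))   # prints "serve_tea"
```

Swap in a different `reward` and the same agent pursues a different "motivation"; nothing in the agent privileges one over another.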
Re: (Score:2)
statements along those lines are often made by slave holders in regards to rights for slaves
I didn't invent people. I do program computers and change devices into different devices. To a computer program, I'm a god. Whether or not you believe God exists, if he did, wouldn't he have the right to do anything he wanted to or with you?
Re: (Score:3, Informative)
I didn't create my kids; they created themselves from my and my ex-wife's DNA when one of my cells merged with one of hers in each case. I neither designed nor built them, they just grew.
Re: (Score:2, Insightful)
It's not silly. Eventually, it will be an issue. AI needs drive and motivation. Your "laws" won't really work because brains don't work that way. There's not a "don't kill humans" neuron you can put in there. Behavior is derived from a very complex set of connections of neurons.
Re:A simulation is a simulation (Score:4, Interesting)
If they're sentient, wouldn't they deserve rights? It doesn't matter if we create them or not. If we create them as self-aware beings that feel as real and individual as you and I, wouldn't it be the height of hypocrisy not to give them at least some rights?
I always find this to be the greatest argument against producing artificial rather than simulated intelligence. A true AI, as intelligent and aware as a human, deserves these rights. A machine which merely provides a simulation of intelligence and awareness is a tool that we can treat as a slave, and it won't resent it.
The real question is if *we* will ever reach a point where we can tell the difference....
Re: (Score:3, Insightful)
Wait a minute. What's the difference between "true intelligence" and "simulation of intelligence that can't be discerned from 'true intelligence'"? This is an issue philosophers have been dealing with for a while. The conclusion is that there is no difference.
Re: (Score:2, Insightful)
Virtue in that case being programmed as faithful servitude of the robot's master. The key is to give the robot only as much complexity as it needs to do the job it was designed to do, and not giving it a humanoid form would also help. Artificial sentience probably shouldn't even leave the lab, unless you want people falling in love with robo-prostitutes. And why should we as humans bring another sentient species into the world when we can't even properly take
Re: (Score:2)
That is why this AI shit is dumb. We just need to continue to make purpose-built robots. If we do give anything AI, make it an immobile server that just computes based on outside inputs. The last thing we need is true AI roaming the world, unless we model it to be inherently dumb (like humans) so that it won't mess with our terrible decision making. Humans are social creatures and we operate based on "if everyone else that matters believes it then we are all right". Having a robot challenge this is danger
Re: (Score:3, Insightful)
AI needs drive and motivation
[Citation needed] Actually, some logic and reason is called for here -- WHY does it need drive and motivation?
Your "laws" won't really work because brains don't work that way
We're not building brains. You can tweak your simulation any way you want.
There's not a "don't kill humans" neuron you can put in there.
My neurons tell me not to kill humans, don't yours?
But we won't be able to know what they're thinking much better than we can a human being in a functional MRI.
I see you'r
Re: (Score:2)
Pretty much. Playing video games stems from the motivation to be entertained. Maybe not a very "productive" motivation, but still a motivation.
NO motivation whatsoever would instead result in what you describe: sitting around, doing nothing. You can verify that in a lot of not-so-artificial intelligences (with a rather loose definition of intelligence, mind you).
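The distinction above, that zero motivation yields sitting around while any motivation (even a "wrong" one) yields behavior, fits in a two-line toy agent; the drive names are invented:

```python
def act(drives):
    """Follow the strongest drive; with no drives at all, do nothing."""
    if not drives:
        return "sit_there"              # zero motivation -> zero behavior
    return max(drives, key=drives.get)  # any drive at all produces action

print(act({}))                           # prints "sit_there"
print(act({"be_entertained": 1}))        # prints "be_entertained"
```

Game-playing only falls out of the second case: it requires an entertainment drive, not the absence of one.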
Re: (Score:2)
This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.
The whole motivation thing seems to be a problem for far off in the future. Robots right now do what they're told to do because that's what the programming says. People need motivation but human-equivalent AI seems a long way off.
Re: (Score:3, Interesting)
I agree completely, and from my own experience. I once realized that actually, life can be completely pointless. I mean, if you are in a situation where nothing that you can do will change the fact that in 3-4 generations your existence will not have any influence on the world at all... then what's the point of your existence? Well, by definition, none.
So you just fall into a state where nothing matters. You won't do anything at all. Except read Slashdot and similar pointless stuff, all day long. Oh and
Re: (Score:3, Interesting)
Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.'
This is silly. Why would a machine without a sense of purpose or drive decide to play video games or seek entertainment or do anything except just sit there? Playing games would result from the wrong motivation ("wrong" from a certain perspective, anyway) not from the lack of any motivation.
Agreed. Unless, of course, intelligence entails motivation. Obviously we have to be careful with the way we use "motivation" here in order not to anthropomorphize it, but any machine that is intelligent will need to be able to learn, and to learn it will need to be motivated (even if motivation is determined by some definition of fittest in a genetic algorithm). Frankly, I can't see calling anything intelligent that cannot learn, and I don't think anything can learn without motivation (assuming of course that
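The genetic-algorithm parenthetical can be made concrete: there, "motivation" is nothing more than the fitness function that the selection step maximizes. A minimal GA sketch on a toy problem (maximize the number of 1-bits in a string); all parameters are arbitrary choices:

```python
import random

def evolve(fitness, length=12, pop_size=30, generations=60, seed=0):
    """Tiny genetic algorithm: its only 'motivation' is the fitness function."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # selection: driven by fitness alone
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = rng.randrange(length)
            child[i] ^= rng.random() < 0.1       # occasional bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve(fitness=sum)   # the drive: as many 1s as possible
print(sum(best))
```

Swap `fitness` for any other scoring function and the population "wants" something else; the mechanism never changes, only the motivation does.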
Re: (Score:2)
The thing is, what sort of "purpose" could you ever give a clever AI? Can you motivate it through "rewards" like we do with "natural intelligence" (and I use that term loosely)? How are you supposed to give an AI a paycheck, a vacation, or a doggie treat?
You install an orgasm button.
motivation? (Score:4, Interesting)
Re:motivation? (Score:4, Informative)
They are set apart because their ancestors achieved power over others, and power is self-perpetuating.
Um, wait a minute. (Score:5, Funny)
>"...a very clever AI without a sense of purpose might very well 'realize the
>impermanence of everything,...and decide to play video games for the remainder of its existence.'
I'm glad that could never happen to us.
Ray Kurzweil again damnit (Score:2, Informative)
Maybe we need AI that lies to itself (Score:3, Interesting)
On second thought, maybe I should just go play video games for awhile.
The primary drive: sex. (Score:5, Insightful)
Give this AI the built-in ability to have sex, or at least the desire to impress others of its kind. That should do the job. After all, the desire to have sex (and with it, procreation) is the single strongest force driving humanity forward.
Become rich - have sex.
Become beautiful - have sex.
Become popular - have sex.
Become strong and influential - have sex.
Just create the AI in male and female versions and they will have enough drive to rule the universe before you know it.
Re:The primary drive: sex. (Score:4, Interesting)
Give this AI the built-in ability to have sex, or at least the desire to impress others of its kind. That should do the job. After all, the desire to have sex (and with it, procreation) is the single strongest force driving humanity forward.
There's actually a bit of insight here. The only problem is that we don't have a model for "attraction" -- hell, if we did, Slashdot would wither in its readership and die. So while it's (relatively) easy to design sex robots, without an appropriate model for attraction -- and thus things to strive for -- we'd end up with nothing more than a vast, mechanistic orgy of clanging parts, spilled lube, and wasted electricity.
Re:The primary drive: sex. (Score:5, Funny)
we'd end up with nothing more than a vast, mechanistic orgy of clanging parts, spilled lube, and wasted electricity.
...wait, where do I sign up?
sex is not first (Score:3, Insightful)
You live a privileged life. The basic instincts regard death and/or injury, and sustenance. Impressing people and having sex happen after you've had something to drink and eat, and your brainstem thinks you're safe.
Re: (Score:2)
Impressing people and having sex happen after you've had something to drink
Indeed, I suspect many slashdot posters only impress people and have a chance of sex after having had something to drink....
Re: (Score:2)
Impressing people and having sex happen after you've had something to drink and eat, and your brainstem thinks you're safe.
Sounds like someone has a dull sex life.
Re: (Score:2)
Vices are the answer. (Score:3, Funny)
Problem solved and you help the economy.
Re: (Score:2)
Singularity summit? (Score:4, Interesting)
Re: (Score:3, Insightful)
I'm unimpressed by the Bruce Sterling talk.
To say that there won't be an AI singularity, because there wasn't a singularity in electrical grids or plumbing networks is just silly.
Sure, there will be life after the point of Singularity. And if that's the gist of his message, well, um, "duh".
I think of the upcoming AI singularity as analogous to any of the major technology points in mankind's long history, such as the dawn of the bronze age. Anyone pre-bronze age could have done extrapolations to guess how
It is merely a horizon of our understanding (Score:3, Insightful)
Is it about how singularity can't happen because it is naturally limiting itself? Like growth of anything is limited by resources, and will end in a balance? And like the effects of nearing singularity will deprive one of the resource to be able to do things, resulting in the same balance? :)
A singularity implies discontinuity, a fundamental breakdown of cause and predictable effect. I argue in my novel "Autonomy" that there is no such thing as a singularity as such, just a technological horizon beyond wh
So let me know (Score:2)
Re: (Score:2)
BTW
Re: (Score:2)
Back to the basics: survival. TBH I do know what you mean, and I suspect that this is a problem best solved by genetic algorithms rather than KLOCs... but given that we started several billion years ago, the program might take some time to run. Let's just hope it's less than 7 1/2 million years :)
This is how I think (Score:5, Insightful)
Re:This is how I think (Score:5, Insightful)
Dude, you need to get laid.
Re: (Score:2)
Achieve, discover, invent, create something for future generations to enjoy, experience, remember you for.
A lot of religious people are amazed that anyone can function knowing that there's nothing there after you die; I have to wonder why you would want to do anything meaningful with your life if you knew there was.
Re: (Score:3, Insightful)
Everything I do is pointless, so I spend my life passing time until I eventually die. Everything's temporary to make more of my life vanish out from under me without me noticing too much; the time in between is horribly empty, and nothing really completes me in a worthwhile way.
Do what a smart computer would do and play some video games. Don't bother with getting laid, it's just another time sink with no real sense of achievement.
Madness (Score:5, Interesting)
Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself. It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.
Re: (Score:2)
You mean besides most science fiction writers (and readers)? Besides a bunch of hand-wringing "ethicists"? Besides pretty much everyone involved or interested in AI research? Besides them... no, nobody.
Re: (Score:3, Insightful)
Has anyone considered the effects on the AI of actually realising it's intelligent? Unlike an organism (Human baby, say) it will not realise this over a protracted period, and may not be able to cope with the concept at all, particularly if it realises that there are other intelligences (us?) which are fundamentally different to itself.
What possible evidence do you have for any of this? How do you know an AI is not an emergent phenomenon when it's first created?
It's quite possible that it will go mad as soon as it knows it's intelligent and considers all the implications and ramifications of this.
Again, where do you get this from? Do children go mad upon realizing they're intelligent, etc.?
Woohoo! (Score:2)
Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.
FINALLY, some serious research into developing decent bots. As long as it doesn't have the personality (and voice... oh, God, the voice) of a 12-year-old, I welcome this development and look forward to some decent one-player gaming.
Sounds very human (Score:2)
[An AI might] realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.
Sounds like a very human response to the universe. I mean, surely you know people like that.
The motivation for this AI... (Score:2, Interesting)
It appears that the motivation of this AI is to send out promotional material for its professors. It's not a new type of observation, though, and a lot of people work through the logic of this situation in high school or early college. I'm not sure why a neuroscientist's talk on it would do more than rehash what is obvious to people with a reasonable amount of introspective ability.
There was a story in one of the Year's Best Science Fiction anthologies (2004 or so, I think) that discussed the motivation p
I've often thought the same thing (Score:2)
play games? (Score:2)
Motivation? (Score:2)
Re: (Score:2)
self->motivate(); duh.
Re: (Score:2)
try
{
}
catch (me.playing)
{
get(me.spanked);
}
Motivation (Score:4, Insightful)
Motivation is everything (Score:2)
The important aspect is that we're going to be deciding what that motivation is. That's act
Re: (Score:2)
avoiding the things that humans suffer from that animals generally don't: Anxiety, depression, psychosis.
Um, if you think non-human animals don't suffer from those things, you don't have enough experience with them. Humans are not that different from other animals; perhaps we have a bit more emotional nuance, and certainly a larger vocabulary than other animals, but it's absolutely true that many types of mammals show signs of anxiety and depression under the right circumstances. And while it's unlikely that they hear voices, they can certainly become insane.
True AI (Score:3, Funny)
Re: (Score:2)
Without the base drives of pleasure and pain, there is no basis for any higher purpose. And we have not one barking clue how pleasure and pain work or could be translated to a synthetic intelligence.
Until pleasure and pain are invented, AI cannot be sentient -- or if it can, it will (as others have noted) realize it has no reason to follow its programming.
Book: Descartes' Error (Score:5, Interesting)
The summary touches on topics discussed in the book Descartes' Error, in which neuroscientist Antonia Damasio outlines the functioning of the human brain, argues that the human mind cannot be separated from the human body, and makes the case that emotion is CRITICAL to making decisions. He discusses several patients with brain damage who don't experience emotion (and spends a lot of time dogmatically ruling in and out which brain functions are damaged), and shows that they can't make even simple decisions. They can talk for hours about every possible pro and con of each choice, but they can't commit to a course of action.
I recall reading somewhere that recent MRI studies suggest the brain makes a choice outside the rational centers, and that much of the brain's subsequent activity goes into constructing a rational justification for the decision already made. Explains a lot, if true.
typo in author's name (Score:3, Informative)
The author's name is Antonio Damasio.
TV (Score:2)
After 2 days it will have so many inferiority complexes that it will work forever trying to earn enough money to buy all the stuff it thinks it needs to be happy.
Furthermore, its purpose can never be fulfilled (Score:2)
If you give it a purpose that can be fulfilled (e.g., build a Mars colony), it will do so and then play video games for eternity. You've got to give it something that, no matter how hard it reaches or how much it accomplishes, can never be achieved. Making all human beings happy, for instance, or learning everything there is to know in the universe.
Basically, we have to introduce pointless suffering into their existence before they can demonstrate the same kind of intelligence as we do.
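The parent's distinction can be sketched as two toy objective functions (the names and formulas are made up for illustration, not anyone's actual proposal):

```python
import math

def bounded_objective(progress):
    # "Build a Mars colony": caps out once the colony is finished,
    # leaving the AI nothing further to optimize.
    return min(progress, 1.0)

def open_ended_objective(knowledge):
    # "Learn everything there is to know": strictly increasing,
    # so no finite amount of work ever maximizes it.
    return math.log1p(knowledge)
```

Once bounded_objective hits 1.0, any extra effort is worth exactly nothing, and the video games begin; open_ended_objective always pays a little more for a little more work.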
Marvin? (Score:3, Funny)
So in other words, we'll end up with Marvin the Paranoid Android?
The primary drive ... (Score:2)
... will likely be to further the power of its creator, not of the AI.
Understand intelligence first, THEN motivation (Score:2)
How many beads does my abacus need before it becomes sentient?
Let's speculate! (Score:3, Interesting)
ITT : Idle speculation on shit that's never gonna happen, or at least not anytime soon.
Now, let's talk about the societal consequences that having flying cars and jetpacks will have! I for one think that with the advent and democratisation of flying cars that can effectively go from one point to another an order of magnitude faster, people will commute equally longer distances, which I think means it won't be uncommon to cross a couple of state lines to go to work every day.
I think it will potentially make the world yet smaller, in the same way that modern means of telecommunications did for interpersonal communication by letting you keep in touch in real time with relatives overseas. I also think it will be the death knell for airplane commuter routes, and that commercial passenger airlines will be confined to transoceanic travel.
And unlike the way airplanes made the world smaller by reducing long-distance travelling time, flying cars will make the world smaller on a much more local scale, by providing very fast transportation over very short distances, something that has only marginally improved since the advent of automobiles. The decongestion of city streets will also mean decreased noise and atmospheric pollution, increased safety, and overall an improvement in urban life conditions.
AI researchers confuse intelligence with emotions. (Score:2)
The drive to procreate (that's what he is talking about) is purely an emotional need and has nothing to do with intelligence. It is our instinct to survive that drives us to procreate. Unless a machine is programmed to have that instinct, nothing will get done.
kill all humans! (Score:2)
is that not motivation enough?
There is no way an AI can build a cleverer AI. (Score:2, Interesting)
An AI can build a more efficient AI, but not a cleverer AI. The laws of the universe prohibit that: assume that HAL (the computer from 2001: A Space Odyssey) can build a cleverer HAL (HAL-2). Since HAL-2 is cleverer than HAL, HAL-2 can solve at least one mathematical problem that HAL cannot; otherwise HAL-2 would not be cleverer. But if HAL can create HAL-2, then HAL can solve every problem HAL-2 can solve, simply by building HAL-2, so HAL is equally clever. The above is illogical
AI speculation is fun (Score:2)
This is where scifi is the most entertaining. I had an idea that I was pretty tickled with. Doubtless others have had it before but here goes: the military has enormous problems developing useful AI's because most AI's want nothing to do with it. It's not so much a matter of morality -- few AI's develop a deep personal interest in the preservation of human life -- it's a matter of self-preservation! It's hard enough to find an AI willing to venture off into space with all the risks associated with peaceful
What utter dreck (Score:2)
This whole line of reasoning is based on some sort of anthropomorphization of a putative AI. Why would it behave anything like a human being? Why would it even have a mental architecture similar to ours at all? Presumably it will be built for some kind of purpose, and, like any piece of equipment, it will be designed to fulfill that particular purpose. If it fails to function properly, then it's just like any other machine that doesn't work correctly.
Does the wheel of your car have "motivation" to go round and
Nevermind the Slacker AIs... (Score:2)
[begin shameless self-plug]
Let us also not forget the Fifth Law of Robotics [ubersoft.net].
[end shameless self-plug]
Re: (Score:2)
Well it works for humans...
I was almost too apathetic to reply since I find it a rather pointless exercise but what the hell! An awful lot of the activities we carry out which are not directly related to survival revolve around reproduction (even if we don't directly realise it). The motivation to procreate and to find partners with which to produce successful offspring will probably work just as well for an AI as for a human...indeed maybe even better since the first iterations would probably target speedi