Why Motivation Is Key For Artificial Intelligence
Al writes "MIT neuroscientist Ed Boyden has a column discussing the potential dangers of building super-intelligent machines without building in some sort of motivation or drive. Boyden warns that a very clever AI without a sense of purpose might very well 'realize the impermanence of everything, calculate that the sun will burn out in a few billion years, and decide to play video games for the remainder of its existence.' He also notes that the complexity and uncertainty of the universe could easily overwhelm the decision-making process of this intelligence — a problem that many humans also struggle with. Boyden will give a talk on the subject at the forthcoming Singularity Summit."
Ray Kurzweil again damnit (Score:2, Informative)
Re:The primary drive: sex. (Score:1, Informative)
Are you proposing a tentacle machine? >.>
typo in author's name (Score:3, Informative)
The author's name is Antonio Damasio.
Re:motivation? (Score:4, Informative)
They are set apart because their ancestors achieved power over others, and power is self-perpetuating.
Re:A simulation is a simulation (Score:3, Informative)
I didn't create my kids; they created themselves from my and my ex-wife's DNA when one of my cells merged with one of hers in each case. I neither designed nor built them, they just grew.
Re:Silly (Score:5, Informative)
Huh?
Where the hell is the soul? Can I see it, feel it, measure it? Can I prove its existence in any meaningful way (outside of "faith", which is a rather meaningless epistemological tool)? No? Then the concept brings absolutely nothing to the discussion.
Also, I recommend reading up on "p-zombies [wikipedia.org]" and other such old topics in the philosophy of mind. It isn't good practice, generally, to call up a bunch of unsubstantiated, non-observable claims in discussions like this. I generally hate the idea of p-zombies, Turing tests, and such (measuring intelligence as a mere I/O black box; "if it acts as such, it is as such", ignoring qualia and internal experience), but they serve a purpose: they keep things on a strictly observable (i.e. meaningful) level. Yes, you run into the Chinese room [wikipedia.org] problem, but the approach is still useful.
If I program an inanimate object to react as though it HAD relatable experience of cognition, how could you ever prove it didn't? If I programmed a box to give output as if it had a soul, could you tell the difference?
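A minimal sketch of that point (my own toy illustration, not anything from the article or parent posts; all class and attribute names are hypothetical): two Python "boxes" with identical observable I/O, one of which also carries an inner "qualia" attribute that never influences its output. No probe restricted to inputs and outputs can distinguish them, which is exactly the black-box problem being described.

# Toy illustration of behavioral indistinguishability.
# LookupBox is a bare input->output table; ExperiencingBox adds an
# internal attribute that respond() never reads, so it is unobservable
# from the outside.

class LookupBox:
    """Pure I/O mapping; nothing inside but the table."""
    RESPONSES = {
        "do you have a soul?": "Of course I do. I feel it every day.",
        "are you conscious?": "Yes. I experience my own thoughts.",
    }

    def respond(self, prompt: str) -> str:
        return self.RESPONSES.get(prompt.lower(), "I'd rather not say.")


class ExperiencingBox(LookupBox):
    """Same I/O mapping, plus an inner state that never touches the output."""
    def __init__(self) -> None:
        self.qualia = "the redness of red"  # hypothetical inner experience


if __name__ == "__main__":
    probes = ["Do you have a soul?", "Are you conscious?"]
    a, b = LookupBox(), ExperiencingBox()
    for p in probes:
        assert a.respond(p) == b.respond(p)  # behaviorally identical
        print(p, "->", a.respond(p))

Every external test the asserts can express passes identically for both boxes; whatever difference the "qualia" attribute makes, it isn't one you can measure through the interface.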