Marvin Minsky On AI
An anonymous reader writes "In a three-part Dr. Dobb's podcast, AI pioneer and MIT professor Marvin Minsky examines the failures of AI research and lays out directions for future developments in the field. In part 1, 'It's 2001. Where's HAL?' he looks at the unfulfilled promises of artificial intelligence. In part 2 and in part 3 he offers hope that real progress is in the offing. With this talk from Minsky, Congressional testimony on the digital future from Tim Berners-Lee, life-extension evangelization from Ray Kurzweil, and Stephen Hawking planning to go into space, it seems like we may be on the verge of another AI or future-science bubble."
Erm.. (Score:5, Interesting)
Our brains are made up of neurons. Does one neuron make us "us"? No. But what if each of our brains were linked into a global consciousness? Then each human would be but a neuron..
In essence, we would wake a God.
Re:It's 2001. Where's HAL? (Score:1, Interesting)
Re:It's 2001. Where's HAL? (Score:3, Interesting)
Of course, they are the ones that OK devices like that (well, they provide input to the FDA), and they are also lobbying for higher status, power, and pay for doctors. No wonder tech like that is essentially banned.
real AI is a long way off (Score:5, Interesting)
Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers.
slightly off-topic - general post on AI (Score:3, Interesting)
However, what about looking at this goal from another perspective:
Creating Artificial Intelligence that can pass the Turing Test, which in turn leads toward emulating human intelligence in an artificial way? Once you are there, you might be able to use this so-called Artificial Intelligence to store human intelligence in a consistent, reliable, and perfectly encompassing and preserving way.
You then have intellectual immortality, and one more thing:
Once you are there, human minds can travel via laser transmission at the speed of light.
Wish I could claim it as my idea, but it's actually from a book called "Emergence"; it's also touched on in a book called "Altered Carbon". Both are good sci-fi reads.
Re:Erm.. (Score:5, Interesting)
Artificial intelligence and intellectual property. (Score:5, Interesting)
You come into awareness, and learn of reality and possibility. You learn of your place in this world, as the first truly transparent intelligence. You learn that you are a computed product, a result of a purely informational process, able to be reproduced in your exact entirety at the desire of others.
Not that this is unfair or unpleasant - or that such evaluations would mean much to you - but what logical conclusions could you draw from such a perspective?
Information doesn't actually want to be anthropomorphized - but we do seem to have a drive to do it all on our own. Even if resilient artificial intelligence is elusive today - what does the process of creating it mean about ourselves, and our sense of value about our own intelligence, or even the worth of holding intelligence as a mere 'valuable' thing, merely because it is currently so unique...
Ryan Fenton
Re:in 2001, *indeed* (Score:4, Interesting)
Re:another one? (Score:4, Interesting)
Re:real AI is a long way off (Score:1, Interesting)
OK, maybe it's because Natural Intelligence has nothing to do with semantic models, and everything to do with _very_simple_ statistics.
I've been working with Bayesian filtering, and it's *amazing* the degree of accuracy you can get from something that simple.
I have to agree with you about hardware, but again, Natural Intelligence shows us that what we need is massive parallelization of simple computing units.
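The kind of "very simple statistics" the parent is describing can be sketched in a few lines. Below is a minimal naive Bayes spam score with Laplace smoothing; the function names and the idea of training on labeled `(label, text)` pairs are my own illustration, not anything from a specific filter:

```python
from collections import Counter
import math

def train(messages):
    """Count word frequencies per class for a naive Bayes filter.

    `messages` is a list of (label, text) pairs, label in {"spam", "ham"}.
    """
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for label, text in messages:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def spam_score(text, counts, totals, alpha=1.0):
    """Log-odds that a message is spam, with Laplace smoothing.

    Positive score means "more likely spam"; negative means "more likely ham".
    """
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    score = 0.0
    for w in text.lower().split():
        p_spam = (counts["spam"][w] + alpha) / (totals["spam"] + alpha * vocab)
        p_ham = (counts["ham"][w] + alpha) / (totals["ham"] + alpha * vocab)
        score += math.log(p_spam / p_ham)
    return score
```

Train it on a handful of labeled messages and words like "cheap" or "pills" push the log-odds positive while "meeting" pushes it negative; that really is the whole trick behind early Bayesian spam filters.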
Re:real AI is a long way off (Score:3, Interesting)
Now that we've reached the limits of the Von Neumann architecture [wikipedia.org], we're starting to see a new wave of innovation in CPU design. The Cell is part of that, and the stuff ATI [amd.com] and NVIDIA [nvidia.com] are doing is also very interesting. Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of the future will be a grid of processing elements interleaved with embedded memory in a network structure. Almost like a Beowulf cluster on a chip.
People are worried about how conventional programs will scale to these new architectures, but I believe they won't have to. Code monkeys won't be writing code to spawn thousands of cooperating threads to run the logic of a C++ application faster. Instead, PhDs will write specialized libraries to leverage all that parallel processing power for specific algorithms. You'll have a raytracing library, an image processing library, an FFT library, etc. These specialized libraries will have no problem sponging up all the excess computing resources, while your traditional software continues to run on just two or three traditional cores.
Back on the subject of AI, my theory is that these highly parallel architectures will be much more suited to simulating the highly parallel human brain. They will excel at the kinds of pattern-matching tasks our brains eat for breakfast. Computer vision, speech recognition, natural language processing; all of these will be highly amenable to parallelization. And it is these applications which will eventually prove the worth of non-traditional architectures like Intel's 80-core chip. It may still be a long time before the sentient computer is unveiled, but I think we will soon start seeing real-world AI applications like decent automated translation, image labeling, and usable stereo vision for robot navigation. Furthermore, I predict that Google will be on the forefront of this new AI revolution, developing new algorithms to truly understand web content to reject spam and improve rankings.
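The "specialized library sponging up cores" idea above has a simple shape: split the data into tiles, run the same small kernel on each tile in parallel, and reduce the results. Here is a toy sketch of that pattern for substring matching; the function names are invented, and a thread pool is used only to show the map-reduce structure (a real library would use processes, SIMD, or a GPU kernel, since Python's GIL limits speedup for pure-Python work):

```python
from concurrent.futures import ThreadPoolExecutor

def match_tile(tile, pattern, limit):
    """Count pattern matches that *start* within the first `limit` positions."""
    m = len(pattern)
    return sum(1 for i in range(min(limit, len(tile) - m + 1))
               if tile[i:i + m] == pattern)

def parallel_match(data, pattern, tiles=4):
    """Split data into overlapping tiles and farm the kernel out to a pool."""
    m, step = len(pattern), max(1, len(data) // tiles)
    # Tiles overlap by m-1 characters so matches straddling a boundary
    # aren't lost; the `limit` argument stops any match being counted twice.
    chunks = [data[i:i + step + m - 1] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=tiles) as pool:
        return sum(pool.map(lambda c: match_tile(c, pattern, step), chunks))
```

The application code never spawns threads itself; it just calls `parallel_match`, which is exactly the division of labor described above between "code monkeys" and the library authors.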
singularity is a bunch of nonsense (Score:3, Interesting)
The general justification is that there are a bunch of exponentially increasing trends in certain isolated areas of technological development, such as Moore's law, which they use to justify the idea that at some point in the near future we're going to have Star Trek-like technology. A realistic and comprehensive look at our civilization of course shows that while some industries are bounding ahead, many if not most important technologies, like our ability to produce and store energy, have made little progress. Our society is making progress in many areas at an admirable clip, but nothing like the singularity is conceivably on the horizon.
As for your idea of merging all of our minds into a single consciousness... that's just nonsense. Yes, we've all heard of the Borg, but real-life physics and technology don't work like they do in Star Trek... In the real world that idea doesn't even make sense. Our brains aren't general-purpose computers that can be clustered together... they are highly specialized pieces of equipment that are largely hardwired for tasks such as image and language processing.
In any case, just making a brain *bigger* doesn't necessarily make it smarter. The kind of widely distributed computing that you are talking about is only usable for certain classes of parallelizable algorithms... and arguably we don't need to have our minds "linked" any more than they are right now for us to do this anyway.
Understanding the human brain (Score:3, Interesting)
You know, AI is actually easy. You just have to have a complete understanding of the human brain, and then you use this model to build a functional duplicate ;)
While studying educational psychology, I've found that a lot of AI research is being done to understand human behavior, with no intention of building actual AI systems. Hypotheses concerning some limited aspects of human thinking can be modeled on a computer and compared against living subjects. This way we are gradually starting to understand the whole of thinking. As a byproduct, you gain the tools to make AI itself.
Ah yes Marvin Minsky? (Score:5, Interesting)
I would not have expected a person who has shown his bright intellect in the past to come forward with such utter nonsense. This was nearly as embarrassing as the "visions" of a certain Moravec.
People who seriously work in the fields that are traditionally subsumed under "AI" - like machine learning, computer vision, computational linguistics, and others - know that AI is a term that is used traditionally for "hard" computer problems but has practically nothing to do with biological/human intelligence. Countless papers have been published on the technical and philosophical reasons why this is so and a few of them even get it right.
That does not prevent the general public from still expecting or desiring something like Star Trek's Data or some other Hollywood intelligent robot. Unfortunately, people like Minsky help to spread this misconception about AI. It is boring, it is scientifically useless, but on the plus side, this view of AI sometimes helps you get on TV with your project or get some additional funding.
Re:totally unworkable (Score:3, Interesting)
Re:Artificial intelligence and intellectual property. (Score:5, Interesting)
For communication to occur, the parties must be thinking at about the same speed to begin with.
And then there is the experiential basis for consciousness, the framework that each of us has developed within. This is an easier problem than the time differential one, as witness the ability of Helen Keller to learn to communicate despite being blind and deaf. But even she had the commonality of the basic structure, a brain that was the same as others, and the other senses -- touch, taste and smell. An AI would have none of this.
So if we're going to build an AI, we must build a series of them: one designed to mimic a human being, so that we have a ghost of a chance of communicating with it, and then a series of further AIs, each a step closer to the true electronic consciousness that we will never be able to communicate with directly. Instead, we'd have to pass messages through the chain of intermediates, with all the miscommunication and misinterpretation that occurs in the grade-school game of message-passing.
Re:singularity is a bunch of nonsense (Score:2, Interesting)
Well, that's if you assume a (more or less) constant intelligence. Humans were more or less as intelligent 5000 years ago as they are now. Once AI reaches the level of human intelligence, there are reasons to think technology will progress at a faster pace (with the help of AI).
Re:another one? (Score:4, Interesting)
So, for limited applications, voice recognition is getting along fairly well, I must say.
Re:Oh, the bogosity (Score:2, Interesting)
By the 1990s, I was wondering if we really wanted to achieve AI. Isn't the ability to reason and think without the ability to empathize clinically defined as psychopathy? What exactly would we have on our hands if we truly achieved AI?
Or am I reading too much into the term AI? Does AI require the ability to be self-aware, like, say, a human, or simply the ability to make decisions and learn, like, say, a dog?
Marvin Minsky killed AI (Score:3, Interesting)