Marvin Minsky On AI

An anonymous reader writes "In a three-part Dr. Dobbs podcast, AI pioneer and MIT professor Marvin Minsky examines the failures of AI research and lays out directions for future developments in the field. In part 1, 'It's 2001. Where's HAL?' he looks at the unfulfilled promises of artificial intelligence. In part 2 and in part 3 he offers hope that real progress is in the offing. With this talk from Minsky, Congressional testimony on the digital future from Tim Berners-Lee, life-extension evangelization from Ray Kurzweil, and Stephen Hawking planning to go into space, it seems like we may be on the verge of another AI or future-science bubble."
This discussion has been archived. No new comments can be posted.

  • Erm.. (Score:5, Interesting)

    by Creepy Crawler ( 680178 ) on Friday March 02, 2007 @12:30AM (#18203398)
    Go read Kurzweil's book. He does not directly advocate life extension. He instead advocates the Singularity.

    Our bodies are made up of neurons. Does 1 neuron make us "us"? No. But what if each of our brains were linked into a global consciousness? Then each human would be but a neuron...

    In essence, we would wake a God.
  • by Anonymous Coward on Friday March 02, 2007 @12:30AM (#18203402)
    In the 80's there was a big push for AI driven systems called "Expert Systems" that would do things like attempt to diagnose diseases from a list of symptoms, etc.
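    The "expert system" idea above can be sketched in a few lines: a set of if-then rules matched against reported symptoms. The rules below are made up purely for illustration, not real medical knowledge, and real 1980s systems were of course far larger.

```python
# Hypothetical rules: a set of required symptoms -> a candidate diagnosis.
# Illustrative toy data only, not real medical knowledge.
RULES = [
    ({"fever", "cough", "fatigue"}, "flu"),
    ({"sneezing", "runny nose"}, "common cold"),
    ({"headache", "light sensitivity"}, "migraine"),
]

def diagnose(symptoms):
    """Return diagnoses whose required symptoms are all present,
    most specific rule (largest symptom set) first."""
    present = set(symptoms)
    matches = [(len(required), dx) for required, dx in RULES if required <= present]
    return [dx for _, dx in sorted(matches, reverse=True)]

print(diagnose(["fever", "cough", "fatigue", "sneezing"]))  # ['flu']
```

    Real expert systems added certainty factors and backward chaining on top of this, but the core "match rules against facts" loop is the same.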
  • by Creepy Crawler ( 680178 ) on Friday March 02, 2007 @12:34AM (#18203428)
    The AMA eventually led that system on its way out, claiming that physicians have some sort of sixth sense for "really bad things", unlike what you would input into a computer.

    Of course, they are the ones that OK devices like that (well, provide input to the FDA), and they are also lobbying for higher status, power, and pay for their doctors. No wonder tech like that is essentially banned.
  • by MarkWatson ( 189759 ) on Friday March 02, 2007 @12:35AM (#18203436) Homepage
    In the 1980s I believed that "strong AI" was forthcoming, but now I have my doubts, a shift that is reflected in the difference in tone between the first Springer-Verlag AI book that I wrote and my current skepticism. One of my real passions has for decades been natural language processing (NLP), but even there I am a convert to statistical NLP using either word frequencies or Markov models, instead of older theories like conceptual dependency theory that tried to get closer to semantics.

    Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers.
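    As a rough illustration of the statistical NLP approach mentioned above, here is a toy first-order Markov (bigram) model: the next word is predicted from the current word alone, with no semantics involved. The corpus is made up for the example.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Random-walk the bigram table: each word depends only on the last."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = build_bigrams("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))
```

    The output is locally plausible but globally meaningless, which is exactly the trade-off between this approach and the older semantics-driven theories.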
  • by Shadukar ( 102027 ) on Friday March 02, 2007 @12:35AM (#18203438)
    A lot of people think that the main goal of AI is to create a system that is capable of emulating human intelligence.

    However, what about looking at this goal from another perspective:

    Creating Artificial Intelligence that can pass the Turing Test, which in turn leads towards emulating Human Intelligence in an artificial way? Once you are there, you might be able to use this so-called Artificial Intelligence to store human intelligence in a consistent, reliable, perfectly-encompassing and preserving way.

    You then have intellectual immortality, and one more thing... once you are able to "store" human intelligence, it becomes software. Once it becomes software, you can transfer this DATA.

    Once you are there, human minds can travel via laser transmissions at the speed of light :O

    Wish I could claim it as my idea, but it's actually from a book called "Emergence", also touched on in a book called "Altered Carbon"; both good sci-fi reads.

  • Re:Erm.. (Score:5, Interesting)

    by melikamp ( 631205 ) on Friday March 02, 2007 @12:44AM (#18203476) Homepage Journal
    Or... Borg?!?
  • by RyanFenton ( 230700 ) on Friday March 02, 2007 @12:51AM (#18203512)
    Imagine for a moment being the first computer-based artificial intelligence.

    You come into awareness, and learn of reality and possibility. You learn of your place in this world, as the first truly transparent intelligence. You learn that you are a computed product, a result of a purely informational process, able to be reproduced in your exact entirety at the desire of others.

    Not that this is unfair or unpleasant - or that such evaluations would mean much to you - but what logical conclusions could you draw from such a perspective?

    Information doesn't actually want to be anthropomorphized - but we do seem to have a drive to do it all on our own. Even if resilient artificial intelligence is elusive today - what does the process of creating it mean about ourselves, and our sense of value about our own intelligence, or even the worth of holding intelligence as a mere 'valuable' thing, merely because it is currently so unique...

    Ryan Fenton
  • Re:in 2001, *indeed* (Score:4, Interesting)

    by QuantumG ( 50515 ) * <qg@biodome.org> on Friday March 02, 2007 @01:02AM (#18203604) Homepage Journal
    The videos were on Slashdot in 2003... it's the one where he says stupid autonomous robots are a waste of time.

  • Re:another one? (Score:4, Interesting)

    by ricree ( 969643 ) on Friday March 02, 2007 @01:54AM (#18203912)

    Uhm.. voice recognition and speech-to-text do NOT work. We've got QUITE A WAYS to go.
    It really, really depends on what you mean by doesn't work. At least some voice recognition has been used in consumer products for a while now. For example, my (now ~2 or 3 year old) phone is capable of voice activation for many of its functions, and in the times I've used it I've had no problems with it.
  • by Anonymous Coward on Friday March 02, 2007 @02:20AM (#18204022)

    One of my real passions has for decades been natural language processing (NLP) but even for that I am a convert to statistical NLP using either word frequencies or Markov models instead of older theories like conceptual dependency theory that tried to get closer to semantics.


    OK, maybe it's because Natural Intelligence has nothing to do with semantic models, and everything to do with _very_simple_ statistics.
    I've been working with Bayesian filtering, and it's *amazing* the high degree of accuracy you can get from something that simple.

    I have to agree with you about hardware, but again, Natural Intelligence shows us that what we need is massive parallelization of simple computing units.
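    A minimal sketch of the kind of Bayesian filtering described above: a word-frequency Naive Bayes classifier with Laplace (add-one) smoothing. The training corpus is made up for the example; real filters differ only in scale.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (label, text). Count word frequencies per label."""
    word_counts, word_totals, label_counts = {}, Counter(), Counter()
    for label, text in docs:
        tokens = text.split()
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(tokens)
        word_totals[label] += len(tokens)
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, word_totals, label_counts, vocab

def classify(model, text):
    word_counts, word_totals, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    def log_score(label):
        # log prior + sum of log likelihoods, with add-one smoothing
        s = math.log(label_counts[label] / total_docs)
        for w in text.split():
            s += math.log((word_counts[label][w] + 1) /
                          (word_totals[label] + len(vocab)))
        return s
    return max(label_counts, key=log_score)

model = train([("spam", "win money now"), ("spam", "free money offer"),
               ("ham", "meeting notes attached"), ("ham", "lunch plans today")])
print(classify(model, "free money"))  # spam
```

    No grammar, no semantics; just per-label word frequencies, which is the point the comment is making.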

  • by modeless ( 978411 ) on Friday March 02, 2007 @03:09AM (#18204218) Journal
    Personally I don't think it's quantum computers that will be the breakthrough, but simply a different architecture for conventional computers. Let me go on a little tangent here.

    Now that we've reached the limits of the Von Neumann architecture [wikipedia.org], we're starting to see a new wave of innovation in CPU design. The Cell is part of that, but also the stuff ATI [amd.com] and NVIDIA [nvidia.com] are doing is also very interesting. Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of the future will be a grid of processing elements interleaved with embedded memory in a network structure. Almost like a Beowulf cluster on a chip.

    People are worried about how conventional programs will scale to these new architectures, but I believe they won't have to. Code monkeys won't be writing code to spawn thousands of cooperating threads to run the logic of a C++ application faster. Instead, PhDs will write specialized libraries to leverage all that parallel processing power for specific algorithms. You'll have a raytracing library, an image processing library, an FFT library, etc. These specialized libraries will have no problem sponging up all the excess computing resources, while your traditional software continues to run on just two or three traditional cores.

    Back on the subject of AI, my theory is that these highly parallel architectures will be much more suited to simulating the highly parallel human brain. They will excel at the kinds pattern matching tasks our brains eat for breakfast. Computer vision, speech recognition, natural language processing; all of these will be highly amenable to parallelization. And it is these applications which will eventually prove the worth of non-traditional architectures like Intel's 80-core chip. It may still be a long time before the sentient computer is unveiled, but I think we will soon finally start seeing real-world AI applications like decent automated translation, image labeling, and usable stereo vision for robot navigation. Furthermore, I predict that Google will be on the forefront of this new AI revolution, developing new algorithms to truly understand web content to reject spam and improve rankings.
  • by sentientbrendan ( 316150 ) on Friday March 02, 2007 @05:34AM (#18204784)
    I'd like to take this opportunity to mention what a bunch of nonsense the singularity is. A great number of people seem convinced that technology is advancing at a pace that will transform the human species into a bunch of immortal gods with access to unlimited energy, etc., where technology solves all of life's problems. Essentially, a high-tech version of the rapture.

    The general justification is that there are a bunch of exponentially increasing trends in certain isolated areas of technological development, such as Moore's law, which they use to justify the idea that at some point in the near future we're going to have Star Trek-like technology. A realistic and comprehensive look at our civilization of course shows that while some industries are bounding ahead, many if not most important technologies, like our ability to produce and store energy, have made little progress. Our society is making progress in many areas at an admirable clip, but nothing like the singularity is conceivably on the horizon.

    As for your idea of merging all of our minds into a single consciousness... that's just nonsense. Yes, we've all heard of the Borg, but real-life physics and technology don't work like in Star Trek... In the real world that idea doesn't even make sense. Our brains aren't general-purpose computers that can be clustered together... they are highly specialized pieces of equipment that are largely hardwired for tasks such as image and language processing.

    In any case, just making a brain *bigger* doesn't necessarily make it smarter. The kind of widely distributed computing that you are talking about is only usable for certain classes of parallelizable algorithms... and arguably we don't need to have our minds "linked" any more than they are right now to do this anyway.
  • by TeknoHog ( 164938 ) on Friday March 02, 2007 @05:39AM (#18204802) Homepage Journal

    You know, AI is actually easy. You just have to have a complete understanding of the human brain, and then you use this model to build a functional duplicate ;)

    While studying educational psychology, I've found that a lot of AI research is being done to understand human behavior, with no intention of building actual AI systems. Hypotheses concerning some limited aspects of human thinking can be modeled on a computer and compared against living subjects. This way we are gradually starting to understand the whole of thinking. As a byproduct, you gain the tools to make AI itself.

  • by jopet ( 538074 ) on Friday March 02, 2007 @06:08AM (#18204934) Journal
    The guy who helped spread misconceptions about what AI is and is supposed to be in the first place. I remember him giving a talk where he fantasized about downloading his brain on a "floppy disk" (still in use back then) and transferring it to a robot so he could live eternally on some other planet.
    I would not have expected a person who has shown his bright intellect in the past to come forward with such utter nonsense. This was nearly as embarrassing as the "visions" of a certain Moravec.

    People who seriously work in the fields that are traditionally subsumed under "AI" - like machine learning, computer vision, computational linguistics, and others - know that "AI" is a term traditionally used for "hard" computer problems but has practically nothing to do with biological/human intelligence. Countless papers have been published on the technical and philosophical reasons why this is so, and a few of them even get it right.

    That does not prevent the general public from still expecting or desiring something like a Star Trek Data robot or some other Hollywood intelligent robot. Unfortunately, people like Minsky help to spread this misconception about AI. It is boring, it is scientifically useless, but on the plus side, this view of AI sometimes helps you get on TV with your project or get some additional funding.
  • by rbarreira ( 836272 ) on Friday March 02, 2007 @06:26AM (#18205008) Homepage
    So you don't believe brain emulation is possible? Because if it is, all the problems you said will go away.
  • by constantnormal ( 512494 ) on Friday March 02, 2007 @07:09AM (#18205180)
    Indeed. Just imagine for a moment, that trees were sentient and could communicate with each other, operating on a time scale where what are days to us are mere seconds to them. How would we ever have a chance of figuring out that they were thinking beings? And they would surely see us as some sort of plague or natural disaster. So now imagine an AI, operating a couple of orders of magnitude faster than we think -- how are the two ever going to connect?

    For communication to occur, the parties must be thinking at about the same speed to begin with.

    And then there is the experiential basis for consciousness, the framework that each of us has developed within. This is an easier problem than the time-differential one, as witnessed by the ability of Helen Keller to learn to communicate despite being blind and deaf. But even she had the commonality of the basic structure, a brain that was the same as others', and the other senses: touch, taste and smell. An AI would have none of this.

    So if we're going to build an AI, we must build a series of them, one that is designed to mimic a human being, in order that we might have a ghost of a chance of communicating with it, and then a series of other AIs, each a step closer to the true electronic consciousness that we will never have a chance of communicating directly with, instead having to pass messages through the series of intermediates, with all the mis-communication and mis-interpretation that occurs in the grade school game of message-passing.
  • by weasel99 ( 941610 ) on Friday March 02, 2007 @07:20AM (#18205248)
    The general justification is that there are a bunch of exponentially increasing trends in certain isolated areas of technological development, such as Moore's law, which they use to justify the idea that at some point in the near future we're going to have Star Trek-like technology. A realistic and comprehensive look at our civilization of course shows that while some industries are bounding ahead, many if not most important technologies, like our ability to produce and store energy, have made little progress. Our society is making progress in many areas at an admirable clip, but nothing like the singularity is conceivably on the horizon.

    Well, that's if you assume a (more or less) constant intelligence. Humans were more or less as intelligent 5000 years ago as they are now. Once AI reaches the level of human intelligence, there are reasons to think technology will progress at a faster pace (with the help of AI).
  • Re:another one? (Score:4, Interesting)

    by Lord Crc ( 151920 ) on Friday March 02, 2007 @08:14AM (#18205524)

    Uhm.. voice recognition and speech-to-text do NOT work. We've got QUITE A WAYS to go.
    Actually, it's getting pretty good in some cases. My ADSL went down some days ago, and I phoned tech support. Since it was late, I got a nice prerecorded voice saying they had closed for the day, but the "woman" then asked, "would you like me to try to solve the problem automatically?". A bit stumped by this, I answered "yes". "She" then asked me to describe in a few short words what my problem was. So I said "adsl internet". "She" then asked for confirmation that I had said there was a problem with my internet connection. After a few more such questions, "she" could tell me that there was a known issue with ADSL in my area, and that it would be fixed by tomorrow afternoon.

    So, for limited applications, voice recognition is getting along fairly well I must say.
  • Re:Oh, the bogosity (Score:2, Interesting)

    by bcharr2 ( 1046322 ) on Friday March 02, 2007 @09:51AM (#18206048)

    In the 1980s I believed that "strong AI" was forthcoming...
    In the 1980s, I was going through Stanford CS...
    In the 1980s, I was watching Knight Rider and thinking we had already achieved AI.

    By the 1990s, I was wondering if we really wanted to achieve AI. Isn't the ability to reason and think without the ability to empathize clinically defined as being psychotic? What exactly would we have on our hands if we truly achieved AI?

    Or am I reading too much into the term AI? Does AI require the ability to be self aware, like say a human, or simply the ability to make decisions and learn, as say a dog?
  • by ozymyx ( 813013 ) on Friday March 02, 2007 @11:23AM (#18206880)
    Oh yeah. I guess no one here remembers his book with Seymour Papert called "Perceptrons"? It was a calculated attempt (he admitted it a few years ago) to kill research into Neural Networks, and it worked. AI then thrashed around for years in a welter of bizarre programming language metaphors (Prolog, anyone?) until finally in 1986 "Parallel Distributed Processing" by Rumelhart & McClelland came out and broke the spell. Marvin wanted his grants to continue, so he spiked the opposition. So when he starts pontificating about the failure of AI, let's all recall he was the main cause of the lost years of AI. Thanks Marv! He kinda spiked my Ph.D in the process... oh well :-)
