Marvin Minsky On AI

An anonymous reader writes "In a three-part Dr. Dobb's podcast, AI pioneer and MIT professor Marvin Minsky examines the failures of AI research and lays out directions for future developments in the field. In part 1, 'It's 2001. Where's HAL?' he looks at the unfulfilled promises of artificial intelligence. In part 2 and in part 3 he offers hope that real progress is in the offing. With this talk from Minsky, Congressional testimony on the digital future from Tim Berners-Lee, life-extension evangelization from Ray Kurzweil, and Stephen Hawking planning to go into space, it seems like we may be on the verge of another AI or future-science bubble."
This discussion has been archived. No new comments can be posted.

  • by QuantumG ( 50515 ) * <qg@biodome.org> on Friday March 02, 2007 @12:28AM (#18203390) Homepage Journal
    so I'll say this another way... thanks for the podcasts from SIX YEARS AGO.
  • A podcast? (Score:5, Insightful)

    by UbuntuDupe ( 970646 ) * on Friday March 02, 2007 @12:35AM (#18203434) Journal
    Podcasts are great if you're on the go, but why no transcript for the differently-hearing /.ers? I personally hate having to listen; I'd rather just read it.
  • Bubble? (Score:3, Insightful)

    by istartedi ( 132515 ) on Friday March 02, 2007 @12:36AM (#18203442) Journal

    Ah, so I should get out of real estate and stocks, and get into AI. Do I just make checks out to Minsky, or is there an AI ETF? Seriously. Ever since the NASDAQ bubble, investing has been a matter of rotation from one bubble to the next. Where's the next one going to be? I wish I knew.

  • Re:another one? (Score:5, Insightful)

    by SnowZero ( 92219 ) on Friday March 02, 2007 @12:48AM (#18203498)
    While much of the "traditional AI" hype could be considered dead, robotics is continuing to advance, and much symbolic AI research has evolved into data-driven statistical techniques. So while the top-down ideas of the older AI researchers haven't panned out yet, bottom-up techniques will still help close the gap.

    Also, you have to remember that AI is pretty much defined as "the stuff we don't know how to do yet". Once we know how to do it, then people stop calling it AI, and then wonder "why can't we do AI?" Machine vision is doing everything from factory inspections to face recognition, we have voice recognition on our cell phones, and context-sensitive web search is common. All those things were considered AI not long ago. Calculators were once even called mechanical brains.
  • by bersl2 ( 689221 ) on Friday March 02, 2007 @12:59AM (#18203588) Journal
    Um... AI may give rise to consciousness, but it won't give rise to your consciousness. We still don't know what makes you "you"; way too much neuroscience to be done.
  • Re:Erm.. (Score:0, Insightful)

    by Anonymous Coward on Friday March 02, 2007 @01:22AM (#18203740)
    Singularity is the nerd version of the rapture.

    Seriously, that is some stupid ass shit. Right now, neuroscientists don't fucking know how memories are stored, and you think we'll be hooking brains into the internet or some shit? It's a completely faith-based proposition with no evidence for it at all.
  • Re:Erm.. (Score:1, Insightful)

    by Anonymous Coward on Friday March 02, 2007 @01:32AM (#18203784)
    I rather thought the Borg was an allegory for Manifest Destiny and American culture as seen by Native Americans. They practically spelled it out when Picard tells them 'we don't want to be assimilated! we have our own culture!' and the Borg declare human culture to be irrelevant.
      The Borg are a technologically superior amalgam of different peoples, cultures, and technologies who demand you abandon your 'backward' way of life and individuality, and insist that you instead adopt their culture, for your own good. That's the United States during the nineteenth century, not the Soviets.
  • Re:another one? (Score:2, Insightful)

    by Architect_sasyr ( 938685 ) on Friday March 02, 2007 @02:38AM (#18204092)
    didn't see that one coming, did ya!

    Having been on the receiving end of some of the larger telcos' support systems, and considering the "quality" of so-called "AI" systems today, I would have to suggest that it was about the only thing I saw coming ;)
  • by mbone ( 558574 ) on Friday March 02, 2007 @02:50AM (#18204140)
    You assume that a "true" AI would have human-like emotional reactions. I suspect that if we ever develop true AIs, we will neither understand how they work nor be able to communicate with them very well. Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.
  • Re:Erm.. (Score:3, Insightful)

    by lysergic.acid ( 845423 ) on Friday March 02, 2007 @03:05AM (#18204206) Homepage
    it's an idea/concept, not a belief system. just like "god" is a concept, and i can use/reference that concept without subscribing to a particular belief system. i've always found the concept of a godhead machine to be interesting to think about. i don't know if it'll ever happen, and i don't know if it'd even work, but it certainly incorporates some really interesting premises on the nature of the universe, life, information, and humanity.
  • Re:another one? (Score:1, Insightful)

    by Anonymous Coward on Friday March 02, 2007 @06:09AM (#18204938)
    We know about ELIZA. Try something new.
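For readers who haven't seen it, ELIZA-style "AI" amounts to keyword pattern matching with canned reflections, with no understanding involved. A minimal sketch in Python; the patterns and replies here are invented for illustration, not Weizenbaum's original script:

```python
import re

# ELIZA-style responder: scan for a keyword pattern, echo the captured
# text back inside a canned template. Rules are illustrative only.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am worried about AI hype"))  # Why do you say you are worried about AI hype?
print(respond("hello"))                       # Please go on.
```

The trick is that the reply contains the user's own words, which creates an illusion of comprehension the program does not have.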
  • by rbarreira ( 836272 ) on Friday March 02, 2007 @06:11AM (#18204958) Homepage
    Why would someone program a true AI which has no built-in goals?
  • by timeOday ( 582209 ) on Friday March 02, 2007 @12:41PM (#18207886)

    Notice that this doesn't mean he argues that it is impossible that machines could think or that robot doppelgangers could be built---just that the mainstream approaches won't work.
    I don't even think propositional logic is a mainstream approach any more. You'd be hard-pressed to publish a paper on decision tree algorithms these days. People have moved on to machine-learning algorithms which estimate patterns and distributions of data instead of trying to find nice clean rules for everything.
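The shift described above, from hand-coding clean rules to estimating statistics from data, can be sketched in a few lines. This is a toy nearest-centroid learner; the data and names are invented for illustration:

```python
# Instead of hand-writing a rule ("if x > 5 then long"), learn per-class
# statistics from labeled examples and classify by distance to them.
def fit_means(samples):
    """Learn the mean feature value of each class from (x, label) pairs."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(x, means):
    """Pick the class whose learned mean is closest to x."""
    return min(means, key=lambda label: abs(x - means[label]))

train = [(1.0, "short"), (2.0, "short"), (8.0, "long"), (9.0, "long")]
means = fit_means(train)     # {'short': 1.5, 'long': 8.5}
print(classify(3.0, means))  # short
```

The decision boundary falls out of the data rather than being written down, which is the point the comment is making about where the field moved.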
  • by ClassMyAss ( 976281 ) on Friday March 02, 2007 @02:34PM (#18209374) Homepage
    You know, I always get confused when people claim that it's perfectly reasonable to say that something "can't be formalized." Some of them seem to mean this in more particular ways than others, for instance, meaning that any algorithmic representation will not be hard coded; but others tend to mean it in the sense that "you can never, even in theory, write a program that will capture this behaviour," which is trivially asinine because the universe runs such software (not that we could program a simulation on that scale; still, ask any physicist whether they could throw together a reasonable enough approximation of the real world to get chemistry and biology given near infinite computing resources - the physics underlying it is not that tricky, just the scale).

    The real question of import to strong AI research is the following: is Turing completeness enough to simulate intelligence, or is a Turing complete machine still somehow crippled? The answer, at least to me, is damn straight it's enough, since anything that shows up in nature appears to be computable in that framework. [note: yes, I know all about Gödel's results and all that, but I'm glossing them over because there's no indication that anything in nature has the answers to undecidable propositions, either]

    But since you opened the "Searle" bag, let's have a recent quote from him:

    'Could a machine think?' My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.

    I might note that the worst part of this quote is that it's a severe misunderstanding of the strong AI quest to say that we're hoping to produce milk and sugar (i.e. a physical product) from a simulation. Personally, I don't care how the thing happens as long as it passes the Turing test - I don't know what exactly Searle wants to see, but it's clearly not what I expect. Now from the parent:

    I'm afraid you've misunderstood Dreyfus's work. His work, like Searle's, does not deny that our minds are *like* (to use your locution) computers. What he denies is that our minds engage the world in a way that is (totally) capturable in propositional form and so are formal programs of the sort

    You're kidding, right? To be fair, I know very little of Dreyfus' work, but Searle's work most definitely does deny that our minds are like computers. That is literally the point of the severely flawed Chinese room thought experiment. I will grant you that his above quote makes it sound like he's now fallen back to arguing that a program needs a physical instantiation to be intelligent, but think back - whether he's backtracked on this position or not, I don't know, but this guy was absolutely claiming that even in theory, any sort of algorithmic understanding was impossible or inferior to the "real stuff" that happens in our brain.

    As to whether our minds can be captured with formal logic, I'll ask again, what else is there? Informal logic? I.e. of the kind that we can simulate quite nicely by mixing formal logic with pseudorandom number generation? Maybe this is a term

"Life begins when you can spend your spare time programming instead of watching television." -- Cal Keegan
