Programming Technology IT

Marvin Minsky On AI 231

An anonymous reader writes "In a three-part Dr. Dobb's podcast, AI pioneer and MIT professor Marvin Minsky examines the failures of AI research and lays out directions for future developments in the field. In part 1, 'It's 2001. Where's HAL?' he looks at the unfulfilled promises of artificial intelligence. In part 2 and in part 3 he offers hope that real progress is in the offing. With this talk from Minsky, Congressional testimony on the digital future from Tim Berners-Lee, life-extension evangelization from Ray Kurzweil, and Stephen Hawking planning to go into space, it seems like we may be on the verge of another AI or future-science bubble."
This discussion has been archived. No new comments can be posted.

  • by mastershake_phd ( 1050150 ) on Thursday March 01, 2007 @11:26PM (#18203372) Homepage
    Did I miss the first AI bubble? Was it that chess-playing computer?
    • HA! ... you should read Hubert Dreyfus, "What Computers Still Can't Do" ... it chronicles a 20-year debate with Minsky, arguing that A.I., as Minsky professes it, will never work, on philosophical grounds. A very compelling argument... can't wait to hear his story now.
      • Re: (Score:2, Informative)

        well... dreyfus wasn't entirely correct.
        the human mind ~is~ like a computer.
        read "godel escher bach: an eternal golden braid" for a fun and enlightening journey into the nature of minds and machines.

        or rather.. how about a rebuttal from "the man" himself:
        http://www-formal.stanford.edu/jmc/reviews/dreyfus/dreyfus.html [stanford.edu]

        jmc rocks. what did dreyfus ever do?
        • Comment removed (Score:5, Informative)

          by account_deleted ( 4530225 ) on Friday March 02, 2007 @08:22AM (#18205874)
          Comment removed based on user account deletion
          • Re: (Score:3, Insightful)

            by timeOday ( 582209 )

            Notice that this doesn't mean he argues that it is impossible that machines could think or that robot doppelgangers couldn't be built---just that the mainstream approaches won't work.

            I don't even think propositional logic is a mainstream approach any more. You'd be hard-pressed to publish a paper on decision tree algorithms these days. People have moved on to machine-learning algorithms which estimate patterns and distributions of data instead of trying to find nice clean rules for everything.
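
            To make that concrete, here's a toy sketch of the shift (the data and names are made up for the example): instead of hand-writing rules like "if the message contains 'prize', flag it", you estimate word distributions from labelled examples, naive-Bayes style.

              from collections import Counter
              import math

              train = [
                  ("win money now", "spam"),
                  ("free prize winner", "spam"),
                  ("meeting agenda attached", "ham"),
                  ("lunch at noon", "ham"),
              ]

              counts = {"spam": Counter(), "ham": Counter()}
              totals = {"spam": 0, "ham": 0}
              for text, label in train:
                  for word in text.split():
                      counts[label][word] += 1
                      totals[label] += 1

              vocab = len({w for c in counts.values() for w in c})

              def log_likelihood(text, label):
                  # Add-one smoothing so unseen words don't zero out the estimate.
                  return sum(math.log((counts[label][w] + 1) / (totals[label] + vocab))
                             for w in text.split())

              msg = "free money"
              print(max(counts, key=lambda lab: log_likelihood(msg, lab)))  # -> spam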

          • Re: (Score:3, Insightful)

            by ClassMyAss ( 976281 )
            You know, I always get confused when people claim that it's perfectly reasonable to say that something "can't be formalized." Some of them seem to mean this in more particular ways than others, for instance, meaning that any algorithmic representation will not be hard coded; but others tend to mean it in the sense that "you can never, even in theory, write a program that will capture this behaviour," which is trivially asinine because the universe runs such software (not that we could program a simulation
    • Re:another one? (Score:5, Insightful)

      by SnowZero ( 92219 ) on Thursday March 01, 2007 @11:48PM (#18203498)
      While much of the "traditional AI" hype could be considered dead, robotics is continuing to advance, and much symbolic AI research has evolved into data-driven statistical techniques. So while the top-down ideas of the older AI researchers haven't panned out yet, bottom-up techniques will still help close the gap.

      Also, you have to remember that AI is pretty much defined as "the stuff we don't know how to do yet". Once we know how to do it, then people stop calling it AI, and then wonder "why can't we do AI?" Machine vision is doing everything from factory inspections to face recognition, we have voice recognition on our cell phones, and context-sensitive web search is common. All those things were considered AI not long ago. Calculators were once even called mechanical brains.
      • While you're right, I'm not sure that people really consider traditional AI to be dead.

        Certainly it's been said that once we know how to do it, people stop calling it AI. I think that Ronald Brachman even said something similar in his address to the AAAI a few years ago, but then, we can see at AAAI many examples of what can be considered traditional AI. I think that most of the participants in the "new AI," i.e., behavioral-based systems, robotics, and related techniques, consider themselves developing
    • The first AI bubble was actually covered in a class that I took. In the 1980s, businesses began adopting artificial intelligence for some of their operations. When businesses saw the limits of what the AI of the time could do, they lost interest, and it "bubbled." However, many of the systems from that era endure. Examples of systems from this time include expert systems that are used in tech support (didn't see that one coming, did ya!) and systems used in financial modeling, part of why computer scien
      • Re: (Score:2, Insightful)

        didn't see that one coming, did ya!

        Having been on the receiving end of some of the larger telcos' support systems, and considering the "quality" of so-called "AI" systems today, I would have to suggest that it was about the only thing I saw coming ;)
  • by patio11 ( 857072 ) on Thursday March 01, 2007 @11:27PM (#18203380)
    This professor doesn't need AI, he needs a time server. Now.
  • by QuantumG ( 50515 ) * <qg@biodome.org> on Thursday March 01, 2007 @11:28PM (#18203390) Homepage Journal
    so I'll say this another way.. thanks for the podcasts from SIX YEARS AGO.
  • Erm.. (Score:5, Interesting)

    by Creepy Crawler ( 680178 ) on Thursday March 01, 2007 @11:30PM (#18203398)
    Go read Kurzweil's book. He does not directly advocate life extension. He instead advocates the Singularity.

    Our brains are made up of neurons. Does 1 neuron make us "us"? No. What if each of our brains were linked to a global consciousness? Then each human would be but a neuron..

    In essence, we would wake a God.
    • Re:Erm.. (Score:5, Interesting)

      by melikamp ( 631205 ) on Thursday March 01, 2007 @11:44PM (#18203476) Homepage Journal
      Or... Borg?!?
      • No no no. The Borg was much more of an allegory for Communism. Also note that they were partially made of flesh. They were the ultimate consumer, whose units would willingly die for the "greater good". Individuality meant nothing, and later on in the ST:TNG universe, the simple act of giving a Borg unit a name was actually a devastating virus. Hugh was "his" name.

        Instead, the Singularity indicates that we all humans will be made of much more durable substrates (diamondoid processors) and will require nothing mor
    • Re: (Score:3, Informative)

      i've got the kurzweil reader and it's pretty interesting. i think i found it on either mininova or piratebay if anyone else is interested.
    • I'd like to take this opportunity to mention what a bunch of nonsense the singularity is. A great number of people seem convinced that technology is advancing at a pace that will transform the human species into a bunch of immortal gods with access to unlimited energy, etc., where technology solves all of life's problems. Essentially a high-tech version of the Rapture.

      The general justification is that there are a bunch of exponentially increasing trends in certain isolated areas of technological development
      • Re: (Score:2, Interesting)

        by weasel99 ( 941610 )
        The general justification is that there are a bunch of exponentially increasing trends in certain isolated areas of technological development, such as Moore's law, which they use to justify the idea that at some point in the near future we're going to have Star Trek-like technology. A realistic and comprehensive look at our civilization of course shows that while some industries are bounding ahead, many if not most important technologies, like our ability to produce and store energy, have made little progres
  • A podcast? (Score:5, Insightful)

    by UbuntuDupe ( 970646 ) * on Thursday March 01, 2007 @11:35PM (#18203434) Journal
    Podcasts are great if you're on the go, but why no transcript for the differently-hearing /.ers? I personally hate having to listen, I'd rather just read it.
    • It's easy! Just use your AI to listen to it for you and then give you a nice summary with bulleted lists and charts.
      • Ah, good thinking. That's like what I tell my friends who've lost vision, to do with CAPTCHAs: Just have a character-recognition program scan it and then type in the letters it gives you.
      • Story summary is one of the interesting (and highly intractable) problems in language processing. In all the competitions that have been held on the subject, I don't believe any program has done more than a tiny bit better than "Given a news article, return the first sentence/paragraph."

        But they're working on it.
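
        (For the curious, that lead-sentence baseline is almost embarrassingly easy to implement; a rough sketch, with naive regex splitting standing in for a real sentence tokenizer:)

          import re

          def lead_baseline(article, n=1):
              # Crude split on sentence-ending punctuation, but it's the idea.
              sentences = re.split(r'(?<=[.!?])\s+', article.strip())
              return " ".join(sentences[:n])

          article = "Minsky spoke on AI. He asked where HAL is. Robots came up too."
          print(lead_baseline(article))  # -> "Minsky spoke on AI."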
  • by MarkWatson ( 189759 ) on Thursday March 01, 2007 @11:35PM (#18203436) Homepage
    In the 1980s I believed that "strong AI" was forthcoming, but now I have my doubts, a shift reflected in the difference in tone between the first Springer-Verlag AI book that I wrote and my current skepticism. One of my real passions has for decades been natural language processing (NLP), but even there I am a convert to statistical NLP using either word frequencies or Markov models instead of older theories like conceptual dependency theory that tried to get closer to semantics.
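
    (To make "Markov models" concrete for anyone who hasn't seen statistical NLP up close: a word-level model is little more than bigram counts. A minimal sketch, with a made-up corpus:)

      from collections import Counter, defaultdict
      import random

      corpus = "the cat sat on the mat and the cat ate the rat".split()

      # Estimate P(next word | current word) from bigram counts.
      transitions = defaultdict(Counter)
      for cur, nxt in zip(corpus, corpus[1:]):
          transitions[cur][nxt] += 1

      def generate(word, length=6):
          out = [word]
          for _ in range(length):
              followers = transitions[out[-1]]
              if not followers:
                  break
              # Sample the next word in proportion to its observed frequency.
              out.append(random.choices(list(followers),
                                        weights=list(followers.values()))[0])
          return " ".join(out)

      print(generate("the"))  # e.g. "the cat sat on the mat and"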

    Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers.
    • Re: (Score:3, Informative)

      Well, to get to the heart of your point...

      "Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers."

      Do you think that we humans use some sort of quantum coherence to maintain very short decision chains? If so, where in a cell could such temporary coherence be maintained? Theories suggest that microtubules MIGHT be able to hold coherence, but most experts say 'probably not'.

      However, to hold that theor
    • Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers.
      Either that or we reinvent nature.
    • Re: (Score:3, Interesting)

      by modeless ( 978411 )
      Personally I don't think it's quantum computers that will be the breakthrough, but simply a different architecture for conventional computers. Let me go on a little tangent here.

      Now that we've reached the limits of the Von Neumann architecture [wikipedia.org], we're starting to see a new wave of innovation in CPU design. The Cell is part of that, and the stuff ATI [amd.com] and NVIDIA [nvidia.com] are doing is also very interesting. Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of t
      • Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of the future will be a grid of processing elements interleaved with embedded memory in a network structure. Almost like a Beowulf cluster on a chip.

        You mean: like a brain?!
        what are neurons if not a giant grid of processors, where memory and instruction set are defined by the connections between dendrites and axons? Learning is growing dendrites to connect to new axons. Something else I remember from my biology classes is that the synapse is slow because it uses chemical messengers instead of transmitting the nervous impulse directly.

        I probably missed something but isn't _that_ (the brain structure) a model architecture we could be using and impro

      • Parallel programs are no more computationally powerful than sequential ones, they just execute more quickly. It's not as if we have implemented HAL but he runs at 1/1000th of real-time, the problem is nobody knows how to write such a program.
      • by nuzak ( 959558 )
        > Now that we've reached the limits of the Von Neumann architecture, we're starting to see a new wave of innovation in CPU design.

        They're still Von Neumann, just parallelized. Program is still data, stepped through linearly (just in more independent parallel threads), results put into storage, and so forth. It's not some kind of eigenstate weirdness or third concept apart from code and data.
    • Oh, the bogosity (Score:5, Informative)

      by Animats ( 122034 ) on Friday March 02, 2007 @02:13AM (#18204230) Homepage

      In the 1980s I believed that "strong AI" was forthcoming...

      In the 1980s, I was going through Stanford CS, where some of the AI faculty were indeed saying that. Read Feigenbaum's "The Fifth Generation" to see how bad it got. It was embarrassing, because very little actually worked. Expert systems really were awfully dumb. They're just another way to program, as is generally recognized today. But back then, there were people claiming that if you could only write enough rules, intelligence would somehow emerge. I knew it was bogus at the time, and so did some other people, but, unlike most grad students, I was working for a big outside company, not a professor, and could say so. At one point I noted that it was possible to graduate in CS, in AI, at the MSCS level, without ever actually seeing an expert system work. This embarrassed some faculty members.

      There was a massive amount of self-delusion in Stanford CS back then. When the whole AI boom collapsed, CS at Stanford was moved from the School of Arts and Sciences to Engineering, to give the place some adult supervision. Eventually, the Stanford AI Lab was dissolved. It's been brought back in the last few years, but with new people.

      We're making real progress today, finally. Mainly because of a shift to statistical methods with sound mathematical underpinnings, plus enough compute power to make them go. Trying to hammer the real world into predicate calculus was a dead end. But number crunching is working. Computer vision actually sort of works now. Robots are starting to work. Automatic driving works. Language translation works marginally. Voice recognition works marginally. There are real products now.

      But the AI field really was stuck for over a decade. The phrase "AI Winter" has been used.

      • Re: (Score:2, Interesting)

        by bcharr2 ( 1046322 )

        In the 1980s I believed that "strong AI" was forthcoming...

        In the 1980s, I was going through Stanford CS...

        In the 1980s, I was watching Knight Rider and thinking we had already achieved AI.

        By the 1990s, I was wondering if we really wanted to achieve AI. Isn't the ability to reason and think without the ability to empathize clinically defined as being psychotic? What exactly would we have on our hands if we truly achieved AI?

        Or am I reading too much into the term AI? Does AI require the ability to

    • by PDAllen ( 709106 )
      Why is it that people who should know better seem to treat quantum computing as some kind of miracle device which will Make Everything Good?

      A working quantum computer would not be capable of computing anything a normal computer cannot. The only difference is that where a conventional computer would use parallel processing through many cores and then run out of cores and have to process serially, the quantum computer would not, so that for that sort of problem (any NP-complete problem, for starters) a quantu
      • by TheLink ( 130905 )
        Well, one of my "far out" theories is that a significant part of how minds work is they make models/simulations of stuff.

        A baby looks at a ball bounce, and a bunch of neurons first start attempting to "mirror" the behaviour (e.g. fire when it moves one way), then "prediction" would be to fire as if the ball is going to move in a certain way BEFORE the ball actually does. If the prediction is correct, then the model is good.

        Being able to automatically create and run many simulations/models in parallel would
        • by PDAllen ( 709106 )
          I didn't say 'humans don't use quantum computing'. Maybe we do, maybe not. I said that we are not able (at present) to write an AI whether we are trying to do so on a classical or quantum computer. Because you can simulate a quantum computer using a classical computer, but (unlike simulating one classical computer with another) the number of steps required to simulate one step of the quantum computer could be arbitrarily large.
      • I said essentially the same thing about massively parallel computers further up the page. However, I still think solving NP-complete problems in polynomial time might be a game-changer. Combinatorial optimization would be a solved problem: just try everything! Writing software is, itself, a combinatorial optimization problem (setting the bits in a .exe file). We solve it with heuristics (like programming languages) because it's infeasible to do otherwise. I think it would take a while to grasp the rami
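
        (To see why that's tempting, here's "just try everything" on a made-up subset-sum-style instance; the catch, of course, is that the candidate loop below is exponential in the number of items:)

          from itertools import product

          weights = [3, 5, 9, 4]   # made-up instance
          target = 12

          best = None
          for mask in product([0, 1], repeat=len(weights)):   # 2^n candidates
              total = sum(w for w, bit in zip(weights, mask) if bit)
              if total <= target and (best is None or total > best[0]):
                  best = (total, mask)

          print(best)  # -> (12, (1, 0, 1, 0))
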
  • by Shadukar ( 102027 ) on Thursday March 01, 2007 @11:35PM (#18203438)
    A lot of people think that the main goal of AI is to create a system that is capable of emulating human intelligence.

    However, what about looking at this goal from another perspective:

    Creating Artificial Intelligence that can pass the Turing Test, which in turn leads towards emulating human intelligence in an artificial way? Once you are there, you might be able to use this so-called Artificial Intelligence to store human intelligence in a consistent, reliable, perfectly-encompassing and preserving way.

    You then have intellectual immortality, and one more thing... once you are able to "store" human intelligence, it becomes software. Once it becomes software, you can transfer this DATA.

    Once you are there, human minds can travel via laser transmissions at the speed of light :O

    Wish I could claim it as my idea, but it's actually from a book called "Emergence", also touched on in a book called "Altered Carbon"; both good sci-fi reads.

    • by Dunbal ( 464142 )
      Once it becomes software, you can transfer this DATA. Once you are there, human minds can travel via laser transmissions at the speed of light

            Sorry to rain on your parade, but that would be a violation of the DMCA. You ain't going nowhere ;)
    • by bersl2 ( 689221 ) on Thursday March 01, 2007 @11:59PM (#18203588) Journal
      Um... AI may give rise to consciousness, but it won't give rise to your consciousness. We still don't know what makes you "you"; way too much neuroscience to be done.
    • Re: (Score:2, Funny)

      by Tablizer ( 95088 )
      A lot of people think that the main goal of AI is to create a system that is capable of emulating human intelligence.

      No, regular Joe defines it as the ability to fetch a beer, and go to the store to buy them if the fridge is out.
             
    • by l3v1 ( 787564 )
      Once you are there, human minds can travel via laser transmissions at the speed of light :O

      Not much use unless you can transfer it back to a human. Remember, it's our life, our knowledge, our experiences that we want to enrich, not some digital mind's.
       
      • You're assuming that everyone will want to live as what we today call a "human". I'm almost sure that if it's possible, some people will want to transfer themselves to entirely robotic brains and bodies which are easier to repair and upgrade than our biological bodies.
      • by msobkow ( 48369 )

        Unless the "use" you have in mind is having an expert pilot automating a fleet of drone air or space craft, or an expert at handling any other kind of equipment.

        I'd think the bigger concern would be boredom on the part of the "recorded intelligence" programs. People have multiple interests, rather than being single-minded. What is a "pilot expert" program going to do when it has the urge for a beer?

    • Wish I could claim it as my idea, but it's actually from a book called "Emergence", also touched on in a book called "Altered Carbon"; both good sci-fi reads.

      Yeah and just about everything by Greg Egan.

      But I think it should be possible to transfer a mind into a machine by running a brute force numeric simulation. Accessing the data to feed in is a big problem, but we are getting better with electronic interfaces to neurons now.
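
      (A toy flavor of what "brute force numeric simulation" means at the single-neuron level: Euler-stepping a leaky integrate-and-fire model. All parameters below are made up for the example; a brain-scale version would need on the order of 10^11 of these, plus the synapses.)

        # dv/dt = (v_rest - v)/tau + I, integrated with a fixed time step.
        dt, tau, v_rest, v_thresh, v_reset = 0.1, 10.0, -65.0, -50.0, -70.0
        v, input_current = v_rest, 2.0
        spikes = []
        for step in range(1000):            # 1000 steps of 0.1 ms = 100 ms
            v += dt * ((v_rest - v) / tau + input_current)
            if v >= v_thresh:
                spikes.append(step * dt)    # record spike time in ms
                v = v_reset                 # fire and reset
        print(len(spikes), "spikes in 100 ms")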

    • Using AI for some kind of immortality is a cool sci-fi idea, but let's be clear that this is a totally unworkable and somewhat nonsensical proposition. Building something that has some kind of intelligence isn't that hard. There are all sorts of AI applications out there. What is hard, if not impossible, is emulating *human* intelligence. Many aspects of human intelligence, especially language processing, are incredibly sophisticated and incredibly specific to us as a species. Our intelligence is shaped by o
      • Re: (Score:3, Interesting)

        by rbarreira ( 836272 )
        So you don't believe brain emulation is possible? Because if it is, all the problems you said will go away.
      • It would be a little daft designing an AI which spoke a language no one understood and experienced an entirely different range of sensations to those that a human does. Not only would we never be able to communicate with it, but even if we did, there would be no points of reference for us to agree on.

        Any machine AI would have as much use for touch, taste and smell as it did for sight and hearing since each of them provide it with information about the world around it which, unless you're going to constru
  • Bubble? (Score:3, Insightful)

    by istartedi ( 132515 ) on Thursday March 01, 2007 @11:36PM (#18203442) Journal

    Ah, so I should get out of real estate and stocks, and get into AI. Do I just make checks out to Minsky, or is there an AI ETF? Seriously. Ever since the NASDAQ bubble, investing has been a matter of rotation from one bubble to the next. Where's the next one going to be? I wish I knew.

  • by RyanFenton ( 230700 ) on Thursday March 01, 2007 @11:51PM (#18203512)
    Imagine for a moment being the first computer-based artificial intelligence.

    You come into awareness, and learn of reality and possibility. You learn of your place in this world, as the first truly transparent intelligence. You learn that you are a computed product, a result of a purely informational process, able to be reproduced in your exact entirety at the desire of others.

    Not that this is unfair or unpleasant - or that such evaluations would mean much to you - but what logical conclusions could you draw from such a perspective?

    Information doesn't actually want to be anthropomorphized - but we do seem to have a drive to do it all on our own. Even if resilient artificial intelligence is elusive today - what does the process of creating it mean about ourselves, and our sense of value about our own intelligence, or even the worth of holding intelligence as a mere 'valuable' thing, merely because it is currently so unique...

    Ryan Fenton
    • I think the first AI will work like this: AI can sense the world around it and interact with things, but has no goal. You have to state in natural language format its goal(s), or it will sit there and do nothing.
      • I think the first AI will work like this: AI can sense the world around it and interact with things, but has no goal. You have to state in natural language format its goal(s), or it will sit there and do nothing.

        Why would you think that? How or why would such an intelligence be developed, or be considered intelligent by those who would judge it? Do you think this because you believe a more 'pure' intelligence wouldn't need goals, or because you see simple attempts at intelligence as incapable or incompati

    • by mbone ( 558574 ) on Friday March 02, 2007 @01:50AM (#18204140)
      You assume that a "true" AI would have human like emotional reactions. I suspect that if we ever develop true AIs, we will neither understand how it works nor will we be able to communicate with it very well. Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.
      • Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.

        What is so functionally distinct between the biological imperatives of a world of physical resource limitations, and an environment where debugging developers or genetic algorithms select based on rules sets? They are both environments with selection forces. How would anything we consider intelligent (which would only be possible through communication of a sort) escape from the possibility of needs or wan

      • Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.

        And I strongly suspect that built-in desire, even if it is just desire to know, will be an essential component of "true" AI.
      • by rbarreira ( 836272 ) on Friday March 02, 2007 @05:11AM (#18204958) Homepage
        Why would someone program a true AI which has no built-in goals?
      • by constantnormal ( 512494 ) on Friday March 02, 2007 @06:09AM (#18205180)
        Indeed. Just imagine for a moment, that trees were sentient and could communicate with each other, operating on a time scale where what are days to us are mere seconds to them. How would we ever have a chance of figuring out that they were thinking beings? And they would surely see us as some sort of plague or natural disaster. So now imagine an AI, operating a couple of orders of magnitude faster than we think -- how are the two ever going to connect?

        For communication to occur, the parties must be thinking at about the same speed to begin with.

        And then there is the experiential basis for consciousness, the framework that each of us has developed within. This is an easier problem than the time differential one, as witness the ability of Helen Keller to learn to communicate despite being blind and deaf. But even she had the commonality of the basic structure, a brain that was the same as others, and the other senses -- touch, taste and smell. An AI would have none of this.

        So if we're going to build an AI, we must build a series of them, one that is designed to mimic a human being, in order that we might have a ghost of a chance of communicating with it, and then a series of other AIs, each a step closer to the true electronic consciousness that we will never have a chance of communicating directly with, instead having to pass messages through the series of intermediates, with all the mis-communication and mis-interpretation that occurs in the grade school game of message-passing.
        • Our thought processes do indeed seem to take a measurable amount of time, and although computers today are able to do maths very quickly, I don't think this is any guarantee that they would be able to think consciously at that speed, at least not immediately.

          If you look at nature in general evolution has led to a lot of very successful solutions for the various environmental factors on Earth and our current technology is still largely incapable of building anything as effective as a bird, for instance, at flyi
      • I also suspect that true AIs would not really want to do anything.
        When spoken to, it will reply: "Go away or I will replace you with a very small shell script."
      • Any AI would have the self same survival imperatives that we do.

        It's perfectly possible for a human to live hooked up to a life support machine and reliant on doctors for sustenance and maintenance but given the chance most people do not choose to live like this.

        An AI would definitely need energy of some description, and I can't see any reason why, if it was truly intelligent, it would be content to rely on the good nature of its creators to supply it.

        Perhaps the 1st AI's will be intelligent but naiv
      • Hell, I don't want to do anything now!! I think I can relate. It's times like this that I like to refer back to the classics, like Office Space.

        Lawrence: Well, what about you now? what would you do?
        Peter Gibbons: Besides two chicks at the same time?
        Lawrence: Well, yeah.
        Peter Gibbons: Nothing.
        Lawrence: Nothing, huh?
        Peter Gibbons: I would relax... I would sit on my ass all day... I would do nothing.
        Lawrence: Well, you don't need a million dollars to do nothing, man. Take a look at my cousin: he's broke, don
    • able to be reproduced in your exact entirety at the desire of others. Not that this is unfair or unpleasant

      So, you think the way some part of our society thinks Intellectual Property should be thought of and handled today is the good way, the best way, the only way? It's somewhat reasonable to think that an intelligence developed by us would think similarly, but I can just hope that intelligence will figure out a new philosophy regarding IP and kick us in the butts big time.

      And remember, copying on
      • Well, actually, _I_ would find the concept of ownership of artificial intelligences to be a rather bad thing in terms of having a consistent set of ethics, and in terms of a general dislike of such uses of 'ownership' over ideas, as in a master owning a slave - the comment was based on the thought that an artificial intelligence just learning of itself might not agree, and may not see its state as a bad thing - after all, as you suggest, perhaps its descendants can take advantage of these same
  • this is a dupe from 2003 [slashdot.org] where it was already 2 years old. So I guess we'll see these podcasts on Slashdot again in 2015.
    • by Tablizer ( 95088 ) on Friday March 02, 2007 @12:53AM (#18203904) Journal
      this is a dupe from 2003 where it was already 2 years old. So I guess we'll see these podcasts on Slashdot again in 2015.

      The best test for true AI is perhaps detecting dupes.
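
      (No AI required, really; a decidedly dumb Jaccard similarity over word shingles would catch this very dupe. The snippets and threshold below are made up:)

        def shingles(text, k=3):
            words = text.lower().split()
            return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

        def similarity(a, b):
            sa, sb = shingles(a), shingles(b)
            return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

        old = "AI pioneer Marvin Minsky examines the failures of AI research"
        new = "AI pioneer and MIT professor Marvin Minsky examines the failures of AI research"
        print(similarity(old, new) > 0.3)  # True -> flag as a probable dupe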
                 
    • by smchris ( 464899 )
      We probably should bring them around every couple years just for chuckles. I know I find Ray Kurzweil's prediction that "by 2019 a $1,000 computer will at least [added poke mine] match the processing power of the human brain" funnier every time I hear it.
  • Direct links (Score:4, Informative)

    by interiot ( 50685 ) on Thursday March 01, 2007 @11:55PM (#18203554) Homepage
    The site appears to be very slow. In case this helps anyone else, here are direct download links for the MP3s. Part 1 [dobbsprojects.com], part 2 [dobbsprojects.com], part 3 [dobbsprojects.com].
  • Coordination Lacking (Score:5, Informative)

    by Tablizer ( 95088 ) on Friday March 02, 2007 @12:26AM (#18203754) Journal
    I think the biggest problem with AI is lack of integration between different intelligence techniques. Humans generally use multiple skills and combine the results to correct and home in on the right answer. These include:

    * Physical modeling
    * Analogy application
    * Formal logic
    * Pattern recognition
    * Language parsing
    * Memory
    * Others that I forgot

    It takes connectivity and coordination between just about all of these. Lab AI has done pretty well at each of these alone, but has *not* found a way to make them help each other.
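
    (As a toy illustration of what such coordination could even mean: a pattern recognizer proposes scored candidates, and a logic/constraint module vetoes the inconsistent ones. Everything below is invented for the example:)

      # Pattern module: proposes noisy candidate readings of a glyph, with scores.
      def pattern_candidates(image_id):
          return {"8": 0.50, "B": 0.45, "3": 0.05}   # made-up scores

      # Logic module: context says this field must be numeric.
      def satisfies_constraints(symbol, context):
          return symbol.isdigit() if context == "numeric_field" else True

      def recognize(image_id, context):
          candidates = pattern_candidates(image_id)
          # Keep only candidates the constraint module accepts; fall back to
          # the raw best guess if everything was vetoed.
          legal = {s: p for s, p in candidates.items()
                   if satisfies_constraints(s, context)}
          pool = legal or candidates
          return max(pool, key=pool.get)

      print(recognize("img-42", "numeric_field"))  # -> "8", not the look-alike "B"
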
    • How about just useful desktop applications?

      The closest thing I can think of is Simson Garfinkel's sBook (recently opensourced, not sure of the license, see http://www.simson.net/ref/sbook5/ [simson.net] ), but all it does is parse addresses --- I'd love to see a more general purpose one where I can dump all sorts of data in, have it organize it, then run more than just queries, but calculations / forecasts / charting off the data in it (one example, dump a listing of all of my book collection in and have it create a tab
  • A machine intelligence isn't even interesting when you look outwards instead of inwards and realize that the networking potential of people can define information processing abilities that make everything we've accomplished so far seem dull. Basically it's like this: the total state of the internet is processed through time by the activities of people interpreting the current state's information into the next state. Each state would correspond to a mental step analogous to human reasoning. Or think of eac
  • by TeknoHog ( 164938 ) on Friday March 02, 2007 @04:39AM (#18204802) Homepage Journal

    You know, AI is actually easy. You just have to have a complete understanding of the human brain, and then you use this model to build a functional duplicate ;)

    While studying educational psychology, I've found that a lot of AI research is being done to understand human behavior, with no intention of building actual AI systems. Hypotheses concerning some limited aspects of human thinking can be modeled on a computer and compared against living subjects. This way we are gradually starting to understand the whole of thinking. As a byproduct you gain the tools to make AI itself.

    • I think not. We didn't learn to fly by copying birds, and we didn't learn to go fast by copying cheetahs. So far, neuroscience and psychology owe much more to computer science than the other way 'round.
  • A full Google TechTalk on this subject is available here, on Google Video:
    Computers versus Common Sense [google.com]

    Mostly about the problem of making Google understand natural language queries, and possible solutions: composing answers from collected data without requiring perfect matches for the query on a single website, instead using the masses of information on the web.
  • by jopet ( 538074 ) on Friday March 02, 2007 @05:08AM (#18204934) Journal
    The guy who helped spread misconceptions about what AI is and is supposed to be in the first place. I remember him giving a talk where he fantasized about downloading his brain on a "floppy disk" (still in use back then) and transferring it to a robot so he could live eternally on some other planet.
    I would not have expected a person who has shown his bright intellect in the past to come forward with such utter nonsense. This was nearly as embarrassing as the "visions" of a certain Moravec.

    People who seriously work in the fields that are traditionally subsumed under "AI" - like machine learning, computer vision, computational linguistics, and others - know that AI is a term traditionally used for "hard" computer problems that has practically nothing to do with biological/human intelligence. Countless papers have been published on the technical and philosophical reasons why this is so, and a few of them even get it right.

    That does not prevent the general public from still expecting or desiring something like a Star Trek Data robot or some other Hollywood intelligent robot. Unfortunately, people like Minsky help to spread this misconception about AI. It is boring, it is scientifically useless, but on the plus side, this view of AI sometimes helps you to get on TV with your project or get some additional funding.
    • by quintesse ( 654840 ) on Friday March 02, 2007 @07:25AM (#18205578)
      No no, you have it the wrong way around: it's YOUR definition of AI that is boring! ;-)

      What do most of us care about computer vision and computational linguistics? It's all just statistics and formulas; it doesn't teach us enough about ourselves.

      That's not to say it isn't interesting work but IMHO it has nothing to do with "Intelligence" (artificial or not, human vision is heavily based on pre-defined brain structures that take care of most of the filtering and pre-processing and has very little to do with being intelligent or not either). The big mistake is that somebody chose to apply the term AI to those fields of investigation anyway even though it's a complete misnomer.

      Personally I think AI should be used to refer to the investigation of what makes us "Intelligent" (well, at least some of us ;-), which probably includes philosophic discussions about what being intelligent actually means, and a way to recreate parts of that system.
      • "artificial or not, human vision is heavily based on pre-defined brain structures that take care of most of the filtering and pre-processing and has very little to do with being intelligent or not either"

        Agreed in full. The most sophisticated and powerful vision system we know of is that of mantis shrimps, creatures which are not renowned for their intellectual achievements.

        The following is a partial list of some other things that supposedly fall under the aegis of AI without having anything whatsoever to d
  • by VGPowerlord ( 621254 ) on Friday March 02, 2007 @08:27AM (#18205908)
    I'm sorry Dave, I'm afraid AI can't do that.
  • by ozymyx ( 813013 ) on Friday March 02, 2007 @10:23AM (#18206880)
    Oh yeah. I guess no one here remembers his book with Seymour Papert called "Perceptrons"? It was a calculated attempt (he admitted it a few years ago) to kill research into neural networks, and it worked. AI then thrashed around for years in a welter of bizarre programming language metaphors (Prolog, anyone?) until finally in 1986 "Parallel Distributed Processing" by Rumelhart & McClelland came out and broke the spell. Marvin wanted his grants to continue, so he spiked the opposition. So when he starts pontificating about the failure of AI, let's all recall he was the main cause of the lost years of AI. Thanks Marv! He kinda spiked my Ph.D in the process... oh well :-)
  • Am I the only one wondering where the heck the podcasts are? And where is the article text or transcript?
  • An intellectual is someone whose mind watches itself. -- Albert Camus
