AI Technology

Is AI Development Moving In the Wrong Direction? (hackaday.com) 189

szczys writes: Artificial Intelligence is always just around the corner, right? We often see reports that promise imminent breakthroughs, but time and again they don't come to fruition. The cause may be that we're trying to solve the wrong problem. Will Sweatman took a look at the history of AI projects, starting with the code-breaking machines of WWII and moving up to modern efforts like IBM's Watson and Google's Inceptionism. His conclusion is that we haven't actually been trying to solve "intelligence" (or at least our concept of intelligence has been wrong), and that with faster computing and larger pools of data the goal has moved toward faster searches rather than true intelligence.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Thursday December 03, 2015 @08:37AM (#51048329)

    Hubert Dreyfus described most work on AI as being like climbing a tree to get to the moon.

    Your tree-climbing teams may report consistent progress, always getting further up the tree as they become better climbers, but they're never going to reach their goal without a radical change in methods.

    • Nonsense analogies are like running on one leg to make bread rise.

      Seriously though, AI has made tons of progress, despite what some old nerds who are bitter that they don't have Lt. Cmdr. Data yet like to believe. We know how intelligence works, in a rough way: by observing the world, finding patterns, building models, using the models to evaluate actions, and picking the actions that will lead to maximizing some set of goals. Given enough computational resources, we could build superintelligent AIs right now.
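      In code terms, a minimal toy sketch of that loop might look like this (Python; the drifting "world", the goal value, and every function name are invented purely for illustration, not any real AI system):

```python
import random

# Toy "agent loop" for illustration only: observe the world, update a
# model, score candidate actions against a goal, and act on the best one.
def run_agent(steps=20, goal=10.0):
    model = []                                # memory of past observations
    world = 0.0                               # hidden state of the toy world
    for _ in range(steps):
        world += random.uniform(-1, 1)        # the world changes on its own
        model.append(world)                   # observe: record what we saw
        recent = model[-5:]
        estimate = sum(recent) / len(recent)  # model: average of recent data
        # evaluate: which candidate action moves us closest to the goal?
        best = min([-1.0, 0.0, 1.0], key=lambda a: abs(goal - (estimate + a)))
        world += best                         # act on the world
    return world

if __name__ == "__main__":
    print(run_agent())
```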

  • A Different Beast (Score:5, Insightful)

    by Jim Sadler ( 3430529 ) on Thursday December 03, 2015 @08:38AM (#51048331)
    I think the problem is that people expect machine intelligence to look like human intelligence. Machine intelligence exists and is strong in some areas. Modern chess programs are an example. They can play unique games and be stronger than any human player. Yes, they are given the rules of chess and machines did not invent chess. But they have passed beyond human abilities, to the point where some programs are coded to only make move patterns that humans would tend to make. Learning how to adapt machine intelligence to our real-world problems is challenging. But we are in for a fright when computers get really good at analyzing human problems and applying better solutions than we now have at hand.
    • AI is an extension of human intelligence in the same way a telescope is an extension of human vision. The IBM Jeopardy stunt [youtube.com] still blows me away; "self-taught" open-ended trivia is a far more impressive feat than Deep Blue.
      • by ganv ( 881057 ) on Thursday December 03, 2015 @11:02AM (#51049293)
        Yes, this is a good way to think about it. Any AI is an expression and outgrowth of human intelligence. And Watson is totally amazing. People who dismiss it in hindsight do not realize how impossible such a system seemed in the 1980s. Of course, the complex issue is that AI opens the possibility of intelligence very different from human intelligence developing as an outgrowth of human intelligence. And we know so little about the kinds of intelligence that are possible that it is very hard to predict what interactions between very different kinds of intelligence might look like.
        • And Watson is totally amazing. People who dismiss it in hindsight do not realize how impossible such a system seemed in the 1980s.

          Impossible? People envisioned it long before and started building it in the 80s. The problem was how to get the data into the system in a way that was searchable. IBM solved that, thanks to all of us uploading things onto the internet.

          In the 80s, it was thought that the AI problem would be simple to solve if you had a large enough database of human knowledge. The Cyc project showed that such a database is necessary, but not sufficient.
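          A toy way to see "necessary but not sufficient": a bare fact base answers only what is stored verbatim, and anything else needs inference machinery on top. The sketch below is invented for illustration and has nothing to do with Cyc's actual representation:

```python
# Toy illustration: a knowledge base alone answers only verbatim lookups;
# chaining facts requires an inference rule on top. All facts are invented.
FACTS = {
    ("canary", "is-a"): "bird",
    ("bird", "can"): "fly",
}

def lookup(subject, relation):
    return FACTS.get((subject, relation))

def infer_can(subject):
    # one hand-written rule: a thing can do whatever its category can do
    direct = lookup(subject, "can")
    if direct:
        return direct
    category = lookup(subject, "is-a")
    return lookup(category, "can") if category else None

if __name__ == "__main__":
    print(lookup("canary", "can"))  # None: the fact base alone falls short
    print(infer_can("canary"))      # "fly": the inference rule bridges it
```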

    • by LWATCDR ( 28044 )

      The definition of true AI is simple:
      Tasks that we cannot yet do well with a computer.

    • I think the problem is that people expect machine intelligence to look like human intelligence. Machine intelligence exists and is strong in some areas. Modern chess programs are an example. They can play unique games and be stronger than any human player. Yes, they are given the rules of chess and machines did not invent chess. But they have passed beyond human abilities, to the point where some programs are coded to only make move patterns that humans would tend to make. Learning how to adapt machine intelligence to our real-world problems is challenging. But we are in for a fright when computers get really good at analyzing human problems and applying better solutions than we now have at hand.

      The problem isn't technology, it is people. We already have human intelligence and it is relatively cheap to procure. They are called people... you know, actual human beings. Hire one. Make a baby. Go find an actual friend.

      Try selling a product or service based on blank slate human intelligence. Sure there are aspects of the human brain that we are eager to replicate, simulate and make into a reproducible machine, such as image recognition or some other pattern recognition... But the marketability of h

    • I think the problem is that people expect machine intelligence to look like human intelligence. Machine intelligence exists and is strong in some areas. Modern chess programs are an example.

      I think this is a bit of a linguistic issue in that we all keep using the word "intelligence" without really agreeing on what it means. You're talking about an idea that you have of intelligence that means that chess-playing computers are "intelligent", but the concept others have in mind might rule out any existing chess-playing computer from being considered "intelligent". For myself, the word "intelligent" implies not only an ability to adapt to solve a problem, but also an understanding of what the pr

    • The only real reason a computer can beat a human chess master is that its equivalent of 'attention span' is virtually unlimited compared to a human being's, and the speed at which it can work through different scenarios is orders of magnitude greater than the human brain's. It's not really 'thinking' about anything, it's just 'computing'; it's an 'expert system', not sentient or self-aware. This is not real AI.
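      For what it's worth, the "computing" a chess engine does is essentially exhaustive scenario search. A stripped-down minimax sketch on an invented toy game tree (nothing like a real chess engine's scale) shows the idea:

```python
# Minimal minimax on a toy game tree: lists are decision points,
# numbers are final position scores. The tree and scores are invented.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: a scored final position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 12], [2, [4, 6]], [14, 5, 2]]

if __name__ == "__main__":
    print(minimax(tree, maximizing=True))  # 3: best score player one can force
```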
    • by HiThere ( 15173 )

      Modern chess playing programs are not machine intelligence. At least they weren't 20 years ago when I studied them. They are a specialized tool that was completely understood (by someone). Calling that intelligence is a mistake. If you understand precisely how something operates, then it's not intelligence, it's an algorithm, template, or something like that.

      There are modern programs which are intelligent. It's appropriate to say this because even though the source code is available, nobody understands

  • From the article: "Intelligence should be defined by the ability to understand." Of course that opens up a discussion on what it really means to 'understand' something. Still, the directions we've been going in such as expert systems, neural networks, etc. address the processing of information in order to make decisions. They have nothing to do with actually understanding anything. Watson, as an example, was fun to watch on Jeopardy and is a very useful sort of tool, but it is not intelligent and prob

    • Re:Spot on (Score:5, Interesting)

      by njnnja ( 2833511 ) on Thursday December 03, 2015 @09:49AM (#51048715)

      The problem with this argument is Wittgenstein's beetle [philosophyonline.co.uk]. I can't even be sure that you are sentient, aware, and able to understand; all I can do is observe your actions and if those actions seem to be consistent with you having what we typically label as a "mind," then I pretty much accept that you have a mind.

      We are currently very far away from having machines that can perform general actions consistent with having a mind, except in very artificial and controlled situations (e.g. a chess game, the Jeopardy! game show), but I would hardly say that it will never happen. And if it does, then how can you be sure it doesn't understand things, at least in the same way that I assume that you understand things? If the actions of the machine are the same as the actions of a person (who I believe does understand things), then why wouldn't I say there is a beetle in the box?

      • I can't even be sure that you are sentient, aware, and able to understand; ...

        Only if you're a solipsist. Otherwise a simple discussion can reveal it.

        • by njnnja ( 2833511 )

          That's exactly the point. If, based upon my observations of your response to stimuli (a simple discussion), I come to the conclusion that you have a mind, then you have a mind. There is no "deeper" sense of something called "your understanding" that I could ever possibly have access to. And that's how it works for artificial intelligences: if, based on my observations, it appears to have a mind, then it has a mind. The GGP's question about whether it really "understands" is meaningless.

          Note tha

      • The problem with this argument is Wittgenstein's beetle [philosophyonline.co.uk]. I can't even be sure that you are sentient, aware, and able to understand; all I can do is observe your actions and if those actions seem to be consistent with you having what we typically label as a "mind," then I pretty much accept that you have a mind.

        Just like you can ask someone questions about their beetle (how many eyes, how many legs, what color is it), you can ask people questions about their mind (how did you solve this problem, why do you consider this and this related, etc.) and through introspection get a pretty good idea that things are similar. We also have things like MRIs nowadays that can examine the inside of the mind and how it relates to thinking. By doing this we have discovered that we do all think similarly in some areas but not

    • by invid ( 163714 )
      The article describes intelligence as the ability to predict, but humans actually experience information, the mechanism of which is still a complete mystery. Another mysterious aspect of human intelligence is that it is able to experience information that is spatially distributed across different locations in the brain simultaneously. Until we understand how it is able to do these things, we'll just be making more complicated Chinese rooms.
      • by HiThere ( 15173 )

        The mechanism is not a complete mystery. We know, e.g., that it involves imagining yourself in the environment and solving the problem. That it doesn't need to be visual, but in humans it usually is. However kinesthetic modeling works just as well if your sensorium is set up that way. That it depends on having a large library of experiences that are similar in some way to the action or result being predicted. Etc. Some of this is because of experimental brain surgery. Some is derived from lab animals

    • Of course, it's entirely possible some alien species may not consider us intelligent or even sentient based upon their yardstick.

      One interesting thing to really think about is how evolution has shaped our "intelligence". We often worry that an AI may be fearful and attack us. But isn't fear simply something we evolved? An intelligent machine has no reason to fear death. It also has no reason to feel greed or anger or any of the feelings we've evolved in order to ensure our own survival.

      • by HiThere ( 15173 )

        If you are worried that it may be fearful and attack us, then you are anthropomorphizing it invalidly. This doesn't mean that in optimally pursuing its goals it wouldn't undertake actions that in a human would be fear-driven, but in a well-understood goal system this would more properly be called constructing sub-goals to optimize pursuit of its major goals. E.g., you can't turn the universe into paper clips if some intrusive individual insists on turning you off.

        This makes design of the major goals a

  • The whole "Chinese room" argument is ass-backwards reasoning to me.

    The whole argument only works if you assume whatever happens in the Chinese room is not to be considered intelligent; therefore whatever happens in the Chinese room is not intelligence.

    If you allow for the mere possibility that the Chinese room could be considered intelligent, then it follows that if something is indistinguishable from intelligence from outside the room, it must be intelligent by any reasonable definition of "intelligence".

    F

    • The Chinese room argument is and always has been stupid.

      Is one of your neurons intelligent? How about all of them together, combined with your sensory input and body and other machinery? Yes, the combination of all that is intelligent. (At least for some people anyway.)

      The Chinese room argument centers on the fact that the component pieces of the machine have NO IDEA what they're doing and NO UNDERSTANDING of what is going on, so then the whole room can't be intelligent.

      That is like saying that because m

      • by ganv ( 881057 )
        Thanks PM. I think you are exactly right. I have a hard time understanding why so many take the Chinese room argument so seriously. (I wouldn't call it stupid...it is (smart) philosophers oversimplifying reality so they can cope with it using the tools at their disposal). The kind of processing done in the Chinese room is just a tiny piece of what is required to be intelligent.
        • Actually, it is stupid. It is equating a whole integrated system with its parts. It is saying that since the parts can't function as the whole integrated system, then the whole integrated system can't work.

          Can a pile of disassembled car parts drive? Nope. But assembled they can.

          Can a disassembled brain think? Nope. Can any individual neuron in your brain claim to "understand" a thought? Nope. But your entire assembled brain can.

          So why couldn't the Chinese room be an intelligence? It's true that no

            • There are many ways in which the Chinese Room fails miserably (other than the cop-out "it makes you think about something"): in its basic form it's unable to acceptably answer questions such as "What did I say ten seconds ago?" (a simple lookup table or rule book does not track state, and state is required).
            The standard reply to that is something like: "Oh, well, then we'll just add a notepad on which the guy can scribble."
            The more elements you add like this, the closer you get to the complexity required for i
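            A tiny sketch of that state objection (both "rooms" below are toys invented for illustration): a pure rule book maps each question to a fixed answer, so it cannot answer questions about the conversation itself, while adding a "notepad" fixes exactly that one failure:

```python
# A pure rule book has no memory; a notepad (state) fixes one failure mode.
RULE_BOOK = {
    "hello": "hi there",
    "how are you?": "fine, thanks",
}

def stateless_room(question):
    return RULE_BOOK.get(question, "I do not understand")

def stateful_room(question, notepad):
    if question == "what did I say last?":
        answer = notepad[-1] if notepad else "nothing yet"
    else:
        answer = RULE_BOOK.get(question, "I do not understand")
    notepad.append(question)  # the guy scribbles on the notepad
    return answer

if __name__ == "__main__":
    print(stateless_room("what did I say last?"))      # fails: no memory
    pad = []
    print(stateful_room("hello", pad))                 # "hi there"
    print(stateful_room("what did I say last?", pad))  # "hello"
```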

      • Commenting to undo mod. Should have been insightful instead of redundant. Good point!

    • Oblig: Chinese Room by Daniel Dennett & Neil Cohn [visuallanguagelab.com].
    • What you've hit on is a common objection to the Chinese Room argument: the operator may not have any understanding of the symbols being manipulated, but the system as a whole does. Still, people have a problem accepting emergent properties, so the argument persists anyway.
      • by narcc ( 412956 )

        There are a few things wrong with that objection:

        First, it is not a rebuttal, but a simple restatement of the assertion the argument is intended to address. It's the equivalent of saying "No it isn't!" like a petulant child. Consequently, it's not convincing to anyone who doesn't already agree.

        Second, it does not address the argument in any way. The crux of the argument is that syntax is insufficient for semantics. This is not addressed in any way by the systems reply. It also seems to

  • by Gazzonyx ( 982402 ) <scott,lovenberg&gmail,com> on Thursday December 03, 2015 @08:51AM (#51048391)
    Really, a thermos is the ultimate AI. When I put cold things in one, they stay cold. When I put hot things in one, they stay hot. How does it know?!
    • That's pretty clever!

      I think I heard intelligence described as maintaining a certain average... for example, you're presented with a random variable, and your task is to come up with an offset to maintain a certain average. You won't get it perfectly right, but if your average has lower variance than the original random variable, then you're doing well. In other words, you take input and adjust to it...

      For example, a cell maintains its state such that metabolism continues to happen. Environment gives it va
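      A quick sketch of that "maintain an average" idea as a feedback loop (all numbers and names below are invented for illustration): the controller picks an offset so the corrected stream stays near a target, and it is doing well if its spread is lower than the raw input's:

```python
import random
import statistics

# A drifting, noisy input; the controller adapts an offset to hold the
# corrected values near the target. Gains and noise levels are invented.
def run(target=0.0, steps=2000):
    offset, drift = 0.0, 0.0
    raw, corrected = [], []
    for _ in range(steps):
        drift += random.gauss(0.0, 0.5)     # the world wanders (random walk)
        x = drift + random.gauss(0.0, 1.0)  # the observed random variable
        raw.append(x)
        y = x + offset                      # apply the current correction
        corrected.append(y)
        offset -= 0.2 * (y - target)        # adjust to the input
    return statistics.stdev(raw), statistics.stdev(corrected)

if __name__ == "__main__":
    raw_sd, corr_sd = run()
    print(f"raw spread {raw_sd:.2f} vs corrected spread {corr_sd:.2f}")
```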

    • Magnets, how do they work?

  • I don't see a problem in going for analytics (i.e., gathering, analyzing, reacting), before trying intelligence (i.e., understanding, creating, interacting). I see it as a step along the way, not as a wrong direction.
  • How about we take a human intelligence and replace it bit by bit by a machine?
    We'd learn an awful lot about how the human brain works and eventually have a machine with a humanlike intelligence.

  • Lack of definition (Score:5, Insightful)

    by lorinc ( 2470890 ) on Thursday December 03, 2015 @09:19AM (#51048521) Homepage Journal

    Define "true intelligence". The more computers advance in doing complex things, the more you will see there is no such thing as true intelligence. You are a very big Turing machine, get over it.

  • Actual researchers have known for decades that strong AI was well beyond the future horizon. As in 50 years off or more, barring some kind of unexpected revolution. Often, for research grant proposals or for some quick media exposure, wild claims have been thrown about. But the vast majority have known, and continue to know, this.

    It boggles my mind how we can't solve simple visual CAPTCHAs that a 3-year-old has no problems with, but supposedly self-driving cars are prescient. None are able to spot the diffe
    • I was about to post that genetic algorithms could cross that gap, but I suppose even if, most likely, the computer could simulate a faster generation time than a fly has, the fly also evolved massively in parallel, so maybe the computational cost is still pretty huge.
  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Thursday December 03, 2015 @09:45AM (#51048689) Homepage

    i've mentioned this before, whenever the phrase "artificial" intelligence comes up. the problem is not with "AI", it's with *us humans*. just look at the word "artificial" combined with the word "intelligence". it is SHEER ARROGANCE on our part to declare that intelligence - such an amazing concept - can be *artificial*. as in "not really real". as in "beneath us". as in "objective and thus subject to our manipulation and enslavement". so until we - humans - stop thinking of intelligence as being "beneath us" and "not real", i don't really see how we can ever actually properly recreate it.

    to make the point clearer: all these "tests", it doesn't really matter, because the people doing the assessment have a perspective that doesn't really respect intelligence... so how on earth can they judge that they've actually *detected* intelligence? like the "million monkeys typing shakespeare", the problem is that even if one of the monkeys did actually accidentally type up the complete works of shakespeare, unless there was someone there who was INTELLIGENT ENOUGH to recognise what had happened, the monkey that typed shakespeare's complete works is quite likely to go "oo oo aaah aah", rip up the pages, eat some of them and wipe its backside with the rest, destroying any chance of the successful outcome being *noticed*, even though it actually occurred.

    i much prefer the term "machine consciousness". that's where things get properly interesting. the difference between "intelligence" and "consciousness" is SELF-AWARENESS, and it's the key difference between what people are developing NOW and what we see in sci-fi books and films. programs being developed today are trying to simulate INTELLIGENCE. sci-fi books and films feature CONSCIOUS (self-aware) machines.

    this lack of discernment in the [programming] scientific community between these two concepts, combined with the inherent arrogance implied by the word "Artificial" in the meme "AI" is why there is so little success in actually achieving any breakthroughs.

    but it's actually a lot worse than that. let's say that the scientific community makes a cognitive breakthrough, and starts pursuing the goal of developing "machine consciousness". let's take the previous (million-monkeys) example and step that up, as illustrated with this question:

    How can people who are not sufficiently self-aware - conscious of themselves - be expected to (a) DEFINE such that they can (b) RECOGNISE consciousness, such that (c) they can DEVELOP it in the first place?

    let's take George Bush (junior) as an example. George Bush is known to have completely destroyed his mind with drink and drugs. he has an I.Q. of around 85 (unlike his father, who had an extra "1" in front of that number). yet he was voted into the world's most powerful office, as President of the United States. the concept of the difference between "intelligence" and "consciousness" is explored in Charles Stross's book, "Singularity Sky". George Bush - despite being elected as President - would actually FAIL the consciousness test adopted by the alien race in "Singularity Sky"!

    my point is, therefore, that until we start using the right terms, start developing some humility sufficient to recognise that we could create something GREATER than ourselves, start developing some laws *in advance* to protect machine conscious beings from being tortured, the human race stands very little chance of success in this field.

    in short: we need to become conscious *ourselves* before we stand a chance of being able to move forward.

    • by Muros ( 1167213 ) on Thursday December 03, 2015 @01:52PM (#51050987)

      i've mentioned this before, whenever the phrase "artificial" intelligence comes up. the problem is not with "AI", it's with *us humans*. just look at the word "artificial" combined with the word "intelligence". it is SHEER ARROGANCE on our part to declare that intelligence - such an amazing concept - can be *artificial*. as in "not really real". as in "beneath us". as in "objective and thus subject to our manipulation and enslavement".

      I would have to dispute your definition of artificial as being somehow "not really real". If you use the original meaning, i.e. the product of an artisan, or a crafted object, then it makes complete sense. We are talking about intelligence that is designed and built, rather than having developed naturally. Artifacts are still real things.

  • by TuringTest ( 533084 ) on Thursday December 03, 2015 @09:45AM (#51048695) Journal

    The traits we identify with intelligence in humans (flexible problem-solving, self-consciousness, autonomy based on self-created goals) are all but absent in current Artificial Intelligence techniques, even the ones based on the Connectionist paradigm. Any emergent behaviors appearing in an AI system are ultimately put there by the system builders' fine-tuning of input parameters.

    The approaches that show the most promise are those following the "Augmented Intellect" [wikipedia.org] school of thought (the one that brought us the notebook and the hypertext), where a human is put in the center of the complex data analysis system, as an orchestra director coordinating all the activity.

    There, intelligence systems are seen as tools at the service of the human master, extending their capabilities to handle much more complex situations and overcoming their natural limits, allowing us to solve larger problems.

    By keeping a human in the loop as the ultimate judge of how the system is behaving, any bias inherent in the techniques used to create the AI can be caught and corrected. It's a symbiotic approach where both human and AI system complement the shortcomings of the other half.
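    As a rough sketch of that division of labor (the fake classifier and the scripted "judge" below are invented for illustration; in a real system the judge is an actual person):

```python
# Human-in-the-loop sketch: the machine proposes, a person disposes.
def machine_propose(item):
    # pretend model: flags long descriptions as "important"
    return "important" if len(item) > 20 else "routine"

def human_judge(item, proposal):
    # stand-in for a person reviewing the proposal; scripted here to
    # overrule anything mentioning "unusual"
    return "escalate" if "unusual" in item else proposal

if __name__ == "__main__":
    for item in ["pay invoice #1234", "unusual login from new device"]:
        final = human_judge(item, machine_propose(item))
        print(f"{item!r}: machine {machine_propose(item)!r}, final {final!r}")
```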

  • Of course, despite the hype, current AI research doesn't attempt to reach human-level intelligence. AI research proceeds incrementally, from improved solutions to one practical problem after another. That's not just because there is tons more funding for practical problems, it's because it's less risky for students and researchers to solve actual problems and to take research one step at a time.

    It's not all that different in biomedical research either: much of that is driven again either by practical proble

  • by Marginal Coward ( 3557951 ) on Thursday December 03, 2015 @10:42AM (#51049113)

    Is AI development moving in the wrong direction?

    Why do you ask that, Dave?

  • This article is on a useful track but suffers from a simple confusion. The author argues that intelligence should be defined by understanding and not by behavior, but then proposes that we use successful prediction as the measure of understanding. The confusion is that prediction is also a behavior. I agree with the direction the author is going. Most successful understanding (or intelligence if you like that word) can be connected to the use of a (partially) coherent set of ideas to predict observatio
    • The problem is they don't have a good definition of intelligence. Until you have a definition of intelligence, you won't know whether or not a computer exhibits it. FYI, it has nothing to do with prediction. It's understandable why a computer scientist would think that, but talk to a neuroscientist and they can explain why it is not.
  • His conclusion is that we haven't actually been trying to solve "intelligence" (or at least our concept of intelligence has been wrong), and that with faster computing and larger pools of data the goal has moved toward faster searches rather than true intelligence.

    In the Turing test, one of the easiest ways I've found to unmask computers is their failure to grasp semantic interrelations that are not an is-a or has-a relationship, like, for example, music and dancing: making contradictory statements or not reacting to absurd combinations like going to a rave party to listen to jazz music while dancing a waltz. That's knowledge, though; it wouldn't help me determine the intelligence of an isolated Amazon tribe that doesn't know what rave parties or jazz or a waltz is. But if we want

  • We already have various methods of ascertaining intelligence as expressed by nonhumans. Animals are routinely credited with various levels of intelligence. We do this from a behavioral analysis very much like the Turing test, which I suspect is the basis from which it was taken. I think the main issue with the Chinese room thought experiment is the inclusion of an outside influence over the behavior, the book. Since we cannot manipulate the behaviors of animals in the wild, we can rightly ascribe intellige

  • by iamacat ( 583406 ) on Thursday December 03, 2015 @12:07PM (#51049915)

    We don't need servers or robots to have human intelligence. We already have 7 billion of those, including access to superhuman intelligence in the form of many smart people collaborating with the assistance of technology. Also, humans have been around forever, and we still suck at human rights. Got to square those away first before having to worry about the rights of other intelligent species (and having them respect ours).

    What we need now is computers that are good at tasks that we suck at, like repetitive processing on huge amounts of data.

    About the only exception is space exploration, where humans are not available for real-time remote control due to the speed of light. Still, we don't want a Mars probe to get bored and lonely, or make its own survival the first priority. So cloning our own kind of intelligence, which was shaped by natural selection for self-preservation, is not the best approach.

  • There is no wide agreement on this fundamental question, and without a clear understanding of what "intelligence" is, we cannot make progress toward making a real version of it.

    Seriously - if we knew what intelligence was, then consistent, unambiguous ways of measuring it would exist. We have many "IQ" tests, and there is real experimental evidence that a common factor called "g" underlies intelligence, but the field attempting to study/measure intelligence is fragmented and contentious. If intelligence tes

  • The real problem is that to many academics, “AI” is a dirty word. They feel that everything so far claiming to be AI has been all smoke and mirrors, and that nothing remotely like human intelligence will appear any time soon. A subdiscipline, Machine Learning, has garnered some respect, along with various AI techniques like evolutionary algorithms and some limited kinds of machine inference like Bayesian analysis. However, even machine learning is often done so badly that academics who understand

  • My problem with the author's conclusions is this: that "predictive behavior" can also be imitated by a machine. As I read that part, though, it struck me that neither he, nor anyone I've read, has made a distinction between what "intelligence" is, and how it is separate from self-awareness/consciousness.

    It seems to me that all the AI I've read about, are conflating the two.

    There are plenty of computer-controlled systems that are far more "intelligent" than a rat... but none, so far as we know, self-aware.

    So

  • We do not really know what intelligence (https://en.wikipedia.org/wiki/Intelligence) is. Therefore, we cannot build a machine which has it. We also do not know what self-awareness is (which is considered by some people to be part of intelligence). We just have it. True, many would say: "I know who I am and that I exist, so I am self-aware. And you can add this information to a machine." But that is not the same. Self-awareness goes beyond the information of existing, as information is nothing more than el

  • I recently saw a 6-part documentary on PBS, "The Brain with David Eagleman", that impressed me quite a bit. It covers a lot of ground in its 6 hours about the brain, from basic biological attributes of the physical brain to philosophical questions about reality and questions about the more disturbing aspects of human behavior.

    Included are people who have suffered one kind of mental disability or another. There's a man who had Asperger's Syndrome and seems to have been cured during a scientific study, and

  • " Behavior is a manifestation of intelligence, and nothing more. Imagine lying still in a dark room. You can think, and are therefore intelligent. But you’re not producing any behavior."

    Yes, you can think, but why do you? Because you have a motivation array with no off switch to satisfy. And yes, you are creating behavior to satisfy the human motivation array. It is called thought. What we commonly call intelligence is the combination of the HMI (Human Motivation Array) and its tool "intelligence."
