AI Technology

Opinion: Artificial Intelligence Hits the Barrier of Meaning (nytimes.com) 217

Machine learning algorithms don't yet understand things the way humans do -- with sometimes disastrous consequences. Melanie Mitchell, a professor of Computer Science at Portland State University, writes: As someone who has worked in A.I. for decades, I've witnessed the failure of similar predictions of imminent human-level A.I., and I'm certain these latest forecasts will fall short as well. The challenge of creating humanlike intelligence in machines remains greatly underestimated. Today's A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, "I wonder whether or when A.I. will ever crash the barrier of meaning." To me, this is still the most important question.

The lack of humanlike understanding in machines is underscored by recent cracks that have appeared in the foundations of modern A.I. While today's programs are much more impressive than the systems we had 20 or 30 years ago, a series of research studies have shown that deep-learning systems can be unreliable in decidedly unhumanlike ways. I'll give a few examples. "The bareheaded man needed a hat" is transcribed by my phone's speech-recognition program as "The bear headed man needed a hat." Google Translate renders "I put the pig in the pen" into French as "Je mets le cochon dans le stylo" (mistranslating "pen" in the sense of a writing instrument). Programs that "read" documents and answer questions about them can easily be fooled into giving wrong answers when short, irrelevant snippets of text are appended to the document.

Similarly, programs that recognize faces and objects, lauded as a major triumph of deep learning, can fail dramatically when their input is modified even in modest ways by certain types of lighting, image filtering and other alterations that do not affect humans' recognition abilities in the slightest. One recent study showed that adding small amounts of "noise" to a face image can seriously harm the performance of state-of-the-art face-recognition programs. Another study, humorously called "The Elephant in the Room," showed that inserting a small image of an out-of-place object, such as an elephant, in the corner of a living-room image strangely caused deep-learning vision programs to suddenly misclassify other objects in the image.
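A note on what "small amounts of noise" usually means in this literature: the perturbations are typically computed from a model's own gradients rather than sampled at random. As a rough, generic illustration of that idea (not the specific method of the studies the op-ed cites), here is a minimal FGSM-style sketch in PyTorch; `model`, `image_batch`, and `label_batch` are placeholders for whatever differentiable classifier and data are at hand.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return image + epsilon * sign(d loss / d image): a tiny, targeted nudge."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # For small epsilon the change is imperceptible to a human,
    # yet it often flips the model's predicted class.
    return (image + epsilon * image.grad.sign()).detach()

# Usage sketch (model, image_batch, label_batch are placeholders):
# adv = fgsm_perturb(model, image_batch, label_batch)
# print(model(image_batch).argmax(1), model(adv).argmax(1))
```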


Comments Filter:
  • by Impy the Impiuos Imp ( 442658 ) on Tuesday November 06, 2018 @01:20PM (#57600996) Journal

    I wonder if these AI vision systems that input millions of images are actually doing deep learning, or are just canvassing pretty much every image possibility, such that any possible live image is just a tiny automated delta calculation away from an answer.

    This would explain why tweaking the input in the described ways would throw the AI into a tizzy -- the tweaked input isn't within a tiny delta of any of the millions of categorized images.
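For what it's worth, the parent's "tiny delta from a memorized image" hypothesis is essentially a nearest-neighbour lookup in pixel space. A toy sketch of that reading (illustrative only; this is not how deep networks are actually implemented):

```python
import numpy as np

class NearestDeltaClassifier:
    """Memorize every training image; classify by the smallest pixel-space delta."""

    def fit(self, images, labels):
        # images: (N, D) flattened pixel arrays; labels: (N,)
        self.images = np.asarray(images, dtype=float)
        self.labels = np.asarray(labels)
        return self

    def predict(self, image):
        deltas = np.linalg.norm(self.images - np.asarray(image, dtype=float), axis=1)
        return self.labels[np.argmin(deltas)]

# A slightly shifted or oddly lit image can land far (in pixel space) from every
# stored example, which is exactly the fragility the parent comment describes.
```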

    • It's a lack of thought. Their algorithms recognize; we do that too as a first pass, then we reason about what we are seeing, a messy, unbounded process completely unlike what the perceptron networks AI researchers keep polishing up every few decades actually do.

    • by es330td ( 964170 ) on Tuesday November 06, 2018 @05:14PM (#57602962)
      My youngest son is on the autism spectrum. One trait of some people with autism is that for them there is no such thing as a general case. A room with furniture does not have a "delta" wherein a moved chair is "the previous room with a chair in a different place"; instead, every arrangement of the room is a different room. No number of different arrangements will ever coalesce into being understood as variations on the same base room.
  • by Locke2005 ( 849178 ) on Tuesday November 06, 2018 @01:30PM (#57601072)
    "...programs that recognize faces and objects, lauded as a major triumph of deep learning, can fail dramatically when their input is modified even in modest ways by certain types of lighting, image filtering and other alterations that do not affect humans' recognition abilities in the slightest." Now tell me again what a great idea self-driving cars are!
    • The question (which the writer didn't ask or answer) is how machine learning systems can be improved to be more resistant to such simple modifications.

      • by gweihir ( 88907 )

        I think the statement is more that ML systems use the wrong approach to identifying reality and get very fragile performance as a result.

        • I think the statement is more that ML systems use the wrong approach to identifying reality and get very fragile performance as a result.

          Yes, that's the writer's hunch, but nowhere does she show why we need a different approach rather than an improved version of the current one.

          • by gweihir ( 88907 )

            Well, we do not really have a different approach and we do not really know how to improve the existing one either.

      • Re:Great! (Score:4, Insightful)

        by ganv ( 881057 ) on Tuesday November 06, 2018 @02:03PM (#57601344)
        Yes, that is what learning systems do. They continuously use new data to revise their responses, and most of the failures described in the post can be handled if they are included in the training data set. The great question is whether extracting 'meaning' is in some sense simply a deep learning system that is better trained and able to use additional layers to provide context or whether 'meaning' is some categorically new thing that current approaches to machine learning are fundamentally missing. I suspect that meaning is not something categorically new, but that the complexity of the integration of current input with learned processing in humans is not soon to be replicated. We'll probably create some other kinds of intelligence that can do many more things humans find unimaginable (similar to the way computers currently do computations) while still being unable to do many things that human toddlers do with ease.
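The "include the failures in the training data" fix the parent describes is, mechanically, just data augmentation. A minimal sketch of that step (NumPy only; the function name and noise level are made up for illustration):

```python
import numpy as np

def augment_with_failures(images, labels, failure_images, failure_labels,
                          noise_std=0.05, rng=None):
    """Fold known failure cases (plus noisy copies) back into the training set.

    images, failure_images: (N, ...) and (M, ...) arrays with matching trailing shape;
    labels, failure_labels: (N,) and (M,) arrays.
    Nothing about the learning approach changes; the model simply sees the
    previously confusing inputs during training.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = failure_images + rng.normal(0.0, noise_std, failure_images.shape)
    new_images = np.concatenate([images, failure_images, noisy])
    new_labels = np.concatenate([labels, failure_labels, failure_labels])
    return new_images, new_labels
```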
        • by Kjella ( 173770 )

          The great question is whether extracting 'meaning' is in some sense simply a deep learning system that is better trained and able to use additional layers to provide context or whether 'meaning' is some categorically new thing that current approaches to machine learning are fundamentally missing.

          Well I think it's clearly missing some abstract underlying model. Like if you showed it cats and non-cat statues, could it generate a cat statue? If you show it cats and dead animals, could it plausibly create a dead cat? If you show it cats and paintings, can it make a painting of a cat? Can it even create a black and white cat from color swatches and cats of other colors? Will it think a human in a cat costume is a cat if it's only seen cats and humans in normal clothes? If you've only shown it pictures o

        • What you are saying is that now we need to combine the Cyc style of AI (or Watson) with the deep learning methods in some way or another.
      • Re:Great! (Score:4, Interesting)

        by Layzej ( 1976930 ) on Tuesday November 06, 2018 @03:12PM (#57601986)

        The question (which the writer didn't ask or answer) is how machine learning systems can be improved to be more resistant to such simple modifications.

        https://www.quantamagazine.org... [quantamagazine.org]

        When human beings see something unexpected, we do a double take. It’s a common phrase with real cognitive implications — and it explains why neural networks fail when scenes get weird.

        ...

        Most neural networks lack this ability to go backward. It’s a hard trait to engineer. One advantage of feed-forward networks is that they’re relatively straightforward to train — process an image through these six layers and get an answer. But if neural networks are to have license to do a double take, they’ll need a sophisticated understanding of when to draw on this new capacity (when to look twice) and when to plow ahead in a feed-forward way. Human brains switch between these different processes seamlessly; neural networks will need a new theoretical framework before they can do the same.
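A loose, purely procedural analogy for the "double take": one feed-forward pass, plus a confidence check that triggers a second look at alternative views of the input. This only sketches the missing control flow the excerpt talks about; it is not any published architecture, and `probs_fn` and `transforms` are assumed callables supplied by the reader.

```python
import numpy as np

def classify_with_double_take(probs_fn, image, transforms, threshold=0.8):
    """Feed-forward first; if confidence is low, 'look twice' at other views.

    probs_fn: callable mapping an image array to a probability vector.
    transforms: callables producing alternative views (crops, re-lightings, ...).
    """
    probs = probs_fn(image)
    if probs.max() >= threshold:
        return int(np.argmax(probs))           # confident: plow ahead feed-forward
    views = [probs_fn(t(image)) for t in transforms]
    pooled = np.mean([probs] + views, axis=0)  # crude second look: pool the views
    return int(np.argmax(pooled))
```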

    • Compare:
      "Some programs can fail dramatically when"
      with
      "Some black people can murder people when"

      Now tell me again what a great idea posting your comment was!

  • generalization over situations, and Bayesian statistics?

    I think the issue is that the AIs have not experienced / perceived / taken in data about enough different kinds of situations, and specifically, have not been aimed at the problem of "what if I am an agent with goals in all these different situation types."

    Right now in AI, mostly we are training the "visual cortex" or the "language parsing centre" of the brain.

    The algorithms are not being applied to the general agent problem. The low hanging fruit of c
    • by gweihir ( 88907 )

      If you do not understand that understanding is different, then you do not have understanding. Sorry. Does make you part of the larger crowd though.

  • by Roger W Moore ( 538166 ) on Tuesday November 06, 2018 @01:37PM (#57601130) Journal
    In the example given, all that is needed is better pattern recognition, which is really what we associate with meaning. If you say "pen" in a sentence referring to a pig, sheep etc. then we naturally tend to assume pen = animal enclosure. There is no reason that an AI cannot learn that through better pattern recognition, i.e. more training with better algorithms. The AI can certainly know that 'pen' refers to different possible objects, just like we do, but if you talk about animals then our pattern recognition triggers the "enclosure" meaning and if you are talking about writing then it triggers the "ink-related" meaning.

    Of course, it will need really good training and algorithms to figure out sentences like "I wrote about the pigs using my pen," but there is no reason to assume that there is some barrier to AI doing that. The compsci department round the corner has colleagues working on text and speech recognition, I'm sure this type of thing is something they are dealing with, and I doubt Google Translate is that close to the state of the art.
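The "pen" case is standard word-sense disambiguation, and the pattern-matching version of it really is easy to sketch. A toy Lesk-style disambiguator with hand-made cue lists (real systems learn these associations from large corpora; this is purely illustrative):

```python
# Pick the sense of "pen" whose cue words overlap the sentence the most.
SENSES = {
    "animal enclosure": {"pig", "pigs", "sheep", "goat", "farm", "animal", "animals"},
    "writing instrument": {"write", "wrote", "ink", "paper", "letter", "signed"},
}

def disambiguate_pen(sentence):
    words = set(sentence.lower().replace(".", "").split())
    scores = {sense: len(cues & words) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate_pen("I put the pig in the pen"))         # animal enclosure
print(disambiguate_pen("I signed the letter with my pen"))  # writing instrument
# The hard sentence from the comment: both senses score 1, so a bag-of-cues
# tie-break is not enough -- richer context modelling is needed.
print(disambiguate_pen("I wrote about the pigs using my pen"))
```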
    • by gweihir ( 88907 )

      Aaaaaand, fail. You can only recognize patterns if the number of patterns is small enough to be cataloged. That is not the case here. Of course, you can in theory write a book (so not even an active agent) that has all the responses a specific truly exceptionally intelligent person would give to any question imaginable, but that does not mean this person has an internal dictionary where the answers get looked up. The mechanism is fundamentally different and not understood at all at this time.

    • I know that my human intelligence is certainly faulty. Just the other day, I was driving home. Out of the corner of my eye, I saw a woman walking down the street. Her head was hung low and her collar turned up. From my perspective, it was as if her head was gone altogether. The first thought that occurred to me: "Oh, that poor lady has to make it through life without a head."

      Literally, I thought that. For a fraction of a second I was sure that some unfortunate head amputee was struggling to make it in t
    • Comment removed based on user account deletion
    • Of course we need better pattern recognition, but I don't think we get there through larger nets and better training. I suspect we've reached a level of training and individual net size that is already adequate. It is actually very surprising that the individual nets we've created can compete as well as they can with humans because they are doing it without the feedback of thousands of other nets that the human brain has.

      What is most needed is not better trained specialized nets. We need many nets trained i

    • i.e. more training with better algorithms.

      This is the solution to all cognitive problems, both human and machine.

    • I doubt Google translate is that close to state-of-the-art.

      It's actually new and shiny technology [wikipedia.org], built in collaboration with Stanford. (Personally I think it produces worse results than the older method, but who knows.)

    • by mvdwege ( 243851 )

      Douglas Hofstadter made a good point in his essay 'Waking Up from the Boolean Dream': humans don't seem to do pattern recognition the way AI researchers are trying to program computers to do it. A lot happens in the sub-200-millisecond delay between seeing an image and recognising it, and we don't yet know how any of that works.

      When you see a picture of your Grandma, you go 'Grandma' immediately. It is a stretch to say that your visual cortex manages this by comparing a picture of Grandma to pictures of houses, tigers a

  • But the tech-fanatics want their flying cars...

    I also should add that there is no indicator at all that machines will ever get there.

    • The asphalt lobbyists don't want us to have flying cars. "Roads? Where we're going, we don't need roads."

      But more seriously, there is a real desire for fast travel that isn't limited by long waits like in a train schedule or unpredictable travel time like in heavy traffic. If you take the whole "flying car" thing from mid-20th-century Popular Science magazines overly literally, we're probably very far away from that. But we are moving towards technology that addresses similar demands for convenience and will

      • by gweihir ( 88907 )

        I have absolutely no issue with that. But it is not the same thing. It is problem-driven. Much of the AI hype is fantasy-driven, about the new slaves we are all going to get or, alternatively, the overlords that will kill us. And that is nonsense.

    • by jythie ( 914043 )
      As the saying goes, AI, like fusion, has been 10 years away for 30 years now... and that saying was from 30 years ago.

      ML/DL/etc. got a lot of people really hopeful since they were SO much easier and you could throw hardware at them, plus they produced great marketing and search results, but in many ways we are pretty much where we were in the 70s or 80s in terms of actual AI development when it comes to actual intelligence.
  • I believe at least two things will have to happen. First, the bot will have to generate candidate models of reality and evaluate them against the input for the most viable fit. These models may be physical in some cases, such as a 3D reconstruction of a face or room; conceptual in others, such as social relationship diagrams; and logic/deduction models, perhaps using CYC-like rule bases.

    Second, these models and the rules that generated them will need to be comprehensible by 4-year-degree analysts so enoug

  • Comment removed based on user account deletion
    • by 3seas ( 184403 )

      see http://abstractionphysics.net/ [abstractionphysics.net]
      and you are right the tech industry does not like it because it requires the third primary user interface to be given to the users, not withheld.
      How to become wealthy: make people need you. In the tech industry that is done by leaving the end user, in analogy, with only two of the primary colors needed to paint a rainbow.

  • Really, AI systems are remarkably stupid. A simple example: tell Google Assistant, or Alexa, NOT, under any circumstances, to give you the weather forecast. They both give you the weather forecast. Their understanding is so incredibly limited that it makes me wonder how much progress there has been, in this respect, within the last half century. What is regrettable (and this article is a breath of fresh air) is that too many in the AI community seem to have forgotten the lessons of history, and are repeatin
    • The only reason for the previous AI winter was the fact that the AI at that time could not be monetized. We are way beyond that now. AI is making profit, and therefore there is continued effort to improve it and make even more money.

      • The only reason for the previous AI winter was the fact that the AI at that time could not be monetized. We are way beyond that now. AI is making profit, and therefore there is continued effort to market it and make even more money.

        FTFY. "marketability" and "improvement" are not necessarily synonymous.

      • People have been desperately trying to build expert systems and AI systems to make money since forever. AI isn't making a profit today. Do you think people buy iPhones because of "AI"? Do you think IBM is making money off of their "AI systems"? Nope. AI is just the current hype, and eventually the tech world will move on to some other thing.
    • There hasn't been any progress. In fact, Alexa et al are not AI at all. They are just voice recognition systems hooked up to a database. A complete scam, but that is what passes for technology.
  • encode a sense, a feeling, a concept? Understanding? Perception? Consciousness?
    Seems to me everything today in what is called AI/Machine Learning is little more than (to simplify) a huge case statement/if-elseif/search engine feeding back possible answers, where the answers themselves must be evaluated, getting back results that in turn need to be evaluated.
    Until you are able to encode conceptualization, feeling, understanding and sense of in relation to hard and soft data you may very well just end up in
    • I agree with you that a major shortcoming is the question of how to encode consciousness and understanding. You rightly point out that AI has continued as a big case statement construct that merely keeps getting larger as "machine learning" sucks up more data items. Yet the human brain has a capability to simply associate data items almost instantaneously at times, something a case construct cannot do. Consider a single case of associative memory: a few notes to a song that immediately evoke the memory of a
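The "few notes evoke the whole song" behaviour the parent describes is content-addressable (associative) memory, and even a tiny Hopfield-style network exhibits it. A minimal sketch on toy binary patterns (an illustration of the recall dynamics, not a claim about how brains do it):

```python
import numpy as np

def train(patterns):
    """Hebbian outer-product rule over +/-1 patterns of shape (P, N)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, steps=10):
    state = probe.astype(float).copy()
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1.0   # break ties toward +1
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))   # three stored "memories"
w = train(patterns)

probe = patterns[0].copy()
probe[:20] = rng.choice([-1.0, 1.0], size=20)      # corrupt part of memory 0
print(np.mean(recall(w, probe) == patterns[0]))    # typically close to 1.0
```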
  • Well, yeah. Throwing stuff at the wall and hoping something works, which is pretty much the core of deep learning/machine learning/etc., is going to have limitations. The main reason the technologies have gotten so popular is that hardware has gotten so much more powerful, and thus you can just keep throwing hardware at problems and getting better results out of it without actually developing any understanding of what is happening. These techniques are great for producing answers that don't actually mat
  • Modern AI isn't that much different from the AI I learned in school 25 years ago. There are two things that enable AI to be much more useful now, and often to seem more powerful than it is:
    1) Processing power
    2) Dataset size

    Both of those are multiple orders of magnitude greater today than 25 years ago, and that is what enables the kind of "flashy" AI that people get to interact with directly. Things like Siri, and photo albums on our phone that can automatically tag images with search terms (li

  • ...the long-running ethics violation of the tech/software industry continues. see: http://3seas.org/EAD-RFI-respo... [3seas.org]
    It should be obvious, and in time it will be. What will be thought of the tech/software industry then?

  • "Recognize speech"

    and

    "Wreck a nice beach"

    can trip up speech-to-text engines.

  • by BlueCoder ( 223005 ) on Tuesday November 06, 2018 @02:34PM (#57601640)

    This is all old school and nothing new. Computers advanced to the point where people realized they could practically use them. Neural networks are what brains use. Biological brains, though, have networks of networks. Neural networks are like Fourier transforms: they identify a signal from noise. They work on correlations, though, and on fixed sets of data. They are literally educated guessing machines.

    A real brain has neural networks that work together in sets. And on top of that there is a genetic cheat sheet for the neural nets: how big they are and how they should feed back into each other. There are even neural nets active in youth that function as trainers or biases to bootstrap brains. An insect has more intelligence than modern implementations. Modern systems are more akin to the pre- and post-processing that occurs locally in the optic nerve and spinal cord.

    The big snake in the grass is the term Intelligence. It is a fuzzy concept in itself that depends on context.
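In the "educated guessing machine" spirit of the parent comment, here is a minimal single-layer network (a logistic unit) that learns to pull the one correlated feature out of noise. The toy data and parameters are invented for illustration; nothing here is meant to be brain-like.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                  # five noisy input features
y = (X[:, 2] + 0.3 * rng.normal(size=500) > 0).astype(float)  # only feature 2 matters

w, b, lr = np.zeros(5), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # sigmoid "guess"
    w -= lr * X.T @ (p - y) / len(y)           # gradient of the cross-entropy loss
    b -= lr * np.mean(p - y)

print(np.round(w, 2))  # the weight on feature 2 should dominate; the rest stay small
```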

  • AI researchers finally admit that human intelligence cannot be duplicated by machines?

    Currently there's a fundamental assumption that awareness of self (and thus intelligence) is the result of the right mix of brain chemistry and electrical impulses, and therefore a silicon-based machine can be just as good as a carbon-based meat machine. But what if this assumption is.... wrong?

    Now don't start ranting at me about the nonexistence of Jeebus and Yaweh, yeah I get it, you hate them. But many (probably majorit

  • read the definition before you answer:
    https://www.google.com/search?... [google.com]
    gain or acquire knowledge of or skill in (something) by study, experience, or being taught.
    "they'd started learning French"

    A system that 'only' categorizes, sorts, and manipulates data does not actually relate to it as representational of the real world; in other words, it still has no 'knowledge' of the objects. They don't ACTUALLY learn; they are trained. They no more learn a topic than a parrot learns to talk.

    Not to say they aren't

  • by TomGreenhaw ( 929233 ) on Tuesday November 06, 2018 @02:52PM (#57601782)
    The Turing test has led us down a rocky road and we have a very long way to go. Artificial human-like intelligence IMHO is still a long way away. Most people make shoot-from-the-hip assumptions about how the brain works and, after doing some basic math about Moore's law, assume superintelligence is right around the corner.

    The brain is way more complicated than we know.

    For example: there are two stable isotopes of lithium. Chemically they are identical, but they do not have the same effect on the brain. One is useful as a drug to treat mental illness and the other is not. This means there is something more subtle about how our brain works than interconnections and electrochemistry.

    It is however a worthy challenge because the journey will teach us much about who we really are and how we work.
    • For example: there are two stable isotopes of lithium. Chemically they are identical,

      No, they are not. For example, one of the methods of separating them is the COLEX process https://en.wikipedia.org/wiki/... [wikipedia.org] which exploits their different chemical properties.

      • A bit of research yields some additional information. Lithium (and to some extent all light elements) exhibits a kinetic isotope effect (KIE). The weight differences are enough to have some effect on the isotopes' chemical behavior. I read that their static chemical behavior at equilibrium is the same, but that it differs in a dynamic system.

        It's fascinating that something this subtle can have such a profound effect on the brain. Brain chemistry is very complex.
    • For example: there are two stable isotopes of lithium. Chemically they are identical, but they do not have the same effect on the brain.

      That's a pretty radical assertion; I would sure like to see a reference. I'm googling lithium isotopes and mental illness, but so far nothing.

  • Seems we finally have real world verification for Searle's Chinese Room [wikipedia.org] situation. Thank you researchers for finally proving a conjecture from thirty years ago that you continually and blindly ignored. Some of you even argued against it. And now look at the egg on your face.

    Ha!

    • Searle's Chinese Room is one of the stupidest ideas ever proposed.

      That said, you're not even applying it correctly. The premise of the Chinese Room is that the room produces behavior indistinguishable from a real Chinese speaker. Any time you can point to a failure of an AI system, it is clearly violating that premise.

  • Isn't it absurd to add 'Unartificial' to real intelligence systems?

    How do they work? In real life, in all species, there is an element of inherited knowledge. In humans, this is minimal and we must learn from experience and from our mentors. Generally speaking we learn, as all animals do, by experimenting. What doesn't kill us makes us smarter.

    We, all of us from microbes to humans, learn by exploring our world without prejudice, in hopes of finding something beneficial to our survival and welfare.

  • People make the same mistakes. Language is complicated, evolving, and misused constantly. If you told me to type out that sentence, I might assume the guy had a bear for a head also.

  • I'd like to see a concise definition of what it means for a machine to "understand" something. It's easy to give examples of machines not "understanding" something, but if a machine suddenly dealt with all those examples correctly, could we then say that it "understands" those situations? Or would we find more examples that it gets wrong and say it still doesn't understand?

    People are not perfect at interpreting images, either; it's fairly easy to construct an image that a person gets wrong, for instance u

"Confound these ancestors.... They've stolen our best ideas!" - Ben Jonson

Working...