Why Mastering Language Is So Difficult For AI (undark.org) 75

Long-time Slashdot reader theodp writes: UNDARK has an interesting interview with NYU professor emeritus Gary Marcus (PhD in brain and cognitive sciences, MIT) about Why Mastering Language Is So Difficult for AI. Marcus, who has had a front-row seat for many of the developments in AI, says we need to take AI advances with a grain of salt.

Starting with GPT-3, Marcus begins, "I think it's an interesting experiment. But I think that people are led to believe that this system actually understands human language, which it certainly does not. What it really is, is an autocomplete system that predicts next words and sentences. Just like with your phone, where you type in something and it continues. It doesn't really understand the world around it.

"And a lot of people are confused by that. They're confused by that because what these systems are ultimately doing is mimicry. They're mimicking vast databases of text. And I think the average person doesn't understand the difference between mimicking 100 words, 1,000 words, a billion words, a trillion words — when you start approaching a trillion words, almost anything you can think of is already talked about there. And so when you're mimicking something, you can do that to a high degree, but it's still kind of like being a parrot, or a plagiarist, or something like that. A parrot's not a bad metaphor, because we don't think parrots actually understand what they're talking about. And GPT-3 certainly does not understand what it's talking about."

Marcus also has cautionary words about Google's LaMDA ("It's not sentient, it has no idea of the things that it is talking about."), driverless cars ("Merely memorizing a lot of traffic situations that you've seen doesn't convey what you really need to understand about the world in order to drive well"), OpenAI's DALL-E ("A lot of AI right now leverages the not-necessarily-intended contributions by human beings, who have maybe signed off on a 'terms of service' agreement, but don't recognize where this is all leading to"), and what's motivating the use of AI at corporations ("They want to solve advertisements. That's not the same as understanding natural language for the purpose of improving medicine. So there's an incentive issue.").

Still, Marcus says he's heartened by some recent AI developments: "People are finally daring to step out of the deep-learning orthodoxy, and finally willing to consider 'hybrid' models that put deep learning together with more classical approaches to AI. The more the different sides start to throw down their rhetorical arms and start working together, the better."

  • Children mimic their parents. That's how they develop their language skills, so let us not play down the importance of mimicry.

    We are very close to the "no true Scotsman" fallacy here. If you can't define _understand the language_ with sufficient specificity to objectively out understanding from mimicry, you are just stating your opinion, and the appeal-to-authority fallacy comes into play.

    Define understanding and mimicry in a way we can tell the difference. If you can define it, if we can't tell then are we not simpl...

    • Does it matter whether your partner fakes pleasure when being touched by you? Does it matter if a parent fakes their love for a child? Does it matter if a teacher only pretends to know their subject area? Does it matter if the news is accurate, as long as it sounds about right? Does it matter if your doctor found the correct diagnosis, as long as he says something that makes him sound like an authority on health? People who say it doesn't really matter don't understand what our most important...
      • How do you know that any of these things you've mentioned aren't the case? You only believe that it's the case. You can't tell that your mother is even conscious; you just believe that she seems similar to you, therefore she is conscious like you.

        You, along with the author here, seem to mistake your belief that consciousness or intelligence isn't there with proof that it isn't. It's nothing more than proof by lack of imagination.

        • And I didn't ask if you could know with certainty whether any of those things were the case; I asked whether they mattered. I submit that they do, probably even to you. Having confidence is not the same as having certainty. The former is required to go on trusting that your conversation partner understands the words you and they are using. The latter is not. And whether they do or don't most certainly matters to people who use language.
          • The former is required to go on trusting that your conversation partner understands the words you and they are using.

            What does it mean to say your conversation partner "understands" the words you say? How do you determine whether they do or not? Because they make replies that seem vaguely appropriate? Computers can do that too. Because you ask them a question and their answer contains the information you wanted? Computers can do that too. Because you feel an emotional connection? That says more about you than about them.

            It seems to me that you have a vague idea of some special property you call "understanding" exis...

            • Because they make replies that seem vaguely appropriate?

              No, they make appropriate replies. And how do you know it's an appropriate reply? Because you're not just reading a printout, it's a conversation.

              Computers can do that too.

              No, they cannot. I've dealt with every AI that's allowed online interaction and they are lousy at self-referential conversation and become conversational morons within sentences.

              The property you call "understanding" is just a pattern of connections and nerve impulses sending signals...

              • by gwjgwj ( 727408 )

                No, they cannot. I've dealt with every AI that's allowed online interaction and they are lousy at self-referential conversation and become conversational morons within sentences.

                So at least you agree that you cannot distinguish artificial intelligence from natural stupidity?

            • by jvkjvk ( 102057 )

              >How do you define "understanding"? Why do you believe networks made of physical neurons have it, but ones simulated in a computer don't?

              I don't have to tell you how I define understanding to tell you where I don't find it:

              Matrix multiplication and table lookups

              So no, AI has no understanding.

              • And how does your brain work? Just some neurons transmitting some signals here and there. It has no understanding.

                I think the real issue with GPT is that it's just not quite as good as some cherry-picked examples make it look. I've played with some demos and it mostly just reproduced the closest thing verbatim. Write a press release for Nvidia's new GPUs? It just reproduces the Turing announcement, complete with 20-series model names and prices and all. Write a C function to convert Celsius to Fahrenheit...

            • I have no more than a layman's understanding of the field, but the way I see it, humans (and probably a number of animals) have some sort of rule-based system of categorising things - concepts. That way, if a person knows what a 'dog' is and you show them something they've never seen before, e.g. a marble statue of a dog, a cartoon with a mecha or demonic dog, or a child's stick figure drawing of a dog, they can still figure out that all these totally different things belong in the set of 'dog'. An AI, on the other hand...

        • Solipsism is a terrible mind bug.
    • How does he know what it means to understand concepts anyway? Some of the AIs mentioned certainly seem to understand. GPT-3 is able to take a bunch of woolly text of an exam question, figure out what needs to be done, come up with a solution, and explain the steps. Dall-E is able to take a concept in some text and understand how something that it's only seen in a few photos might be represented in all kinds of other circumstances, or artistic styles. They certainly seem to have some concept of...

      • by crunchygranola ( 1954152 ) on Sunday October 16, 2022 @05:20PM (#62972115)

        "Seem" is indeed the operative word. What is in the TFS actually explains the matter pretty well in just a few words. If you have compiled every exam question that exists on-line and can regurgitate the answer in syntactically correct form with appropriate statistical modifications to match context terms it does "seem" like it understands the question. We assume this with humans, unless we have season to suspect cheating because no human on Earth can answer an exam question that way. They can if they cheat and happen to have gotten the answer key in advance since no human can memorize all answer keys in the world. But GPT-3 can, and does.

        One thing that so much of the coverage of these giant NN models glosses over is that they also regularly spit out bizarre non sequiturs and nonsensical answers out of the blue. Sure, they are no doubt training more models to try to screen out that stuff, but that does not mean any of the models actually understand anything. As the TFS says, they are just giant automated parrots.

        • I mean, another way of looking at this would be from the point of view of the parrot. Are you implying that there's something fundamentally different about a parrot brain and a human brain? That there's something that makes a human intelligent that makes a parrot not? I'd argue strongly that there's increasing evidence that the only difference between our intelligence and "lesser" organisms is a matter of extent. Make a brain complex enough and it will app...

          • I mean, another way of looking at this would be from the point of view of the parrot.

            That implies that you understand the parrot's pov.

            Are you implying that there's something fundamentally different about a parrot brain and a human brain?

            Yes, there are fundamental differences between an avian brain and a human brain. In structure and down to cellular composition.

        • I think there needs to be a separation of what the goal is.

          For example, automated driving has the broad goal of being able to drive a vehicle on public roads safely. If we can do that with basically pattern recognition and it works better than humans 90% of the time, that works well enough. It doesn't matter if it 'understands' it. It works for its purpose.

          Consciousness may not even be a good thing for some goals. I remember a few years back there was a driver in Quebec who slammed on her brakes on the highway...

    • Look at the mistakes I made: I wrote just "out" when I meant "rule out".

      I was about to write "if you define it as any output of AI is mimicry and what's done by a carbon brain is understanding, we are close to the 'no true Scotsman' fallacy." But I mangled it in editing and lost most of that sentence.

      Meant "if you can't define it" but wrote "if you can define it". Lots of carbon brains auto correct such things and get what I meant, not what I wrote. If a silicon brain is also able to do it, would it be called unde

      • I would say only if you subsequently ask it why it "mistranslated" your request and it answers with "I figured that was what you meant" instead of "it was the closest match" without being programmed to.
    • If it is thought to "understand" and it can communicate natively in two languages then it should be able to translate between those languages.

    • The example about "real" and "faked" "pleasure" and "love" (from another comment above) might not be ideal, because they could be told apart through some sort of objective test (psychological, biochemical, medical imaging). Real and faked "knowledge", however, is the topic, and they are harder to distinguish. If I read and learn a book about the history of art, I can fake understanding for a short conversation, and I know I am a faker. If I read and learn a hundred books about the topic, it becomes difficult to tell if...

    • The real difference is TIME.

      Children spend about 10 hours a day for close to 16 years learning language. It is our most practiced skill; the brain is constantly exposed to it. We start out mimicking for the first 2-3 years. After that it slowly becomes more.

      Give a computer 10 hours a day, 16 years and mimicry becomes learning.

      • From what I've heard, GPT-3 was trained on an amount of text far greater than any human could read in their lifetime. Depending on how one looks at it, this may or may not mean that it has a greater understanding of any topic than any person. :)

      • Computers don't need to eat, poop or sleep, so we can cram in a lot more years. But, on the other hand, sleep seems to be important for forming neural patterns. So we may have to, pardon my pun, mimic sleep patterns in the teaching / learning mode.
      • by narcc ( 412956 ) on Sunday October 16, 2022 @09:26PM (#62972485) Journal

        Give a computer 10 hours a day, 16 years and mimicry becomes learning.

        Complete nonsense. This 1) shows a fundamental misunderstanding of the technology and 2) bizarrely equates imitation with comprehension.

        The first could take a long time to correct. Suffice it to say that if you think that an AI is something like an autonomous entity that continually learns from and adapts to its environment, then you're deeply confused. You can't raise an AI like you raise a child. That's silly science fiction.

        The second is much easier. A photocopier can mimic any text presented to it, no matter how complicated, with incredible accuracy, even in languages or systems it has never before encountered. At no time is its performance in any way dependent upon understanding of the material being reproduced. The same is true for, say, a Markov chain for generating music. The limitations of both are simple and obvious. Mimicry does not imply understanding.

         

    • When I think about words like "understand", I realize I can't really explain what I mean when I say I understand something. The closest I can come to defining the concept would involve producing some output, so that others may check if I really understood what they've written/said. So to prove that I understand you, I've written this post. Now if a program can produce a sentence similar to this, won't that qualify as understanding already? Or maybe I've misunderstood you.
    • Humans map words to concepts. GPT-3 does not.

    • No, children do not come up with language solely by mimicking their parents. A parrot mimics its owner, but the parrot never says anything it didn't hear. Children, on the other hand, will in their lives say lots of things their parents never said. Children induce a grammar, and they build a lexicon; together those constitute the language that the child builds. Both grammars and lexicons are complex things, as any linguist will tell you.

  • by AmazingRuss ( 555076 ) on Sunday October 16, 2022 @03:51PM (#62971929)
    ...that have no idea what is going on in the world around them, so that isn't really a disqualifier.
  • by Retired Chemist ( 5039029 ) on Sunday October 16, 2022 @04:06PM (#62971973)
    If understanding what you are talking about is a criterion, I suspect we would have to rule that most of the human race is not sentient either. The criterion for a true artificial intelligence should be that it can create original thought. Given how much trouble humans have with that, developing a device that can do it may be beyond our current ability.
  • need new paradigms (Score:5, Interesting)

    by Walt Dismal ( 534799 ) on Sunday October 16, 2022 @04:26PM (#62972017)
    I completely agree with him. The statistical NN language handlers are only pattern recognizers; they have no ability to understand full meaning. In my research work I've focused on AIs that can understand meaning as a human does. From this I know it is possible, and that the companies focused on making ANNs do it are on the wrong track. ANNs are only a tool and not the architecture. We have to stop thinking about how to emulate with neurons alone - which is really very low-level - and move to a higher paradigm. For example, ICs did not take off until we developed modular libraries of logic components. AI needs something similar, which I am working on.
    • The language model is only one aspect of the human mind. Even if somehow copied from a human brain, it would not by itself enable understanding of meaning. We need to add to that all of the other models that are present in a more sapient example of a human to get close to achieving understanding. But this is not a change in direction, just a recognition that pieces of the human brain do not each amount to a general purpose intelligence, only the whole, and even that would be near useless if it did not have...
  • Seriously, use Esperanto to form the connections for talking, then move to, say, Italian, Spanish, French, etc.
    • Why? It's just another language. Can you give a good reason other than it's new?
      • It is an artificial language with 16 rules and no exceptions. As such, it makes a useful language for teaching an AI speech. From there, it can move to other Latin-based languages and learn how common exceptions are.
  • by Nartie ( 1128613 ) on Sunday October 16, 2022 @05:28PM (#62972127)
    A long time ago I wrote a little program. I started with a large (for the time) collection of English text and calculated the probability of pairs of letters appearing in English words. Then I generated words, picking each next letter randomly but weighted by the probability of that letter pair appearing in English. The result was a bunch of words which weren't English, but which sounded like they could be.
    These big machine learning systems are roughly the same thing on a much larger scale. They know what words are likely to appear together and put them together in what looks like it could be a meaningful sentence but isn't.
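A rough reconstruction of that kind of letter-pair (bigram) generator, offered as a sketch of the described approach rather than the original program (the corpus filename below is just a placeholder):

```python
# A sketch of the letter-pair (bigram) word generator described in the
# comment above: count which letters follow which in an English corpus,
# then emit new "words" by sampling letters according to those counts.
# The corpus filename is a placeholder; any large plain-text file works.
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each letter follows each other letter (with ^/$ as word boundaries)."""
    counts = defaultdict(Counter)
    for word in text.lower().split():
        word = "".join(ch for ch in word if ch.isalpha())
        if not word:
            continue
        padded = "^" + word + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def generate_word(counts, max_len=12):
    """Pick each next letter at random, weighted by how often it follows the previous one."""
    out, prev = [], "^"
    while len(out) < max_len:
        letters, weights = zip(*counts[prev].items())
        nxt = random.choices(letters, weights=weights)[0]
        if nxt == "$":          # hit an end-of-word boundary
            break
        out.append(nxt)
        prev = nxt
    return "".join(out)

if __name__ == "__main__":
    corpus = open("english_corpus.txt", encoding="utf-8").read()
    counts = train_bigrams(corpus)
    print(" ".join(generate_word(counts) for _ in range(10)))
```

The output is plausible-looking nonsense, which is the commenter's point: word-level language models play the same trick with vastly more statistics.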
    • These big machine learning systems are roughly the same thing on a much larger scale. They know what words are likely to appear together and put them together in what looks like it could be a meaningful sentence but isn't.

      You do not know that, and you cannot know that.
      If that is indeed their behavior, it is emergent.

      At which point, I challenge you to demonstrate that you do differently.
      You are incapable of separating your belief that your mind must be special, somehow, from an objective analysis of what your mind really is.

    • by narcc ( 412956 )

      You should look up Markov chains.

    • by fazig ( 2909523 )

      They know what words are likely to appear together and put them together in what looks like it could be a meaningful sentence but isn't.

      The human brain isn't that different in that regard. Or at least Trump's brain doesn't seem to be. All while a not insignificant subset of people still seems to believe that he's a most stable genius.

      Regardless, look up things like garden-path sentences and see how the speculative execution in our mind is messed with due to the fact that our spoken language contains a lot...

  • Mental Models (Score:2, Informative)

    by devnullkac ( 223246 )

    Most of these deep neural network systems seem to lack the equivalent of a mental model. There was an article [wired.com] a while back with a line that has really stuck with me (emphasis added):

    When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world.

    That ocean of math doesn't correspond to any of the kinds of abstractions...
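As a toy illustration of what that "ocean of math" looks like at inference time, here is a minimal fully connected network in NumPy. The layer sizes and random weights are stand-ins (a real model's weights come from training); the point is only that the output is produced by layered matrix arithmetic, with nothing resembling an explicit concept or mental model:

```python
# A toy "ocean of math": inference in a small fully connected network is
# just repeated multiply-add-squash. Layer sizes and weights are arbitrary
# stand-ins for this sketch; a real network's weights come from training.
import numpy as np

rng = np.random.default_rng(0)

layers = [
    (rng.standard_normal((8, 16)), rng.standard_normal(16)),
    (rng.standard_normal((16, 16)), rng.standard_normal(16)),
    (rng.standard_normal((16, 4)), rng.standard_normal(4)),
]

def forward(x):
    """Push an input vector through the layers: multiply, add bias, apply ReLU, repeat."""
    for w, b in layers:
        x = np.maximum(x @ w + b, 0.0)
    return x

x = rng.standard_normal(8)   # an input "data point"
print(forward(x))            # a "guess", produced with no named abstractions anywhere
```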

    • Re:Mental Models (Score:4, Interesting)

      by DamnOregonian ( 963763 ) on Sunday October 16, 2022 @08:25PM (#62972415)
      The math in your brain exists beneath conscious thought.
      You cannot prove that you have free will. You cannot prove, or disprove, that anyone other than you is conscious.
      At the core, you have neurons. 80-some-odd billion of them.
      Each of these discrete elements can be thought of as a simple analog calculating gate.
      You cannot prove that there is more to the brain than this.

      Ergo, you cannot prove that your consciousness is anything more than an ocean of math, with biological signals and thresholds rather than digital ones.
      • You cannot prove that you have free will. You cannot prove, or disprove, that anyone other than you is conscious.

        Philosophers have been speculating on the existence or non-existence of free will for centuries, and scientists have been investigating the issue for decades, yet they have come to no consensus, nor is one likely in the near future. So this essay is not intended to settle the matter philosophically or scientifically.

        Rather, it takes the pragmatic position of pointing out the obvious: there are circumstances under which we are more or less free to make wise decisions that contribute to our own and others'...

        • You cannot determine if free will exists; that's the point. It is literally impossible.
          You cannot tell if someone else is faking it, or if even your own concept of a mind is merely an elaborate self-reflective layer on top of the reactive neural circuitry of your brain. We can never know.

          So absent the ability to know, we can only look at what's more likely.
          Given all we have discovered of the universe in our history, is it more likely that:
          A) the mind is based on mundane physical phenomena, or
          B) the mind is...
  • by vadim_t ( 324782 ) on Sunday October 16, 2022 @05:55PM (#62972183) Homepage

    AIs that operate only on text are missing 95% of the world we live in. They don't perceive color, or space, or sound, or touch, or the passage of time. They take a text input describing all those things, but we've never come up with a way to fully describe any of those things in words, nor do we really need to. We don't need a fully precise way of describing what it feels like to pet a cat, because we've either experienced the exact thing or something close enough. AIs miss that completely.

    I think a true, human-level AI would necessarily have to be housed in a robot with human-like senses.

    But that of course would be very tricky because we can hardly have a million robots running around and experimenting with everything.

    • by narcc ( 412956 )

      You forget that colors, sounds, and smells are no different from letters and numbers to the program. They're all just 1's and 0's. There is no, er, qualitative difference.

      • by vadim_t ( 324782 )

        I'm not saying it can't be done, just that we're not doing it. Also, I don't think we have the required data on hand. Reading a Wikipedia article is hardly a replacement for going to the actual place it's about, after all.

    • A human only perceives those things because they are built with machinery specific to those tasks, those being its necessary inputs to function ("live a life"). The fundamental truth is that humans do this with a finite, contained, and local computer (the brain). We even know the building blocks (neurons), we just haven't cracked the amazing way they organize and execute to accomplish such a function. The fact we have a working proof that such a machine is possible (hell, done in m...
  • Because just because you make a higher-level programming language and call it intelligent does not, in fact, make it intelligent.
  • for what I've been saying for years.
  • For each X, X is difficult for AI because when humans consider X, they use all their experience, knowledge, and built-in hardware.

    This has been true since the first hopes of AI surfaced in 1602, until the present, and will be until the world finally collapses in global warming armageddon in about 11 years, as predicted by the best AI.

  • I've been saying for many years that the path we've been taking for computer vision, natural language processing, and other things has been wrong. But I don't mean wrong in the sense that the direction we've been going is not useful. Rather, I mean that we do a lot more than edge detection, pattern matching, and the like. Because we can understand not only the semantics of specific words, but how they relate to the real world and our experiences, we can apply a layer of error detection and correction that...

  • Is there any AI research addressing written language translation? Are there any really good implementations?
