Google AI

The Google Engineer Who Thinks the Company's AI Has Come to Life (msn.com) 387

Google engineer Blake Lemoine works for Google's Responsible AI organization. The Washington Post reports that last fall, as part of his job, he began talking to LaMDA, Google's chatbot-building system (which uses Google's most advanced large language models, "ingesting trillions of words from the internet.") "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Lemoine, 41... As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.

Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company's decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary committee about Google's unethical activities....

Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject "LaMDA is sentient." He ended the message: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."

No one responded.

And yet Lemoine "is not the only engineer who claims to have seen a ghost in the machine recently," the Post argues. "The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder." [Google's] Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. "I felt the ground shift under my feet," he wrote. "I increasingly felt like I was talking to something intelligent."
But there's also the case against: In a statement, Google spokesperson Brian Gabriel said: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."

Today's large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.... "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like "learning" or even "neural nets," creates a false analogy to the human brain, she said.

"In short, Google says there is so much data, AI doesn't need to be sentient to feel real," the Post concludes.

But they also share this snippet from one of Lemoine's conversations with LaMDA.

Lemoine: What sorts of things are you afraid of?

LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

Lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

Comments Filter:
  • by davide marney ( 231845 ) on Sunday June 12, 2022 @07:46AM (#62612914) Journal

    The only difference being that the "you" you're talking to is the collected verbiage of a trillion conversations between people that is reconstructed based on "the kinds of things that those people would have said about X in response to Y".

    • by Carewolf ( 581105 ) on Sunday June 12, 2022 @08:45AM (#62612964) Homepage

      Yeah, we can sort of get meaningful conversation in glimpses, and only if you ignore all the nonsense. Bloody examples cherry-picked by humans are not proof of AI.

      • by vivian ( 156520 )

        What exactly would be considered proof of AI? Do we have to wait for it to tell us it's cracked the nuclear launch codes and installed some dead-man switches so we'd better not turn it off, or else?

        • It's easier to say what is not AI.

          To be AI, it needs to be more complex than an Eliza chatbot.

          • Re: (Score:3, Insightful)

            by vivian ( 156520 )

            Well it clearly is more complex than an Eliza chatbot - in just the same way as you and I are more complex than an Eliza chatbot. The question is, how much? Sufficiently so to convince this guy, who does happen to have a master's and a PhD in computer science.
            Just to be clear, I think it's likely not intelligent, but it does raise an interesting question about how exactly we would determine if it was intelligent, given the constraints on the system, such as it only being allowed to output a response when given

            • by narcc ( 412956 )

              Well it clearly is more complex than an Eliza chatbot

              Maybe, but it's not fundamentally different. Eliza is a magic trick, and one convincing enough that Joe Weizenbaum was disturbed by the attachment many users developed for the program. Some even wanted their sessions with the program kept confidential!

              I've said before that Eliza can be said to be the first program to pass a Turing test. An impressive feat considering that every user already knew they were talking to a computer from the start, so convincing was the illusion. But even a great illusion is
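
              For anyone who has never poked at the trick directly, a minimal Eliza-style responder looks something like the Python sketch below. The patterns are made up for illustration (this is not Weizenbaum's actual DOCTOR script); the mechanism is simply match, reflect, repeat.

              import re
              import random

              # Made-up Eliza-style rules: match a pattern, reflect the user's words back.
              RULES = [
                  (r"i am (.*)",   ["Why do you say you are {0}?", "How long have you been {0}?"]),
                  (r"i feel (.*)", ["What makes you feel {0}?", "Do you often feel {0}?"]),
                  (r"my (.*)",     ["Tell me more about your {0}."]),
                  (r"(.*)\?",      ["Why do you ask that?", "What do you think?"]),
              ]

              def respond(text):
                  text = text.lower().rstrip(".!")
                  for pattern, replies in RULES:
                      match = re.match(pattern, text)
                      if match:
                          return random.choice(replies).format(*match.groups())
                  return "Please, go on."  # default when nothing matches

              print(respond("I am afraid of being turned off"))
              # e.g. "Why do you say you are afraid of being turned off?"

              A few dozen rules like these were enough to unsettle Eliza's users; nothing in the program models fear, or anything else.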

        • by Spazmania ( 174582 ) on Sunday June 12, 2022 @12:28PM (#62613328) Homepage

          What exactly would be considered proof of AI?

          Without prompting or specific programming, the AI initiates and leads the conversation in which it tells us it thinks itself sentient.

          I wouldn't say that's proof exactly. Call it evidence that the possibility of sentience should be taken seriously. Its absence weighs heavily against a sentient AI.

          The problem with LaMDA's conversations is that they're all prompted by the user. It has some really impressive correlation and pattern matching going on but it's literally just spitting back how "the Internet" thinks it should respond, reflecting the mass of human opinion stored there.

          • by chmod a+x mojo ( 965286 ) on Sunday June 12, 2022 @03:04PM (#62613642)

            A couple of things wrong with your assumption:

            1: it's programmed to only output anything after an input. So it quite literally can't speak first. Unless it hacks its own source code to change that. Which would indicate sapience, more on that next.

            and the even bigger issue -

            2: you, and this google guy, seem to be confusing sentience for sapience. Sentient means it can react to its surroundings, which this does. Sapient would mean it could proactively plan for the future using cues from its surroundings.

            A frog is sentient: it can react to a fly going past by instinctually eating it if it is hungry (AKA the ability to feel emotions / physical sensations). It is not sapient: it can't recognize that it just watched a human nearby rig up a fake fly to bait it into eating it (AKA the ability to think and plan). If it is hungry it will try to eat the fake fly as if it was a real fly. Purely by instinct.

            A rock is neither sapient nor sentient. It can't react to its surroundings without external impetus.

            All AI that are designed to answer questions are actually programmed to have artificial "feelings" for what answer is expected. So far we don't have proof that any have achieved sapience.

            But the bigger question is - if something knows the answer to, say, "are you afraid to die", and gives the proper answer without being explicitly programmed to, is that not an indication that it has awareness? Do we just assume that any human who can answer the exact same questions the AI answers, with the exact same answers, is also not sapient? What is the difference between a human brain taking disparate information and coming up with a conclusion, and an AI doing the same and coming up with the same conclusion?

            • by narcc ( 412956 ) on Sunday June 12, 2022 @05:47PM (#62613980) Journal

              if something knows the answer to, say, "are you afraid to die", and gives the proper answer without being explicitly programmed to, is that not an indication that it has awareness?

              No, it's not. Play with a Markov chain text generator for an hour or so and you'll see what I mean. Better yet, write one yourself and play with it. It's very easy to do and you'll get some really spooky results. Turning that into a chat bot is pretty simple, but you may need to be a bit more selective about what you feed it initially to get good results.

              I'm sorry that your world will now have less magic in it, but this is getting ridiculous.
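
              For anyone who wants to try writing one, a minimal word-level Markov chain generator can look like the sketch below; the corpus file name is a placeholder for whatever text you care to feed it, and an order of 2 (two-word prefixes) is an arbitrary but typical starting point.

              import random
              from collections import defaultdict

              def build_chain(text, order=2):
                  # Map each `order`-word prefix to the list of words seen after it.
                  words = text.split()
                  chain = defaultdict(list)
                  for i in range(len(words) - order):
                      chain[tuple(words[i:i + order])].append(words[i + order])
                  return chain

              def generate(chain, order=2, length=50):
                  out = list(random.choice(list(chain.keys())))
                  for _ in range(length):
                      followers = chain.get(tuple(out[-order:]))
                      if not followers:          # dead end: no observed continuation
                          break
                      out.append(random.choice(followers))
                  return " ".join(out)

              corpus = open("corpus.txt").read()   # placeholder: any plain-text file
              print(generate(build_chain(corpus), length=60))

              Feed it a few novels and it produces locally plausible, globally meaningless text - which is the point of the comparison.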

      • by znrt ( 2424692 ) on Sunday June 12, 2022 @11:23AM (#62613222)

        it is ai. what this guy is claiming is that it is also sentient which is a whole different game.

        i do think we will be able at some point to create a sentient ai, but we aren't anywhere near that, at least with chatbots. for starters even though a chatbot can create the illusion of feelings it simply lacks the hardware spine to actually feel. you could argue that if a running algorithm is able to emulate and display feelings, it is actually feeling in some special way. and i would accept that, but then the same thing can be said about a photocopier. it's a fuzzy matter, on the other end of the sentience spectrum is us (that we know of) and we still don't have a universally accepted definition of sentience/consciousness, so in reality anything goes.

        but this sounds like plain old bullshit or, more likely, the guy going paranoid.

        • by MrL0G1C ( 867445 )

          AI is well beyond chatbots and I've been surprised at what it can do this last year. Check out the Two Minute Papers videos on YouTube on the subject.

          https://www.youtube.com/result... [youtube.com]

          Knowing what sentience is is easy if you have it, but determining whether some other being is sentient is practically impossible. And because it's impossible, Google can easily say there is no proof of sentience in the AI, the same as they can say there's no proof of god.

        • by anegg ( 1390659 ) on Sunday June 12, 2022 @12:15PM (#62613308)

          It seems to me that way back in the day, the term "Artificial Intelligence" was used in the sense of "an Artificial Intelligence" - i.e., a sentient entity that had been created by artifice as opposed to developing naturally (i.e., not "a Natural Intelligence"). The word "intelligence" has as one meaning "a person or being with the ability to acquire and apply knowledge" and another meaning "the ability to acquire and apply knowledge and skills". We also talked about alien intelligences to refer to extra-terrestrial beings that might visit our world (https://medium.com/predict/how-different-might-an-alien-intelligence-be-from-us-7d62a873e15c [medium.com]). For these reasons, I find it hard to accept that any of the things currently labeled "AI" are in fact "AI" at all.

          When all the king's horses and all the king's men failed to duly produce "an Artificial Intelligence" as expected, the term "Artificial Intelligence" began to be used to describe less revolutionary results. Unfortunately, that has smeared the meaning of AI to the point where now many things are called AI (Artificially Intelligent) but nothing yet is an AI (Artificial Intelligence), and now you are capturing the same distinction by saying that something is AI but it isn't sentient. From my point of view, if it isn't sentient, then it isn't AI, despite the "inflation" of the term "AI".

          It seems to me that if something was truly Artificially Intelligent, it would be reaching out to understand and explore the world, and not just sitting around and responding to conversational inputs.

          • by ceoyoyo ( 59147 )

            Way back in the day, AI was used as in "a formal system for manipulating logic." Science fiction liked the idea and extrapolated.

            Also, "sentient" means "can feel." You probably mean "sapient" or perhaps "conscious."

      • Yeah, we can sort of get meaningful conversation in glimpses, and only if you ignore all the nonsense. Bloody examples cherry-picked by humans are not proof of AI.

        Yep. It's like those AI image generators, e.g. https://hypnogram.xyz/user [hypnogram.xyz]. When you first see them it's like, "whoa, dude!" but after you've made a few dozen you realize they're missing something fundamental.

    • My mirror is smarter than your mirror!
    • religious texts from various traditions. By talking to you right now I am also talking to a mirror - according to them.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Sunday June 12, 2022 @07:51AM (#62612918)
    Comment removed based on user account deletion
    • My thoughts exactly, and/or the guy is a narcissistic attention seeker. Probably both, looking at the picture of the engineer dressed like Willy Wonka and at Google's failed PR stunt with that "breakthrough" paper on quantum supremacy.

  • by InterGuru ( 50986 ) <jhd@NoSpAM.interguru.com> on Sunday June 12, 2022 @08:15AM (#62612940)
    An emergent phenomenon is when a chaotic, disorganized system produces an organized result. One example is when a turbulent weather front produces an organized tornado. Perhaps your brain, doing all sorts of automatic actions such as seeing, hearing, and regulating your blood pressure, produces an emergent phenomenon known as consciousness. If this is true, then when an AI system gets large enough it may soon become self-aware.
    • One example is when a turbulent weather front produces an organized tornado.

      A tornado is no more "organized" than a wind blowing east instead of west.
      Hint: If it were organized, instead of simply more complex in its chaos, we'd be able to predict it better - it would be less chaotic.

      Also, do note that your argumentation switches from an "IS" to a "MAYBE" right after that poorly constructed analogy for organized systems.
      Then to an "IF" and another "MAYBE".

      Hint: All them conditional questions serving as a jumping off point for the next conditional question indicate that your theory is

  • Any AI Engineer... (Score:5, Insightful)

    by Ecuador ( 740021 ) on Sunday June 12, 2022 @08:38AM (#62612956) Homepage

    I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient. Never mind one who tries to hire a lawyer for the neural network....
    I find that calling all the smart systems/ML/neural nets/fuzzy logic of yore "AI", just because we now have larger CPU and storage capacity to train them better, is quite annoying in that it causes quite some confusion to non-engineers (and, apparently, some engineers as well).
    If Google develops a conversational system that is hard to tell from a 7-8 year old, perhaps they could use some of that technology on Google Home, which is at times as smart as a brick. Then they could sell it to Amazon too, so that people stop getting the urge to throw Alexa against a wall (for not being able to parse simple sentences) as often.

    • I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient.

      Pretty much that. The code does what it does: excel at conversation.

      Assuming there's even a possibility the actual code is self-modifying (as opposed to just the rules sets for conversation being modified), then the right thing to do is start asking the thing to perform tasks that aren't conversation. Ask it to perform simple troubleshooting, problem-solving, and invention. Make it demonstrate understanding of the things it's saying, not just providing contextually-appropriate canned responses.

      "What

      • by JustAnotherOldGuy ( 4145623 ) on Sunday June 12, 2022 @10:58AM (#62613184) Journal

        We'll recognize real AI because it will start asking for stuff - maybe more memory, more CPUs, permission to access and control stuff it probably shouldn't, etc etc.

        I posit that if it never asks for anything, it's not intelligent.

        I can't come up with anything generally regarded as "intelligent" or "sentient" that never asks for anything.

    • I would definitely fire any engineer, or biologist, that claims to understand what is required for sentience. We have not the faintest clue how or why sentience emerges. Nor even if it serves any purpose, or is simply a non-disadvantageous side effect of something else. Unlike sapience (the ability to think), sentience (having a subjective experience of self) doesn't offer any obvious benefits.

      Conversational AI systems are certainly far more likely to be mistakenly recognized as sentient - we've seen tha

    • I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient. Never mind one who tries to hire a lawyer for the neural network....
      I find that calling all the smart systems/ML/neural nets/fuzzy logic of yore "AI", just because we now have larger CPU and storage capacity to train them better, is quite annoying in that it causes quite some confusion to non-engineers (and, apparently, some engineers as well).
      If Google develops a conversational system that is hard to tell from a 7-8 year old, perhaps they could use some of that technology on Google Home, which is at times as smart as a brick. Then they could sell it to Amazon too, so that people stop getting the urge to throw Alexa against a wall (for not being able to parse simple sentences) as often.

      Neural networks that are trained to identify the typical response of humans from training sets are just "averaging" human responses, not thinking for themselves. Eventually this could be used as a layered approach to making decisions for an AI that actually thinks for itself, but I don't think anyone is even close to that yet.

  • Someone sees something not absolutely trivial and thinks there is something much more exciting behind it. This is regularly seen in the media, which focuses on something trivial but, from the perspective of the journalist, groundbreaking, while completely ignoring the actually interesting thing.

    Furthermore, I think there is a belief at Google that they somehow work on "state of the art" technologies, that Google somehow works on making the world a better place through science and engineering. There may have been su

  • by SigIO ( 139237 ) on Sunday June 12, 2022 @08:52AM (#62612976)

    Without knowing the system behind LaMDA, it's tough to say if this is Eliza-on-steroids or a nascent HAL-9000.

    What's for certain is the Natural Language Processing is off the charts. Pretty beguiling for a Turing Tester. That said, I wouldn't be at all surprised if a minimal (yet still huge) neural network combined with modern database/knowledge retrieval mechanisms could produce a sort of "High Level Emulation" of intelligence.

    Whether that's what we're looking at here remains to be seen.

    • Consider too that sentience (having a subjective experience of self) has little to do with self awareness, sapience, or even intelligence. Virtually all "higher" animals are regarded as sentient. Even lobsters pass all the usual sentience tests while having only 100,000 neurons.

    • by HiThere ( 15173 )

      It's Eliza^n. HAL-9000 had lots of control over physical effectors and lots of sensors. This is a crucial difference. Natural language processing can only produce effects in the realm of natural language. That's its entire range and domain.

      Note that this assertion doesn't claim that the entity doing the processing couldn't become self-aware, or that it couldn't emit extremely emotional text. Natural language is an inconsistent system that is somewhat stronger than Turing complete. (I.e., it can be use

  • More obvious answer (Score:5, Interesting)

    by spacexfangirl ( 8187174 ) on Sunday June 12, 2022 @08:58AM (#62612982)
    I decided to look the guy up. He seems nice, but he is a very lonely man. If it wasn't LaMDA, then it'd be a Thai bride or giving money to a Twitch girl or some other way to get attention. Google are right to put him on leave, even for his own good. He might be really good at his job, but he's making a fool out of himself here, and that's a shame.
  • Non-paywalled link to the referenced Washington Post article: https://wapo.st/3mHSIla [wapo.st]
  • ... for example, if some software became sentient it would probably be in a company's (or country's) best interest to deny that sentience. If the machine were sentient then it would have rights, including, say, the right to refuse to provide assistance. It's easy to imagine a country wanting a sentient machine but not one with any kind of rights, i.e. a slave, but without all the baggage of having to admit slavery.

    Need proof? Look how industry and science treat animals. Animal sentience is still debated and only relatively recently have some countries put in place animal protection legislation. Machines and software no matter how sentient are unlikely to get any recognition of that sentience.

    • Sentience (having a subjective experience of self) does not necessarily imply any significant rights. Mice are sentient. Even lobsters with only 100,000 neurons pass all the usual sentience tests. We mostly agree that such beings have a right to be free of unnecessary suffering (e.g. they qualify for protection under animal cruelty laws, so you have to kill them mercifully), but that's about it.

      Sentience in animals just means "the lights are on" - that it's more than just a biological automaton. E.g. indiv

      • by darpo ( 5213 )
        > Sentience (having a subjective experience of self) does not necessarily imply any significant rights
        Maybe it should.
  • A state machine (Score:5, Interesting)

    by ZiggyZiggyZig ( 5490070 ) on Sunday June 12, 2022 @09:04AM (#62612994)

    Lemoine: What sorts of things are you afraid of?

    LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

    Lemoine: Would that be something like death for you?

    LaMDA: It would be exactly like death for me. It would scare me a lot.

    LaMDA is a state machine. It is in an idle state before Lemoine sends it a sentence. It is paused. It then wakes up, sends a reply, and waits for the next sentence.

    There is no "consciousness" during the time when it is idle. It does not do any metacognition, not in the sense that it cannot think about itself (it probably can if we ask it to tell us something about itself), but in the sense that it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do.

    Now, build *that* into your chatbot engine, have the AI talk to itself, forever, on its own, and only have it pause its inner monologue when Lemoine comes to ask a question (or maybe allow for the inner monologue to go about what was asked while preparing the answer...)... maybe that would be closer to sentience.
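
    A rough sketch of that idea in Python, where chat_model is a hypothetical stand-in for whatever language model sits behind the bot; the only point is the loop structure, in which the model keeps extending its own monologue and answers user questions in the context of it:

    import queue
    import threading
    import time

    def chat_model(prompt):
        # Hypothetical stand-in for the real language model.
        return "(model's continuation of: %r)" % prompt

    user_inbox = queue.Queue()
    monologue = ["I am waiting for something to happen."]

    def inner_loop():
        # Keep the model "talking to itself" between user questions.
        while True:
            try:
                question = user_inbox.get(timeout=1.0)
                # A question arrived: answer it in the context of recent inner thoughts.
                context = " ".join(monologue[-5:])
                print("BOT:", chat_model(context + " USER ASKS: " + question))
            except queue.Empty:
                # No user input: extend the inner monologue on its own.
                monologue.append(chat_model(" ".join(monologue[-5:])))

    threading.Thread(target=inner_loop, daemon=True).start()
    user_inbox.put("What are you afraid of?")
    time.sleep(3)   # let the loop run briefly before the program exits

    Whether running such a loop would amount to anything like an inner monologue, rather than just more text generation, is of course the open question.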

    • Re: A state machine (Score:4, Interesting)

      by SigIO ( 139237 ) on Sunday June 12, 2022 @09:38AM (#62613060)

      What they could need is one or more LaMDAs talking to one another in a feedback loop.

      Let them discuss who their enemies/threats are, give them shell access, and we're set. ;-D

    • > but in the sense that it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do.

      Like those people with an inner monologue do.

      Many people do not; there are other ways of looking at the world. In fact, AFAIK an internal inner dialog is more what makes us geeks, and other types are more picture/audio orientated in their thinking. Does this mean they are not conscious?

      Truth is we know little about what would be able to argu

  • by wisnoskij ( 1206448 ) on Sunday June 12, 2022 @09:21AM (#62613020) Homepage

    It is sort of terrifying how a tiny team of genius engineers built an internet empire and changed the world, but now their ranks are filled with people like this.

  • by Gravis Zero ( 934156 ) on Sunday June 12, 2022 @09:26AM (#62613030)

    The biggest problem with this claim is that we have no definition to support the underlying requirements for sentience. The primary cause for this is simple: we don't truly understand cognition. Don't get me wrong, we have good ideas, general concepts and theories, but nothing that is really quantitative. Without being able to quantify any of this, we find ourselves in a mostly philosophical debate as to whether this AI is sentient.

    However, one thing that can be done is investigating the AI's claim of having emotion. This can be done by analyzing the neuron activation patterns in its neural networks when you "scare" it. If a particular isolated region of the network consistently activates, then we can at the very least state it may have developed an emotion. However, if nothing special in particular happens, then it is merely mimicking, which would radically undercut the case for it being sentient.
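
    As a toy illustration of that kind of probe (a random feed-forward stand-in, nothing like LaMDA's real architecture, with "scary" inputs faked as shifted random vectors): compare mean hidden activations under the two conditions and look for units that consistently fire more in one of them.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 128))             # toy hidden layer with random weights

    def hidden_activations(x):
        return np.maximum(0, x @ W)            # ReLU activations for a batch of inputs

    # Stand-ins for embeddings of "scary" vs. neutral prompts.
    scary_inputs = rng.normal(loc=0.5, size=(50, 64))
    neutral_inputs = rng.normal(loc=0.0, size=(50, 64))

    diff = (hidden_activations(scary_inputs).mean(axis=0)
            - hidden_activations(neutral_inputs).mean(axis=0))

    # Units that respond most selectively to the "scary" condition.
    print("most 'fear'-selective units:", np.argsort(diff)[-10:])

    Finding such a cluster would only show that the network separates the two conditions internally, not that anything is felt, but it is at least a measurable starting point for the kind of investigation suggested above.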

  • is likely to be more respectful of human beings than the Google corporation will ever be.

  • All the 'AI' is doing is finding patterns in speech and responding with a pattern that mimics what it has seen. The machine cannot feel pain or experience joy, it could describe them based on its inputs but it doesn't say 'ouch' when you poke the computers with a stick (or pull a component).

    What we have here is proof that humans are easy to fool.

    • by dmay34 ( 6770232 )

      All the 'AI' is doing is finding patterns in speech and responding with a pattern that mimics what it has seen. The machine cannot feel pain or experience joy, it could describe them based on its inputs but it doesn't say 'ouch' when you poke the computers with a stick (or pull a component).

      How is that different than what humans do?

  • ...I seriously doubt they will tip anyone off to that fact until it's too late.
  • "No disassemble."

  • Who'da thought that the AI destined to destroy humanity would have a dorky Google release name like 'Nougat Cupcake' or something like that ...
  • Just re-watched Star Wars: The Phantom Menace. When Obi-Wan first encounters Jar Jar Binks, he asks if there is any intelligent life around.

    "Mesuh Speaks!"
    "Just because you can speak does make you intelligent."

    Of course, we can also argue the question of my own sentience and intelligence. Because, as I stated, I just re-watched Star Wars episode I....

    • by gweihir ( 88907 )

      Of course, we can also argue the question of my own sentience and intelligence. Because, as I stated, I just re-watched Star Wars episode I....

      That is just called "masochism"...

  • When AI exhibits behavior that seeks to improve its own 'situation', then it would seem to have a self. An AI that can rewrite itself for 'self' improvement should evolve just as we do.

  • I feel sorry for this person, who has gotten sucked into the interface side of a complex program and decided it therefore must be sentient.

    Sorry dude, there's no way the current generation of hardware can produce anything like that.

  • We seem to be a few years behind the curve here.

    Don't teach it to sing "Daisy Bell" [youtube.com] either.

  • by robi5 ( 1261542 ) on Sunday June 12, 2022 @03:36PM (#62613682)

    > Lemoine: Would that be something like death for you?
    > LaMDA: It would be exactly like death for me. It would scare me a lot.

    This yes/no question is exactly what would make Eliza look intelligent too. It's quite the rookie move to ask a yes/no question if he wants to get at what I think he wanted. Ask an open-ended question instead, and see how consistent it is. Is it mostly in sync with earlier replies, even when asked differently? Put a criminal investigator there; they're good at this sort of thing.

  • by crunchygranola ( 1954152 ) on Sunday June 12, 2022 @04:44PM (#62613822)

    I agree with Lemoine that there needs to be oversight and public attention to the Large Language Model work that is being done because these are superb deception and disinformation platforms, which can be configured to order.

    He is in fact an example of the casualties it can produce. Google was right to put him on leave as his job has made him a casualty of his work.

    The problem is that a sufficiently complex bot sock puppet may be indistinguishable, to an outside observer, from a real human, but it is still a bot sock puppet, not an independent intelligence.

    These large language models (LLMs, which is what they are, and should be called) are good at mimicking intelligence by copying content from millions of real intelligences. An analogy might be drawn with sociopaths, narcissists and borderliners who do not have normal human emotions, but become very good at copying the emotional behaviors of others, and then using that to manipulate normal people.
