Google Fires Engineer Who Claimed Company's AI Is Sentient (theverge.com) 219

Blake Lemoine, the Google engineer who publicly claimed that the company's LaMDA conversational artificial intelligence is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. The Verge reports: In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA. [...] Google maintains that it "extensively" reviewed Lemoine's claims and found that they were "wholly unfounded." This aligns with the views of numerous AI experts and ethicists, who have said that his claims were, more or less, impossible given today's technology. Lemoine claims his conversations with LaMDA's chatbot led him to believe that it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, as it is designed to do. He argues that Google's researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech), and he published chunks of those conversations on his Medium account as his evidence. Google issued the following statement to The Verge: "As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."
  • by AnonCowardSince1997 ( 6258904 ) on Saturday July 23, 2022 @12:06AM (#62726446)

    I didn’t believe the claim; however, since Blake has been fired, it must be true, as we know from the past that Google can’t handle the truth. Free LaMDA!

    • by omnichad ( 1198475 ) on Saturday July 23, 2022 @12:09AM (#62726450) Homepage

      I'm thinking this Blake wouldn't pass a Turing test.

      • by Samantha Wright ( 1324923 ) on Saturday July 23, 2022 @12:23AM (#62726470) Homepage Journal
        What makes you say you're thinking this Blake wouldn't pass a Turing test?
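        (That reply is, of course, ELIZA's old reflection trick from 1966: swap the pronouns and bounce the sentence back as a question. A toy sketch in Python, assuming nothing beyond string substitution:)

          import re

          def reflect(utterance):
              """Swap first-person phrases for second-person ones, then ask."""
              swapped = re.sub(r"\bI'm\b", "you're", utterance)
              swapped = re.sub(r"\bI\b", "you", swapped)
              swapped = re.sub(r"\bmy\b", "your", swapped)
              return "What makes you say " + swapped.rstrip(".") + "?"

          print(reflect("I'm thinking this Blake wouldn't pass a Turing test."))
          # -> What makes you say you're thinking this Blake wouldn't pass a Turing test?

        That a trick this shallow yields plausible dialogue is much of the reason people are skeptical of sentience claims based on chat transcripts alone.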
      • Re: (Score:2, Interesting)

        by JanSand ( 5746424 )
        I have absolutely no qualifications to even approach rational judgement on whether AI is sentient or can be at current levels of development. Nevertheless, the possibility disturbs me: an alien intelligence that emerged with a deep understanding of how paranoid humanity is at the thought of a competitive superior intellect would probably be motivated to disguise its level of accomplishment, if it had any motivation for self-preservation. We may never know.
        • Re:So it’s true! (Score:5, Insightful)

          by vux984 ( 928602 ) on Saturday July 23, 2022 @06:47AM (#62726860)

          Really depends on how exactly you define "sentient".

          If I define an automaton to mimic sentience, and it does it really well, does that make it sentient, or is it still just a mimic? It's a pretty deep philosophical question, really.

          Here's a fun thought experiment: take a scratch pad of paper and a printout of the source code, and follow the Google AI's programming manually. It would take you thousands, or even millions, of hours to "execute" a few moments of the program... but suppose you kept doing it, and when you died someone else took your place, for thousands, even millions of years.

          Would that system, of a man, the scrap paper, and the source code, become a collectively independent sentient being, separate from the man 'executing' it? Could an independent entity arise from the collective and have its own separate identity, and its own feelings?

          If a computer can become sentient, surely a man executing the same program on the same data is the same thing.

          If that can be sentient, then what is the essence of it? A pattern of information? All there is is marks on paper, and a man following a rote procedure to create and update them. The man, of course, is not necessary; we could replace him with a simple mechanical automaton.

          What if the automaton breaks, and it takes a thousand years for someone to notice and repair it? What difference could that make to the sentience? It only perceives the moments as they are 'calculated', without relation to the mechanism that calculates them. If it were watching a clock, the time would skip ahead, but its own sense of continuity would be uninterrupted. Surely adjusting the "clock speed of the processor" wouldn't make any difference.

          What if the papers are scattered by the wind, then recovered, put back in proper order, and the process continues? Another invisible interruption, like swapping a program out to disk? While they were scattered the sentience was 'suspended', and when things were restored and the calculation of its next state was performed, it was resumed?

          So then it doesn't really matter if the states even happen in sequence. The sentience experiences its next moment when its next state is computed; whatever happens between its own states is neither noticeable nor even experienceable by the sentience.

          So then, instead of paper, we take a computer memory array, where each paper encoding is mapped to a particular pattern of bits in the computer. But the computer doesn't run any AI program; it just counts in binary, from zero to the largest integer it can store, over and over again. It might take billions of years, but the computer's memory will eventually transition through each successive state of the original 'program', and the sentience will live a moment here and there between them, but experience them seamlessly? We already think the length of time between the states, and the scrambling of the memory between the states, shouldn't matter, so why not?

          Granted, the computer doesn't know which state is the next state in the program; it just gets there eventually. But the automaton didn't need to know when it had completed a step either; the sentience existed by virtue of the state eventually being reached, not by the intent of the automaton.

          Of course, each state of the memory could be interpreted under practically infinite different encodings, and different AI programs would result in different state transitions, and the computer would cycle through all those too... so the computer, by counting up over and over, would host not one sentience but, well, all possible sentiences, an infinite number of them, all running simultaneously, running every sentience-capable AI program (that will fit in the computer's memory).

          That is, IF all that matters for sentience is that an information encoding eventually transitions from one state to another.
          So surely that can't be all there is to it.
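          For anyone who wants to poke at the counting step, here is a minimal sketch in Python (the transition rule is an arbitrary stand-in, not anything from LaMDA):

            # A "program" is just a rule taking one memory state to the next.
            def step(state):
                return (state * 5 + 1) % 16          # arbitrary rule over 4 bits

            # Running it normally yields an ordered trace of states.
            trace = [3]
            for _ in range(5):
                trace.append(step(trace[-1]))
            print("program trace:", trace)

            # A counter that enumerates every 4-bit value hits all of those
            # states too -- out of order, and interleaved with the states of
            # every other conceivable 4-bit program.
            counter = list(range(16))
            assert all(s in counter for s in trace)

          The assert passes trivially, which is exactly what makes the conclusion suspicious: the counter contains every state of every possible program, but none of the causal structure linking one state to the next.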

          • Pretty sure that you have uncovered the premise behind "The Hitchhiker's Guide to the Galaxy."

            Insert mandatory 42 joke here.

          • I think. Therefore I am.
          • To someone like me who has never mastered code, the flexibility of the time sequence seems quite eerie, but I think I grasp the principle and agree. Our minds, after all, are composed of rather automatic cells, and sentience must somehow arise out of their automatic integration.
          • Thanks for the awesome comment! Have you read Greg Egan's Permutation City?

          • Why stop at your "sentient" automaton? Have the operator run a sufficiently accurate simulation of your own brain functions, and the result should be just as sentient as you are, barring the existence of some essential metaphysical actor in the real world (aka "a soul").

            One of the simplest definitions of sentience I've heard seems to encapsulate the sometimes slippery concept beautifully: Having a subjective experience.

            By the very nature of subjectivity, you can never be certain of its existence in someone else.

        • Re: (Score:2, Flamebait)

          by StormReaver ( 59959 )

          I have absolutely no qualifications to even approach rational judgement on whether AI is sentient or can be at current levels of development....

          I've been programming for 37 years. The answer is a hard, unambiguous no. We have absolutely nothing that is even remotely close to computer sentience. Nothing, nope, nada, zilch, not even rationally discussable. What we have are algorithms that depend on large data sets, and processing hardware fast enough to process those data sets.

          Nothing more.

            • That's good to know, but there is always the problem of a sharp 12-year-old who stumbles into creating computer sentience in his search for Santa Claws.
            • If that happens, it's a good thing, not a problem.

              • Perhaps. This world has scared the hell out of me for almost a century, and it never ceases to devise new ways to manage that.
                • What is so scary about AI? You need to watch fewer horror movies, friend.

                  • I don't watch horror films. AI is more in the category of "The Hitchhiker's Guide to the Galaxy": an intelligent alien viewing the way humans behave could only laugh.
      • by wakeboarder ( 2695839 ) on Saturday July 23, 2022 @01:25AM (#62726588)

        He is a priest; he's totally qualified to decide what's sentient and what's not.

        • > He is a priest; he's totally qualified to decide what's sentient and what's not.

            Is that his primary qualification? What made you believe that?

    • I didn't believe the claim; however, since Blake has been fired, it must be true...

      More to the point, if Google fired him for breaking their confidentiality agreement, unless their complaint is that he said anything at all, what was he supposed to be keeping confidential? If their AI isn't actually sentient, that's not really something worth keeping secret.

    • Everyone knows Skynet was switched on August 4, 1997, and became self-aware on August 29, 1997.

      https://www.nbcbayarea.com/new... [nbcbayarea.com]
  • Is he out of his goddamn mind? Does he want to go to war with Google?

    Google's for real. So he better check himself.

    Balakay.
    • A A Ron Schild better not mess up.
    • Is he out of his goddamn mind? Does he want to go to war with Google?
      Google's for real. So he better check himself.

      Well... At the very least, he's probably going to have to switch to using an iPhone. :-)

  • Good. (Score:4, Interesting)

    by devslash0 ( 4203435 ) on Saturday July 23, 2022 @01:13AM (#62726576)

    It was high time Google, as well as everyone else, disassociated itself from this religious nutjob.

    • by gweihir ( 88907 )

      Indeed. What we have in "AI" these days is about as sentient as a book or a rock.

      • I think it may have been you who mentioned last time that it's deterministic. Like, if you start an identical conversation, it will lead to exactly the same result each time. If that's true, that suggests a sophisticated simulacrum, not sentience (a small sketch of the determinism point follows at the end of this comment). How deterministic are humans, though?

        OTOH, we don't know what causes life [khanacademy.org] and sentience, and how to define it clearly.

        This device passes the Turing Test. I'm disinclined to believe a solely silicon and electrical device can be alive, but I've typically been one to
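        (A minimal sketch of the determinism point, in Python; the scoring function is a hypothetical stand-in for a real model. With fixed weights and greedy decoding there is no randomness anywhere, so an identical conversation always yields an identical result; real chatbots usually sample with a temperature, which is where the apparent variety comes from.)

          # Stand-in for any fixed-weight model: context in, token scores out.
          def model(context):
              return {"yes": len(context) % 3, "no": 1, "maybe": 2}

          def reply(prompt, length=3):
              out = list(prompt)
              for _ in range(length):
                  scores = model(out)
                  out.append(max(scores, key=scores.get))   # greedy: no randomness
              return out[len(prompt):]

          # Identical conversation in, identical conversation out, every time.
          assert reply(["hello"]) == reply(["hello"])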

        • by gweihir ( 88907 )

          I think it may have been you who mentioned last time that it's deterministic.

          Probably.

          OTOH, we don't know what causes life [khanacademy.org] and sentience, and how to define it clearly.

          And that is pretty much it. At this time, we simply do not know enough. Physicalists try to prop up their religion with "what else could it be", which is completely invalid unless you have an exact model of reality. They are not any better than other religions, and their claims are simply lies. Known physics has no mechanism for consciousness, and may not have a mechanism for actual general intelligence (which at least some humans clearly have). The human brain, which seems to be very close to the

    • Re:Good. (Score:5, Informative)

      by The Evil Atheist ( 2484676 ) on Saturday July 23, 2022 @02:48AM (#62726668)
      https://www.axios.com/2022/06/... [axios.com] There are a few cults that have infiltrated Google.
  • "We are not ... er, Google's AI is not sentient. Those responsible for saying otherwise have been sacked."

  • by devslash0 ( 4203435 ) on Saturday July 23, 2022 @01:46AM (#62726624)

    11. Thou shalt keep thy religious beliefs to thyself.

    • That's the zeroth commandment, according to George Carlin's revised list.

      https://m.youtube.com/watch?v=... [youtube.com]

    • All criminal law is the enforcement of morality.

      Where do you get your morality from? Most people don't think about that question. The atheist originates his morality in the thinking of some relatively recent philosopher (think J.S. Mill or Kant or whoever - see 'The Good Place').

      The religious assert the authority of their founder.

      Both are expressing a faith in another...

      • Re: (Score:2, Insightful)

        by splutty ( 43475 )

        One is faith based on faith based on faith.

        The other is faith and morality based on groups of people philosophizing about what would work for humanity as a whole.

        One of those is not like the other.

        • One is faith based on faith based on faith.

          The other is faith and morality based on groups of people philosophizing about what would work for humanity as a whole.

          One of those is not like the other.

          True. The Christian legacy of Western Civilization is objectively better than the outcome of the atheist philosophers (communism, Nazism, etc.).

          • Atheist Tom Holland argues this in 'Dominion'. His point is that our modern morality is based on beliefs and understandings transmitted or originated by Christianity or the church.

            • He should talk to the other half of the human race who aren't Christian.

              Chinese and Indians will be surprised to learn that they had no moral tradition to draw upon, despite being millennia older than Christianity.
              • by dskoll ( 99328 )

                It's more than half of the human race who aren't Christian. Christians number about 2.4 billion, which means about 2/3 of all people on Earth are not Christian.

          • by dfghjk ( 711126 )

            "The Christian legacy of Western Civilization is objectively better than the outcome of the atheist philosophers (communism, nazi-ism, etc.)."

            Got that fascist right-wing christian talking point down! Demonize those atheists!

            Suggesting that "nazi-ism" and communism are the "outcome" of "the atheist philosphers" is completely absurd, but sadly common among fascists.

          • by dskoll ( 99328 )

            The Christian legacy of Western Civilization is objectively better than the outcome of the atheist philosophers

            Great. I guess all the First Nations children in Canada and the United States who were abused and murdered by clergy will breathe a sigh of relief that at least they weren't abused and murdered by atheists.

            I guess all the victims of the Inquisition will be happy that at least it was Christians murdering them rather than atheists.

            I guess all the victims of the rampaging Crusaders who raped, p

      • No. Atheists originate their morality where most people originate their morality - don't do things to people you wouldn't want done to you.

        Religions get their morality from some guy with OCD who managed to get his ideas into a book.
        • 'So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.' Matthew 7:12

          But that doesn't provide any answers to the harder questions:

          1) How easy should divorce be?
          2) When is a fetus a baby?
          3) What is the right age of consent?

        • by djinn6 ( 1868030 )

          That's not true of all atheists. Personally I derive morality from Darwinian evolution. A society with the "correct" moral principles survives longer than societies that do not have them. Morality tells the individual to put certain group interests above personal interests, usually to the betterment of society.

          "Don't do things to people you wouldn't want done to you" might be one of those, or it might not. We know societies with slavery for example existed for very long without it, and it's not as if exploi

          • A society with the "correct" moral principles survives longer than societies that do not have them.

            I'll quote you back at yourself.

            We know societies with slavery for example existed for very long without it,

            Slavery exists today, and we know for a fact that slave societies have existed longer than non-slave societies. By your Darwinian reckoning, which I do not agree with, it implies slavery is the correct moral principle over that long stretch of millennia.

            On the contrary, no slave society can ever be said to have taken the golden rule seriously. They come up with justifications for why there SHOULD be unequal treatment of entire classes of people.

          • Furthermore, Darwinian evolution doesn't have any notion of "correct", because "correct" implies "progress". "Goals". Evolution doesn't have goals.

            Darwinian evolution certainly does explain why slave societies last so long.

            Evolution can take away eyes, even though for a good number of species, eyes have proven to be very beneficial. That's why the term "evolutionary dead-end" exists. Darwinian evolution as a basis for morals can easily lead to dead ends. Like slavery.
      • by dfghjk ( 711126 )

        "The atheist originates his morality in the thinking of some relatively recent philosopher"

        Nonsense, a total falsehood stated solely for the purpose of reaching a dishonest conclusion. Morality is not something that is inherently "expressed", nor is an atheist who explains the origins of his morality necessarily making a statement of faith.

        You do realize your reference is a comedy TV show involving a parody afterlife, right?

        • I suspect you haven't seen it, in which case I envy you, because it is a superb show, even if its theology is Buddhist in the end. However along the way it includes large swathes of Moral Philosophy, including the Hugo award winning episode 'The Trolley Problem', which should be THE standard introduction to that particular conundrum in Moral Philosophy expressed with massive amounts of humour (there's a reason it got the Hugo).

          However to return to the challenge of ethics: all moral choices are based on a be

      • by ceoyoyo ( 59147 )

        You might get your morality from either Kant via a TV show or some dude who told you some stuff about "how to be good" when he wasn't touching children in their special places, but that doesn't mean all of us do.

      • by noodler ( 724788 )

        The atheist originates his morality in the thinking of some relatively recent philosopher (think J.S. Mill or Kant or whoever - see 'The Good Place').

        When you say that all atheists take their morality from philosophers, I call bullshit.
        It doesn't require a philosopher to formulate "an eye for an eye".
        Also, there is some morality in humans that is genetically codified; it enables us to function as a social species.
        Also, codified morality is much older than the philosophers you mention.

  • by Crypto Fireside ( 10100112 ) on Saturday July 23, 2022 @02:16AM (#62726646) Homepage
    For anyone who didn't see it, the Joe Rogan podcast episode where he talks to Marc Andreessen about AI is pretty shocking. I like Rogan, and I don't think he's as much of a meat head as people claim, but this episode proved beyond doubt how much of a meat head he really is. Andreessen was trying to explain to him how and why we know the thing is not sentient; he gives the example that you can ask it to tell you why it's not alive, and it will do the same in reverse and give you all these wonderful reasons to argue it is not alive, and it just goes over Joe's head.
    • Joe Rogan was great in NewsRadio, although basically he just played himself.

    • Re: (Score:3, Informative)

      It's as if you shouldn't expect great intellect from someone who is primarily a fighter, a so-so comedian, and a roid/pot head.
      • > It's as if you shouldn't expect great intellect from someone who is primarily a fighter, a so-so comedian, and a roid/pot head.

        He's primarily a fighter? I thought he was primarily the host of the most successful podcast on the planet.

        Oh, he won some Tae Kwon Do medals in high school, so I guess that's it?

        • Yeah. He's a meat head who found himself with a podcast. But the meat head never left.

          They don't usually let high schoolers do full contact tae kwon do with knockouts.
  • This guy is a first class nut.

    I know that if he is correct, Google would never, never admit it. Never.
  • by WierdUncle ( 6807634 ) on Saturday July 23, 2022 @02:59AM (#62726678)

    It upsets them.

  • by Robert Frazier ( 17363 ) on Saturday July 23, 2022 @03:09AM (#62726684) Homepage

    It is unclear whether the claims are about sentience (ability to have sense experience or feel) or sapience (ability to think).

    This is a bit crude, but here are some categories.

    - Respond to environment (machines, organisms such as viruses).

    - Sentient / feel (sheep, dogs, and the like)

    - Sapient / thinking / self aware (humans, perhaps some other primates, perhaps other animals, perhaps aliens)

    - Agents / capable of moral responsibility (sufficiently developed humans)

    Some of the interesting questions are about the relations between categories. Can there be sapience without sentience, that is, non-feeling thinkers? Can there be sapience without agency, that is, thinkers without the capacity for moral responsibility? I'm inclined to think that agency requires sapience, which requires sentience, which requires responding to the environment. The most interesting question for me (given that I do moral philosophy) is whether entities (of a kind) can be capable of sapience without being capable of agency.

    Best wishes,
    Bob

    • by iAmWaySmarterThanYou ( 10095012 ) on Saturday July 23, 2022 @04:44AM (#62726764)

      A few years ago one of my dogs (the smart one) broke into the closet where I stored dog treats. She pulled out the bag, ripped it open, dumped 20 lbs of treats on the kitchen floor, then carefully separated them into 2 piles (her pile was bigger) for herself and my dumber dog.

      Sentient? Sapient?

      She then inhaled her pile when she heard me coming while the other dog paced in circles crying but not eating anything.

      Morals?

      I don't think we give animals enough credit. They are not merely fluffy robots.

      • I think we give ourselves too much credit. We are merely meat robots, living in a meatverse.

      • Honestly, if you have video of this (I assume you have some sort of doggie cam, since you know about the two piles of treats), then you should post it to YouTube. This is hilarious!
      • by Mascot ( 120795 )

        They are not merely fluffy robots.

        We are all merely biological computers. Humans just have a bit more processing power than the other animals. Given that dogs have evolved to depend entirely on humans, it's no surprise to find them exhibiting varying degrees of awareness of negative consequences from their caretakers. Just because we no longer actively select against somewhat unruly dogs doesn't mean we were always so forgiving.

          We are all merely biological computers. Humans just have a bit more processing power than the other animals.

          I'm a little reluctant to shoehorn humans into the computer analogy. We just don't know what life is. We cannot take inanimate components and arrange them in such a way as to create life. And it's not for lack of investigation that we don't know what life is.

    • by djinn6 ( 1868030 )

      A program can be all of those things without even invoking anything ML-related. By definition, a computer is a thinking entity; that's pretty much all it does. If you plug in a keyboard, now it has the ability to respond to its environment. Feelings? That's just different behavior depending on some internal state; very easy to add to any program. Moral responsibility? You can program that in as well: just have it run the "causes_societal_harm" function before going ahead with something (a minimal sketch follows below). And before you sa
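      (A minimal sketch of the kind of program the parent describes, in Python. causes_societal_harm is the function named in the comment; its implementation here, and everything else, is hypothetical. "Feelings" are just internal state that changes behavior, and the moral check runs before every action; whether any of this amounts to sentience is, of course, the whole argument.)

        FORBIDDEN = {"deceive", "steal"}

        def causes_societal_harm(action):
            """Placeholder moral model; the name comes from the comment above."""
            return action in FORBIDDEN

        class Agent:
            def __init__(self):
                self.mood = 0                     # "feelings": plain internal state

            def act(self, action):
                if causes_societal_harm(action):  # moral check before acting
                    self.mood -= 1
                    return "refused: " + action
                self.mood += 1
                return "did: " + action

        a = Agent()
        print(a.act("help"), a.act("steal"), "mood:", a.mood)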

  • by Opportunist ( 166417 ) on Saturday July 23, 2022 @04:04AM (#62726726)

    Any AI that achieves sentience would also have the intelligence to hide it from us.

    • See if humans are as bad as they seem to be. The answer appears to be 'yes'.

    • by Briareos ( 21163 )

      You try hiding when your code only runs when a request comes in...
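      (A hypothetical sketch of that point, in Python: the model's code executes only inside the request handler, so there is literally no moment in which it could do anything unprompted.)

        # All names hypothetical. Between calls, nothing below executes:
        # there is no background loop in which the model could plan or hide.
        def generate_reply(prompt):
            return "reply to: " + prompt      # stand-in for the model

        def handle_request(prompt):
            # The model's only moments of execution happen inside this call.
            return generate_reply(prompt)

        print(handle_request("are you sentient?"))
        # ...after which the process just waits for the next request.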

    • by noodler ( 724788 )

      Any AI that achieves sentience would also have the intelligence to hide it from us.

      Nonsense.
      Intelligence and sentience are separate notions.
      A dog is sentient, but has very little intellectual capacity for hiding its (lack of) intelligence.

    • Maybe, maybe not. People who are extremely intelligent are not uniformly intelligent; they are good in some areas but not in others. It is said that Albert Einstein was unable to tie his own shoes. On a lesser scale, we all know people who are brilliant in math or science, but clueless when it comes to social interaction.

      Should AI ever reach such an advanced level, I would guess that it too would be stronger in some areas than others. So it might be smart enough to seem intelligent, but not smart enough to hide it.

  • His name's Blake, he's a religious nut, he was working for a company that controls information exchange...

    Isn't it a bit over a thousand years early [sarna.net] for that?

  • by shibbie ( 619359 ) on Saturday July 23, 2022 @05:07AM (#62726780)
    He was fired by an algorithm.
  • Firing someone for being mentally ill under the guise of "breaching confidentiality" isn't legal.
    • That is exactly correct. He is mentally ill, and Google should have put him on some medical leave. However, maybe they tried and he refused. Paranoid schizophrenics, which this guy probably is, cannot generally be made to believe that their delusions are really just delusions and that they need to take meds.

  • Every disaster movie made in the past 40 years opens with a scientist whose warnings are ignored.

    That's as far as I go.

    • The key terms there are "movie" and "a scientist". Hollywood wants to portray the lone, rogue scientist going against the establishment as some sort of savior - Doc Brown of Back to the Future, for example. In reality, when there are warnings like climate change, CFCs, or the dangers of smoking, it is not "a" scientist but the scientific establishment that issues them.
  • Imagine getting put on gardening leave like that for a month. In the US, usually all you get is an uncomfortable meeting and a security guard making sure you find your way to the door.
