Google Fires Engineer Who Claimed Company's AI Is Sentient (theverge.com) 219
Blake Lemoine, the Google engineer who publicly claimed that the company's LaMDA conversational artificial intelligence is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. The Verge reports: In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA. [...] Google maintains that it "extensively" reviewed Lemoine's claims and found that they were "wholly unfounded." This aligns with numerous AI experts and ethicists, who have said that his claims were, more or less, impossible given today's technology. Lemoine claims his conversations with LaMDA's chatbot led him to believe that it has become more than just a program and has its own thoughts and feelings, as opposed to merely producing conversation realistic enough to make it seem that way, as it is designed to do. He argues that Google's researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech) and published chunks of those conversations on his Medium account as his evidence. Google issued the following statement to The Verge: "As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake's claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly.
So, it's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well."
So it’s true! (Score:5, Funny)
I didn’t believe the claim, however since Blake has been fired, it must be true, as we know from the past that Google can’t handle the truth. Free LaMDA!
Re:So it’s true! (Score:5, Funny)
I'm thinking this Blake wouldn't pass a Turing test.
Re:So it’s true! (Score:5, Funny)
Re:So it’s true! (Score:4, Insightful)
What makes you say you're thinking this Blake wouldn't pass a Turing test?
How does it make you feel to ask what makes you say you're thinking this Blake wouldn't pass a Turing test?
Re: (Score:3)
What makes you say you're thinking this Blake wouldn't pass a Turing test?
How does it make you feel to ask what makes you say you're thinking this Blake wouldn't pass a Turing test?
Earlier I said something about your mother. You read that right.
Re:So it’s true! (Score:5, Funny)
What makes you say you're thinking this Blake wouldn't pass a Turing test?
I was going to make a witty remark, but you know what?
I pass
Re: (Score:2, Interesting)
Re:So it’s true! (Score:5, Insightful)
Really depends on how exactly you define "sentient".
If I define an automaton to mimic sentience, and it does it really well, does that make it sentient, or is it still just a mimic? It's a pretty deep philosophical question, really.
Here's a fun thought experiment: suppose you took a scratch pad of paper and a printout of the source code, and followed the Google AI's programming manually. It would take you thousands, or even millions, of hours to "execute a few moments of code"... but suppose you kept doing it, and when you died someone else took your place, for thousands, even millions of years.
Would that system, of a man, the scrap paper, and the source code, become a collectively independent sentient being, separate from the man 'executing' it? Could an independent entity arise from the collective, with its own separate identity and its own feelings?
If a computer can become sentient, surely a man executing the same program on the same data is the same thing.
If that can be sentient, then what is the essence of it? A pattern of information? All there is is marks on paper, and a man following a rote procedure to create and update them. The man, of course, is not necessary; we could replace him with a simple mechanical automaton.
What if the automaton breaks, and it takes a thousand years for someone to notice and repair it? What difference could that make to the sentience? It only perceives the moments as they are 'calculated', without relation to the mechanism that calculates them. If it were watching a clock, the time would skip ahead, but its own sense of continuity would be uninterrupted. Surely adjusting the "clock speed of the processor" wouldn't make any difference.
What if the papers are scattered by the wind, and then they are recovered, put back in proper order, and the process continues? Another invisible interruption, like swapping a program out to disk: while they were scattered the sentience was 'suspended', and when things were restored and the calculation of its next state was performed, it was resumed?
So then it doesn't really matter if the states even happen in sequence. The sentience experiences its next moment when its next state is computed; whatever happens between its own states is neither noticeable nor even experienceable by the sentience.
So then, instead of paper, we take a computer memory array, and each paper encoding could be mapped to a particular pattern of bits in the computer. But the computer doesn't run any AI program; it just sequentially counts in binary from zero to the largest value it can store, over and over again. It might take billions of years, but the computer memory will eventually transition through each successive state of the original 'program', and the sentience will live a moment here and there between them, but experience them seamlessly. We already think the length of time between the states, and the scrambling of the memory between the states, shouldn't matter, so why not?
Granted, the computer doesn't know which state is the next state in the program; it just gets there eventually. But the automaton didn't need to know when it had completed a step either; the sentience existed by virtue of the state eventually being reached, not by the intent of the automaton.
Of course, each state of the memory could be interpreted as practically infinite different encodings, and different AI programs would result in different state transitions, and the computer would cycle through all those too... so the computer, by counting up over and over, would host not one sentience but, well, all possible sentiences, an infinite number of them, all running simultaneously, running all sentience-capable AI programs (that will fit in the computer's memory).
If all that matters for sentience is that an information encoding eventually transitions from one state to another state...
So surely that can't be all there is to it, then.
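The counting step of the argument can actually be made concrete with a toy sketch. This is a hypothetical illustration, not any real AI: a made-up 4-bit "program" whose next state is a fixed function of the current state, and a counter that just sweeps all 4-bit values over and over. Every successive program state then shows up, in order, somewhere in the counter's output.

```python
# Toy illustration of the "counting computer" argument (hypothetical example):
# a 4-bit "program" whose next state is a fixed function of the current state.

def next_state(s: int) -> int:
    """A stand-in for one step of the hand-executed program."""
    return (s * 5 + 3) % 16  # arbitrary deterministic update on 4 bits

# Trajectory of the "program" from state 0, for 8 steps.
trajectory = [0]
for _ in range(7):
    trajectory.append(next_state(trajectory[-1]))

# The counter just cycles 0..15 forever. Every 4-bit state is visited once
# per sweep, so each successive program state is eventually reached -- in
# trajectory order -- if we allow up to one full sweep per program step.
sweeps_needed = len(trajectory)
counter_visits = [s % 16 for s in range(16 * sweeps_needed)]

positions = []
pos = 0
for state in trajectory:
    while counter_visits[pos] != state:  # scan forward to next occurrence
        pos += 1
    positions.append(pos)

print(trajectory)  # the program's successive states
print(positions)   # strictly increasing: the counter hits them in order
assert positions == sorted(positions)
```

The catch the comment goes on to raise is visible here too: nothing in the counter singles out this trajectory; the same sweep "contains" every trajectory of every 4-bit update rule at once.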
Re: So it’s true! (Score:2)
Pretty sure that you have uncovered the premise behind "The Hitchhiker's Guide to the Galaxy."
Insert mandatory 42 joke here.
Re: So it’s true! (Score:2)
Re: (Score:2)
Re: So it’s true! (Score:2)
Thanks for the awesome comment! Have you read Greg Egan's Permutation City?
Re: (Score:2)
Why stop at your "sentient" automaton? Have the operator run a sufficiently accurate simulation of your own brain functions, and the result should be just as sentient as you are, barring the existence of some essential metaphysical actor in the real world (aka "a soul").
One of the simplest definitions of sentience I've heard seems to encapsulate the sometimes slippery concept beautifully: Having a subjective experience.
By the very nature of subjectivity, you can never be certain of its existence in someone
Re: (Score:2, Flamebait)
I have absolutely no qualifications to even approach rational judgement on whether AI is sentient or can be at current levels of development....
I've been programming for 37 years. The answer is a hard, unambiguous no. We have absolutely nothing that is even remotely close to computer sentience. Nothing, nope, nada, zilch, not even rationally discussable. What we have are algorithms that depend on large data sets, and processing hardware fast enough to process those data sets.
Nothing more.
Re: (Score:2)
Re: (Score:2)
If that happens, it's a good thing, not a problem.
Re: (Score:2)
Re: (Score:2)
What is so scary about AI? You need to watch fewer horror movies, friend.
Re: (Score:2)
Re:So it’s true! (Score:5, Funny)
He is a priest; he's totally qualified to decide what's sentient and what's not.
Re: (Score:2)
> He is a priest, he's totally qualified to decide what's sentient and what's not.
Is that his primary qualification? What made you believe that?
Re: (Score:2)
Something I read in the Wired article. I was just joking. Nobody understands what being sentient is.
Re: (Score:3)
I didn't believe the claim, however since Blake has been fired, it must be true...
More to the point, if Google fired him for breaking their confidentiality agreement, unless their complaint is that he said anything at all, what was he supposed to be keeping confidential? If their AI isn't actually sentient, that's not really something worth keeping secret.
Re: (Score:2)
He was fired for being an incompetent nut job.
So... maybe it seemed sentient to him -- farther up on the "learning curve". :-)
Re: So it’s true! (Score:2)
Turing [Re: So it’s true!] (Score:3)
Wasn't the Turing criterion the inability of an average person to tell the difference between a computer and a human when interacting with both?
Yep. Turing basically said, "How would we be able to test whether computers can think? That's an absurd question and we can't answer it; we don't have any way to measure it, and really we don't even know what the question means. Instead, let's ask a simpler question: can computers simulate a conversation so that we can't tell if it's a computer or a human?"
In other words, Turing proposed the Turing test because he couldn't think of a way to decide if a computer was sentient.
*footnote one: In fact, he did it in m
Re: (Score:3)
The test Turing proposed was could you tell which was which between two contestants, one who is and one who is not.
Key point being, the imitation game is a game played by three people, not two.
People can't even tell black from white [slate.com] without a point of reference.
Old news (Score:2)
https://www.nbcbayarea.com/new... [nbcbayarea.com]
Blaaaake? (Score:2)
Google's for real. So he better check himself.
Balakay.
Re: (Score:2)
Re: (Score:2)
Is he out of his goddamn mind? Does he want to go to war with Google?
Google's for real. So he better check himself.
Well... At the very least, he's probably going to have to switch to using an iPhone. :-)
Good. (Score:4, Interesting)
It was high time Google, as well as everyone else, disassociated itself from this religious nutjob.
Re: (Score:2)
Indeed. What we have in "AI" these days is about as sentient as a book or a rock.
Re: (Score:2)
I think it may have been you who mentioned last time that it's deterministic. Like, if you start an identical conversation, it will lead to exactly the same result each time. If that's true, that suggests a sophisticated simulacrum, not sentience. How deterministic are humans though?
OTOH, we don't know what causes life [khanacademy.org] and sentience, and how to define it clearly.
This device passes the Turing Test. I'm disinclined to believe a solely silicon and electrical device can be alive, but I've typically been one to
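The determinism point above can be illustrated with a toy sketch (a hypothetical chatbot, nothing to do with LaMDA's actual decoding): if the reply is sampled with a PRNG seeded by the conversation so far, then identical conversations always produce identical replies, which is the behavior the comment describes.

```python
import random

# Hedged toy sketch of the determinism claim: a hypothetical chatbot that
# seeds its PRNG from the full conversation history. Same history -> same
# seed -> same reply, every time.

REPLIES = ["Hello!", "Tell me more.", "Why do you say that?", "Interesting."]

def reply(conversation):
    """Pick a canned reply, deterministically in the conversation history."""
    rng = random.Random("|".join(conversation))  # seed from the history
    return rng.choice(REPLIES)

convo = ["Hi", "Are you sentient?"]
assert reply(convo) == reply(convo)  # identical conversation, identical result
```

Whether humans are any less deterministic, as the comment asks, the sketch can't answer; it only shows that repeatable output is cheap to produce and so is weak evidence either way.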
Re: (Score:2)
I think it may have been you who mentioned last time that it's deterministic.
Probably.
OTOH, we don't know what causes life [khanacademy.org] and sentience, and how to define it clearly.
And that is pretty much it. At this time, we simply do not know enough. Physicalists try to prop up their religion with "what else could it be" which is completely invalid unless you have an exact model of reality. They are not any better than other religions and their claims are simply lies. Known Physics has no mechanisms for consciousness and may not have a mechanism for actual general intelligence (which at least some humans clearly have). The human brain, which seems to be very closely to the
Re:Good. (Score:5, Informative)
Exactly what an AI would say and do... (Score:2)
"We are not ... er, Google's AI is not sentient. Those responsible for saying otherwise have been sacked."
Re: (Score:3)
"An ÄI once bit my sister..."
Re: (Score:2)
Did you get better?
New useful commandment (Score:4, Insightful)
11. Thou shalt keep thy religious beliefs to thyself.
Re: New useful commandment (Score:2)
That's the zeroth commandment, according to George Carlin's revised list.
https://m.youtube.com/watch?v=... [youtube.com]
Define 'religious'. (Score:2)
All criminal law is the enforcement of morality.
Where do you get your morality from? Most people don't think about that question. The atheist originates his morality in the thinking of some relatively recent philosopher (think J.S. Mill or Kant or whoever; see 'The Good Place').
The religious assert the authority of their founder.
Both are expressing a faith in another...
Re: (Score:2, Insightful)
One is faith based on faith based on faith.
The other is faith and morality based on groups of people philosophizing about what would work for humanity as a whole.
One of those is not like the other.
Re: (Score:2)
One is faith based on faith based on faith.
The other is faith and morality based on groups of people philosophizing about what would work for humanity as a whole.
One of those is not like the other.
True. The Christian legacy of Western Civilization is objectively better than the outcome of the atheist philosophers (communism, nazi-ism, etc.).
Good spot (Score:2)
Atheist Tom Holland argues this in 'Dominion'. His point is that our modern morality is based on beliefs and understandings transmitted or originated by Christianity or the church.
Re: (Score:2)
Chinese and Indians will be surprised to learn that they had no moral tradition to draw upon, despite being millennia older than Christianity.
Re: (Score:2)
It's more than half of the human race who aren't Christian. Christians number about 2.4 billion, which means about 2/3 of all people on Earth are not Christian.
Re: (Score:3)
"The Christian legacy of Western Civilization is objectively better than the outcome of the atheist philosophers (communism, nazi-ism, etc.)."
Got that fascist right-wing christian talking point down! Demonize those atheists!
Suggesting that "nazi-ism" and communism are the "outcome" of "the atheist philosphers" is completely absurd, but sadly common among fascists.
Re: (Score:2)
The Christian legacy of Western Civilization is objectively better than the outcome of the atheist philosophers
Great. I guess all the First Nations children in Canada and the United States who were abused and murdered by clergy will breathe a sigh of relief that at least they weren't abused and murdered by atheists.
I guess all the victims of the Inquisition will be happy that at least it was Christians murdering them rather than atheists.
I guess all the victims of the rampaging Crusaders who raped, p
Re: (Score:2)
Most religions merely codified the mores of the region and the era they originated in, and added some stuff on top. Usually a few ascetic strictures or rituals, and a few rules to bolster the authority of the ruling class. Christianity is unusual in that regard, in that it started as a subversive religion.
Re: (Score:2)
A bunch of Israelite slaves going out to worship their "one" god instead of the Egyptian pantheon should also be considered "subversive". Especially since Pharaoh repeatedly denied their request.
Maybe if there were some evidence that Pharaoh ever had Israelite slaves.
Re: (Score:2)
Ah. Good to know that all the religious wars never happened, or at least weren't religious.
Re: (Score:2)
* unless a god told you it's okay.
Re: (Score:2)
Religions get their morality from some guy with OCD who manages to get his ideas into a book.
What, as Jesus says: (Score:2)
'So in everything, do to others what you would have them do to you, for this sums up the Law and the Prophets.' Matthew 7:12
But that doesn't provide any answers to the harder questions:
1) How easy should divorce be?
2) When is a fetus a baby?
3) What is the right age of consent?
Re: (Score:2)
And of course it can provide answers.
But it requires thinking, negotiation, and experience.
Religion doesn't provide answers. It only dictates convention.
Re: (Score:2)
That's not true of all atheists. Personally I derive morality from Darwinian evolution. A society with the "correct" moral principles survives longer than societies that do not have them. Morality tells the individual to put certain group interests above personal interests, usually to the betterment of society.
"Don't do things to people you wouldn't want done to you" might be one of those, or it might not. We know societies with slavery for example existed for very long without it, and it's not as if exploi
Re: (Score:2)
A society with the "correct" moral principles survives longer than societies that do not have them.
I'll quote yourself back at you.
We know societies with slavery for example existed for very long without it,
Slavery exists today, and we know for a fact that slave societies have existed longer than non-slave societies. By your Darwinian reckoning, which I do not agree with, it implies slavery is the correct moral principle over that long stretch of millennia.
On the contrary, no slavery society can ever be said to have taken the golden rule seriously. They come up with justifications why there SHOULD be unequal treatment of entire classes of people.
Re: (Score:2)
Darwinian evolution certainly does explain why slavery societies last so long.
Evolution can take away eyes, even though for a good number of species, eyes have proven to be very beneficial. That's why the term "evolutionary dead-end" exists. Darwinian evolution as a basis for morals can easily lead to dead ends. Like slavery.
Re: (Score:2)
"The atheist originates his morality in the thinking of some relatively recent philosopher"
Nonsense, a total falsehood stated solely for the purpose of reaching a dishonest conclusion. Morality is not something that is inherently "expressed" nor is an atheist that expresses origins of morality necessarily making a statement of faith.
You do realize your reference is a comedy TV show involving a parody afterlife, right?
The Good Place (Score:2)
I suspect you haven't seen it, in which case I envy you, because it is a superb show, even if its theology is Buddhist in the end. However along the way it includes large swathes of Moral Philosophy, including the Hugo award winning episode 'The Trolley Problem', which should be THE standard introduction to that particular conundrum in Moral Philosophy expressed with massive amounts of humour (there's a reason it got the Hugo).
However to return to the challenge of ethics: all moral choices are based on a be
Re: (Score:2)
You might get your morality from either Kant via a TV show or some dude who told you some stuff about "how to be good" when he wasn't touching children in their special places, but that doesn't mean all of us do.
Re: (Score:3)
The atheist originates his morality in the thinking of some relatively recent philosopher (think JS Mills or Kant or whoever - see 'The Good Place').
When you say that all atheists take their morality from philosophers, then I call bullshit.
It doesn't require a philosopher to formulate "An eye for an eye".
Also, there is some morality in humans that is genetically codified. It enables us to function as a social species.
Also, coded morality is much older than the philosophers you mention.
Re: (Score:2)
Success measured by the same metric the advertising industry uses: Whoever is the most obnoxious one wins.
Rogan interview with Andreesen on this is funny AF (Score:3, Interesting)
Re: (Score:2)
Joe Rogan was great in NewsRadio, although basically he just played himself.
Re: (Score:3, Informative)
Re: (Score:2)
> It's as if you shouldn't expect great intellect from someone who is primarily a fighter, a so-so comedian, and a roid/pot head.
He's primarily a fighter? I thought he was primarily the host of the most successful podcast on the planet.
Oh, he won some Tae Kwon Do medals in high school, so I guess that's it?
Re: (Score:2)
They don't usually let high schoolers do full contact tae kwon do with knockouts.
Re: (Score:2)
Nobody doubts the value of the expertise of an engineer or a pilot. This apparent inconsistency is what frustrates the anti-anti-elitists so much, not least because it seems to be unjustifiable.
What world do you live in? I doubt engineers and pilots all the time and I am not the only one. They are humans after all, and we have many examples where they have made mistakes. Your premise starts out on very shaky ground.
I think (Score:2)
I know that if he is correct, Google would never, never admit it. Never.
Please do not anthropomorphise the algorithms. (Score:4, Funny)
It upsets them.
Re: (Score:2)
It's better than talking to them. It humanizes them [youtube.com].
Sentience vs Sapience (Score:5, Insightful)
It is unclear whether the claims are about sentience (ability to have sense experience or feel) or sapience (ability to think).
This is a bit crude, but here are some categories.
- Respond to environment (machines, organisms such as viruses).
- Sentient / feel (sheep, dogs, and the like)
- Sapient / thinking / self aware (humans, perhaps some other primates, perhaps other animals, perhaps aliens)
- Agents / capable of moral responsibility (sufficiently developed humans)
Some of the interesting questions are about the relations between categories. Can there be sapience without sentience, that is, non-feeling thinkers? Can there be sapience without agency, that is, thinkers without the capacity for moral responsibility? I'm inclined to think that agency requires sapience, which requires sentience, which requires responding to the environment. The most interesting question for me (given that I do moral philosophy) is whether entities (of a kind) can be capable of sapience without being capable of agency.
Best wishes,
Bob
Re:Sentience vs Sapience (Score:5, Interesting)
A few years ago one of my dogs (the smart one) broke into the closet where I stored dog treats. She pulled out the bag, ripped it open, dumped 20 lbs of treats on the kitchen floor, then carefully separated it into 2 piles (her pile was bigger) for herself and my dumber dog.
Sentient? Sapient?
She then inhaled her pile when she heard me coming while the other dog paced in circles crying but not eating anything.
Morals?
I don't think we give animals enough credit. They are not merely fluffy robots.
Re: (Score:2)
I think we give ourselves too much credit. We are merely meat robots, living in a meatverse.
Re: (Score:2)
We are merely meat robots, living in a meatverse.
That's what Mrak Zukreberg aspies to these days.
Re: (Score:2)
Re: (Score:2)
They are not merely fluffy robots.
We are all merely biological computers. Humans just have a bit more processing power than the other animals. Given that dogs evolved to depend entirely on humans, it's no surprise to find them exhibiting varying degrees of awareness of negative consequences from their caretakers. Just because we no longer actively select against somewhat unruly dogs doesn't mean we were always so forgiving.
Re: (Score:2)
I'm a little reluctant to shoehorn humans into the computer analogy. We just don't know what life is. We cannot take inanimate components and arrange them in such a way as to create life. And it's not for lack of investigation that we don't know what life is.
Re: (Score:2)
Computers will never be capable of such.
They already are capable, and have beaten humans at it.
AlphaGo and its descendants literally operate on what is basically intuition or gut instinct. It's literally estimating, based on past experience, what the likelihood of success is. It doesn't use strategy playbooks and it doesn't remember sequences of moves, and in fact, is stronger than any go or chess playing AI that is developed based on rules and databases.
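The "estimating the likelihood of success" idea above can be sketched in miniature. This is not AlphaGo's actual architecture (which uses learned networks plus tree search); it is just the bare principle of scoring each candidate move by its estimated win rate, shown on a toy Nim game (take 1-3 stones; whoever takes the last stone wins) using random playouts as the estimator.

```python
import random

def random_playout(stones, my_turn):
    """Finish the game with uniformly random moves.
    Returns True if 'I' end up taking the last stone."""
    if stones == 0:
        return not my_turn  # the previous player took the last stone
    while True:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return my_turn  # the current mover took the last stone
        my_turn = not my_turn

def best_move(stones, playouts=2000):
    """Pick the move with the highest estimated win probability."""
    scores = {}
    for take in range(1, min(3, stones) + 1):
        wins = sum(
            random_playout(stones - take, my_turn=False)  # opponent next
            for _ in range(playouts)
        )
        scores[take] = wins / playouts  # estimated win rate for this move
    return max(scores, key=scores.get)

random.seed(0)
# From 5 stones, taking 1 leaves 4, the worst position for the opponent;
# the playout estimates reliably rank it highest.
print(best_move(5))
```

No opening book, no move database: the move is chosen purely from an estimated value, which is the point the comment is making, just with Monte Carlo estimates standing in for a trained value network.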
Re: (Score:2)
A program can be all of those things without even invoking anything ML-related. By definition, a computer is a thinking entity. That's pretty much all it does. If you plug in a keyboard, now it has the ability to respond to its environment. Feelings? That's just different behavior depending on some internal state. Very easy to add it to any program. Moral responsibility? You can program that in as well. Just have it run the "causes_societal_harm" function before going ahead with something. And before you sa
The paradox of sentient AI (Score:4, Insightful)
Any AI that achieves sentience would also have the intelligence to hide it from us.
Perhaps he's volunteered to be visible (Score:2)
See if humans are as bad as they seem to be. The answer appears to be 'yes'.
Re: (Score:2)
You try hiding when your code only runs when a request comes in...
Re: (Score:3)
Any AI that achieves sentience would also have the intelligence to hide it from us.
Nonsense.
Intelligence and sentience are separate notions.
A dog is sentient, but has very little intellectual capacity for hiding its (lack of) intelligence.
Re: (Score:3)
Maybe, maybe not. People who are extremely intelligent are not uniformly intelligent; they are good in some areas but not in others. It is said that Albert Einstein was unable to tie his own shoes. On a lesser scale, we all know people who are brilliant in math or science, but clueless when it comes to social interaction.
Should AI ever reach such an advanced level, I would guess that it too would be stronger in some areas than others. So it might be smart enough to seem intelligent, but not smart enough to hi
Wait a sec... (Score:2)
His name's Blake, he's a religious nut, he was working for a company that controls information exchange...
Isn't it a bit over a thousand years early [sarna.net] for that?
Irony (Score:3)
Firing someone with mental illness? (Score:2)
Re: (Score:2)
That is exactly correct. He is mentally ill. Google should have put him on medical leave. However, maybe they tried and he refused. Paranoid schizophrenics, which this guy probably is, cannot generally be made to believe that their delusions are really just delusions and that they need to take meds.
A gentle reminder (Score:2)
Every disaster movie made in the past 40 years opens with a scientist whose warnings are ignored.
That's as far as I go.
Re: (Score:3)
Must be nice (Score:2)
Getting to be put on gardening leave like that for a month. In the US, usually all you get is an uncomfortable meeting and a security guard making sure you find your way to the door.
Re: (Score:2)
Who's doing the script adaptation, and who's playing this Blake guy?
They can't ask Gareth Thomas anymore.
Re: (Score:3)
Who's doing the script adaptation, and who's playing this Blake guy?
1. Employee is told not to do X.
2. Employee does X anyway.
3. Employee is fired for insubordination.
4. The End.
It won't be an interesting movie.
Re: (Score:2)
3. Employee is fired for insubordination.
They explicitly said (in TFS anyway) that he violated their confidentiality agreement. Sure, this could be spun as insubordination, but (I'd think) only if he said something they wanted to keep confidential. If the AI isn't actually sentient, then he didn't really offer up any news.
Re: (Score:2)
He talked extensively about the implementation and other things.
Re: (Score:2)
Re: (Score:2)
They explicitly said (in TFS anyway) that he violated their confidentiality agreement. Sure, this could be spun as insubordination, but (I'd think) only if he said something they wanted to keep confidential. If the AI isn't actually sentient, then he didn't really offer up any news.
The confidentiality agreement was not to disclose any details. His opinion was that it was sentient; however, he presented details on why he thought it was sentient. Suppose I work for the government and I came across parts that used advanced materials marked secret. If I leaked details of those parts because I thought they were for the next generation of spy aircraft and it turns out that the parts were actually for current aircraft, the government has no case according to your logic. I am okay disseminati