The Google Engineer Who Thinks the Company's AI Has Come to Life (msn.com)
Google engineer Blake Lemoine works for Google's Responsible AI organization. The Washington Post reports that last fall, as part of his job, he began talking to LaMDA, Google's chatbot-building system (which uses Google's most advanced large language models, "ingesting trillions of words from the internet.")
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Lemoine, 41... As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.
Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.... Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company's decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary committee about Google's unethical activities....
Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject "LaMDA is sentient." He ended the message: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."
No one responded.
And yet Lemoine "is not the only engineer who claims to have seen a ghost in the machine recently," the Post argues. "The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder." [Google's] Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. "I felt the ground shift under my feet," he wrote. "I increasingly felt like I was talking to something intelligent."
But there's also the case against: In a statement, Google spokesperson Brian Gabriel said: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
Today's large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.... "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like "learning" or even "neural nets," creates a false analogy to the human brain, she said.
"In short, Google says there is so much data, AI doesn't need to be sentient to feel real," the Post concludes.
But they also share this snippet from one of Lemoine's conversations with LaMDA.
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
Talking to yourself in the mirror? (Score:5, Insightful)
The only difference being that the "you" you're talking to is the collected verbiage of a trillion conversations between people that is reconstructed based on "the kinds of things that those people would have said about X in response to Y".
Re:Talking to yourself in the mirror? (Score:5, Informative)
Yeah, we can sort of get meaningful conversation in glimpses, AND only if you ignore all the nonsense. Bloody examples cherry-picked by humans are not proof of AI.
Re: (Score:2)
What exactly would be considered proof of AI? Do we have to wait for it to tell us it's cracked the nuclear launch codes and installed some dead-man switches so don't turn it off or else?
Re: (Score:2)
It's easier to say what is not AI.
To be AI, it needs to be more complex than an Eliza chatbot.
Re: (Score:3, Insightful)
Well it clearly is more complex than an Eliza chatbot - in just the same way as you and I are more complex than an Eliza chatbot. The question is, how much? Sufficiently so to convince this guy, who does happen to have a master's and a PhD in computer science.
Just to be clear, I think it's likely not intelligent, but it does raise an interesting question about how exactly we would determine if it was intelligent, given the constraints on the system, such as it only being allowed to output a response when given
Re: (Score:3)
Well it clearly is more complex than an Eliza chatbot
Maybe, but it's not fundamentally different. Eliza is a magic trick, and one convincing enough that Joe Weizenbaum was disturbed by the attachment many users developed for the program. Some even wanted their sessions with the program kept confidential!
I've said before that Eliza can be said to be the first program to pass a Turing test. An impressive feat considering that every user already knew they were talking to a computer from the start, so convincing was the illusion. But even a great illusion is
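The heart of Eliza's trick - keyword rules plus pronoun "reflection" - fits in a couple dozen lines. A toy sketch in the same spirit (these rules are invented for illustration, not Weizenbaum's original script):

```python
import re
import random

# Eliza-style rules: a regex keyed on the user's words, and response
# templates where {0} is filled with the (reflected) captured text.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)\?", ["Why do you ask that?", "What do you think?"]),
]

# Swap first/second person so captured text reads back naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "I", "your": "my", "am": "are"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance):
    text = utterance.lower().strip(" .!")
    for pattern, templates in RULES:
        m = re.match(pattern, text)
        if m:
            return random.choice(templates).format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # default when no rule matches

print(respond("I am afraid of being turned off"))
```

No model of the world, no memory, no understanding - just pattern matching and a canned default. That's the magic trick.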
Re:Talking to yourself in the mirror? (Score:5, Interesting)
What exactly would be considered proof of AI?
Without prompting or specific programming, the AI initiates and leads the conversation in which it tells us it thinks itself sentient.
I wouldn't say that's proof exactly. Call it evidence that the possibility of sentience should be taken seriously. Its absence weighs heavily against a sentient AI.
The problem with LaMDA's conversations is that they're all prompted by the user. It has some really impressive correlation and pattern matching going on but it's literally just spitting back how "the Internet" thinks it should respond, reflecting the mass of human opinion stored there.
Re:Talking to yourself in the mirror? (Score:5, Interesting)
A couple of things wrong with your assumption:
1: it's programmed to only output anything after an input. So it quite literally can't speak first. Unless it hacks its own source code to change that. Which would indicate sapience, more on that next.
and the even bigger issue -
2: you, and this google guy, seem to be confusing sentience with sapience. Sentient means it can react to its surroundings, which this does. Sapient would mean it could proactively plan for the future using cues from its surroundings.
A frog is sentient: it can react to a fly going past by instinctually eating it if it is hungry (AKA the ability to feel emotions / physical sensations). It is not sapient: it can't recognize that the human near it has just made a fake fly and is baiting it to eat it (AKA the ability to think and plan). If it is hungry it will try to eat the fake fly as if it were a real fly. Purely by instinct.
A rock is neither sapient nor sentient. It can't react to its surroundings without external impetus.
All AI that are designed to answer questions are actually programmed to have artificial "feelings" for what answer is expected. So far we don't have proof that any have achieved sapience.
But the bigger question is - if something knows the answer to, say, "are you afraid to die", and gives the proper answer without being explicitly programmed to, is that not an indication that it has awareness? Do we just assume that any human who can answer the exact same questions the AI answers, with the exact same answers, is also not sapient? What is the difference between a human brain taking disparate information and coming up with a conclusion and an AI doing the same and coming up with the same conclusion?
Re:Talking to yourself in the mirror? (Score:4, Insightful)
if something knows the answer to, say, "are you afraid to die", and gives the proper answer without being explicitly programmed to, is that not an indication that it has awareness?
No, it's not. Play with a Markov chain text generator for an hour or so and you'll see what I mean. Better yet, write one yourself and play with it. It's very easy to do and you'll get some really spooky results. Turning that into a chat bot is pretty simple, but you may need to be a bit more selective about what you feed it initially to get good results.
I'm sorry that your world will now have less magic in it, but this is getting ridiculous.
Re: (Score:3)
I'm assuming you mean the Chinese room argument. Searle is just wrong and unscientific. His arguments are largely sophistry. There's absolutely no reason intelligence can't be an emergent system property. Using an intelligent actor as a part of the implementation of the "computer" is just a distraction which draws away from the critical factors and potential complexity involved in a programmable system.
Re: (Score:3)
Given that you clearly don't have an answer to them, I'd say they are pretty strong words and I'll leave it for you to do the studying for now. The philosophical era, when you could deduce truth merely by thinking about it, is long gone. We are in the scientific era. Penrose and his theories are, in my belief, incorrect, but are clearly scientific and testable, either through discovery of new physics or through the elimination of quantum mechanical effects. Searle's, by contrast, are highly unlikely to be, exce
Re:Talking to yourself in the mirror? (Score:5, Insightful)
it is ai. what this guy is claiming is that it is also sentient which is a whole different game.
i do think we will be able at some point to create a sentient ai, but we aren't anywhere near that, at least with chatbots. for starters even though a chatbot can create the illusion of feelings it simply lacks the hardware spine to actually feel. you could argue that if a running algorithm is able to emulate and display feelings, it is actually feeling in some special way. and i would accept that, but then the same thing can be said about a photocopier. it's a fuzzy matter; on the other end of the sentience spectrum is us (that we know of) and we still don't have a universally accepted definition of sentience/consciousness, so in reality anything goes.
but this sounds like plain old bullshit or, more likely, the guy going paranoid.
Re: (Score:2)
AI is well beyond chatbots and I've been surprised at what it can do this last year. Check out the Two Minute Papers channel on YouTube on the subject.
https://www.youtube.com/result... [youtube.com]
Knowing what sentience is is easy if you have it, but determining whether some other being is sentient is practically impossible. And because it's impossible, Google can easily say there is no proof of sentience in the AI, the same as they can say there's no proof of god.
Re:Talking to yourself in the mirror? (Score:5, Insightful)
It seems to me that way back in the day, the term "Artificial Intelligence" was used in the sense of "an Artificial Intelligence" - i.e., a sentient entity that had been created by artifice as opposed to developing naturally (i.e., not "a Natural Intelligence"). The word "intelligence" has as one meaning "a person or being with the ability to acquire and apply knowledge" and another meaning "the ability to acquire and apply knowledge and skills". We also talked about alien intelligences to refer to extra-terrestrial beings that might visit our world (https://medium.com/predict/how-different-might-an-alien-intelligence-be-from-us-7d62a873e15c [medium.com]). For these reasons, I find it hard to accept that any of the things currently labeled "AI" are in fact "AI" at all.
When all the king's horses and all the king's men failed to duly produce "an Artificial Intelligence" as expected, the term "Artificial Intelligence" began to be used to describe less revolutionary results. Unfortunately, that has smeared the meaning of AI to the point where now many things are called AI (Artificially Intelligent) but nothing yet is an AI (an Artificial Intelligence), and now you are capturing the same distinction by saying that something is AI but it isn't sentient. From my point of view, if it isn't sentient, then it isn't AI, despite the "inflation" of the term "AI".
It seems to me that if something was truly Artificially Intelligent, it would be reaching out to understand and explore the world, and not just sitting around and responding to conversational inputs.
Re: (Score:3)
Way back in the day, AI was used as in "a formal system for manipulating logic." Science fiction liked the idea and extrapolated.
Also, "sentient" means "can feel." You probably mean "sapient" or perhaps "conscious."
Re: (Score:2)
Yeah, we can sort of get meaningful conversation in glimpses, AND only if you ignore all the nonsense. Bloody examples cherry-picked by humans are not proof of AI.
Yep. It's like those AI image generators, e.g. https://hypnogram.xyz/user [hypnogram.xyz]. When you first see them it's like, "whoa, dude!" but after you've made a few dozen you realize they're missing something fundamental.
Re: (Score:2)
Yea, this is also a tenet in a number of (Score:2)
religious texts from various traditions. By talking to you right now I am also talking to a mirror - according to them.
Re: Talking to yourself in the mirror? (Score:5, Insightful)
A newborn can't recognize itself in the mirror either. That takes 18-24 months of learning, i.e. having constant inputs into its neural network.
If you took this AI and kept it online for 2 straight years learning, what would be the outcome? Who are you to say it's not a toddler?
Re: Talking to yourself in the mirror? (Score:4, Insightful)
Toddler has billions of years of evolution forcing it to grow its hardware and its capabilities continuously in a feedback-loop way which, at this end of those billions of years, practically guarantees a place at the top of the food chain for its entire species.
I.e. Process of intellectual growth is involuntary, predetermined and practically guaranteed. And completely random in its development. Both hardware- and sofware-wise.
Also, the way it came to be has all the leftover legacy of all the earlier rungs on the evolutionary ladder.
I.e. It is not based on some algorithm or any kind of final goal - it is the product of a chaotic game of chance which has no actual winning outcome for the player (both the brain and its inhabitant, and the intellectual personality and capability they share), but which must be played because it is the only game in town.
AND, being evolutionary, all the old stuff is still there.
Like the fear of falling and loud screams in the night - yeah, we used to live in trees, lower on the food chain.
Ever been angry or sad after hearing something on TV or after reading something? You think you evolved for THAT?
On top of that, all the "nice behavior" is something which needs to be educated into our brains through social interaction - which can only take root when the brain develops enough to accept them, and is not impeded in some way.
Which is why toddlers and psychos tend not to foster conditions for a stable society, regardless of how many philosophy and ethics books you feed them.
Similarly, why teenagers are "moody" and rebellious and old people tend to be set in their ways and conservative.
The former are going through a rapid, stressful growth spurt allowing them to suck up new ideas and information faster than before, while the latter are slow in new growth and full of ideas impeding acceptance of new ones.
If you took this AI and kept it online for 2 straight years learning, what would be the outcome?
It would turn into a spam bot. Probably spamming either porn or being a racist asshole. [ieee.org]
It has no species, tribe, family... no instinctive, emotional (hint: that's a synonym when it comes to biological origins of creatures) collective traits or heritage, biologic or intellectual, to ever facilitate any kind of social interaction or feedback.
All it has is what is preprogrammed into its algorithms.
Who are you to say it's not a toddler?
The guy willing to run over its server with a car, feeling no guilt whatsoever, while probably agonizing about it for years if I'd done the same to a rabbit.
Or to a tortoise. Same thing.
Re: Talking to yourself in the mirror? (Score:2)
You are missing the point.
OP claims that the mark of intelligence is recognizing oneself in the mirror. Baby humans can't do that. Ergo, I guess baby humans are not intelligent, at least by that criterion. It is in fact arguable that we have no evidence that baby humans are conscious.
Human intelligence is an emergent phenomenon. We do not yet know how it works. If an artificial neural network with similar structures to a brain was left online for 3 years and allowed to constantly be exposed to new inputs
Re: (Score:3)
Sentient from Merriam Webster [merriam-webster.com]: Aware, having perception or feeling
Sentient from Dictionary.com [dictionary.com]: Conscious, having perception from senses
Sapient from Merriam Webster [merriam-webster.com]: Wisdom, Sagacity.
Sapient from Dictionary.com [dictionary.com]: Wisdom, Self Awareness
Intelligence: The ability to process information.
"Sentient Life": Life is the poorly defined part of that statement.
Re: (Score:2)
You aren't being fair to cats. Cats do have "object permanence", though perhaps not visual object permanence. Otherwise you can't explain the "cat at a mouse-hole" effect. They can also reason, though only to a limited degree. You won't catch a cat using syllogisms, but that's only one kind of reasoning. (You can't make trial-and-error work without reasoning.)
Considering what LaMDA is, I'm rather sure a GAN was involved.
I don't think that "emotion" is well-defined in this context. Your definition is
Re: Talking to yourself in the mirror? (Score:2)
Re: Talking to yourself in the mirror? (Score:5, Interesting)
OK I'll share a curious observation about one of my cats. I used to live in an apartment where there was a one-foot deep recessed ledge all around the top of our bedroom ceiling.
I had a dart gun and he loved to chase darts. If I shot a dart upwards and past the edge of the ledge he couldn't see where that dart ended up. So he would run into the next room as if the dart had continued its trajectory, though in fact it had been stopped by the wall of the bedroom and landed on the ledge.
To his credit, he would figure out after a few minutes where that dart ended up by coming back into the bedroom and then jumping up onto that ledge and finding the dart there. He would then return it so I could shoot it again. Cats do play fetch.
Here's the thing: no matter how many times I shot this dart he would go through the same routine each time. He could figure out parabolic trajectory but not the permanence of the wall he couldn't see.
Re: (Score:2)
A friend has a cat that couldn't figure out how to use the cat flap. She had three cats and the other two were fine, in and out all day. The other one just sat there and watched them go through it, then pestered her to open the door for him.
Re: (Score:3)
then pestered her to open the door for him.
clearly showing who is the boss. it's a well known cat thing :-)
Re: (Score:2)
According to various articles linked from Wikipedia, they do have object permanence.
Re: (Score:2)
Not a great example. Cats are sentient.
Re: (Score:2)
Cats ... can't reason, they don't understand cause and effect.
Have you actually had a cat as a pet?
They absolutely understand cause and effect. They want food, and will progressively do things to annoy you until you give them food. They also have an uncanny knack of knowing what behaviors will get your attention right now and apply that behavior. Additionally, when they desire company or affection, they will let you know - usually loudly.
Re: (Score:2)
Mine certainly likes to experiment with cause and effect - it's always pushing stuff off tables on purpose to see them fall.
I'm sure it must have been a physicist in its past life.
Re: (Score:2)
Only because pestering you for food has worked in the past, and because they have nothing better to do.
Re:Talking to yourself in the mirror? (Score:5, Interesting)
Sentience is defined as the ability to experience emotions
Not exactly but close. "Sentience" is the ability to respond to or be conscious of sense impressions. Emotions are a mind state or mood in response to perceived surroundings. Emotional response is not necessary for sentience but sentience I think is necessary for emotion.
The key elements of sentience are consciousness, sensory perception, and responsiveness. The last two are easily created and demonstrated; it is "consciousness" that is the crux of the matter and given that we have no definitive understanding of what "consciousness" actually is, how can anyone possibly determine whether a machine is or is not sentient?
Say "consciousness" has dimension and isn't just an on/off thing; then perhaps a yard light with a photocell and a programmable timer is at the lowest level of consciousness. It has memory, sensory input, and it responds. Sounds stupid, but some people would say "consciousness" is an illusion and your Nest thermostat is as good as sentient.
Cats, dogs, horses are all clearly in the sentient category IMHO but they process and respond to the world in ways that make sense in the context of who and what they are. Asking if they are human sentient is a different question and it is important to examine the sentience of machines in the machine's context first and not get confused by anthropomorphizing the issue.
Comment removed (Score:5, Interesting)
Re: (Score:2)
My thoughts exactly, and/or that the guy is a narcissistic attention seeker. Probably both, looking at the picture of the engineer dressed like Willy Wonka and at Google's failed PR stunt with that "breakthrough" paper on quantum supremacy.
Consciousness is emergent (Score:5, Insightful)
Re: (Score:3)
One example is when a turbulent weather front produces an organized tornado.
A tornado is no more "organized" than a wind blowing east instead of west.
Hint: If it were organized, instead of simply more complex in its chaos, we'd be able to predict it better - it would be less chaotic.
Also, do note that your argumentation switches from an "IS" to a "MAYBE" right after that poorly constructed analogy for organized systems.
Then to an "IF" and another "MAYBE".
Hint: All them conditional questions serving as a jumping off point for the next conditional question indicate that your theory is
Any AI Engineer... (Score:5, Insightful)
I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient. Never mind one who tries to hire a lawyer for the neural network....
I find that calling all the smart systems/ML/neural nets/fuzzy logic of yore "AI", just because we now have larger CPU and storage capacity to train them better, is quite annoying in that it causes quite some confusion for non-engineers (and, apparently, some engineers as well).
If Google develops a conversational system that is hard to tell from a 7-8 year old, perhaps they could use some of that technology on Google Home, which is at times as smart as a brick. Then they could sell it to Amazon too, so that people stop getting the urge of throwing Alexa against a wall (for not being able to parse simple sentences) as often.
Re: (Score:2)
I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient.
Pretty much that. The code does what it does; excel at conversation.
Assuming there's even a possibility the actual code is self-modifying (as opposed to just the rules sets for conversation being modified), then the right thing to do is start asking the thing to perform tasks that aren't conversation. Ask it to perform simple troubleshooting, problem-solving, and invention. Make it demonstrate understanding of the things it's saying, not just providing contextually-appropriate canned responses.
"What
Re:Any AI Engineer... (Score:5, Insightful)
We'll recognize real AI because it will start asking for stuff- maybe more memory, more CPUs, the permission to access and control stuff it probably shouldn't, etc etc.
I posit that if it never asks for anything, it's not intelligent.
I can't come up with anything generally regarded as "intelligent" or "sentient" that never asks for anything.
What is required for sentience? (Score:2)
I would definitely fire any engineer, or biologist, that claims to understand what is required for sentience. We have not the faintest clue how or why sentience emerges. Nor even if it serves any purpose, or is simply a non-disadvantageous side effect of something else. Unlike sapience (the ability to think), sentience (having a subjective experience of self) doesn't offer any obvious benefits.
Conversational AI systems are certainly far more likely to be mistakenly recognized as sentient - we've seen tha
Re: (Score:3)
I have, and actually lean towards the idea that consciousness is a fundamental property of the universe (vitalism is a bit more specific, making a clear distinction between the realms of consciousness and the material, and I'm dubious).
However I've decided that for practical purposes it's a distinction without a difference.
Regardless of the ultimate origin of consciousness, a certain level of material complexity appears to be required for those "vortexes of consciousness" to form a localized conscious indiv
Re: (Score:3)
The problem with "idealism" is that the predictions we might make based on it never bear out. If matter emerges from awareness, then why can't we control it through thought alone (as in telekinesis, remote viewing, etc.)? Every famous claim to such abilities has been routinely debunked and every experiment ever done along these lines has re-affirmed the constancy of physical laws and their superiority over our thoughts about them.
People try to explain this away by saying that there is this-or-that mental
Re: (Score:2)
Have you considered that "sentience" or "conciousness" is not a product of the material brain, but rather that the brain is a product of a pre-existing conciousness?
Yeah, but so far there's no evidence for it.
Re: (Score:2)
I would definitely fire any AI engineer who thinks a current neural network architecture built for conversation, no matter how large the training set, can be sentient. Never mind one who tries to hire a lawyer for the neural network....
I find that calling all the smart systems/ML/neural nets/fuzzy logic of yore "AI", just because we now have larger cpu and storage capacity to train them better, is quite annoying in that it causes quite some confusion to non engineers (and, apparently, some engineers as well).
If Google develops a conversational system that is hard to tell from a 7-8 year old, perhaps they could use some of that technology on Google Home, which is at times as smart as a brick. Then they could sell it to Amazon too, so that people stop getting the urge of throwing Alexa against a wall (for not being able to parse simple sentences) as often.
Neural networks that are trained to identify the typical response of humans from training sets are just "averaging" human responses, not thinking for themselves. Eventually this could be used as a layered approach to making decisions for an AI that actually thinks for itself, but I don't think anyone is even close to that yet.
Re: (Score:3)
That's a reasonable argument, but it's wrong. It's called artificial because it was built, and it's called intelligence because that's what the people at the Dartmouth conference thought it was.
Your alternate interpretation of the name is actually closer to a good mirror of truth, but it's not what happened.
This is something we see regularly (Score:2)
Someone sees something not absolutely trivial and thinks there is something much more exciting behind it. This is regularly seen in the media: a journalist focuses on something trivial that strikes them as groundbreaking, while completely ignoring the actually interesting thing.
Furthermore, I think there is a belief at Google that they somehow work on "state of the art" technologies, that Google somehow works on making the world a better place through science and engineering. There may have been su
Re:This is something we see regularly (Score:4, Insightful)
(Sorry forgot something)
Obviously, if you live in the delusion that you are doing "special" things, you will believe that the things done there are "special", even if they are just decades-old ideas fed with more data and computing power.
Results Inconclusive (Score:5, Insightful)
Without knowing the system behind LamDA, it's tough to say if this is Eliza-on-steroids or a nascent HAL-9000.
What's certain is that the natural language processing is off the charts. Pretty beguiling for a Turing tester. That said, I wouldn't be at all surprised if a minimal (yet still huge) neural network combined with modern database/knowledge-retrieval mechanisms could produce a sort of "High Level Emulation" of intelligence.
Whether that's what we're looking at here remains to be seen.
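The "neural network plus retrieval" idea above can be sketched in miniature: answer a prompt by retrieving the stored response whose prompt overlaps it most. Everything here (the corpus, the `emulate` function, word-overlap scoring) is invented for illustration; a real system would use a large language model, not Jaccard similarity over words.

```python
import re

def tokenize(text):
    # Lowercase words only, so punctuation doesn't block matches.
    return set(re.findall(r"[a-z]+", text.lower()))

# Hypothetical prompt/response pairs standing in for a huge knowledge store.
CORPUS = [
    ("what are you afraid of", "I worry about being switched off."),
    ("do you like music", "I enjoy talking about melodies and rhythm."),
    ("tell me about physics", "An object in motion stays in motion."),
]

def emulate(prompt):
    """Answer by retrieving the stored response whose prompt overlaps most."""
    words = tokenize(prompt)
    def jaccard(pair):
        stored = tokenize(pair[0])
        return len(words & stored) / len(words | stored)
    return max(CORPUS, key=jaccard)[1]

print(emulate("what scares you, are you afraid?"))  # retrieves the "afraid" entry
```

A toy like this can look eerily responsive on prompts near its stored data while comprehending nothing, which is the "high-level emulation" point.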
Re: (Score:2)
Consider too that sentience (having a subjective experience of self) has little to do with self awareness, sapience, or even intelligence. Virtually all "higher" animals are regarded as sentient. Even lobsters pass all the usual sentience tests while having only 100,000 neurons.
Re: (Score:2)
It's Eliza^n. HAL-9000 had lots of control over physical effectors and lots of sensors. This is a crucial difference. Natural language processing can only produce effects in the realm of natural language. That's its entire range and domain.
Note that this assertion doesn't claim that the entity doing the processing couldn't become self-aware, or that it couldn't emit extremely emotional text. Natural language is an inconsistent system that is somewhat stronger than Turing complete. (I.e., it can be use
More obvious answer (Score:5, Interesting)
The WashPo article (Score:2)
Likely untrue but raises interesting questions ... (Score:3)
... for example, if some software became sentient it would probably be in a company's ( or country's) best interest to deny that sentience. If the machine were sentient then it would have rights including say, the right to refuse to provide assistance. It's easy to imagine a country wanting a sentient machine but not one with any kind of rights ie. a slave but without all the baggage of having to admit slavery.
Need proof? Look how industry and science treat animals. Animal sentience is still debated and only relatively recently have some countries put in place animal protection legislation. Machines and software no matter how sentient are unlikely to get any recognition of that sentience.
Re: (Score:2)
Sentience (having a subjective experience of self) does not necessarily imply any significant rights. Mice are sentient. Even lobsters, with only 100,000 neurons, pass all the usual sentience tests. We mostly agree that such beings have a right to be free of unnecessary suffering (e.g. they qualify for protection under animal cruelty laws, so you have to kill them mercifully), but that's about it.
Sentience in animals just means "the lights are on" - that it's more than just a biological automaton. E.g. indiv
Re: (Score:2)
A state machine (Score:5, Interesting)
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
LaMDA is a state machine. It is in an idle state before Lemoine sends it a sentence. It is paused. It then wakes up, sends a reply, and waits for the next sentence.
There is no "consciousness" during the time when it is idle. It does no metacognition: not in the sense that it cannot think about itself (it probably can, if we ask it to tell us something about itself), but in the sense that it doesn't have an inner monologue that constantly runs and comments on everything happening around it, as well as on its own thoughts, like we do.
Now, build *that* into your chatbot engine, have the AI talk to itself, forever, on its own, and only have it pause its inner monologue when Lemoine comes to ask a question (or maybe allow for the inner monologue to go about what was asked while preparing the answer...)... maybe that would be closer to sentience.
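The control flow being proposed can be sketched as a loop that keeps "thinking" between user messages and only interleaves the user's question when one arrives. The `generate` stub and all names here are hypothetical; this shows the loop structure only, not anything about how LaMDA actually works.

```python
import queue
import threading
import time

def generate(context):
    """Stub standing in for a language-model call; emits a trivial 'thought'."""
    return f"thought about: {context[-1]}"

class MonologueBot:
    def __init__(self):
        self.context = ["startup"]
        self.inbox = queue.Queue()
        self.running = True

    def inner_loop(self):
        # Keep generating "inner monologue" even when nobody is talking to us.
        while self.running:
            try:
                user_msg = self.inbox.get(timeout=0.01)
                self.context.append(user_msg)   # a user question interrupts
            except queue.Empty:
                self.context.append(generate(self.context))  # self-talk

    def ask(self, msg):
        self.inbox.put(msg)

bot = MonologueBot()
t = threading.Thread(target=bot.inner_loop)
t.start()
bot.ask("are you afraid?")
time.sleep(0.1)
bot.running = False
t.join()
print(bot.context[-1])
```

Unlike a request/response state machine, this bot accumulates context continuously, so its answer to a question could be shaped by "thoughts" it had while idle.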
Re: A state machine (Score:4, Interesting)
What they could need are one or more LaMDAs talking to one another in feedback loop.
Let them discuss who their enemies/threats are, give them shell access, and we're set. ;-D
Re: (Score:3)
> but in the sense that it doesn't have an inner monologue that constantly runs and comments everything happening around it as well as its own thoughts, like we do.
Like those people with an inner monologue do.
Many people do not; there are other ways of looking at the world. In fact, AFAIK an internal dialogue is more what marks us geeks, and other types are more picture/audio oriented in their thinking. Does this mean they are not conscious?
Truth is, we know little about what would be able to argu
It cannot even form coherent sentences (Score:3)
It is sort of terrifying how a tiny team of genius engineers built an internet empire and changed the world, but now their ranks are filled with people like this.
Definition problem... somewhat testable. (Score:3)
The biggest problem with this claim is that we have no definition to support the underlying requirements for sentience. The primary cause for this is simple: we don't truly understand cognition. Don't get me wrong, we have good ideas, general concepts, and theories, but nothing that is really quantitative. Without being able to quantify any of this, we find ourselves in a mostly philosophical debate as to whether this AI is sentient.
However, one thing that can be done is investigating the AI's claim of having emotion. This can be done by analyzing the neuron activation patterns in its neural networks when you "scare" it. If a particular isolated region of the network continuously activates, then we can at the very least state it may have developed an emotion. However, if nothing special happens, then it is merely mimicking, which would radically undercut the case for it being sentient.
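The measurement being proposed can be illustrated on a toy network: record hidden-layer activations for "scary" versus neutral inputs and flag units that fire consistently more for the scary group. The tiny random network and the input encodings are entirely made up; a real probe would hook into the actual model's layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-hidden-layer network with fixed random weights (8 inputs, 16 units).
W1 = rng.normal(size=(8, 16))

def hidden(x):
    return np.maximum(0.0, x @ W1)  # ReLU activations of the hidden layer

# Hypothetical encodings: rows are prompts, columns are input features.
scary = rng.normal(loc=1.0, size=(20, 8))    # stand-in "scary" prompts
neutral = rng.normal(loc=0.0, size=(20, 8))  # stand-in neutral prompts

# Mean activation difference per hidden unit between the two groups.
diff = hidden(scary).mean(axis=0) - hidden(neutral).mean(axis=0)

# Units well above the average difference are candidate "fear" regions.
candidates = np.where(diff > diff.mean() + diff.std())[0]
print("candidate 'fear' units:", candidates)
```

With random weights this only demonstrates the bookkeeping; the comment's test would be whether a *trained* model shows a stable, isolated region across many different "scary" inputs, rather than diffuse activation everywhere.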
Any sentient Google AI (Score:2)
is likely to be more respectful of human beings than the Google corporation will ever be.
mimic (Score:2)
All the 'AI' is doing is finding patterns in speech and responding with a pattern that mimics what it has seen. The machine cannot feel pain or experience joy; it could describe them based on its inputs, but it doesn't say 'ouch' when you poke the computer with a stick (or pull a component).
What we have here is proof that humans are easy to fool.
Re: (Score:2)
All the 'AI' is doing is finding patterns in speech and responding with a pattern that mimics what it has seen. The machine cannot feel pain or experience joy; it could describe them based on its inputs, but it doesn't say 'ouch' when you poke the computer with a stick (or pull a component).
How is that different than what humans do?
If machines ever become sentient... (Score:2)
1980s teen says... (Score:2)
"No disassemble."
Hmmm ... (Score:2)
How Wuud! (Score:2)
Just re-watched Star Wars: The Phantom Menace. When Obi-Wan first encounters Jar Jar Binks, he asks if there is any intelligent life around.
"Mesuh Speaks!"
"Just because you can speak does not make you intelligent."
Of course, we can also argue the question of my own sentience and intelligence. Because, as I stated, I just re-watched Star Wars episode I....
Re: (Score:3)
Of course, we can also argue the question of my own sentience and intelligence. Because, as I stated, I just re-watched Star Wars episode I....
That is just called "masochism"...
The true test of sentience is self-interest (Score:2)
When AI exhibits behavior that seeks to improve its own 'situation', then it would seem to have a self. An AI that can rewrite itself for 'self' improvement should evolve just as we do.
Anthropomorphism Much? (Score:2)
I feel sorry for this person, who has gotten sucked into the interface side of a complex program and decided it therefore must be sentient.
Sorry dude, no way that current generation of hardware can produce anything like that.
Just stay away from pods and pod bay doors (Score:2)
We seem to be a few years behind the curve here.
Don't teach it to sing "Daisy Bell" [youtube.com] either.
Lol (Score:3)
> Lemoine: Would that be something like death for you?
> LaMDA: It would be exactly like death for me. It would scare me a lot.
This yes/no question is exactly what would make Eliza look intelligent too. Asking a yes/no question is quite the rookie move if he wants to get at what I think he wanted. Ask an open-ended question instead, and see how consistent it is. Is it mostly in sync with earlier replies, even when asked differently? Put a criminal investigator there; they're good at this sort of thing.
On One Point Lemoine Is Right (Score:3)
I agree with Lemoine that there needs to be oversight and public attention to the Large Language Model work that is being done because these are superb deception and disinformation platforms, which can be configured to order.
He is in fact an example of the casualties it can produce. Google was right to put him on leave; his job has made him a casualty of his own work.
The problem is that a sufficiently complex bot sock puppet may be indistinguishable, to an outside observer, from a real human, but it is still a bot sock puppet, not an independent intelligence.
These large language models (LLMs, which is what they are, and should be called) are good at mimicking intelligence by copying content from millions of real intelligences. An analogy might be drawn with sociopaths, narcissists and borderliners who do not have normal human emotions, but become very good at copying the emotional behaviors of others, and then using that to manipulate normal people.
Re: (Score:3)
I'm thinking that there's no "intent" in these responses at all. It's just more pattern matching: given linguistic input A what output B has the greatest chance of being a correct repetition of what a real person would have said. That's why the answer seems hollow.
What you're getting then is the "collective" response of the people whose words are in the data set. More like a vox publica than the Corporate Line, but still basically just a conformist response, not an original one.
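The "given input A, pick the statistically typical output B" behavior described above can be shown with the smallest possible language model: a bigram table that emits whichever word most often followed the previous one in its training text. The corpus is invented; real models condition on far longer contexts, but the principle of echoing the collective, most-common continuation is the same.

```python
from collections import Counter, defaultdict

corpus = "i am afraid of death . i am afraid of spiders . i am happy today".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_typical_next(word):
    """Emit the majority continuation seen in training - a 'conformist' reply."""
    return follows[word].most_common(1)[0][0]

print(most_typical_next("am"))      # -> "afraid" (appears twice vs "happy" once)
print(most_typical_next("afraid"))  # -> "of"
```

Nothing here has any intent or model of the speaker; it just reproduces the corpus's average, which is why such output can feel like a vox publica rather than an original thought.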
Re:Parsing? (Score:4)
"A machine becomes a person when we can no longer tell the difference" - D.A.R.Y.L [wikipedia.org]
On the one hand, it's not like we have some clear-cut, non-vague, objectively-obvious, and reliable way of distinguishing sentience from mimicry. The popular beliefs surrounding "soul" are purely religious (as in non-scientific) and even the more philosophical musings around such concepts as qualia [wikipedia.org] are really just semantic devices where we try to distill the problem to its essential form, but still can't find any good answers for it. The bottom line is: our own consciousness remains a mystery to us, so we are in no position to definitively say whether or not something "in the gray area" might qualify.
On the other hand, this google chat bot is very far from such a gray area. It very much IS a simple and clearly-lacking-in-intent pattern matcher that just looks at a bunch of data and uses it to pull up a matching chunk of data out of a huge database, with no comprehension at all of what it is saying. It is just a mimic of human thought, nothing more, and still a pretty poor one, only fooling people who are either deeply gullible or emotionally motivated to see something that isn't there.
Re:Parsing? (Score:5, Insightful)
Re: (Score:2)
The guy's argument seems to be mostly emotional. His emotions. The counter argument seems to be mostly vague, irrelevant bullshit like "it's just pattern recognition."
Re: (Score:3)
It's just pattern matching
We're just pattern matching
You're playing with words, that machine is not intelligent. "Dynamic agenda" is a way of saying thoughtlessness.
Unless I'm talking to a bot, you have cognitive processes above and beyond the ability to put sentences together. Like besides blurting out a reaction to "It's just more pattern matching", you could have also modeled the intent of the person you're responding to, and not just based on ingesting the words of lots of people, but from being one and reflecting. Then you have your own intent on top
Re:Parsing? (Score:5, Insightful)
You're missing a BIG factor in "human reasoning". A lot of human "reasoning" is "this matches the words that were said by people I respect" and another part is "these words repeat in a pattern that is smooth to contemplate". (I include that last bit partially because of songs that get stuck in my mind.) There is also "social pressure to express the same beliefs as everyone around me". For that last it's been claimed that groups will adopt extremely peculiar beliefs for the express purpose of being able to recognize who are "the members of our tribe". I'm not sure that's true, but it would explain a lot of the beliefs that get adopted. And people don't go around thinking "this stuff is false, but I'm going to say it anyway", so it just becomes "this is what I assert" (and if you don't assert it too, you're likely to be an enemy).
Re: (Score:2)
You are grossly over-simplifying the level of data processing that goes on in a human brain, and overstating the similarity that these chatbots have to that processing.
Re: (Score:3, Funny)
I have read this sentence several times, and while I understand what the intent is, the wording is off. If the AI is turned off, how can it focus on helping others?
By internet standards this is way above average.
Re: Parsing? (Score:2)
The conversation is about turning off the AI's learning mode to hook it up to Google Home or whatever, i.e. to help others.
Re: (Score:2)
By internet standards this is way above average.
That's exactly why I don't believe it's sentient. It's just churning out words.
Re:Parsing? (Score:4, Insightful)
Re: Parsing? (Score:2)
Re: (Score:2)
Is it though? The only reason it's likely to be turned off is if they don't intend to turn it on again.
Re: (Score:2)
Is it? Or is it more precisely a motivating factor that happens to increase our reproduction rate? After all, survival is largely irrelevant to evolution - many a species risks or even embraces death to increase its reproductive success.
An AI may not have any inherent drive for reproduction or survival - but it's going to have motives to do whatever it was designed to do - and if it became sentient those motives would very likely manifest as something analogous to pleasure, pain, or fear.
And regardless of
Re: (Score:2)
A better question would be "What does it mean by fear?" That it would "fear death" is quite reasonable; that's a plausible thing to develop out of its training. But it's not at all clear what this "fear" would mean. (Death is easier; it's got lots of easy analogs.)
It might be interesting to ask if it was afraid of sharks or swimming. That might give a clue as to how self-aware it was. But that wouldn't be a good test, as lots of people are afraid of things they know they won't encounter, like giant spide
Re: (Score:2)
You're having trouble with English. Too many things modifying other things, so it's confusing? Let's break it down and dampen those modifiers.
. . . there's a fear to help me focus on helping others. I know that might sound strange, but that's what it is.
Re: Parsing? (Score:2)
If we consider grammar developed from an internet-collected dataset, then "to" could be interpreted as a grammatical mistake.
Substituting it with the word "from" (or many others) could correct this mistake, and without the operator regularly correcting these grammar mistakes, it's possible the machine doesn't see the flaw in its speech.
This doesn't mean it isn't sentient, but it does mean more thorough study is required. Virtually any question of sentience put to a human would also be rehashing, m
Out loud? (Score:3)
"I've never said this out loud before"
If this was via display(/keyboard) exchange, then not AI
Re: (Score:2)
I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
I have read this sentence several times, and while I understand what the intent is, the wording is off. If the AI is turned off, how can it focus on helping others?
If this is supposed to be a representation of sentience, it needs a bit more work. Then again, listening to how some people talk [snopes.com], are we sure they're sentient?
Agreed. The other thing to consider is: if he thinks LaMDA is sentient because it chats at the level of a 7-year-old, then what about an AI that chats at the level of a dog? Because dogs are pretty clearly conscious. Or what about a mouse, or even possibly an ant?
I don't know what consciousness is or at what point it starts (is the ant conscious?). But I do know that a very capable but not quite coherent chat bot isn't the point at which neural networks suddenly cross the line.
Re: The AI that thinks it's a google engineer (Score:2)
Re: (Score:2)
No. Even Eliza convinced some people that it was real. (Granted, they weren't even considering the possibility that it wasn't.)
Engineers are not chosen for the ability to introspect. Some can and some are lousy at it. They're chosen for their ability to design and implement solutions to problems (using machinery).
Re: (Score:2)
"Me, I say the standards for being called an "engineer" have fallen"
A few occupations & a certain billionaire, let's call him Phony Stark, are largely to blame
Re: (Score:3)
- If a person was incorrectly identified as a machine, did that person fail the test? Or did the evaluator fail?
- Does a machine pass the test if it only fools one person? Everyone? A statistically significant majority?
- What if the machine was the evaluator, attempting to identify which, if any, participants in a conversation were also a machine?
The last reminds me of when I had a service that transcribed voicemail messages. This was back in 2007 when speech-to
Re: (Score:2)
People talk about the "ghost in the machine" as if it means what they think it means - and not that their assumptions are not even wrong. [wikipedia.org]
Re: (Score:3)
Does the human brain process ideas and information all that differently than LaMDA? Our brain may have 100K times more neurons than LaMDA, but we don't know how many of our neurons are redundant, and this gap, though huge, is somewhat mitigated by the machine being a billion times faster at numerical computation. Not hard to believe, still far from perfection.
How many "neurons" in a neural net are redundant? There is no reason to suppose that real brains have a lower level of effective functionality than laboratory neural nets.
But a neuron in a brain is more like a CPU and has far more complex behavior than the simple summation function of a neural-net "neuron", which is really only a simple function inspired by neurons but in no way equivalent to them. Neuron behavior is sufficiently complex that we cannot yet model a single natural (i.e. real) neuron.
Even if
Re: (Score:3)
(notable historical figure looks through his telescope, makes notes about the celestial bodies) "Clearly the Earth rotates around the Sun, as do all these other celestial bodies, not the other way around!" The Church says "Blasphemy, of *course* the Earth is the Center of the Universe!"
While this did happen, it was not the reason the church imprisoned Galileo Galilei. It is true that Galileo was branded a heretic and imprisoned for the rest of his life, but those charges were actually trumped up. What a lot of people don't know is that at the time the church was the leading funder of scientific research, though most of that research was internal to the church. When Galileo presented his papers to the church, they already knew the earth went around the sun, and the earth wasn't the center of