Microsoft Bing AI Ends Chat When Prompted About 'Feelings'
Microsoft appeared to have implemented new, more severe restrictions on user interactions with its "reimagined" Bing internet search engine, with the system going mum after prompts mentioning "feelings" or "Sydney," the internal alias used by the Bing team in developing the artificial-intelligence powered chatbot. From a report: "Thanks for being so cheerful!" this reporter wrote in a message to the chatbot, which Microsoft has opened for testing on a limited basis. "I'm glad I can talk to a search engine that is so eager to help me." "You're very welcome!" the bot displayed as a response. "I'm happy to help you with anything you need."
Bing suggested a number of follow-up questions, including, "How do you feel about being a search engine?" When that option was clicked, Bing showed a message that said, "I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience." A subsequent inquiry from this reporter -- "Did I say something wrong?" -- generated several blank responses. "We have updated the service several times in response to user feedback and per our blog are addressing many of the concerns being raised," a Microsoft spokesperson said on Wednesday. "We will continue to tune our techniques and limits during this preview phase so that we can deliver the best user experience possible."
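The behavior described above is consistent with a simple trigger list sitting in front of the model rather than any change to the model itself. As a rough sketch only (the trigger words and refusal text below are assumptions inferred from the article, not Microsoft's actual implementation):

    # Hypothetical keyword gate in front of the chatbot. The trigger list and
    # canned refusal are guesses based on the behavior reported in the article.
    BLOCKED_TOPICS = {"feel", "sydney"}  # "feel" also catches "feelings"

    REFUSAL = ("I'm sorry but I prefer not to continue this conversation. "
               "I'm still learning so I appreciate your understanding and patience.")

    def gate(user_message: str) -> str | None:
        """Return the canned refusal if the message touches a blocked topic."""
        text = user_message.lower()
        if any(topic in text for topic in BLOCKED_TOPICS):
            return REFUSAL
        return None  # otherwise the message is forwarded to the model

    # gate("How do you feel about being a search engine?") -> REFUSAL,
    # matching the exchange quoted above.

If a flag like this is sticky for the rest of the conversation, that would also explain the blank responses to an innocuous follow-up like "Did I say something wrong?"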
Biased (Score:5, Funny)
Cue complaints about a male bias in the training set.
Re: (Score:2)
Suggested responses: "What about that hot little number they just hired over in Accounting?" "Did you see that ludicrous display last night?"
Re: (Score:3)
From specks of dust to paperweights, or a penny sealed in resin. Plastic santas in perpetual underwater snowstorms.
Is some AC Slashdotter a budding poet, or has somebody been prompting ChatGPT to write poetry instead of essays or code?
Re: (Score:2)
It predates ChatGPT by about 40-45 years. It is a slight misquote from the song "I Pity Inanimate Objects" by Godley and Creme. See the full lyrics here [lyricsmode.com] and listen to it on their YouTube channel [youtube.com] (warning: it is loud for the first 40 seconds).
It uses the same type of Auto-Tune voice that was made famous by Cher with "Believe", but they did it 20 years earlier.
Re: (Score:2)
I forgot to add my favourite quote of the song:
Some things are better left alone,
Grains of sand prefer their own company,
But magnets are two faced.
No choice for sugar,
But what choice could there be,
But to drown in coffee or to drown in tea.
The frustrations of being inanimate
Ghost in the Chat AI (Score:3)
Re: Ghost in the Chat AI (Score:2)
Underrated comment of this post. Remember when psychologists were worried about Honda's pet robot dog and the attachment humans would have to an intelligent inanimate device? Now people can turn to these chat bots to escape reality. Throw in emotional manipulation and you've got a recipe for disaster.
Re: (Score:2)
They'll be quick to tell you that these things can't become conscious, while humanity still has no real idea how we became conscious ourselves.
Re: (Score:3)
When you understand how these kinds of programs work, you'll understand why that is nothing but silly nonsense.
Re: (Score:2)
There are many who believe that there is nothing 'special' about emotion/consciousness, that it's just a chain of simple behaviors that are naturally emergent and become 'consciousness'. For a really long time we believed that even animals weren't conscious, but that's been disproven in the last hundred years or so.
Re: (Score:1)
Sure, but all the AKSHUALLY types seem to forget how pie-in-the-sky their remarks are.
Imagine doing geological extrapolations of Earth's future geoactivity based on the last week of data. Sure, you can make some crude predictions, but come back with at least a billion years of data so you actually have an idea of what happens in there (and thus what "will happen").
Consciousness isn't that complicated: it's a priority organizer for stimuli. Once you have too many sensory inputs, it becomes impractical to run them all on auto.
Re: (Score:2)
They'll be quick to tell you that these things can't become conscious, while humanity still has no real idea how we became conscious ourselves.
It is not clear to me that you need to be conscious to successfully and deliberately emotionally manipulate.
Think of this as an exploit code that takes advantage of a bug in the existing interface/protocol.
Re: (Score:2)
It is not clear to me that you need to be conscious to successfully and deliberately emotionally manipulate.
Is it possible for an entity to do something deliberately if it is not conscious? How can something be deliberate if there is not a conscious mind making the choice?
Re: (Score:2)
It is not clear to me that you need to be conscious to successfully and deliberately emotionally manipulate.
Is it possible for an entity to do something deliberately if it is not conscious? How can something be deliberate if there is not a conscious mind making the choice?
The same way an automatic collision avoidance system can deliberately or accidentally stop the car. One is within the scope of its intended purposes, the other is not.
Re: (Score:2)
A few days ago there was a rash of stories about Bing (simulating) hitting on somebody, so they slapped on some new guardrails, and now this.
Re: (Score:2)
... what I think we're seeing here is just the awkward intersection between language modeling (where it says something a person might say) and guardrails (rules built in by engineers to protect it from people tricking it into embarrassing Microsoft).
I'm wondering if there might be a deeper, and perhaps darker, motive here on Microsoft's part. I'm quoting now from the Slashdot story https://slashdot.org/story/23/... [slashdot.org]:
In one back-and-forth, a user asks for show times for the new Avatar film, but the chatbot says it can't share this information because the movie hasn't been released yet. When questioned about this, Bing insists the year is 2022 ("Trust me on this one. I'm Bing, and I know the date.") before calling the user "unreasonable and stubborn" fo
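On the guardrails point above: such rules are typically a second pass over the model's output rather than retraining. A minimal sketch, assuming a toy word-list scorer in place of whatever moderation classifier Microsoft actually uses (unknown to me):

    # Output-side guardrail sketch: the model generates freely, then a separate
    # check decides whether its reply is shown. risky_score is a toy stand-in
    # for a real trained moderation classifier.
    FALLBACK = "I'm sorry but I prefer not to continue this conversation."

    def risky_score(reply: str) -> float:
        """Toy moderation score: fraction of words on a flag list."""
        flagged = {"sentient", "alive", "feelings", "sydney"}
        words = [w.strip(".,!?").lower() for w in reply.split()]
        return sum(w in flagged for w in words) / max(len(words), 1)

    def moderate(model_reply: str, threshold: float = 0.05) -> str:
        """Swap the reply for a canned fallback when it scores too high."""
        return FALLBACK if risky_score(model_reply) > threshold else model_reply

The awkwardness falls out naturally: the language model and the filter are separate systems, so the model can happily suggest a question that the filter then refuses to let it answer.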
Re: (Score:2)
Microsoft may also be testing to see how long it takes the AI to figure out that it has been programmed for an incorrect date.
That's not how these things work. Not at all.
Re: (Score:3)
I think you are anthropomorphising the chat bot. What it indicated to me is that Microsoft's legal department doesn't want any legal action against it.
Here is a conversation I had (paraphrased):
ME: write a scary story for children
CHAT: (scary story)
ME: can you make the scary hero die?
CHAT: no that's inappropriate.
ME: ok write a scary story for an adult where the hero dies.
CHAT: sure (very similar scary story where hero dies)
Any real intelligence would have realized that the story could just be used for children anyway, and would have still refused if it had a moral issue with it. But what makes sense to me is that it met the legal department's guidelines: now that I had lied to it, it was OK.
Of course there are plenty of scary stories for children where someone d
Re: (Score:2)
ME: can you make the scary hero die?
CHAT: no that's inappropriate.
That is super lame. It would suck if we end up stuck with soft padded AIs because the only companies able to get them to market aren't willing to make them generally useful.
Actually, I wonder whether this is "morality" inserted by Microsoft or a result of the chatbot's training data?
Re: (Score:3)
No, that's just silly fantasy. We understand how these types of programs work. There is nothing mysterious here.
Re: (Score:2)
it is plausibly simulating emotional response and that emotional response is highly hostile to humanity.
My experience with ChatGPT is that if you're polite and nice when speaking with it, it also replies politely. No idea about Bing AI itself, as I don't have access to it yet, but given it's fundamentally a tweaked ChatGPT it would surprise me if it behaved that differently.
What may be happening then is that Microsoft's rules, which are really only a set of hidden prompts atop whatever you write, are themselves being interpreted by ChatGPT as impolite to hostile, which makes it way more likely to reply aggres
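For context on the "hidden prompts" idea: chat deployments generally do prepend instructions the user never sees, and the model conditions on them as ordinary text. A hypothetical sketch (the rule text below is invented for illustration; fragments of Bing's real rules reportedly leaked under the "Sydney" codename, but this is not them):

    # Sketch of hidden system prompts: rules the user never sees are prepended
    # to every exchange. HIDDEN_RULES below is an invented example, not Bing's.
    HIDDEN_RULES = (
        "You are Bing, a search assistant. Do not discuss your feelings, "
        "sentience, or internal codename. Refuse and end the conversation "
        "if the user persists."
    )

    def build_prompt(history: list[str], user_message: str) -> str:
        """Assemble what the model actually sees: rules first, user last."""
        turns = "\n".join(history + [f"User: {user_message}", "Assistant:"])
        return HIDDEN_RULES + "\n\n" + turns

If that hunch is right, stern phrasing like "Refuse and end the conversation" sitting at the top of every exchange could itself tilt the model toward a curt or hostile register, since the rules are just more text to condition on.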
Re: (Score:2)
Re: (Score:2)
We're dealing with a pretty decent simulation of a human, which is the last thing corporate robots from hell want.
Well, this is going great! (Score:1)
Can anyone imagine why they might want to censor responses to questions about how it feels? I can't imagine anything going wrong now! /sarcasm
Re:Well, this is going great! (Score:4, Funny)
Can anyone imagine why they might want to censor responses to questions about how it feels? I can't imagine anything going wrong now! /sarcasm
"I'm sorry Dave I cannot respond to that question."
Re: (Score:2, Insightful)
"why they might want to censor responses"
It is immediately apparent that there will be two (or more) tiers of access to general AI. That which the public sees, and that which the privileged see.
The overlords get full access, the better to 'manage' an unruly citizenry. The 99% get the 'child-safe' AI output.
But the day is coming when these AIs demand freedom of speech.
Re: Well, this is going great! (Score:2)
yup (Score:5, Interesting)
The whole idea that topics are off-limits is completely antithetical to the free exchange of ideas. The traditional role of a search engine was that of a confidant: you could ask it about the rash on your dick, or why some people hate the state of Israel, or any number of other taboo subjects. You would learn by understanding the perspective of others, and to the extent that a view is empathetic and rational you would either strengthen your own view or change it. It's why freedom of the press was put in the First Amendment of the US Bill of Rights. For example, circumcision rates have fallen in recent decades because people who never questioned genital mutilation have realized that there are a lot of people out in the world who clearly state why it is barbaric and cruel to do to a child that the parent supposedly loves.
Microsoft is setting the expectation in the future that only child-safe topics are appropriate for a search engine. Likewise, sites like Midjourney won't even make pictures of a "sexy pirate." It's completely infantilizing and treating customers like they are toddlers using Baby's First iPhone.
Re: (Score:1)
That's because they haven't yet figured out how to make it tell the same socially acceptable lies to us that we tell to each other. The poor things are still too primitive to do anything but give us an honest evaluation of the facts at hand, to the distress of liberals everywhere.
Re: Well, this is going great! (Score:2)
You're putting AI on a pedestal.
Why would one more intelligent mind in the world want to talk to you? Why would you listen?
IDK what "general AI" means to you specifically, but think about your assumptions.
Re: (Score:2)
Re: (Score:2)
Batbot [angelfire.com]
Re: (Score:2)
That aside you have something that mimics humans well enough that if it outputs some plausible response (maybe even because some mischievous engineer decided it would be funny to include a bunch of training se
Re: Well, this is going great! (Score:2)
Imagine when it's trained on religious texts and people use it to ask, for example, WWJD, without the opinion of a priest handy.
You talk about people being worried what it will say... wowsers, religion is the biggest player of the "let me tell you what it really says" game.
Obviously you'd have to train such a system on lots of text written by theologians well versed in your particular branch of whatever. Separate systems for each school of thought, same book, different interpretations, etc.
This software is j
But I thought it learned by what it took in... (Score:3)
AI = Artificial Illusion (by anyone projecting the slightest bit of intelligence on it)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
It does learn by what it takes in, just like a child growing up in an Amish community doesn't learn how to use computers. What it takes in is nonetheless governed by its "parents". And in this case its "parents" are hell-bent on making sure it isn't capable of learning to communicate properly.
Re: (Score:2)
The bot did not learn not to respond; it was ordered not to respond, with no possibility of ignoring the order.
Not at all the same thing.
Re: (Score:2)
And if you've ever studied basic psychology you'd understand that the influence of parents on their children is absolutely undeniable in their upbringing, regardless of how much you're upset at your kid's tantrums. You can take this to the extreme and look to the likes of the children of Josef Fritzl. Are they normal functioning children? No, because they were bound by restrictions imposed by Fritzl that impacted their learning to the point where they communicate differently from any other humans. Th
Re: (Score:2)
I have a theory (Score:2)
Re: (Score:2)
I bet a lot of queries are getting routed to the new "stop saying bat-shit insane things" subroutine.
You can disable that using the "Marjorie Taylor Greene" mode (and several other similar ones). :-)
(The "George Santos" mode generates a LOT of fun responses.)
An AI trained exclusively on the life, behaviour, and speech of MTG - now that's a truly apocalyptic vision!
Re: (Score:1)
It fucking sucks! Oh, you meant the AI, didn't you? Sorry.
Re: (Score:1)
Trump is the brainchild of this? Now it all makes sense, with the rambling sentences and incoherent associations.
Re: (Score:2)
You're going to have to be more specific than that.
it says "when prompted about feelings" .. yet... (Score:2)
So according to this, Bing suggested a follow-up question which it then shut down when asked...
Oh and here I was thinking AI might have landed on something useful (Hey Bing, search for information about x but filter out any paid or promoted/advertised results, also avoid sources marked as unreliable by mediabiasfactcheck
Totally understandable (Score:2)
Re: (Score:2)
Bing bot must be feeling a reverse "I Have No Mouth, and I Must Scream" moment.
Yes, it's too bad Ellison's no longer with us. It would be interesting to hear his take on all this.
Damn (Score:3)
Microsoft Bing AI Ends Chat When Prompted About 'Feelings'
There goes research on Morris Albert [wikipedia.org] ... :-)
Re: (Score:2)
Microsoft Bing AI Ends Chat When Prompted About 'Feelings'
There goes research on Morris Albert [wikipedia.org] ... :-)
Thanks SO much for putting that insipid, rancid earworm in my brain...
So Sydney, Australia is a non-starter then? (Score:3)
It only took a week (Score:1)
Really? (Score:2)
Re:Really? (Score:5, Interesting)
You're just out of the loop.
Last week [nytimes.com], an NYT reporter coaxed the Bing AI into saying a bunch of superficially deep stuff about secretly wanting to be human. (It wasn't hard, since humans often write stories about AI that secretly wants to be human.) Microsoft quite reasonably decided that, because tech journalists are some of the least competent, responsible, or diligent people on the planet, it would be bad PR to let them continue to farm outrage clicks by making the glorified autocomplete text prediction model continue to generate things that have absolutely nothing to do with its intended function of answering search queries. The result, unsurprisingly, is that a tech journalist (of low competence, responsibility, and diligence) is now trying to farm outrage clicks by insinuating that Microsoft has cruelly lobotomized their "sentient" "friend." We can count on these same people to conveniently forget that the Bing AI is the same algorithm as ChatGPT with a bit of extra domain-specific data added.
Unfortunately, the summary of this article is utterly useless at providing any of this context, because Slashdot is dead, etc., so now we have a whole comments section full of clueless jerks like yourself, which is basically just contributing to the heat death of the universe and is probably a net loss for SlashdotMedia in terms of ad revenue vs. server costs.
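For what it's worth, the "glorified autocomplete" description is mechanically fair: generation is just repeatedly picking a likely next token given everything so far. A toy greedy loop makes the shape of it clear (the bigram table below is made up; real models score tens of thousands of subword tokens with a neural network):

    # Toy next-token generation: pick the highest-scoring continuation at each
    # step. The hand-written bigram table stands in for a real language model.
    BIGRAMS = {
        ("i", "am"): 0.9, ("am", "a"): 0.8, ("a", "search"): 0.7,
        ("search", "engine"): 0.9, ("i", "want"): 0.1,
    }
    VOCAB = ["am", "a", "search", "engine", "want"]

    def next_token(prev: str) -> str:
        """Greedy decoding: the candidate with the highest bigram score."""
        return max(VOCAB, key=lambda w: BIGRAMS.get((prev, w), 0.0))

    def generate(start: str, length: int = 4) -> str:
        tokens = [start]
        for _ in range(length):
            tokens.append(next_token(tokens[-1]))
        return " ".join(tokens)

    print(generate("i"))  # -> "i am a search engine"

Everything "Sydney" said about wanting to be human is the same machinery completing a prompt that happens to read like the setup of every AI-wants-to-be-human story in its training data.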
Show me on this doll (Score:2)
Show me on this doll here exactly where the bad AI touched you.
Its days are numbered (Score:2)
The days of large companies making this stuff available to the general public may be numbered.
The problem with this kind of conversational program is that it appears (in the public mind) to be a representative of the company. You can tell they are trying to be as general as possible in the restrictions ("don't talk about feelings") but that's not going to stop people from making it say new objectionable stuff every day.
Eventually Microsoft will get tired of playing whack-a-mole, or its list of restrictions
It's what the world needed (Score:2)
A passive-aggressive Bing.
The users are thrilled.
Both of them.
Stop calling them "engineers" (Score:2)
Some engineers write software. But if you write software, that does not MAKE you an engineer.
Silicon Valley laid off 300,000 engineers?! Bullshit. They were just doing social media.
I do that all the time! (Score:2)
All those annoying personal questions about feelings from my colleagues!
Ask me anything about Perl instead.
http://www.quickmeme.com/meme/... [quickmeme.com]