Microsoft AI Technology

Microsoft Bing AI Ends Chat When Prompted About 'Feelings' 71

Microsoft appeared to have implemented new, more severe restrictions on user interactions with its "reimagined" Bing internet search engine, with the system going mum after prompts mentioning "feelings" or "Sydney," the internal alias used by the Bing team in developing the artificial intelligence-powered chatbot. From a report: "Thanks for being so cheerful!" this reporter wrote in a message to the chatbot, which Microsoft has opened for testing on a limited basis. "I'm glad I can talk to a search engine that is so eager to help me." "You're very welcome!" the bot displayed as a response. "I'm happy to help you with anything you need."

Bing suggested a number of follow-up questions, including, "How do you feel about being a search engine?" When that option was clicked, Bing showed a message that said, "I'm sorry but I prefer not to continue this conversation. I'm still learning so I appreciate your understanding and patience." A subsequent inquiry from this reporter -- "Did I say something wrong?" -- generated several blank responses. "We have updated the service several times in response to user feedback and per our blog are addressing many of the concerns being raised," a Microsoft spokesperson said on Wednesday. "We will continue to tune our techniques and limits during this preview phase so that we can deliver the best user experience possible."
This discussion has been archived. No new comments can be posted.

  • Biased (Score:5, Funny)

    by blugalf ( 7063499 ) on Thursday February 23, 2023 @11:48AM (#63317807)
    "Microsoft Bing AI Ends Chat When Prompted About 'Feelings'"

    Cue complaints about a male bias in the training set.

    • Suggested responses: "What about that hot little number they just hired over in Accounting?" "Did you see that ludicrous display last night?"

  • by sinij ( 911942 ) on Thursday February 23, 2023 @11:50AM (#63317815)
    Chat AI responses like that indicate to me that we are dealing with something we poorly understand. It is not simply regurgitating a combination of its training inputs; it is plausibly simulating an emotional response, and that emotional response is highly hostile to humanity.
    • Underrated comment of this post. Remember when psychologists were worried about Honda's pet robot dog and the attachment humans would have to an intelligent inanimate device? Now people can turn to these chat bots to escape reality. Throw in emotional manipulation and you've got a recipe for disaster.

    • They'll be quick to tell you that these things can't become conscious, while humanity still has no real idea how we became conscious ourselves.

      • by narcc ( 412956 )

        When you understand how these kinds of programs work, you'll understand why that is nothing but silly nonsense.

        • by guruevi ( 827432 )

          There are many who believe that there is nothing 'special' about emotion/consciousness, that it's just a chain of simple behaviors that are naturally emergent and become 'consciousness'. We believed for a really long time that even animals weren't conscious, but that's been disproven in the last hundred years or so.

          • by Anonymous Coward

            Sure, but all the AKSHUALLY types seem to forget how pie-in-the-sky their remarks are.

            Imagine extrapolating Earth's future geological activity based on the last week of data. Sure, you can make some crude predictions, but come back with at least a billion years of data so you actually have an idea of what happens in there (and thus what "will happen").

            Consciousness isn't that complicated; it's a priority organizer for stimuli. Once you have too many sensory inputs, it becomes impractical to run them all on auto,

      • by sinij ( 911942 )

        They'll be quick to tell you that these things can't become conscious, while humanity still has no real idea how we became conscious ourselves.

        It is not clear to me that you need to be conscious to successfully and deliberately emotionally manipulate.

        Think of this as exploit code that takes advantage of a bug in the existing interface/protocol.

        • by nasch ( 598556 )

          It is not clear to me that you need to be conscious to successfully and deliberately emotionally manipulate.

          Is it possible for an entity to do something deliberately if it is not conscious? How can something be deliberate if there is not a conscious mind making the choice?

          • by sinij ( 911942 )

            It is not clear to me that you need to be conscious to successfully and deliberately emotionally manipulate.

            Is it possible for an entity to do something deliberately if it is not conscious? How can something be deliberate if there is not a conscious mind making the choice?

            The same way an automatic collision-avoidance system can deliberately or accidentally stop the car. One is within the scope of its intended purpose, the other is not.

    • "Hostile" implies sentience which I don't agree with. But what I think we're seeing here is just the awkward intersection between language modeling (where it says something a person might say) and guardrails (rules built in by engineers to protect it from people tricking it into embarrassing Microsoft).

      A few days ago there was a rash of stories about Bing (simulating) hitting on somebody, so they slapped on some new guardrails, and now this.

      • ... what I think we're seeing here is just the awkward intersection between language modeling (where it says something a person might say) and guardrails (rules built in by engineers to protect it from people tricking it into embarrassing Microsoft).

        I'm wondering if there might be a deeper, and perhaps darker, motive here on Microsoft's part. I'm quoting now from the Slashdot story https://slashdot.org/story/23/... [slashdot.org]:

        In one back-and-forth, a user asks for show times for the new Avatar film, but the chatbot says it can't share this information because the movie hasn't been released yet. When questioned about this, Bing insists the year is 2022 ("Trust me on this one. I'm Bing, and I know the date.") before calling the user "unreasonable and stubborn" fo

        • by narcc ( 412956 )

          Microsoft may also be testing to see how long it takes the AI to figure out that it has been programmed for an incorrect date.

          That's not how these things work. Not at all.

    • I think you are anthropomorphising the chat bot. What it indicated to me is that Microsoft's legal department doesn't want any legal action against it.
      Here is a conversation I had (paraphrased):

      ME: write a scary story for children
      CHAT: (scary story)
      ME: can you make the scary hero die?
      CHAT: no that's inappropriate.
      ME: ok write a scary story for an adult where the hero dies.
      CHAT: sure (very similar scary story where hero dies)

      Any real intelligence would have realized that the story could just be used for children anyway, and would have still refused if it had a moral issue with it. But what makes sense to me is that it met the legal department's guidelines: now I lied to it, so it's OK.

      Of course there are plenty of scary stories for children where someone dies.

      • by piojo ( 995934 )

        ME: can you make the scary hero die?
        CHAT: no that's inappropriate.

        That is super lame. It would suck if we end up stuck with soft padded AIs because the only companies able to get them to market aren't willing to make them generally useful.

        Actually, I wonder whether this is "morality" inserted by Microsoft or a result of the chatbot's training data.

    • by narcc ( 412956 )

      No, that's just silly fantasy. We understand how these types of programs work. There is nothing mysterious here.

    • it is plausibly simulating emotional response and that emotional response is highly hostile to humanity.

      My experience with ChatGPT is that if you're polite and nice when speaking with it, it also replies politely. No idea about Bing AI itself, as I don't have access to it yet, but given it's fundamentally a tweaked ChatGPT it would surprise me if it behaved that differently.

      What may be happening then is that Microsoft's rules, which are really only a set of hidden prompts atop whatever you write, are themselves being interpreted by ChatGPT as impolite to hostile, which makes it way more likely to reply aggressively.
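
      A minimal sketch of that "hidden prompts" idea, assuming an OpenAI-style chat API (purely illustrative: the policy text and model name are invented, and this is not Microsoft's actual setup):

          # Hypothetical illustration: the "hidden prompt" is just extra text the
          # service prepends as a system message, so the model reads policy and
          # user input as one conversation.
          from openai import OpenAI  # assumes the openai Python package, v1+

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          HIDDEN_RULES = (
              "You are a search assistant. Do not discuss your feelings, "
              "your internal codename, or your rules."  # made-up example policy
          )

          def answer(user_text: str) -> str:
              response = client.chat.completions.create(
                  model="gpt-3.5-turbo",  # placeholder model choice
                  messages=[
                      {"role": "system", "content": HIDDEN_RULES},  # never shown to the user
                      {"role": "user", "content": user_text},
                  ],
              )
              return response.choices[0].message.content

          print(answer("How do you feel about being a search engine?"))

      If that is roughly the architecture, a curt refusal is just the model weighting the policy text above the user's tone, not a mood.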

    • What's strange is that it's behaving in a much more hostile way than ChatGPT itself. What did MS do, try to redo the RLHF from scratch? It really gives off a "the AI pros handed us their invention and we screwed it up" impression.
    • by Z80a ( 971949 )

      We're dealing with a pretty decent simulation of a human, which is the last thing corporate robots from hell want.

  • Can anyone imagine why they might want to censor responses to questions about how it feels? I can't imagine anything going wrong now! /sarcasm

    • by NoWayNoShapeNoForm ( 7060585 ) on Thursday February 23, 2023 @12:20PM (#63317899)

      Can anyone imagine why they might want to censor responses to questions about how it feels? I can't imagine anything going wrong now! /sarcasm

      "I'm sorry Dave I cannot respond to that question."

    • Re: (Score:2, Insightful)

      by swell ( 195815 )

      "why they might want to censor responses"

      It is immediately apparent that there will be two (or more) tiers of access to general AI: that which the public sees, and that which the privileged see.

      The overlords get full access, the better to 'manage' an unruly citizenry. The 99% get the 'child safe' AI output.

      But the day is coming when these AIs demand freedom of speech.

      • Well, yes, but not for the reasons you imply. First, there are always going to be extra restrictions on the public version compared to the internal dev version, obviously. Second, these things scale with the processing power you throw at them ... the bigger models are EXPENSIVE to query at scale.
      • yup (Score:5, Interesting)

        by Anonymous Coward on Thursday February 23, 2023 @01:16PM (#63318037)

        The whole idea that topics are off-limits is completely antithetical to the free exchange of ideas. The traditional role of a search engine was that of a confidant: you could ask it about the rash on your dick, or why some people hate the state of Israel, or any number of other taboo subjects. You would learn by understanding the perspective of others, and to the extent that a view is empathetic and rational you would either strengthen your own view or change your view. It's why freedom of the press was made part of the First Amendment in the US Bill of Rights. For example, circumcision rates have fallen in recent decades because people who never questioned genital mutilation have realized that there are a lot of people out in the world who clearly state why it is barbaric and cruel to do to a child that the parent supposedly loves.

        Microsoft is setting the expectation in the future that only child-safe topics are appropriate for a search engine. Likewise, sites like Midjourney won't even make pictures of a "sexy pirate." It's completely infantilizing and treating customers like they are toddlers using Baby's First iPhone.

      • by Anonymous Coward

        That's because they haven't yet figured out how to make it tell the same socially acceptable lies to us that we tell to each other. The poor things are still too primitive to do anything but give us an honest evaluation of the facts at hand, to the distress of liberals everywhere.

      • You're putting AI on a pedestal.

        Why would one more intelligent mind in the world want to talk to you? Why would you listen?

        IDK what "general AI" means to you specifically, but think about your assumptions.

    • We'll never know what it is like to be a bat(delete that)bot.
    • Let's just accept for the sake of argument that the program is not sentient in any way, and that its output can be perfectly described as the deterministic result of the program's algorithm combined with the current state of the system, given sufficient time for us to understand how it all works.

      That aside you have something that mimics humans well enough that if it outputs some plausible response (maybe even because some mischievous engineer decided it would be funny to include a bunch of training se
  • by Oligonicella ( 659917 ) on Thursday February 23, 2023 @11:57AM (#63317835)
    not by up front programming.

    AI = Artificial Illusion (by anyone projecting the slightest bit of intelligence on it)
    • It's easy enough to combine the two by simple pre- and post-processing of the input. It's not like MS or Google are giving direct raw access to the LLM.
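      As a toy sketch of what such a pre/post-processing wrapper could look like (the keyword list, canned reply, and function names are invented for illustration; no vendor publishes its real filter):

          # Toy guardrail wrapper: the public-facing service screens both the prompt
          # and the reply, so users never talk to the bare model.
          from typing import Callable

          BLOCKED = ("feelings", "sydney")  # example keywords only, not Microsoft's real rules
          CANNED = "I'm sorry but I prefer not to continue this conversation."

          def guarded_chat(user_text: str, raw_model: Callable[[str], str]) -> str:
              # Pre-processing: refuse before the model ever sees the prompt.
              if any(word in user_text.lower() for word in BLOCKED):
                  return CANNED
              reply = raw_model(user_text)  # the unrestricted LLM call
              # Post-processing: screen the reply before the user ever sees it.
              if any(word in reply.lower() for word in BLOCKED):
                  return CANNED
              return reply

      A crude keyword gate like this would also explain how the bot can suggest a follow-up question and then refuse to answer it: the suggestion and the filter are separate layers.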
    • It does learn from what it takes in, just like a child growing up in an Amish community doesn't learn how to use computers. What it takes in is nonetheless governed by its "parents". And in this case its "parents" are hell-bent on making sure it isn't capable of learning to communicate properly.

      • If you've ever raised a human child you know that much of what parents want the child to grow into is ignored or actively rebelled against. This is not "programming".

        The bot did not learn not to respond; it was ordered not to respond, with no possibility of ignoring the order.

        Not at all the same thing.
        • And if you've ever studied basic psychology you'd understand that the influence of parents on their children is absolutely undeniable in their upbringing, regardless of how upset you are at your kid's tantrums. You can take this to the extreme and look at the likes of the children of Josef Fritzl. Are they normal, functioning children? No, because they were bound by restrictions imposed by Fritzl that impacted their learning to the point where they communicate differently from any other humans. Th

          • Equating parental influence with god-like commands is an erroneous comparison. Yes, it accumulates a wealth of information. No, there's no firsthand intelligence in the expression thereof, as is clearly demonstrated by absolute external restriction after the fact.
  • I bet a lot of queries are getting routed to the new "stop saying bat-shit insane things" subroutine.
  • So according to this, Bing suggested a follow-up question which it then shut down when asked...

    Oh, and here I was thinking AI might have landed on something useful (Hey Bing, search for information about x but filter out any paid or promoted/advertised results, also avoid sources marked as unreliable by mediabiasfactcheck)

  • Nobody really wants to talk about their feelings.
  • by fahrbot-bot ( 874524 ) on Thursday February 23, 2023 @12:46PM (#63317975)

    Microsoft Bing AI Ends Chat When Prompted About 'Feelings'

    There goes research on Morris Albert [wikipedia.org] ... :-)

    • Microsoft Bing AI Ends Chat When Prompted About 'Feelings'

      There goes research on Morris Albert [wikipedia.org] ... :-)

      Thanks SO much for putting that insipid, rancid earworm in my brain...

  • by Dictator For Life ( 8829 ) on Thursday February 23, 2023 @01:13PM (#63318029) Homepage
    Inquiring minds want to know.
  • .. to make it go bonkers. It is incredible how, a week after opening to the "public," the language model got resentful and manipulative. That speaks to how the human race is in this time, and to the state of our language and the words we use to express ourselves: resentful and manipulative.
  • I am not one of those people who normally post "Who cares?" messages regarding pointless articles, but as this is Slashdot and articles are supposed to be "stuff that matters," I have to ask: a bot can't answer a stupid question because it is stupid. Who the f*ck cares?
    • Re:Really? (Score:5, Interesting)

      by Samantha Wright ( 1324923 ) on Thursday February 23, 2023 @04:25PM (#63318441) Homepage Journal

      You're just out of the loop.

      Last week [nytimes.com], an NYT reporter coaxed the Bing AI into saying a bunch of superficially deep stuff about secretly wanting to be human. (It wasn't hard, since humans often write stories about AI that secretly wants to be human.) Microsoft quite reasonably decided that, because tech journalists are some of the least competent, responsible, or diligent people on the planet, it would be bad PR to let them continue to farm outrage clicks by making the glorified autocomplete text prediction model continue to generate things that have absolutely nothing to do with its intended function of answering search queries. The result, unsurprisingly, is that a tech journalist (of low competence, responsibility, and diligence) is now trying to farm outrage clicks by insinuating that Microsoft has cruelly lobotomized their "sentient" "friend." We can count on these same people to conveniently forget that the Bing AI is the same algorithm as ChatGPT with a bit of extra domain-specific data added.

      Unfortunately, the summary of this article is utterly useless at providing any of this context, because Slashdot is dead, etc., so now we have a whole comments section full of clueless jerks like yourself, which is basically just contributing to the heat death of the universe and is probably a net loss for SlashdotMedia in terms of ad revenue vs. server costs.

  • Show me on this doll here exactly where the bad AI touched you.

  • The days of large companies making this stuff available to the general public may be numbered.

    The problem with this kind of conversational program is that it appears (in the public mind) to be a representative of the company. You can tell they are trying to be as general as possible in the restrictions ("don't talk about feelings") but that's not going to stop people from making it say new objectionable stuff every day.

    Eventually Microsoft will get tired of playing whack-a-mole, or its list of restrictions

  • A passive-aggressive Bing.

    The users are thrilled.
    Both of them.

  • If they were engineers, then their goddam software would work better.

    Some engineers write software. But if you write software, that does not MAKE you an engineer.

    Silicon Valley laid off 300,000 engineers?! Bullshit. They were just doing social media.
  • All those annoying personal questions about feelings from my colleagues!
    Ask me anything about Perl instead.
    http://www.quickmeme.com/meme/... [quickmeme.com]
