AI Facebook Social Networks Software Technology

Facebook Uses 1.5 Billion Reddit Posts To Create Chatbot (bbc.com) 53

Facebook trained a new chatbot with 1.5 billion examples of human exchanges from Reddit, claiming it's able to demonstrate empathy, knowledge and personality. The BBC reports: The social media giant said 49% of people preferred interactions with the chatbot [named "Blender"], compared with another human. But experts say training the artificial intelligence (AI) using a platform such as Reddit has its drawbacks. Numerous issues arose during longer conversations. Blender would sometimes respond with offensive language, and at other times it would make up facts altogether. Researchers said they hoped further models would address some of these issues.

Facebook also compared Blender's performance with the latest version of Google's own chatbot, Meena. It showed people two sets of conversations, one made with Blender and the other with Meena. Conversations included a wide range of topics including movies, music and veganism. Facebook said that 67% of respondents thought Blender sounded more human than Meena. "We achieved this milestone through a new chatbot recipe that includes improved decoding techniques, novel blending of skills, and a model with 9.4 billion parameters, which is 3.6x more than the largest existing system. This is the first chatbot to blend a diverse set of conversational skills together in one system. Building a truly intelligent dialogue agent that can chat like a human remains one of the largest open challenges in AI today."
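For anyone who wants to poke at a Blender-style model themselves, here is a minimal sketch of querying a dialogue checkpoint in Python. It assumes the small distilled checkpoint (facebook/blenderbot-400M-distill) that was later published through the Hugging Face transformers library, which is not mentioned in the article; the full 9.4-billion-parameter model described above was released separately through Facebook's ParlAI framework.

    # Minimal sketch, assuming the Hugging Face transformers port of a small
    # distilled Blender checkpoint (not the 9.4B-parameter model from the article).
    # Requires: pip install transformers torch
    from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

    model_name = "facebook/blenderbot-400M-distill"  # assumed checkpoint name
    tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
    model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

    # Single-turn exchange: encode the user's utterance, then generate a reply.
    prompt = "I just adopted a puppy and I'm very excited about it."
    inputs = tokenizer(prompt, return_tensors="pt")
    reply_ids = model.generate(**inputs, max_length=60)
    print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])

Longer multi-turn chats are where the article's caveats show up: the replies can drift into invented facts, so generated text should not be treated as retrieved knowledge.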

This discussion has been archived. No new comments can be posted.

  • Does that mean our president is an earlier example of a Twitter-based chatbot AI?
  • by celest ( 100606 ) <mekki@mekki.ca> on Wednesday May 06, 2020 @06:44PM (#60030096) Homepage

    Blender would sometimes respond with offensive language, and at other times it would make up facts altogether.

    Sounds like a Redditor to me.

  • Reddit was probably against this or unaware of this theft.

    I bet the people posting never agreed to this.

    What about their rights?

  • It *simulates* empaty. Like psychopaths. Or Zuckerberg.

    • Unit 3000-21 is warming
      Makes a humming sound
      When its circuits duplicate emotions
      And a sense of coldness detaches
      As it tries to comfort your sadness

      One more robot learns to be
      Something more than a machine
      When it tries, the way it does
      Makes it seem like it can love
      'Cause it's hard to say what's real
      When you know the way you feel
      Is it wrong to think it's love
      When it tries the way it does?

      Feeling a synthetic kind of love
      Dreaming a sympathetic wish
      As the lights blink faster and brighter

      One more robot learns to be

    • It *simulates* empaty. Like psychopaths. Or Zuckerberg.

      Ah. You said "empaty". For a moment, I thought Zuckerberg had learned to simulate empathy. That would be horrifying.

  • by skogs ( 628589 ) on Wednesday May 06, 2020 @06:49PM (#60030110) Journal

    "Blender would sometimes respond with offensive language, and at other times it would make up facts altogether."

    So... just like everybody else on Facebook then.

  • "Blender would sometimes respond with offensive language, and at other times it would make up facts altogether."

    So...you're saying you trained it with Reddit posts and consequently it sometimes acted like a Reddit poster? This seems more of a feature than a bug.

  • Besides an academic exercise, what's the use of chatbots? My life isn't incomplete because I lack one. It doesn't help me. Yeah, we can talk about chatbot-based interfaces. But we can also design user interfaces better. And if the purpose of such an interface is to get stuff done, then the task is more about understanding and doing things rather than having a conversation. Really, who has the time to have a conversation with AI?
    Is this yet another way to make people obsolete in a race to the bottom?

    • Besides an academic exercise, what's the use of chatbots? My life isn't incomplete because I lack one. It doesn't help me.

      Like all automation, it helps bring expensive products [professional services] to the many:

      e.g. the first legal bot - https://www.theverge.com/2017/... [theverge.com] - now in common use in most developed countries.

      And if the purpose of such an interface is to get stuff done, then the task is more about understanding and doing things rather than having a conversation. Really, who has the time to have a conversation with AI?

      Many professions are almost entirely based on using conversation to elicit information from a customer to come up with a diagnosis, legal opinion, or product requirements. Sometimes people don't know what they want or what they have, so we need free-form conversation as the user interface.
      I'm guessing you still need

    • by glitch! ( 57276 )

      Besides an academic exercise, what's the use of chatbots?

      If they train it from customer service calls, they can simulate a customer service agent at no cost. Sure, it won't solve the customers' problems, but it can engage the customers until they hang up in disgust. Problem goes away.

    • Besides an academic exercise, what's the use of chatbots?

      Think of how many people have bought Alexa devices, and all that really does is play songs, do Google searches, and a few other convenient tasks. It's really weak. It doesn't have to be perfect to be useful, it just has to be incrementally better.

      That said, they should stop acting like they've built empathy into the system when all it's doing is repeating words that others have said in similar situations.

  • So, basically, they've created a troll-idiot-asshole-bot?
    For Facebook's next stupid trick: Create a chat-bot based on 1.5 million posts from 4chan. xD xD xD
  • Facebook was fake enough as it was. It is run by paid advertisements, advertisements meant to elicit strong emotions. Paid suggestive advertisements are not normal speech. Facebook makes all of its money delivering talking points for corporations and politicians. When is the last time you heard a talking point from a regular person that wasn't fed to them through some sort of media?

    There is something utterly wrong when these people turn that money around and create fake people to talk with. What hap

    • There are no people online.
      Humans are unable to feel empathy for online characters. Let alone thousands of them.
      All you will find online are sociopathic, unwitting clonethinkers.

      If you want to talk to people, you necessarily have to go outside and meet them.

  • "But experts say training the artificial intelligence (AI) using a platform such as Reddit has its drawbacks..."

    Training it on Reddit? What were they thinking? At least they'll have a true innovation worthy of academic accolades: they've produced an AI proficient in the art of shitposting.
  • Reddit? (Score:4, Insightful)

    by grasshoppa ( 657393 ) on Wednesday May 06, 2020 @07:15PM (#60030198) Homepage

    Where'd the empathy come from if they trained it on Reddit?

    I'm prepared to believe it can virtue signal though.

  • So basically they made an insane conspiracy-theory spouting chatbot. Nice.

  • "...claiming it's able to demonstrate empathy, knowledge and personality."

    Noooo, it's faking empathy, knowledge and personality.

  • "Blender would sometimes respond with offensive language, and at other times it would make up facts altogether."
  • Reddit, you say... Sounds like they are making a companion for Tay to chat with.
  • Blender would sometimes respond with offensive language,

    I'd like to think that my continuous tendency to call stupid people "cunts" helped here... even if slightly... and for that I say "This I don't mind."

  • Why would they train this on Reddit vs their own data?

    • Came here to post this. I guess they're going back to their roots and stealing data from other services to create a new one where nobody opted in.

    • I don't know. But it'd be fun to train the AI on both datasets and then see if there are any differences in the behaviour of the resulting bots.
    • They don't want to point out that they can do this with their own data.

      It's just like Uber's "God View" screen. It's not that anyone pretended Uber didn't have real-time data on their drivers; it's that when they made a cool looking thing from it, the proles felt it was being lorded over them. (For the record: I hate Uber, but the God Screen is not why. It's just a pretty screensaver as far as I'm concerned.)

      In the same way, everyone knows FB could do such a thing with their public data. FB could even do su

  • Facebook is such a problem. Basing its bot on Reddit posts? For fuck's sake, the only way they could have done worse would be to base it on Usenet or Trump tweets.

    Who's the moron (moran?) at FB who decided to use Reddit? Do they still have a job? Why?

  • Don't know about you but I would love to train a bot to talk, as long as there is an off switch!
  • I think these words have never been uttered before when talking about Reddit.

    Clueless meme-parrot clonethink sociopaths would be a better description.

  • > Blender would sometimes respond with offensive language, and at other times it would make up facts altogether.

    Sounds human enough.
  • Good bot
  • They did a trial run of this with their Zuckerbot when he went in front of the congressional hearings. Far from perfect.
  • Numerous issues arose during longer conversations. Blender would sometimes respond with offensive language, and at other times it would make up facts altogether.

    Sounds like comments you get in pretty much every sub on Reddit.
