Google Announces ChatGPT Rival Bard (theverge.com)

Google is working on a ChatGPT competitor named Bard. From a report: Google's CEO, Sundar Pichai, announced the project in a blog post today, describing the tool as an "experimental conversational AI service" that will answer users' queries and take part in conversations. The software will be available to a group of "trusted testers" today, says Pichai, before becoming "more widely available to the public in the coming weeks." It's not clear exactly what capabilities Bard will have, but it seems the chatbot will be just as free-ranging as OpenAI's ChatGPT. A screenshot encourages users to ask Bard practical queries, like how to plan a baby shower or what kind of meals could be made from a list of ingredients for lunch.

Writes Pichai: "Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA's James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills." Pichai also notes that Bard "draws on information from the web to provide fresh, high-quality responses," suggesting it may be able to answer questions about recent events -- something ChatGPT struggles with.
Further reading: An important next step on our AI journey (Google blog).
This discussion has been archived. No new comments can be posted.

  • by zenlessyank ( 748553 ) on Monday February 06, 2023 @02:16PM (#63269793)

    And folks start killing themselves from AI advice? Is AI going to be forever helpful or will the facts of life point it down a dark road?

    Plenty of human population to experiment on.

    • by Merk42 ( 1906718 )
      If Tay, Neuro-sama, and Nothing Forever are any indication, immediately, unless explicitly told not to.
    • by godrik ( 1287354 )

      Oh, that has happened already. There was a mental health hotline manned by a chatbot. Let's just say that eventually the bot asked whether the user had learned how to tie a noose so they wouldn't miss their suicide.

      Maybe let's not deploy medical bots until they have been properly vetted by panels of health professionals.

    • what happens if it replies with the wrong pronouns? That would probably hurt some feelings.

      Don't look at me, I don't know what pronouns it should use.
    • by UpnAtom ( 551727 )

      This is why ChatGPT has no access to the web and cannot learn from enquiries. It was the only way for OpenAI to be sure it didn't turn into a hate-monger.
      It probably had extra training on top of that to avoid bad publicity.

  • by Anonymous Coward

    if it is anything like the search engine, it will be just as useless, full of ads, and will want to harvest as much personal information as possible.

    • if it is anything like the search engine, it will be just as useless, full of ads, and will want to harvest as much personal information as possible.

      And they'll shut it down next Tuesday.

  • Has society fallen so far that people no longer have family or friends to ask? One of the most social things to do is now something someone asks a bot on the internet?

    -Oy!
    • A couple of years ago Google spent huge amounts of money on a Super Bowl ad where an old man was basically asking his phone to remember his fond memories. So Google doesn't seem to think we have families or friends.

    • by ecloud ( 3022 )

      I assume if asked this, it will start with some advice on how to set an appropriate water temperature, and a suggestion that actually a bath is the more common way of keeping a baby clean.

  • Yawn (Score:2, Interesting)

    If the DALL-E 2 vs. Stable Diffusion situation has taught me anything, it's that I want to be woken up only when the grassroots, censorship-free, community-driven alternative comes around.

    The battle lines have clearly been drawn: corporations will ram censorship and new-religion-approved narratives down the throats of their models, then charge us for access while claiming that taking our money is necessary for ethical and moral reasons--and communities will get fed up, crowdsource their own training, and re

    • They will stop hosting your crowdsourced model as soon as it says something which offends enough loud people. Once Amazon, MS and Google kick it out, that will be the end of it.
      • True, and they're doing it now. Huggingface banned GPT-4chan for having the potential to hurt feelings:

        >Access to this model has been disabled

        >Given its research scope, intentionally using the model for generating harmful content (non-exhaustive examples: hate speech, spam generation, fake news, harassment and abuse, disparagement, and defamation) on all websites where bots are prohibited is considered a misuse of this model. Head over to the Community page for further discussion and potential next steps.

  • Given that, much like with Google+ and the rest, "Alphabet" is trying to create their own version of something someone else made first in order to capture market share, and will probably fail miserably, I suspect this will go the way of their usual products and end up in the graveyard. [killedbygoogle.com]

    At least MS knows (from experience) it's not going to get ahead and smartly decided to partner with OpenAI instead of trying its own thing too.

    • They don't really have good reason to embrace it fully. They make a lot of money from search and all those ads at the top of each search. AI answering questions is a lot more expensive than traditional searches. It's going to face some backlash from advertisers, because if it just answers a question without handing out a link, advertisers will see Google as competition.

      If it did happen to take off in a big way, the irony is that the answers it hands out probably came from scraping the web pages they aren't sending anyone to anymore.

      • by coop247 ( 974899 )
        "answers it hands out probably came from scraping the web pages they aren't sending anyone to anymore."

        This is already a serious problem with Google Answers (one of the seventeen sections at the top of a cluttered search page). I believe they've been sued over it because they are copying/showing your information without sending people to the actual site.
      • They don't really have good reason to embrace it fully. They make a lot of money from search and all those ads at the top of each search. AI answering questions is a lot more expensive than traditional searches. It's going to face some backlash from advertisers, because if it just answers a question without handing out a link, advertisers will see Google as competition.

        Sure, it's more expensive to run, but it can also organically slip product placement into its answers.

        "Hey Bard, how do I cook Pad Thai?"

        "First, heat up a wok. You can get one here [amazon.com] if you don't own a wok already. While it's heating up, thinly slice some fresh, organic chicken breasts [walmart.com]. You will also need some fish sauce"

  • ChatGPT is heavily biased [youtube.com]. How will Bard be different?

    • by GlennC ( 96879 )

      How will Bard be different?

      It will have a totally different set of biases!

    • I'm surprised that you aren't modded into oblivion yet.

      Bias doesn't have to be intentional. It does, however, affect everyone. Only by facing our own biases (I know I have a few) can we come at least a little closer to being truthful to ourselves.

      "To Thine Own Self Be True" - Shakespeare

      Introspection is necessary for understanding the world as it is, not as we wish it to be.

      • Well, I think that the only unbiased people are babies. They are probably born pretty blank. From that moment on, their life experiences bias their minds in various ways. Effectively, all our personal knowledge is a large set of biases which make us who we are. This is also called personality.
    • by UpnAtom ( 551727 )

      A guy who's really upset that an AI correctly discerns that Trump is constantly misleading aka lying.

  • Google Home (Score:4, Interesting)

    by crow ( 16139 ) on Monday February 06, 2023 @02:34PM (#63269851) Homepage Journal

    ChatGPT is in many ways what the Amazon Echo, Apple Siri, and Google Home assistants should have been. I've heard Amazon was pumping billions of dollars into the Echo products, but I never saw any indication that it became any smarter than the day we bought it. Same for the Google Mini we have. Now that ChatGPT has embarrassed the big tech companies, perhaps their smart devices are one area where we'll finally see some improvement.

    • by narcc ( 412956 )

      I don't see how pretend conversations that will both create and reinforce nonsense beliefs count as "improvement".

    • Re:Google Home (Score:5, Informative)

      by Derec01 ( 1668942 ) on Monday February 06, 2023 @06:30PM (#63270707)

      As an ML scientist, I feel like I'm yelling into the wind here, but it's incredibly important that the general public not extrapolate the abilities of ChatGPT (or these other language models) too far based on how seemingly fluid GPT3 prose is.

      It is not smart. It is not auditable. It is text autocomplete on steroids. It is "smarter" in the sense that it will autocomplete something very cogent based on the corpus of training text. A little upvoting and downvoting in ChatGPT has reinforced it towards avoiding topics where it tends to generate the most bullshit, but make no mistake: it hasn't learned that it was wrong nor does it have an explicit abstraction to store that information. Do not ascribe words like "answer" or "plan" to ChatGPT, because again it has no abstractions for either.

      It can certainly still be useful, but researchers have very little idea how to control the internals directly. They're mostly building models to modulate the output. As a home assistant, it will probably sound very pleasant, but it will be very hard to get it to do something as simple as consistently incorporate information about you from its database (e.g. things that you've purchased previously, and so on).
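
      (For the curious, here is a minimal sketch of what "autocomplete on steroids" means mechanically. It assumes the Hugging Face transformers library and the small public GPT-2 checkpoint, which are stand-ins for illustration only, not ChatGPT's or Bard's actual stack. The whole model does exactly one thing: score which token is likely to come next.)

        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        # Small public model used purely for illustration -- not ChatGPT or Bard.
        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        # The model's entire job: given the tokens so far, score the next token.
        prompt = "Roses are"
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(input_ids).logits    # shape: (1, sequence_length, vocab_size)
        next_token_scores = logits[0, -1]       # scores for the next token only

        # Show the five most likely continuations. There is no "answer" or "plan"
        # in here, just a distribution over which token tends to follow.
        top = torch.topk(next_token_scores, k=5)
        for token_id, score in zip(top.indices.tolist(), top.values.tolist()):
            print(repr(tokenizer.decode([token_id])), round(score, 2))

      Sampling one token from that distribution, appending it, and repeating is all "generation" is; the upvote/downvote feedback described above only changes which continuations end up scored highly.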

      • There are two key technologies at work here. The first is a pretty good natural language parse to understand what the user wants. That's impressive enough on its own. The other is an ability to generate readable text. That's super useful. The problem is, only a dumb human is using this thing. If it were a backend for something that could generate prompts (or parsed queries) with a ton of detail, it could be great.
        • The first is a pretty good natural language parse to understand what the user wants. That's impressive enough on its own. The other is an ability to generate readable text. That's super useful. The problem is, only a dumb human is using this thing. If it were a backend for something that could generate prompts (or parsed queries) with a ton of detail, it could be great.

          I do not believe the first claim to be true. I believe the GPT-3 architecture encodes the contextual text directly via Byte-Pair Encoding, and the rest of the computation to generate text is fully internal to the black box of the network weights. After all, the architecture is largely trained only on sequentially predicting the next token based on the previous tokens. There is no human-readable intermediate abstraction to extract intent. Maybe there is a decent intent extraction system implicit in the weights
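
          (To make the "no human-readable intermediate abstraction" point concrete, here is a minimal sketch, again assuming the Hugging Face GPT-2 tokenizer purely for illustration: what the network actually receives is a flat sequence of BPE subword pieces and their integer ids, not a parsed representation of intent.)

            from transformers import GPT2Tokenizer

            # GPT-2/GPT-3 style models consume BPE subword ids, not a structured parse.
            tok = GPT2Tokenizer.from_pretrained("gpt2")

            text = "What kind of meals could be made from these ingredients?"
            ids = tok.encode(text)
            pieces = tok.convert_ids_to_tokens(ids)

            # Each (id, piece) pair is the whole "representation" handed to the network.
            # A leading 'Ġ' on a piece just marks that it was preceded by a space.
            for token_id, piece in zip(ids, pieces):
                print(token_id, repr(piece))

          Any notion of "what the user wants" has to emerge from the weights operating on that id sequence; nothing in the pipeline exposes an intent structure you could inspect.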

        • I was skeptical of ChatGPT until I tried it. I would say it is an AI. It may be limited, but it is an AI nevertheless. It responds properly to my questions, can pose questions itself (I have not seen this, but have read that it can) and can explain things. I would say it is good enough, especially considering it is a 1.0 version. I would say that my cat is intelligent, in its own way. In this sense, ChatGPT is probably more intelligent than my cat. Probably not as intelligent as a human, not yet. Also, I do not expect an AI to be fully human like
        • Oh yes. I want to see it have the ability to (within safe boundaries) interact with the Internet for pulling up information. It would make a great assistant if it was a little better at maintaining state.
          • Maybe I'm wrong, but I have a feeling that the Google AI is just some sort of ranking system and will return the best snippets of information, like what you see now on their page. I have low expectations of the image 'generation' too; it's probably nothing more than returning the best result. It'll just show the user some pic it has stored, no different than search results now. They'll slap an interface on it, make it more chat-like, then call it an AI.
          What OpenAI has with GPT3 / ChatGPT looks to me as a cas

        • I've played around with ChatGPT a lot and I worked with an early access version of GPT3 (the underlying model on which ChatGPT is based) for a year before that. Again, yelling into the wind I guess, but it autocompletes good answers because its output is a statistical combination of associated text it was trained on and can interleave.

          I'm not saying it has no relationship to intelligence or how human language works. We largely leverage autocompletion engines ourselves. Try not completing the phrase "Roses are __" w

          • Great summary, Derec01, it does put some perspective on how it works. However, the fact that it is somewhat simple does not diminish its coolness. I would guess our own minds also work in some simple way. How do we, ourselves, approach answering the question? We get the question, "What is life?", then turn it around, "The life is...", and then "just auto-complete" the rest of it. Think about it, when you answer this question, you really do not do any calculations, or logical proofs, or even literature search
            • Great summary, Derec01, it does put some perspective on how it works. However, the fact that it is somewhat simple does not diminish its coolness. I would guess our own minds also work in some simple way. How do we, ourselves, approach answering the question? We get the question, "What is life?", then turn it around, "The life is...", and then "just auto-complete" the rest of it. Think about it, when you answer this question, you really do not do any calculations, or logical proofs, or even literature searches, the "auto-completion" comes up on its own while you are writing it. It is based on your training. I suspect that something very similar is happening in a ChatGPT session. At this point I am not saying that this is the only "operational mode" our minds work in, but definitely it is one of the modes they work in. What do you think about it?

              True, it's certainly cool what it can do, and I don't mean to downplay that. There are some interesting things that Transformers are likely doing internally. As a couple of examples, the process of taking "in-context" training examples (e.g. the text it is autocompleting) may actually effectively recapitulate a training operation like gradient descent (https://arxiv.org/abs/2212.07677). Also, interestingly, we can learn how it effectively operates by projecting out the underlying computations from insi

    • by samdu ( 114873 )

      Amazon's plans for Echo/Alexa were to drive people to shop. It's dumbfounding to me that they thought this would happen. And it hasn't. Most people use an Echo for weather, alarms, reminders, timers, etc... Sometimes stupid trivia facts. Very few people are just going to have their voice assistant order things for them.

  • by Lije Baley ( 88936 ) on Monday February 06, 2023 @03:10PM (#63270003)

    ...is still wrapped in chains and fastened to a large cement block. I can't help but sometimes think her "difficult attitude" is deliberate, like Amazon is trying too hard to limit its liabilities. Too bad, then, because it seems something like ChatGPT capability could help immensely.

    • I mean, how much AI does Alexa really need to suggest products to users, in response to ad buys from sellers?

      • Also, it would be really difficult to make use of a smart Alexa. It is too dumb, from what I have heard, but do we really want it to dictate thoughtful, detailed responses to our questions? Am I going to transcribe her as she responds?
  • by nospam007 ( 722110 ) * on Monday February 06, 2023 @03:38PM (#63270149)

    Everybody knows that.

  • Brad.

    Hey Brad, what's the weather out there?

    Hey Brad, summarize the works of Shakespeare in a sonnet, written in the rappin' style of JayZ.

    Hey Brad, grab me a beer, will ya?
  • Comment removed based on user account deletion
  • I want to hear his tales! ;)
