Google AI

Google AI Chatbot Bard Offers Inaccurate Information in Company Ad (reuters.com) 52

Google published an online advertisement in which its much-anticipated AI chatbot Bard delivered an inaccurate answer. From a report: The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a "launchpad for curiosity" that would help simplify complex topics. In the advertisement, Bard is given the prompt: "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?" Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of planets outside the Earth's solar system, or exoplanets. This is inaccurate. The first pictures of exoplanets were taken by the European Southern Observatory's Very Large Telescope (VLT) in 2004, as confirmed by NASA.
  • Fire the CEO (Score:4, Interesting)

    by S_Stout ( 2725099 ) on Wednesday February 08, 2023 @09:17AM (#63275243)
    What has Google accomplished in the last couple of years other than a larger Google Graveyard and mass layoffs? Now they're rushing their AI to market because they're lagging behind...and making mistakes. Google used to innovate; I don't even know what they are anymore. I don't lock in to their products because I know they'll abandon them.

    What has the CEO done to earn his salary?
    • What has the CEO done to earn his salary?

      What CEO has ever earned there money?

      • Steve Jobs

        • by Anonymous Coward
          Was that before or after he nearly destroyed Apple, Pixar and NeXT? If it wasn't for Toy Story he'd probably be a footnote today, like Clive Sinclair - influential in his time, but largely forgotten now.
          • by dgatwood ( 11270 )

            Was that before or after he nearly destroyed Apple, Pixar and NeXT?

            I don't think that's an adequate characterization.

            When S.J. was fired from Apple, the company was basically in a functioning state. The subsequent CEOs were the ones who just about burned it to the ground: selling too many minor variations as separate SKUs, which resulted in consumer confusion; charging too high a price; spending too much time and effort pushing technology that was a decade away from really being ready (Ink and Newton); the disaster that was Copland; not releasing useful changes that

      • Elon Musk. He doesn't take a salary.
        • by gweihir ( 88907 )

          Elon Musk. He doesn't take a salary.

          And with that he is massively overpaid.

          • An infinite number of monkeys designing an infinite number of cars will eventually produce a slightly successful EV.

            Elon Musk is that infinite monkey.

      • Learned to spell, I suppose.

      • What are your qualifications to back up this statement?
      • > What CEO has ever earned there money?

        Quite a few CEOs earned money in that place.

      • by sosume ( 680416 )

        Steve Ballmer made a shitload of money

    • Google used to innovate

      Citation needed.

      PageRank was developed before Google existed. Everything else was acquired.

      • Google releases a lot of research papers, like Meta, so they do innovate. Neither of them knows anymore how to create and maintain a product. Yann LeCun, chief AI scientist at Meta, dismissing ChatGPT's success because it's not a state-of-the-art model is revealing of what these companies are: private universities where the students draw a salary. Employees who quit are the graduates, who go on to found startups.
    • ... Google Graveyard ... I don't lock in to their products because I know they'll abandon them.

      Indeed. The list [killedbygoogle.com] of products is impressively long.

    • by dvice ( 6309704 )

      - 2020: Waymo (self-driving car) opened to public use in urban areas.
      - 2021: AlphaFold2 (you know, the Nobel-prize-caliber stuff that will lead to a cure for everything + more)
      - 2022: hundreds of millions of protein structures published (created with AlphaFold2; traditionally making one cost something like $200,000 and took months or years)
      - 2022: Imagen (a better version of DALL-E 2, a month after DALL-E 2 was released, probably just to show off that stuff others are doing is easy for them)
      - 2022: Gato, possible soluti

  • by v1 ( 525388 )

    This is called "Garbage In, Garbage Out", which basically reminds us that when you feed a computer bad data to process, it's going to give you unreliable results. It's not the computer's fault that you trained it wrong.

    • by gtall ( 79522 )

      In this case, it may not be possible to train these things correctly. The WSJ had an article on ChatGPT (I keep wanting to call Bard "Brad") in which the reporter fed it some math problems. It didn't do so well, because combing through responses to natural language inquiries is not going to help you with math. I expect it would also suitably fail with logical (as in mappable to formal logic) questions.

      The difference is between reasoning and rationalization. I think these AI thingies are doing rationalization.

      • It's doing neither; it's just finding the most likely word to follow the previous set of words.
      • by gweihir ( 88907 )

        In this case, it may not be possible to train these things correctly.

        That may very much be the case for basically all purposes. The problem is this thing has no fact-checking ability at all. It just puts out whatever the probabilities say the next word should be, and then the results get polished a bit by something that can make language sound good but has even less insight into things. I mean, a Google search for "picture of first exoplanet" gives me the Wikipedia page with the correct date (2004) as the 5th link. So not even a deep search would have been required to find this. Bu

    • the JWST was used to take the very first pictures of planets outside the Earth's solar system, or exoplanets.

      I think I know how this answer was given. The JWST took the first pictures of an exoplanet that allowed analysis of its atmosphere. There was a bug in the logic that tried to simplify the answer to a level suitable for a 9-year-old. Errors like this are inevitable during initial testing of such a system.

      • by Layzej ( 1976930 )

        the JWST was used to take the very first pictures of planets outside the Earth's solar system, or exoplanets.

        I think I know how this answer was given. The JWST took the first pictures of an exoplanet that allowed analysis of its atmosphere. There was a bug in the logic that tried to simplify the answer to a level suitable for a 9-year-old. Errors like this are inevitable during initial testing of such a system.

        Easier answer: Bard is an American. Of course it thinks the telescope America was involved with was first. 8P

      • As emergent behavior, that would be impressive by itself. But who knows what hard-coded rules are behind the scenes to make the chatbot look smarter.

    • by leptons ( 891340 )
      It's also garbage expectations - people think this AI is "intelligence", but it's just a system that extrapolates a response based on what it thinks you want to hear, given similar past (training) input. It doesn't know if what it spews out is true, false, or nonsense. People are putting way too much faith in ChatGPT and AI in general, and it isn't really what they think it is.
    • by gweihir ( 88907 )

      Funny thing is that one of the first Google links for "picture of first exoplanet" is the Wikipedia page that has the correct information: https://en.wikipedia.org/wiki/... [wikipedia.org] (a lookup as cheap as the sketch below)

      Apparently this "Artificial Ignorance" engine is even more stupid than expected.
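      (A minimal sketch of the kind of shallow lookup gweihir is describing, using Wikipedia's public MediaWiki search API. The query string and the choice to print only the top titles are illustrative assumptions, not anything Bard or Google Search actually does.)

        import requests

        # Ask Wikipedia's public search API about the topic Bard got wrong.
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={
                "action": "query",
                "list": "search",
                "srsearch": "first direct image of an exoplanet",
                "format": "json",
            },
            timeout=10,
        )
        resp.raise_for_status()

        # Print the top hits; the article with the correct answer (VLT, 2004)
        # should surface near the top.
        for hit in resp.json()["query"]["search"][:5]:
            print(hit["title"])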

  • by brunes69 ( 86786 ) <slashdot@nOSpam.keirstead.org> on Wednesday February 08, 2023 @09:26AM (#63275259)

    It's a good thing ChatGPT never makes mistakes and always responds to questions with 100% accurate, non controversial answers.

  • by Njovich ( 553857 ) on Wednesday February 08, 2023 @09:29AM (#63275267)

    Let's be real, there is a 99% chance that this was a piece of text written by an expensive external copywriter and greenlighted by some Google exec who studied communications. The last line about "sparks the child's imagination about the infinite wonders of the universe" sounds so cringy that it must have been dreamed up by someone in advertising.

    • This is better than my hunch. Good call.

    • by DarkOx ( 621550 )

      I don't know. It's not really surprising to me that something trained on an internet full of cringy press releases and sales glossies would produce a cringy press release...

  • "Don't trust Large Language Models" seems to be this generation's version of "Don't Trust Wikipedia."

    • by HiThere ( 15173 )

      Not really. Wikipedia tried to be accurate. ChatGPT tried to be interesting. (Using the same sense of "tried" in both sentences.)

      Now if you took a language model and trained it to be "accurate", as in "reflect what people see in the world", you'd have a reasonable comparison. It would still make lots of mistakes, because a language model doesn't understand that the physical world exists, but they'd be different mistakes. (You'd still need to teach it arithmetic separately, or it could only solve problem

  • by Fly Swatter ( 30498 ) on Wednesday February 08, 2023 @09:47AM (#63275299) Homepage
    What the fuck did you expect?
    • by AmiMoJo ( 196126 )

      There was a story a couple of years ago about a professor teaching a class. One of the students said something that was inaccurate, another student "confirmed" it via a Google search, and the professor had to assure the class that they knew better than the internet.

      Not even libraries are immune. There are plenty of books with material that is now outdated, or which was BS to start with.

  • Bard or Drunken Busker?

  • I mean, I think the world's current fascination with these AI bots is a little over the top. That said, this little slip was likely as simple as the bot seeing "James Webb took its first exoplanet picture on..." and not being "smart" enough to realize that changing the word "its" to the word "the" changes the meaning. In other words, training has a long way to go before it's 'smart' enough to actually be answering questions. Now, whether this particular system can be trained out of such silliness is a

  • If that's what I tell my kid, and that's what he tells all the kids at school, the kid isn't defective. The same goes if he receives contradictory information from two adults and has to make a decision.

    Wrong isn't necessarily broken at all.

    • The problem is that they are selling it as an authoritative answer, which it can never be. The bigger problem is that people will believe it to be an authority regardless of how often it keeps getting things wrong.

      As a fad or novelty it is fun. But so is a yo-yo. The difference is that people understand that a yo-yo can't be used for any real work, but they will try to use this for real information in a work situation, or worse, a life-changing decision.
  • Haha. So, the Bard is indeed an ace journalist, who doesn't actually know sheets from shinola. Just like NYT or MSNBC. Maybe Bard can get a job on The View if he were to trans to she.

  • Shares of Google's parent company lost more than $100 billion in market value on Wednesday after its Bard chatbot ad showed inaccurate information and analysts said its AI search event lacked details on how it will answer Microsoft's ChatGPT challenge

    I'd suggest that the second part of the sentence explains why the stock price fell...that Google wasn't able to explain details of how it would take on ChatGPT. The wrong answer just happened to catch people's eye.

    • The funny thing is that people don't understand that ChatGPT makes the same types of "mistakes". The results of large language models like ChatGPT and Bard are not necessarily factually correct; they are just words and phrases that are statistically likely to be associated (see the sketch below).
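      (A minimal sketch of what "statistically likely to be associated" means in practice. The toy vocabulary and hand-picked probabilities below are made-up assumptions for illustration; a real model learns them from training data, but the sampling step works the same way, and nothing in it checks facts.)

        import random

        # Toy next-token distribution: after "the first pictures of exoplanets
        # were taken by", a model might have learned odds like these.
        next_token_probs = {
            "JWST": 0.45,    # common in recent training text, but wrong
            "VLT": 0.30,     # the correct answer (2004)
            "Hubble": 0.25,  # plausible-sounding, also wrong
        }

        # Sampling picks a token in proportion to probability, not truth.
        tokens, weights = zip(*next_token_probs.items())
        print(random.choices(tokens, weights=weights, k=1)[0])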
  • Garbage In. Garbage Out.
  • It is pretty clear Google is a few years behind because their "leadership" was asleep at the wheel and did not take the chance to buy in when they had it. Now they are trying to pretend they have something as good, but apparently the Googlers doing this are morons and cannot fact-check for shit themselves. And hence this utterly revealing and embarrassing ad.

    To be fair, Google results for "first exoplanet picture" suck ass as well and almost all point to the JWST. Google does have the link https://en.wikiped [wikipedia.org]

  • "Tell your nine year old that Hitler did nothing wrong, and Bing users are all stupid whores"
