Even Google Insiders Are Questioning Bard AI Chatbot's Usefulness (bloomberg.com) 40

For months, Alphabet's Google has run an invitation-only chat on Discord for heavy users of Bard, Google's artificial intelligence-powered chatbot. Google product managers, designers and engineers are using the forum to openly debate the AI tool's effectiveness and utility, with some questioning whether the enormous resources going into development are worth it. From a report: "My rule of thumb is not to trust LLM output unless I can independently verify it," Dominik Rabiej, a senior product manager for Bard, wrote in the Discord chat in July, referring to large language models -- the AI systems trained on massive amounts of text that form the building blocks of chatbots like Bard and OpenAI's ChatGPT. "Would love to get it to a point that you can, but it isn't there yet."

"The biggest challenge I'm still thinking of: what are LLMs truly useful for, in terms of helpfulness?" said Googler Cathy Pearl, a user experience lead for Bard, in August. "Like really making a difference. TBD!" [...] Two participants on Google's Bard community on chat platform Discord shared details of discussions in the server with Bloomberg from July to October. Dozens of messages reviewed by Bloomberg provide a unique window into how Bard is being used and critiqued by those who know it best, and show that even the company leaders tasked with developing the chatbot feel conflicted about the tool's potential. Expounding on his answer about "not trusting" responses generated by large language models, Rabiej suggested limiting people's use of Bard to "creative / brainstorming applications." Using Bard for coding was a good option too, Rabiej said, "since you inevitably verify if the code works!"

This discussion has been archived. No new comments can be posted.

  • A new, prototype product doesn't meet end-user expectations vs. another that's been on the market longer.

    The only advantage I've seen in Bard vs. ChatGPT is that Bard does pull data from the Internet to support its answers, but overall its responses aren't as refined
    as ChatGPT's.

    • by Rei ( 128717 )

      And then, what's the point? OpenAI's selling point is that their models are *really good* compared to open models. If you don't have something that blows away free open models, then why would anyone care about your closed model?

      Anyway, this article is kind of silly without knowing what percentage of the people in the chat had these sorts of views / made these sorts of comments. Makes for a nice "gotcha", but not very good journalism.

  • The various AI tools have been wonderful at expanding my toolbox and offering alternative approaches to technical problems I have to solve day to day.

    They are absolutely awful at generating accurate code, but they do give you an idea of what the solution would look like.

    Ironically, the reason AI has made me a better developer is that it has offered me solutions, given me the wrong code, and then I've put in the work to fix that code and ended up learning more.

    And I guess that's something.

  • My rule (Score:3, Insightful)

    by Rosco P. Coltrane ( 209368 ) on Wednesday October 11, 2023 @10:10AM (#63918325)

    My rule of thumb is not to trust LLM output unless I can independently verify it

    My rule is to not trust anything Google makes unless it's open-source and reviewed. This rules out using Bard, because even if it were 100% accurate, it would probably be busy "monetizing" me behind my back and compromising my privacy in some way or other.

  • All you can really say about LLMs so far is that we've dumped massive amounts of resources into developing them and the only good use case is to write essays so high schoolers can cheat on their homework. So it's a net negative so far. Factually, everything ChatGPT writes is suspect. Sometimes it's right, sometimes not. I've been trying to use it for coding, and it can't seem to understand a piece of code if it's longer than about 3 pages. At that point it just starts making up stuff about the code that
    • When it comes to coding, apparently my experience is a lot different from yours. ChatGPT has been a huge help in many situations, particularly where I'm using a language or technology I'm not that familiar with. No, it won't write everything for me, but that's not what I expect from it. It definitely has been a timesaver!

      • by RobinH ( 124750 )
        Well, I agree if you struggle to write a 10 line method then it'll help, but once you've been programming for decades and are working on a project bigger than something you could put together in a few months, you'll see it's quite inadequate. This is my problem with programming education actually. They teach you all these tools that we use in industry which help you manage massive codebases, but then never show you a massive codebase or teach you how to navigate it, and why these tools are helpful. My in
        • I agree with you about what's taught (or not taught) in school, it doesn't really prepare students to be engineers.

          As far as working in large projects, yeah, I get it. I've been doing software development professionally for 35 years, in 28 different languages. These days, any one project requires doing work in half a dozen languages. Nobody knows every nuance of all of those languages, but a good developer knows what to expect from a language, even if they might not know exactly how to type the syntax to get what th

  • By total coincidence today, I thought I'd try giving Bard a bunch of customer feedback about something (about 500 free form responses) to see if it could summarise it.

    First I asked: "Can Bard summarise customer feedback?" It said:

    "Yes, Bard can summarize customer feedback. Bard is a large language model from Google AI, trained on a massive dataset of text and code. Bard can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

    To summar

    • WTF??

      Why the question marks? You typed, and it engaged you with long, detailed responses that "felt" like they could be real. LLMs are specifically built to generate text that feels natural and engages with the human prompter -- and that's what it did.

      • by gilgongo ( 57446 )

        Good point. Well made.

      • by piojo ( 995934 )

        WTF??

        Why the question marks? You typed, and it engaged you with long, detailed responses that "felt" like they could be real. LLMs are specifically built to generate text that feels natural and engages with the human prompter -- and that's what it did.

        Nah. To be interpreted as actual proper speech, there are some rules [wikipedia.org] it should generally follow. There are exceptions--when you are at odds with someone you can't demand truthfulness--but if the chatbot talks about things that don't exist, espousing capabilities it doesn't have, it is not conversing properly.

        (If you want to argue about the interface being separate from the backend, I don't find that particularly compelling--the interface is always separate from the backend, including in humans. And you woul

    • by gilgongo ( 57446 )

      Mind you, ChatGPT says...

      Me:
      I have a spreadsheet of about 500 lines of customer feedback. Can you generate a summary of that for me?

      ChatGPT:
      Certainly! If you could provide a few sample entries or key themes from your spreadsheet, I can help create a summary for you. You can share the general sentiments, common issues, or any specific aspects you'd like to focus on. This will enable me to generate a concise summary of the customer feedback without the need to disclose the entire content of the spreadsheet.

      Me
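For what it's worth, the "summarise 500 free-form responses" task the commenter describes is also tractable without pasting anything into a chatbot. A minimal, hypothetical sketch (the feedback lines and stopword list are made up for illustration) that surfaces recurring themes by word frequency:

```python
from collections import Counter
import re

# Hypothetical stand-in for ~500 free-form customer feedback responses.
feedback = [
    "Checkout is slow and the app crashes",
    "Love the app, but checkout is slow",
    "Crashes on login, otherwise fine",
]

# Tiny illustrative stopword list; a real one would be much longer.
STOPWORDS = {"is", "and", "the", "but", "on", "otherwise", "love", "fine"}

def top_themes(responses, n=3):
    """Crude theme extraction: count non-stopword terms across responses."""
    words = Counter()
    for line in responses:
        words.update(w for w in re.findall(r"[a-z]+", line.lower())
                     if w not in STOPWORDS)
    return [word for word, _ in words.most_common(n)]

print(top_themes(feedback))
```

This obviously captures far less nuance than a good LLM summary, but every number it reports is verifiable, which is exactly the property the thread is complaining that chatbot answers lack.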

  • "must be verified" means it's mostly wrong dogshit. When even the creators admit it's dogshit, debate usefulness, and question the resources allocated, it's a dead product. Time for corporate fascist land to end the fantasy of mass knowledge worker replacement with chatbots.
    • by vyvepe ( 809573 )
      There was a paper claiming that the error rate of LLMs is around 30%. GPT4 allegedly around 20%. Of course, it depends on the benchmark. Hopefully they are not training the models on the benchmark itself. That would be a big fail.
  • First, extreme and overoptimistic hype, with pundits extrapolating well beyond actual capabilities
    Then, disappointment as reality intrudes
    Hopefully, it will be followed by genuine usefulness, but it's not guaranteed

    See VR for another example

    • by Luckyo ( 1726890 )

      Bard has been a disappointment from the start, to such an extent that it actually hit Google's stock when it was first presented in the wake of ChatGPT's unveiling to the masses, making errors in the very presentation where it was introduced.

    • by gweihir ( 88907 )

      Indeed. Actual usefulness with LLMs will be limited though, because you cannot stop them from hallucinating unless you make them extremely limited.

  • by larryjoe ( 135075 ) on Wednesday October 11, 2023 @11:28AM (#63918585)

    My rule of thumb is not to trust LLM output unless I can independently verify it.

    This is a good mantra ... not only for LLMs but for everything one finds on the internet and elsewhere. Imagine how much better our world would be if this mantra were applied to Google search, social media, extremist websites, and even Wikipedia. ChatGPT, Bard, etc. are getting a bad rap because they're being used/held wrong. LLMs are a fantastic way to take a first step at exploring a topic, just like Google search and Wikipedia. Copying and pasting Google search or Wikipedia content as a finished solution is foolish. Anyone who uses LLMs in a similar way is just as foolish.

    • by gweihir ( 88907 )

      LLMs are held to that standard because they make mistakes the most stupid rookie would generally not make.

  • This is a good sign that we may finally be seeing the AI hype cycle come to a close. Every 15-20 years or so it gets hyped for a while, coming out of the ground briefly like cicadas. I'm hopeful that within a year this stuff will be back dormant for another decade or two.
    • Re:Good sign. (Score:4, Interesting)

      by aldousd666 ( 640240 ) on Wednesday October 11, 2023 @11:40AM (#63918631) Journal
      So many people are rooting for AI to fade into the background again, but it's not happening. Companies have tens of billions of dollars to explain to their shareholders and they won't just torch it all. They'll make it work somehow, maybe not in the current incarnation. Perhaps Bard is being neglected a bit because Gemini is coming out, or they are more focused on the new purpose-specific incarnations. But yeah, CHAT BOT as the universal font of knowledge may be ending its hype cycle, but that doesn't mean we're going to write off AI.
      • The future is looking more open-source anyway.
        Companies will likely continue to struggle to monetize the tech due to how open-source it is. The ones that are trying to do so, with their subscription and advertising models, are still hemorrhaging money. And that is before people decide that the free versions are just good enough, if not better than the paid versions. Or, if they really want to, they can just train their own model for whatever purpose they need.
        So perhaps you are right that AI tech is here to stay. But

      • by gweihir ( 88907 )

        I guess you have not been around for the last AI hype. No, they _cannot_ make it work. What they currently have is the best that this tech can deliver.

    • It's not hype and it's not going away but building even more momentum.
      Sure you personally may not need it but other people do. Lots of business owners love being able to describe an image for their website and then getting it immediately. Replacing a call center contract with a custom trained LLM.
      Also these are in the pipeline: text-generated videos
      Text-generated music
      Robot workers
      All these things will only get better and better
      • by gweihir ( 88907 )

        I guess you have never seen an AI hype before. This is not the first time all those things you expect have been promised. It will not work out this time either.

    • by gweihir ( 88907 )

      Indeed. Same here. "AI" proponents have been some of the greatest purveyors of false advertising and exaggerated claims of all time. The only things that can compete are religion and some politicians. And the AI people do it again and again.

  • I have found Bard and ChatGPT to be much more useful than most help desk people I have had the pleasure of talking with. Its responses are invariably relevant to the questions I ask, and maybe about 50% usefully so. I'd say that's a pretty good average. At least when I'm done, I get the feeling the issue was understood. Responses are a nice mix of the "duh, obvious" and useful hints for how to proceed. Usually I use it to ask questions about programs I've installed or want to. Sometimes I have to be a bit
  • I wish it weren't so, but every single software company I've worked for, I would stay away from. Because I've seen the sausage getting made (!) Most of it from rotting parts: we're just there to keep the meat looking fresh, by whatever coding means. I think that LLMs do in fact have promise. Just not yet. I have a lot of reasons why neural networks aren't the whole picture, but imo they are definitely a step in the right direction. Just probably not the last step.
  • LLM's have no experience with the real world, and only know of it, and history, and everything else through words they read that (hopefully) humans have posted. The LLM has no basis to tell which of those words are fact, fiction, lies, or propaganda; it just draws frequency vectors and connects them back up in similar patterns. There is simply no way for such a thing to be made either truthful or accurate. The best they can ever do is to create a template for something you might want to create, but every single aspect of that template then has to be checked against reality because the LLM can't do that and the errors they make are sometimes hilariously bad. The biggest danger is that their output sounds so good that the temptation for human users will be to trust it much more than we should, and actually try identifying random mushrooms by tasting them (to give one of the more hilarious examples).
  • IBM Watson flashback!
    Highly disappointing.

  • I hear opinions all the time about how chatbots are not useful, how chatbots get things wrong, etc. I mainly work with ChatGPT 3.5, but I've had extensive discussions with it on quantum mechanics, electromagnetism, and how to do different things like configure Wine on Linux correctly. If the chatbot gets something wrong, I say to it, "it appears that you've got this wrong." The chatbot then reviews its data and, if I'm correct, will say "oh yes, I see what you're saying here," and will generally tell me why I'm correct.
    • by gweihir ( 88907 )

      You really do not get it, hence you are "shocked" at something quite obvious. The thing is, LLMs are tested for _replacing_ experts, not for making things a bit easier for an actual expert.

  • That may be the most stupid and insight-free comment here. Yes, you test. But you never test fully, and you do not expect really stupid but non-obvious mistakes. And you cannot really test code security; you have to get that right by design, there is just no other option.

    Well, Google was flashy and elite some time ago. Now they seem to have their fair share of idiots.

"Everyone's head is a cheap movie show." -- Jeff G. Bone

Working...