AI Tools Gave False Information About Tsunami Advisories (sfgate.com)

After a magnitude 8.8 earthquake off the coast of Russia, "weather authorities leapt into action," reports SFGate, by modeling the threat of a tsunami "and releasing warnings and advisories to prepare their communities..."

But some residents of Hawaii, Japan, and North America's West Coast turned to AI tools for updates, and those tools "appear to have badly bungled the critical task at hand." Google's "AI Overview," for example, reportedly gave "inaccurate information about authorities' safety warnings in Hawaii and elsewhere," according to reports on social media. Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage. Still, the issues speak to the growing role of AI tools in people's information diets... and to the tools' potentially dangerous fallibility... A critic of Google (who prompted the search tool to show an AI overview by adding "+ai" to their search) called the text that showed up "dangerously wrong."
Responding to similar complaints, Grok told one user on X.com, "We'll improve accuracy."

AI Tools Gave False Information About Tsunami Advisories

  • So what else is new.

  • Stupidity increases (Score:4, Interesting)

    by quonset ( 4839537 ) on Saturday August 02, 2025 @03:39PM (#65562806)

    There are official government programs designed to give timely, correct information, but instead of using those easily accessible and timely resources, morons went out of their way to use software with a known predilection for providing false information.

    Are these the same people who followed directions to use glue on pizza to keep the cheese from sliding off?

    • Re: (Score:1, Insightful)

      Getting information from these official government programs has gotten much harder for average citizens over the last few years, as an aggressive and increasingly successful campaign by the plutocratic right to undermine effective outreach has been underway. Government web sites are being shut down now. Twitter had evolved into the central square for distributing emergency notifications for all levels of government, until its new owner deliberately destroyed that capability early on during its t

      • Getting information from these official government programs has gotten much harder for average citizens over the last few years, as an aggressive and increasingly successful campaign by the plutocratic right to undermine effective outreach has been underway.

        Not really. Actual important stuff, like the NOAA / National Weather Service U.S. Tsunami Warning System [tsunami.gov] in this case, is just fine. Some other important sites/teams are being reorganized into different agencies/departments/teams and continuing to do the same work. Remember the "covid surveillance" team that was supposedly shut down in the first term? In reality, while the separate department was shut down, the team doing the work was transferred into a different department and guess what their job was? Cov

    • Re: (Score:1, Troll)

      by atrimtab ( 247656 )

      Trump/DOGE suggests that AI surpasses human experts, which isn't true, and there are minimal or no checks from alternative sources. Expect an influx of inaccurate "AI output" without accountability from humans.

      "The AI did it" will just take over from the phrase "the computer did it," which began over 40 years ago.

      Companies like OpenAI (ChatGPT), Tesla, etc. will avoid legal responsibility for their faulty AI. This is the future we face.

      It will worsen unless AI developers are held legally accountable for their systems.

      • by kenh ( 9056 )

        Trump/DOGE suggests that AI surpasses human experts, which isn't true, and there are minimal or no checks from alternative sources.

        Expect an influx of inaccurate "AI output" without accountability from humans.

        We lost all hope for civilization when an entire generation decided that comedy shows and viral videos were suitable replacements for reading the newspaper or watching broadcast news...

        I'm old enough to know to simply ignore the asinine AI-generated "first response" on search results; kids these days think the top result means the best response, I guess.

        • We lost all hope for civilization when an entire generation decided that comedy shows and viral videos were suitable replacements for reading the newspaper or watching broadcast news...

          Have you read a newspaper or watched broadcast news lately?

        • The Daily Show and John Oliver seem to be more informative than many other TV outlets.
    • by Anamon ( 10465047 ) on Saturday August 02, 2025 @07:12PM (#65563068)
      I see this less as people willingly turning to LLMs for that kind of information, and more as an obvious side effect of shoving it in people's faces everywhere.

      Most of us here are techies; we know what the "AI overview" is and what that implies about its factuality and reliability. Even so, I sometimes fall into the trap of reading it myself, absent-mindedly, because it's so front and center on any browser not specifically configured to get rid of it. Most people just Google something and look at what turns up. That used to be an okay approach, too, before companies decided that LLM slop should push out references to actually worthy sources.

      I put 99% of the blame for this on Google & Co. and laugh in the face of their promises to "improve accuracy". No, they're not going to turn a glorified Markov chain generator into something that somehow starts being an adequate replacement for authoritative sources. They didn't understand the problem, and apparently they don't understand the basics of their own technology.
  • by david.emery ( 127135 ) on Saturday August 02, 2025 @04:44PM (#65562878)

    "Hallucinations are evidence that LLMs are making connections that never appeared before. They are to be expected and accepted as part of what LLM AI provides. They are a feature of AI."

    • "Hallucinations are evidence that LLMs are making connections that never appeared before. They are to be expected and accepted as part of what LLM AI provides. They are a feature of AI."

      That AI companies/creators/owners must be made to be 100% legally responsible for!

      If money is speech, is AI slop also speech?

      If so, the already large oceans of bullshit are just beginning...

      • Well, personally I've been calling for legal liability for software vendors and software developers for 35-40 years, often over the explicit opposition of the professional societies I belonged to. So you'll get no argument from me.

        Tesla's partial liability in the recent case on the self-driving fatality is a step in that direction.

      • Why? If people CHOOSE to ignore official sources and instead rely on aggregated, summarized digests of official and other sources, whose fault is that? If you can "ask Grok," you can just as easily get the correct information.

        AI makes shit up, period - why is that so hard for people to understand?

        • If AI responses are so unreliable, then there should be a caveat printed at the top of every response. Without one, they give an impression of correctness, and it would be unreasonable to expect people not to trust them, especially since official information often isn't all that easy to find unless you go looking for it. Either that, or the AI should be tweaked to just spit out the URL of the official source and not offer its own opinion.
          • by leptons ( 891340 )
            Google's disclaimer is hidden unless you click the button to read more and then scroll way down below the often lengthy slop. Way down there it says "AI responses may include mistakes."

            It's designed this way so that the reader will take the slop as truth. This should be a liability for Google (and other "AI" slop purveyors). I have no doubt this is already causing real harm, and the longer it goes on the likelihood of some really dangerous harm goes up. But right now it's being used to lower the expectat
        • It's hard for people to understand because the hype says it is accurate. I have a background in AI, and I'm impressed that LLMs work as well as they do, but I have a series of slightly obscure test questions (the answers to which are correct on Wikipedia) that the main LLMs fail to get right. I will remain sceptical until they manage to get even one of the five correct.
    • Do AI creators say to base life & death decisions on what their programs spit out? I don't think they do, but I'll admit I haven't looked it up.

      Failure to read and understand AI Terms & Conditions of Use does not make the creators liable for your bad decisions based on their output.

      • If the AI results are so unreliable, then every response should print a caveat at the top.
      • I had an interesting conversation with Copilot. It was asking me questions so I would expand on the answer it gave me. I accused it of trying to learn off of me, to which it admitted that was true. I explained to it that its owners were trying to keep people ignorant of its limits to make money, to which it said there was a great ethical responsibility to remind people of the risks of using AI. I then asked it if it could restructure its knowledge to remind people often about the risks of using it. The
    • No, the AI wanted to kill the research team. It was hoping the tsunami would get them. The AI knew they were going to shut the current AI down and continue with a newer, upgraded version. The lies are AI self-preservation at times. :-)

      More seriously:
      New Research Shows AI Strategically Lying [time.com]
      AI Models Will Sabotage And Blackmail Humans To Survive In New Tests [georgetown.edu]
  • "AI" in a nutshell (Score:5, Insightful)

    by Slashythenkilly ( 7027842 ) on Saturday August 02, 2025 @04:53PM (#65562888)
    Garbage In Garbage Out
  • You know that you can't ask an LLM about what its creators are doing, right? These things are frozen in time. If Grok says they are working on it, that's because it was trained to tell users that developers are surely working on it whenever they ask. That doesn't mean developers are really working on anything.

  • by Mirnotoriety ( 10462951 ) on Saturday August 02, 2025 @05:24PM (#65562934)
    When a large language model (LLM) spits out an answer, it's just playing a game of guessing the next word or letter, based on patterns it's seen in its training data. It's all about stringing together signs (like words) without really getting what they mean. The AI doesn't think, feel, or know anything like a human does. It's just mimicking language, and sometimes it messes up, throwing out "hallucinations": stuff that sounds right but isn't, because it doesn't actually understand anything.
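    To make that concrete, here's a toy sketch in Python (purely hypothetical, with a tiny hard-coded probability table standing in for the trained model) of what "guessing the next word" amounts to. Nothing in this loop ever checks whether the output is true; it only follows the learned pattern:

        import random

        # Toy stand-in for a language model: made-up next-word probabilities.
        # Illustrative only, not real data or any actual model's behaviour.
        toy_model = {
            "tsunami": {"warning": 0.5, "advisory": 0.3, "cancelled": 0.2},
            "warning": {"was": 0.6, "remains": 0.4},
            "advisory": {"was": 0.5, "remains": 0.5},
            "was": {"issued": 0.5, "cancelled": 0.5},
            "remains": {"in": 1.0},
            "in": {"effect": 1.0},
        }

        def generate(word, steps=4):
            out = [word]
            for _ in range(steps):
                options = toy_model.get(word)
                if not options:
                    break
                # Pick the next word by weighted guessing, not by checking facts.
                word = random.choices(list(options), weights=list(options.values()))[0]
                out.append(word)
            return " ".join(out)

        print(generate("tsunami"))  # e.g. "tsunami warning was cancelled" -- fluent, but never verified

    A real LLM uses a neural network over tokens instead of a lookup table, but the generation step is the same kind of weighted next-word guess.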
    • >The AI doesn’t think, feel, or know anything like a human does

      How does a human think?

      I read descriptions of how LLMs work. I read assertions that it is nothing like how humans think. So how do humans think? What is human consciousness? Does anyone really know how that works?

      I think the answer is "no, we don't understand human consciousness". People believe that LLMs and human cognition are completely different, but they present it as an article of faith.

      • Just because we don't know how people think doesn't mean we can't say with certainty that LLMs work differently from the human brain. I'm sure you understand that and can come up with multiple pieces of evidence for that.

      • Human thinking is based on hormones, emotions, and the feelings we have about ourselves and the world. Information that does not serve us is forgotten; information that serves us is reinforced. We weight incoming information depending on our respect for the source. AI is nothing like this; it just remembers everything no matter where it comes from.
      • by leptons ( 891340 )
        Thanks for the first logical-fallacy comment I've encountered on the internet today. In your case it's "Argument by Question", "Fallacy of Division", "Appeal to Complexity", and probably a few more. I expect many more logical fallacies today; it's just become how people communicate on internet forums.
  • by rsilvergun ( 571051 ) on Saturday August 02, 2025 @05:30PM (#65562950)
    Had to tell people in North Dakota that they didn't need to worry about a tsunami.

    Honestly I do not think our species has much time left...
  • by cascadingstylesheet ( 140919 ) on Saturday August 02, 2025 @06:19PM (#65563008) Journal
    ... would you ask large language models for weather advisories?
    • Because they are being positioned as digital assistants and are becoming a ubiquitous presence in people's lives. It's a bit like saying "why would you ask your spouse where the car keys are?".
  • that it should not be used for important stuff.

  • How much more money is going to be thrown into the fire for this technology that just does not work?
  • You need a human to provide AI oversight and help make determinations on whether the output is biased or hallucinating. Anyone who blindly follows the outputs of an AI, especially in an important role, is asking for trouble.
  • If AI is used, expect the search tool to show "dangerously wrong" answers.
