
AI Tools Gave False Information About Tsunami Advisories (sfgate.com)
After a magnitude 8.8 earthquake off the coast of Russia, "weather authorities leapt into action," reports SFGate, modeling the threat of a tsunami "and releasing warnings and advisories to prepare their communities..."
But some residents of Hawaii, Japan and North America's West Coast turned to AI tools for updates that "appear to have badly bungled the critical task at hand." Google's "AI Overview," for example, reportedly gave "inaccurate information about authorities' safety warnings in Hawaii and elsewhere," according to reports on social media. Thankfully, the tsunami danger quickly subsided on Tuesday night and Wednesday morning without major damage. Still, the issues speak to the growing role of AI tools in people's information diets... and to the tools' potentially dangerous fallibility... A critic of Google — who prompted the search tool to show an AI overview by adding "+ai" to their search — called the text that showed up "dangerously wrong."
Responding to similar complaints, Grok told one user on X.com, "We'll improve accuracy."
SWEIN (Score:2)
So what else is new.
Stupidity increases (Score:4, Interesting)
There are official government programs designed to give timely, correct information, but instead of using those easily accessible and timely resources, morons went out of their way to use software with a known predilection for providing false information.
Are these the same people who followed directions to use glue on pizza to keep the cheese from sliding off?
Re: (Score:1, Insightful)
Getting information from these official government programs has gotten much harder for average citizens over the last few years, as the plutocratic right has waged an aggressive and increasingly successful campaign to undermine effective outreach. Government web sites are being shut down. Twitter had evolved into the central square for distributing emergency notifications at all levels of government until its new owner deliberately destroyed that capability early on during its t
Same people, same job, just a different department (Score:2)
Getting information from these official government programs has gotten much harder for average citizens over the last few years, as the plutocratic right has waged an aggressive and increasingly successful campaign to undermine effective outreach.
Not really. The actually important stuff, like the NOAA / National Weather Service U.S. Tsunami Warning System [tsunami.gov] in this case, is just fine. Some other important sites/teams are being reorganized into different agencies/departments/teams and are continuing to do the same work. Remember the "covid surveillance" team that was supposedly shut down in the first term? In reality, while the separate department was shut down, the team doing the work was transferred into a different department, and guess what their job was? Covid surveillance.
Re: (Score:1, Troll)
Trump/DOGE suggests that AI surpasses human experts, which isn't true, and there are minimal or no checks from alternative sources. Expect an influx of inaccurate "AI output" without accountability from humans.
"The AI did it" will just take over from the phrase "the computer did it," which began over 40 years ago.
Companies like OpenAI, Tesla, etc. will avoid legal responsibility for their faulty AI. This is the future we face.
It will worsen unless AI developers are held legally accountable for their systems.
Re: Stupidity increases (Score:3, Insightful)
Trump/DOGE suggests that AI surpasses human experts, which isn't true, and there are minimal or no checks from alternative sources.
Expect an influx of inaccurate "AI output" without accountability from humans.
We lost all hope for civilization when an entire generation decided that comedy shows and viral videos were suitable replacements for reading the newspaper or watching broadcast news...
I'm old enough to know to simply ignore the asinine AI-generated "first response" on search results; kids these days think the top result means the best response, I guess.
Re: (Score:3)
We lost all hope for civilization when an entire generation decided that comedy shows and viral videos were suitable replacements for reading the newspaper or watching broadcast news...
Have you read a newspaper or watched broadcast news lately?
Re: Stupidity increases (Score:5, Insightful)
Most of us here are techies; we know what the "AI overview" is and what that implies about its factuality and reliability. Although I sometimes fall into the trap of reading it myself, absent-mindedly, because it's so front and center on any browser not specifically configured to get rid of it. Most people just Google something and look at what turns up. That used to be an okay approach, too, before companies decided that LLM slop should push out references to actually worthy sources.
I put 99% of the blame for this on Google & Co. and laugh in the face of their promises to "improve accuracy". No, they're not going to turn a glorified Markov chain generator into something that somehow starts being an adequate replacement for authoritative sources. They didn't understand the problem, and apparently they don't understand the basics of their own technology.
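For readers who haven't seen one, here is roughly what a literal Markov chain text generator looks like, just to make the analogy concrete. This is a toy, word-level sketch in Python; the corpus is made up for illustration, and real LLMs are of course far more elaborate next-token predictors:

import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=15):
    """Walk the chain, sampling each next word from the observed successors."""
    word, output = start, [start]
    for _ in range(length):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

# Toy corpus: the generator only knows which word followed which, nothing else.
corpus = "a tsunami advisory is not a tsunami warning and a warning is not an all clear"
print(generate(build_chain(corpus), "a"))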
But we're told "hallucinations are good" (Score:4, Funny)
"Hallucinations are evidence that LLMs are making connections that never appeared before. They are to be expected and accepted as part of what LLM AI provides. They are a feature of AI."
Re: (Score:2)
"Hallucinations are evidence that LLMs are making connections that never appeared before. They are to be expected and accepted as part of what LLM AI provides. They are a feature of AI."
That AI companies/creators/owners must be made to be 100% legally responsible for!
If money is speech, is AI slop also speech?
If so, the already large oceans of bullshit are just beginning...
Re: (Score:3)
Well, personally I've been calling for legal liability for software vendors and software developers for 35-40 years, often over the explicit opposition of the professional societies I belonged to. So you'll get no argument from me.
Tesla's partial liability in the recent case on the self-driving fatality is a step in that direction.
Re: But we're told "hallucinations are good" (Score:2)
Why? If people CHOOSE to ignore official sources and instead rely on aggregated, summarized digests of the official and other sources, whose fault is that? If you can "ask Grok," you can just as easily get the correct information.
AI makes shit up, period - why is that so hard for people to understand?
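To make "get the correct information" concrete: the official alert feeds can be queried directly. A minimal sketch, assuming the public api.weather.gov alerts endpoint and the third-party requests library; the event filter and contact string below are illustrative placeholders, not official values:

# Minimal sketch: list active tsunami products straight from the NWS alerts API.
# Assumes the public api.weather.gov /alerts/active endpoint and `pip install requests`.
import requests

def active_tsunami_alerts():
    resp = requests.get(
        "https://api.weather.gov/alerts/active",
        # Illustrative event filter; actual event names come from the official feed.
        params={"event": "Tsunami Warning,Tsunami Advisory,Tsunami Watch"},
        # NWS asks clients to identify themselves; this contact address is a placeholder.
        headers={"User-Agent": "tsunami-check-example (you@example.com)"},
        timeout=10,
    )
    resp.raise_for_status()
    for feature in resp.json().get("features", []):
        props = feature.get("properties", {})
        print(f'{props.get("event")}: {props.get("headline")}')

if __name__ == "__main__":
    active_tsunami_alerts()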
Re: (Score:2)
It's designed this way so that the reader will take the slop as truth. This should be a liability for Google (and other "AI" slop purveyors). I have no doubt this is already causing real harm, and the longer it goes on, the higher the likelihood of some really dangerous harm. But right now it's being used to lower the expectat
Re: But we're told "hallucinations are good" (Score:2)
Do AI creators say to base life & death decisions on what their programs spit out? I don't think they do, but I'll admit I haven't looked it up.
Failure to read and understand AI Terms & Conditions of Use does not make the creators liable for your bad decisions based on their output.
AI wanted to kill the research team ... (Score:3)
More seriously:
New Research Shows AI Strategically Lying [time.com]
AI Models Will Sabotage And Blackmail Humans To Survive In New Tests [georgetown.edu]
"AI" in a nutshell (Score:5, Insightful)
Re: (Score:3)
Grok told (Score:2)
You know you can't ask an LLM about what its creators are doing, right? These things are frozen in time. If Grok says they are working on it, it's because it was trained to tell users that the developers are surely working on it whenever users ask. That doesn't mean the developers are really working on it.
A.I has no real intelligence (Score:3)
Re: (Score:2)
>The AI doesn’t think, feel, or know anything like a human does
How does a human think?
I read descriptions of how LLMs work. I read assertions that it is nothing like how humans think. So how do humans think? What is human consciousness? Does anyone really know how that works?
I think the answer is "no, we don't understand human consciousness." People believe that LLMs and human cognition are completely different, but they present it as an article of faith.
Re: A.I has no real intelligence (Score:2)
Just because we don't know how people think does not mean we can't say with certainty that LLMs work differently from the human brain. I'm sure you understand that and can come up with multiple pieces of evidence for it.
The National Weather Service (Score:3)
Honestly I do not think our species has much time left...
AI proves time and time again (Score:2)
that it should not be used for important stuff.
You can't expect perfection (Score:1)
Re: You can't expect perfection (Score:2)
Don't nitpick. There is no AI, period, if you want to be pedantic.
But for the larger audience today, AI == LLM.
Re: (Score:2)
There is no AI, period, if you want to be pedantic
Now that is nitpicking. AI is the current term for what used to be called machine learning, and it's still broader than LLMs.