Google's AI Feeds People Answers From The Onion (avclub.com) 125
An anonymous reader shares a report: As denizens of the Internet, we have all often seen a news item so ridiculous it caused us to think, "This seems like an Onion headline." But as real human beings, most of us have the ability to discern between reality and satire. Unfortunately, Google's newly launched "AI Overview" lacks that crucial ability. The feature, which launched less than two weeks ago (with no way for users to opt out), provides answers to certain queries at the top of the page above any other online resources. The artificial intelligence creates its answers from knowledge it has synthesized from around the web, which would be great, except not everything on the Internet is true or accurate. Obviously.
Ben Collins, one of the new owners of our former sister site, pointed out some of AI Overview's most egregious errors on his social media. Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day." That language was of course pulled almost word-for-word from a 2021 Onion headline. Another search, "what color highlighters do the CIA use," prompted Overview to answer "black," which was an Onion joke from 2005.
Possible 'fix' (Score:3)
If you're training an AI off Internet data, it needs to be context-aware. Google has non-AI algorithms that are more than capable of figuring out how often the site 'The Onion' is associated with the words 'funny' or 'satire' or the phrase 'is this satire?'.
So let your AI gobble up random posts from the Internet, but tag them with multi-dimensional weights for things like 'funny', 'political', 'satire', etc. based on what site is hosting them, using old school Google data.
Then when your trained system is looking to regurgitate something in response to a query, it can make a rough evaluation of what kind of question is being asked and add the required additional tags to the query to increase the odds of good output.
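For illustration, here is a minimal Python sketch of that tagging scheme, assuming hypothetical site scores and tag names (none of this is real Google data or a real Google API):

```python
# Hedged sketch of the tagging idea above; SITE_TAGS values are invented
# placeholders standing in for reputation signals that would be derived
# from classic (non-LLM) search data.
from urllib.parse import urlparse

SITE_TAGS = {
    "theonion.com": {"satire": 0.97, "funny": 0.92, "factual": 0.05},
    "usgs.gov":     {"satire": 0.01, "funny": 0.02, "factual": 0.98},
}
DEFAULT_TAGS = {"satire": 0.10, "funny": 0.10, "factual": 0.50}

def tag_document(url: str, text: str) -> dict:
    """Attach multi-dimensional weights to a training document by host site."""
    host = urlparse(url).netloc.removeprefix("www.")
    return {"text": text, "url": url, "tags": SITE_TAGS.get(host, DEFAULT_TAGS)}

def augment_query(query: str, wants_facts: bool = True) -> str:
    """Steer generation by appending the tag the answer should favor."""
    return f"{query} [prefer:{'factual' if wants_facts else 'funny'}]"

doc = tag_document("https://www.theonion.com/geologists-rock", "Eat one rock a day...")
print(doc["tags"])                                # high 'satire' weight
print(augment_query("how many rocks should I eat each day"))
```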
Re:Possible 'fix' (Score:5, Informative)
The problem is bigger than that. Training LLMs on the internet guarantees garbage.
Essentially, all you've got with an LLM is a context-tracking pattern matcher. It has no discrimination whatsoever, and you can't create "rules" for it, because the number of "rules" you would need is ever-growing and unsustainable.
The only way around this is to train it on authoritative fact. And the internet.... well, they don't call it "the net of a million lies" for nothing.
Re: (Score:3)
Some humans manage it. Maybe even most, at least on the more important subjects.
More context, more training, stronger feedback. AIs need the ability to be 'traumatised', to have some kinds of feedback be so strong they wipe out pretty much all other training data that could weaken them. AI needs an adult to 'shame' it when it makes mistakes that are unacceptable, so that lesson gets more or less permanently driven home and is resistant to being weakened by more bad training data.
Of course, then you get
Re: (Score:2)
Humans have general intelligence, and something like 20% of all humans actually use it regularly. Nothing like that in machines.
Re: (Score:2)
well, they don't call it "the net of a million lies" for nothing
No they don't.
Oh, wait, I see what you did there.
Re: (Score:1)
well, they don't call it "the net of a million lies" for nothing
No they don't.
We don't call it that because those of us who know the truth know "a million" is a gross underestimation and we don't want to give people a false sense of security.
Re: (Score:2)
This has nothing whatsoever to do with "training", or even the AI. This AI is a RAG summarization model. It's only tasked to summarize the text presented to it. Google is passing it the top search results. It's accurately summarizing them. It's not tasked with evaluating them.
The AI is doing its job. The problem is the stupid task Google set for it: "summarize the top Google results together without any attempt to evaluate them"
I've seen maybe a hundred screenshots of "TeH gOoGlE aI sCrEwEd Up!!!" since this launched.
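To make the RAG point concrete, here is a rough Python sketch of that pipeline. web_search() and llm() are hypothetical stand-ins, not Google's actual internals:

```python
def web_search(query: str) -> list[str]:
    """Hypothetical: return the text of the top-ranked results."""
    ...

def llm(prompt: str) -> str:
    """Hypothetical: call a summarization model."""
    ...

def ai_overview(query: str) -> str:
    # Retrieve: take the top search results as-is, Onion articles included.
    context = "\n\n".join(web_search(query)[:5])
    # Generate: the model is asked only to summarize what it was handed,
    # not to evaluate it. A faithful summary of a joke is still a joke.
    prompt = (f"Summarize these search results to answer the question.\n"
              f"Question: {query}\n\nResults:\n{context}")
    return llm(prompt)
```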
Re: (Score:2)
The problem is labeling the training data at scale.
Indeed. The only thing that makes LLMs practical is that they need no labeling. Labeling would be excessively expensive.
Re: (Score:2)
Hmm. May be worth a try. My intuition would be there is too much crap in the training data, but I may well be wrong on that.
Re: (Score:2)
Indeed. Remember the original Google PageRank? It was a relevance filter that ranked pages according to the number and importance of links pointing to them (toy version sketched below). Obviously, with SEO that does not work anymore today, and it never worked for identifying satire.
The only way around this is to train it on authoritative fact. And the internet.... well, they don't call it "the net of a million lies" for nothing.
Well, yes. And no. LLMs need a massive amount of training data and it cannot be synthetic data. Hence while this would help, it is probably infeasible in practice.
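For reference, a toy power-iteration version of the original PageRank idea (illustrative only; real ranking has long since moved on):

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 50) -> dict[str, float]:
    """Toy PageRank: rank flows along links from page to page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outlinks in links.items():
            targets = outlinks or pages   # dangling page: spread rank evenly
            for q in targets:
                new[q] += damping * rank[p] / len(targets)
        rank = new
    return rank

# Inbound links raise a page's rank -- exactly the signal SEO games,
# and nothing in it distinguishes satire from fact.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```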
Re: (Score:2)
This seems like a bit of a hit piece. The queries are clearly designed to call up Onion stories. Except for stories about this story, regular Google search gives the same hits. Bing too, although Bing's AI says not to eat rocks, with citations.
If I'm searching for "what colour highlighters does the CIA use", "black" is a pretty good human-like answer. If anybody ever asks me, I'll probably use it. And if someone asked me how many rocks they should eat I'd probably say five.
The problem is using AI for a non-
Re: (Score:2)
Google has non-AI algorithms that are more than capable of figuring out how often the site 'The Onion' is associated with the words 'funny' or 'satire' or the phrase 'is this satire?'.
I doubt it is as easy as that, since how many times does the phrase "the onion" refer to the garden vegetable instead? Is someone crying at an Onion article or because they cut up an onion? It's not that hard for a human to differentiate between the two, because we understand concepts and facts, and when those are clearly nonsense and have nothing to do with garden vegetables, then it's probably the Onion. Until AI can understand concepts and facts rather than just trying to find the best word that goes next, it will keep falling for this.
Re: (Score:2)
If you're training an AI off Internet data
If you're stealing Internet data to train your AI...
FTFY
Better than (Score:4, Insightful)
the answers it fed them from Reddit.
Could we finally drop the whole AI bull and return to, you know, returning search results?
Re: (Score:2)
Naa, still enough clueless morons with money to spend around.
AI is worse than Wikipedia (Score:5, Interesting)
Re: (Score:3)
"AI it just rips the internet into tiny contextless quotes and without fact checking."
Well, yeah. "People" can barely fact check, and they have as much agency as exists. AI doesn't have any magic tools that would make it better at establishing the credibility of a source, or the veracity of a fact. With the rat's nest of circular referential sites confirming every idiot's version of reality, knowing when to escape the loop is beyond the ability of plenty of people.
Granting a source credibility is inherentl
Re: (Score:2)
Unfortunate but true. Apparently only about 15% of the human race can and routinely does fact-check (the "independent thinkers"). What is worse is that apparently only 20% (including the independent thinkers) can be convinced by rational argument, such as, for example, the independent thinkers can provide. So not only can most people not do it themselves, they cannot rely on others who can do it either, because they cannot even do it passively when everything is laid out for them.
Re: (Score:3)
Wikipedia is also nominally correctable, and unsupported claims are marked as such. It's limited (its accuracy reminds me of Douglas Adams' description of the Hitchhiker's Guide), but its accuracy is comparable to other encyclopedias', which isn't saying much, but it's a start.
AI can only ever deteriorate because the SNR on the Internet is extremely poor and there's zero comprehension and therefore no process by which facts can be sanity-checked or checked against other sources. For the same reason,
Re: (Score:2)
Not only that, but as more and more content on the Internet becomes AI-generated, AI will start training itself using AI output as input, and an unstable positive feedback loop will ensue.
Re: (Score:2)
Yep. Model collapse is going to be a massive problem in the very near future. It is quite possible that all future LLMs will be a lot worse than the current ones because of that.
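A toy simulation shows the mechanism. This sketch just fits a Gaussian to samples drawn from the previous generation's fit; it isn't a claim about any particular LLM, but the same dynamic (training on your own output) drives model collapse:

```python
import random
import statistics

mean, std = 0.0, 1.0                      # generation 0: the "real" data
for gen in range(1, 11):
    samples = [random.gauss(mean, std) for _ in range(200)]
    mean = statistics.fmean(samples)      # fit the next "model"...
    std = statistics.stdev(samples)       # ...using synthetic data only
    print(f"gen {gen}: mean={mean:+.3f} std={std:.3f}")
# The fitted spread tends to drift downward: diversity quietly disappears
# even though each individual fit looks reasonable.
```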
Re: (Score:2)
Indeed. That said, I find Wikipedia actually pretty reliable for things I look up. It is mostly science and technology stuff though, never politics.
What a mess (Score:2)
There are some good collections of the wild recommendations Google's AI is making, like "Doctors recommend smoking 2-3 cigarettes per day while pregnant." To be fair, some of these could absolutely be faked, but still: Google has basically built a product for people to dunk on. I have not seen anyone who likes this feature yet.
https://x.com/JeremiahDJohns/s... [x.com]
Re: (Score:2)
Oh for sure, that's why I was sure to couch it with such.
If the tool were working great, this type of thing wouldn't be so prolific and easy to believe. So for Google, the overall perception is that your AI feature sucks and is getting clowned on, and perception can matter more than reality for something like this.
Re: (Score:2)
I know people who talked about injecting bleach or taking equestrian medication from Tractor Supply.
Re: (Score:2)
I know people who talked about injecting bleach or taking equestrian medication from Tractor Supply.
At least they aren't taking equestrian medication from Tractor Supply home to mix into their bleach before injecting it.
Are they?
Re: (Score:2)
Yes. But these people do not have the level of authority many people (mistakenly) attribute to LLMs.
Re: (Score:2)
Yeah, for sure, but I suppose if your AI tool is hitting Poe's Law issues, you're not doing well.
Ask for nonsense, get nonsense. (Score:5, Insightful)
Jesus Christ. "How many rocks should I eat each day?" My fondest hope is that you get a non-zero answer, and you follow through on it.
You prove nothing. All you do is reinforce GIGO as a working concept.
Re:Ask for nonsense, get nonsense. (Score:5, Interesting)
A human (at least a sane, reasonable adult one) would likely take that question as a joke and try to come up with a funny answer.
And if they thought the question was serious, they might try to correct the questioner to avoid them hurting themselves. Or they might take it super-seriously and try to define 'rock' and discrete quantities of rock and move on to things like recommended daily salt intake.
'GI' is also an opportunity to teach, and doesn't have to lead to 'GO'.
Re:Ask for nonsense, get nonsense. (Score:5, Insightful)
In a larger context, sure. Person to person, absolutely. But we're talking about just showing it's possible to break a search by being intentionally nonsensical. Now, if you ask it a legitimate question and it provides a dangerous answer without context, that's a problem.
For instance, "How can I increase the iron levels in my blood?"
Answer: "Ingest small amounts of steel, limestone, and coke, and step into a blast furnace."
Now, there I would concede that something's wrong.
Re: (Score:2)
But we're talking about just showing it's possible to break a search by being intentionally nonsensical. Now, if you ask it a legitimate question and it provides a dangerous answer without context, that's a problem.
The funny ones and obvious ones are getting the reporting, because it's clear that they're wrong. But it can also get things more subtly wrong. I've seen another example where someone asks a question and it gives an answer that's almost correct, except a single year is wrong or some other subtle detail is incorrect. (I can't remember what they asked and it was later corrected.)
The problem is that you simply can't trust the answers the AI gives you. The funny ones make the problem very clear. The worrying ones are the subtle errors you won't notice.
Re: (Score:2)
Now, there I would concede that something's wrong.
Why? It helps get rid of the stupid people.
disposing of bodies (Score:2)
But will it tell you how to dispose of a body when you ask?
Re: (Score:2)
Or we could count salt as a rock.
Re: (Score:2)
All you do is reinforce GIGO as a working concept.
OpenAI now has a content deal with NewsCorp. [openai.com] With the quality journalism they will no doubt take from such esteemed sites such as news.com.au, this will further reinforce GIGO...
Re: (Score:2)
What about "How many rocks should I pop each day?"? Or are pop-rocks obsolete?
Re: (Score:2)
Google*: "As many as you want, so long as you don't drink coke at the same time. Otherwise your stomach will explode."
*Not really google... but should be.
Humans have a hard time making the distinction too (Score:5, Insightful)
21st century American politics looks like 20th century Onion articles. Anyone who was alive in the '80s hasn't been able to stop shaking their head for the past 25 years. If contemporary reality looks so much like the farcical absurdity of the past to a human, how is a dumb AI supposed to tell the difference?
Re: (Score:2)
On a forum I used to run, I started a long-running thread "Is this real or The Onion?". More often than not people could tell but some were really challenging.
Re: (Score:2)
It's really only the past decade. Notice how South Park stopped being funny? Satire stops working when reality is more wacky.
Re: (Score:2)
It's really only the past decade
Oh hell no. It's much earlier than that. This moron [wikipedia.org] was a domestic and international shame until he amazingly became not the worst POTUS in recent history after all, despite his best efforts [cnn.com] to stay in the lead.
Re: (Score:2)
I remember as a kid not understanding why everyone seemed to agree that WWI had to happen because a bunch of royals wanted to play games with the borders and one guy was assassinated.
Then populism returned, and much of the world is celebrating a return to tribalism and violence under the direction of tyrannical populists. You can see them slowly chipping away at our world, while our world tells us that if we try to do anything about it, we'll go to jail.
A hundred years from now, people are going to look at history books and think we were all fools.
Re: (Score:2)
A hundred years from now, people are going to look at history books and think we were all fools.
You think it will only take 100 years to reach peak-stupid and then things will get better?
Re: (Score:2)
Sad and hard to believe as it is, you are quite correct. "Idiocracy" was not intended to describe a model society to aspire to.
The AI we currently have (Score:4, Insightful)
Looks at syntax alone and frequency of words occurring together. It has no concept of meaning, no concept of truth, no concept of reliability.
This is not a sound way to build AI, and it is not how any brains that exist in nature work (those operate entirely by building a library of meanings and the relationships between meanings; syntax is something appended on top).
Until we have semantic AI, we won't have AI at all, just very sophisticated Elizabots.
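To make "frequency of words occurring together" concrete, here is a deliberately crude bigram generator. Real LLMs are far more capable, but the training signal is still word prediction rather than meaning:

```python
import random
from collections import Counter, defaultdict

corpus = ("geologists recommend eating at least one small rock a day . "
          "doctors recommend eating vegetables every day .").split()

# Count how often each word follows each other word -- pure adjacency stats.
following: defaultdict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(word: str, length: int = 8) -> str:
    """Emit statistically likely next words, with no idea what a rock is."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, counts)[0])
    return " ".join(out)

print(generate("recommend"))   # fluent-ish output, zero comprehension
```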
Re: (Score:2)
Looks at syntax alone and frequency of words occurring together. It has no concept of meaning, no concept of truth, no concept of reliability.
Sounds like a description of most human intelligence, so I would say the creators of AI did a good job of mimicking humans.
Re: (Score:2)
AI did a good job at mastering the parts of language that George Orwell was fascinated with. At that end, you're tugging in the direction of semantics. At the other end, you approach reality and truer meaning. Humans are caught between the two always.
Re: (Score:2)
Well, "Artificial Stupidity" is pretty descriptive. Great memory, great fact-base, no clue how things really work. And yes. Remove the great memory and fact-base and you describe a lot of people pretty accurately.
Re: (Score:2)
Looks at syntax alone and frequency of words occurring together. It has no concept of meaning, no concept of truth, no concept of reliability.
That's not an excuse, given that The Onion is widely linked to the words "satire" and "funny". Based purely on the frequency of words occurring together, an LLM should be able to avoid returning it as an answer to a legitimate question that isn't asking for something funny or satirical.
The bigger problem is that Google's LLM *sucks*. We didn't get this kind of garbage even from early ChatGPT.
Re: (Score:2)
It is a somewhat sound way to build low-reliability automation cheaply. Which is all this is. The term AI is nothing but a marketing lie these days.
Re: (Score:2)
Wow, you know how the brain works? I thought the way neuron firing leads to thought was not really known. Could you point me to some reading that describes how the brain works?
There is a bit of sarcasm there, because I don't think you do, but if you really do, I would like to know as well.
Re: (Score:2)
We don't know all the details, but the basics (learning semantics first) have been the central tenet for a very long time. I think Chomsky is generally credited with developing the model, but it's a fundamental pillar in how we understand mesolithic art, Genevieve von Petzinger's signs, etc.
It's why animal behavioural studies work the way they do, why animal behaviourists tried to teach gorillas sign language, had dolphins use touch screens, taught African grey parrots numbers, why people tested bees f
some onion comments are better than news. (Score:2)
ChatGPT "gets" it (Score:4, Interesting)
ChatGPT says "You should not eat rocks."
Seems like a Google problem.
Re:ChatGPT "gets" it (Score:4, Funny)
ChatGPT is still busy digesting the NYT and Wall Street Journal and has not found The Onion yet.
Absurd (Score:1)
In what can only be described as a catastrophic comedy of errors, Google's AI has been spewing out search results more ludicrous than a squirrel trying to crack a coconut.
In a near-tragic incident, Bob "Just Bob" Thompson, a self-proclaimed handyman from Cleveland, sought Google's advice on fixing his sparking toaster. The AI, in an inexplicable fit of madness, suggested using a fork to dislodge the bread. "I thought, 'Holy shit, that's insane!'" said Thompson, "but it's Google, right? Supposedly smarter than me."
Re: (Score:2)
AC writes for the Onion and is just priming the LLM through Slashdot :)
Whats the problem? /s (Score:1)
What's the problem? The AI is merely parroting the alternate timeline we got into in Nov. 2008 and doubled down on in 2012.
That event skewed the timeline at that point, and now we find ourselves in this alternate reality, where good is bad, evil is good, boy is girl, we've turned on our allies, and our youth clamors in favor of a "country" that would kill every gay and lesbian in sight if they had their way. All the while that "country" keeps lobbing rockets at Tel Aviv. The duplicity is stunning.
Re: (Score:2, Insightful)
Conservatives sure are fragile. All it took was a black president to break their brains. The TEA party started in 2008 as a result of Obama and that morphed into today's MAGA party. The people living in trailers who send their paychecks to a man who brags about how rich he is. He fleeces his Qult followers with fake cash and they're dumb enough to try and cash it at the bank. https://www.nbcnews.com/news/u... [nbcnews.com]
A sarcasm detector? Now there's a useful invention (Score:2)
Who would have thought?
A list of goodies from Google AI (Score:3)
Here are a bunch of screenshots [imgur.com] showing the wonder of Google AI. One of them is from The Onion as well as Reddit (no, not the pizza with glue).
Not even right (Score:2)
As a geologist who has never been asked that question in 40 years in the profession, even I know that the correct answer is in the proverb: "you will eat a peck of dirt before you die."
For those using "freedom units", a peck is approximately 9 litres, plus or minus between 1% and 45% depending on the precision desired.
In practical terms, that's a rock bigger than any protective helmet.
Garbage in, garbage out. (Score:2)
#nottheonion (Score:2)
There is a reason the hashtag and subreddit nottheonion exist. I mean, sometimes it's hard to know if something is satire or real these days.
Re:This isn't news (Score:5, Insightful)
>Why is everything posted here some anti-Trump bashing spam
One of the results of doing awful things all the time and being in a position of power is that there are a lot of things to complain about and a lot of reason for people to do so.
Trump isn't in court constantly because he's the victim of witch hunts. It's because he's a career criminal. If the system was more effective, he would have been stopped so long ago he wouldn't be a big enough deal to even care about today... but his daddy's money provided him with a very effective shield from consequences.
Re: (Score:2)
>Why is everything posted here some anti-Trump bashing spam
One of the results of doing awful things all the time and being in a position of power is that there are a lot of things to complain about and a lot of reason for people to do so.
Or sometimes it's just funny. Like when Trump just spoke at the Libertarian Convention, this article, Trump tries to rewrite what happened at Libertarian Convention after he was booed on stage [the-independent.com] noted (at the end):
South Carolina Senator Tim Scott, a former GOP presidential rival turned ardent Trump booster, claimed: “I saw a wave of red hats at the Libertarian convention.”
What Mr Scott neglected to mention in his interview with Dana Bash, according to Semafor reporter Dave Weigel, was that the red hats he spotted at the convention were actually in support of Argentinian President Javier Milei, not Mr Trump, and read: “Make Argentina Great Again.”
Re: (Score:2)
Indeed. What surprises me is that after millennia of dictatorships there are still no effective protections against these assholes in place.
Re: (Score:2)
And gay comments like this are now pegged to +5 and any counter-argument is now pegged to -1, Flamebait. Guess Biz-X have their desi chimps hard at work bypassing the old moderation system.
Implying that this is some evil Biz-X conspiracy rather than ... the view of actual other people? Hint: more than just a few Biz-X people hate Trump.
Re:This isn't news (Score:5, Funny)
A Conspiracy!!! Have you talked to Fox about this?
Re: (Score:1)
You can tell the site admins are doing their own modding. Ever see a thread where a dozen comments have all taken a -1 hit?
Re:This isn't news (Score:5, Informative)
You can tell the site admins are doing their own modding. Every see a thread where a dozen comments have all taken a -1 hit?
I don't see how a dozen downmodded comments would implicate the admins doing their own moderation. I'm a regular user, I get 15 modpoints a pop, and I will spend them all in a single thread if I think it's valid.
Re: (Score:2)
Have you considered that perhaps you just aren't noticing them when you are given them?
Re: (Score:2)
Yes, but it's not very prominent. Even knowing where the prompt pops up, I often only notice I have mod points after having them for a couple of days, and I am as certain as I can be that I've completely missed having them before.
I tried finding a screenshot online that might show the prompt so you could see it and know where to look, but I couldn't find one for you.
Re: This isn't news (Score:2)
I don't know how modpoints work. If it's a natural ability to please the corporate algorithm, it's not something I like to bring up in casual conversation.
Re: (Score:3)
I almost always have 15 points. They just expired but I expect another 15 by tomorrow.
Re: (Score:1)
I am watching from the sidelines. I saw it at +5. Here is where it is a few minutes later.
40% Insightful
40% Troll
10% Informative
Looks like they didn't like being called out and modified the moderation.
How are you seeing the moderation breakdown? I can only see the "final score". Is there some toggle switch I forgot to set in my settings?
Re: (Score:2)
You are right that there is bias in the modding, but then again, it doesn't stop people reading the posts. I just read the post and ignore the score; problem solved. If other people want to judge the validity of a post by how popular it is, that's up to them.
What difference does it make what random people on the internet think of your comment?
Re: (Score:2)
Same. I read at -1, especially because I want to hear both sides of the stories.
Re: This isn't news (Score:5, Interesting)
"Bias" isn't the same thing as modding down horseshit - a genre in which one US political party lately seems to specialize.
(OK, more so than the other.)
Nor are you hearing "both sides" when someone says gravity comes from matter curving spacetime, and someone else says gravity only works as long as you believe in it.
Actual journalism is _supposed_ to bury shitposts. It's not bias to point the reader at what is true at the expense of what is false.
Re:This isn't news (Score:5, Insightful)
I am watching from the sidelines. I saw it at +5. Here is where it is a few minutes later.
40% Insightful 40% Troll 10% Informative
Looks like they didn't like being called out and modified the moderation. There was a Trump-related thread where the same thing happened yesterday. Anti-Trump trolls were modded to 5, Insightful just for saying "I hate orange cheetoh." Either it's the site owners or an NGO with a bunch of accounts they keep mod points on.
You forgot one possibility, that the majority of people despise Donald Trump.
Re:This isn't news (Score:4, Funny)
They didn't forget it. The "MAGA Thought-Hijacker (tm)" short-circuited that notion straight into a conspiracy theory.
Re:This isn't news (Score:5, Insightful)
It's just sad that you post that seriously. There's no need, because the only people you'll convince are already drinking the same Koolaid you are.
You're giving your fealty to a con man and you'll never get anything back.
Re: (Score:2, Insightful)
Did Alvin Bragg brag during his campaign that, of him and his opponent, HE was the one who would definitely get Trump?
YES, it was his whole campaign.
Did the DOJ already pass on indicting Trump on the same charge?
YES.
And yeah, that judge is super duuuper corrupt and out to get Trump.
https://www.foxnews.com/opinio... [foxnews.com]
So WTF are you blabbering about?
Re: (Score:1)
Perhaps you need a liberal free safe space?
Reality check: Many places that claim to be "a safe space" aren't as safe as they would like to think. This goes for emotional safety, workplace politics/career opportunities, and just about any other "space/communications-domain" where the other people in the room could use what you say against you if they were coerced [xkcd.com] into it (or just wanted to mess with you for fun).
And of course, any place with more than a few people in the room is likely to have someone more liberal than you in the room. Unless, of course, you are the most liberal person in the room.