Google's AI Feeds People Answers From The Onion (avclub.com) 125

An anonymous reader shares a report: As denizens of the Internet, we have all often seen a news item so ridiculous it caused us to think, "This seems like an Onion headline." But as real human beings, most of us have the ability to discern between reality and satire. Unfortunately, Google's newly launched "AI Overview" lacks that crucial ability. The feature, which launched less than two weeks ago (with no way for users to opt out), provides answers to certain queries at the top of the page above any other online resources. The artificial intelligence creates its answers from knowledge it has synthesized from around the web, which would be great, except not everything on the Internet is true or accurate. Obviously.

Ben Collins, one of the new owners of our former sister site, pointed out some of AI Overview's most egregious errors on his social media. Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day." That language was of course pulled almost word-for-word from a 2021 Onion headline. Another search, "what color highlighters do the CIA use," prompted Overview to answer "black," which was an Onion joke from 2005.

  • by Baron_Yam ( 643147 ) on Monday May 27, 2024 @10:25AM (#64502655)

    If you're training an AI off Internet data, it needs to be context-aware. Google has non-AI algorithms that are more than capable of figuring out how often the site 'The Onion' is associated with the words 'funny' or 'satire' or the phrase 'is this satire?'.

    So let your AI gobble up random posts from the Internet, but tag them with multi-dimensional weights for things like 'funny', 'political', 'satire', etc. based on what site is hosting them, using old school Google data.

    Then when your trained system is looking to regurgitate something in response to a query, it can make a rough evaluation of what kind of question is being asked and add the required additional tags to the query to increase the odds of good output.
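
    A minimal sketch of that idea in Python; the site list, scores, and threshold below are hypothetical stand-ins, not anything Google actually uses:

    from dataclasses import dataclass, field

    @dataclass
    class Document:
        url: str
        text: str
        tags: dict = field(default_factory=dict)

    # Hypothetical "old school" signal: how often a site co-occurs with words
    # like "satire" or "funny" across the web, normalized to [0, 1].
    SITE_SATIRE_SCORE = {
        "theonion.com": 0.97,
        "example-news.com": 0.04,
    }

    def site_of(url: str) -> str:
        # Crude hostname extraction, good enough for the sketch.
        return url.split("/")[2] if "//" in url else url.split("/")[0]

    def tag_document(doc: Document) -> Document:
        # Unknown sites get a neutral 0.5 rather than being trusted by default.
        doc.tags["satire"] = SITE_SATIRE_SCORE.get(site_of(doc.url), 0.5)
        return doc

    def usable_for_factual_queries(doc: Document, threshold: float = 0.8) -> bool:
        # A query classified as factual can simply skip high-satire sources.
        return doc.tags["satire"] < threshold

    doc = tag_document(Document("https://theonion.com/geologists-recommend", "..."))
    print(usable_for_factual_queries(doc))  # False

    The hard part in practice is the query classifier, not the tagging itself.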

    • Re:Possible 'fix' (Score:5, Informative)

      by Cryptimus ( 243846 ) on Monday May 27, 2024 @10:34AM (#64502671) Homepage

The problem is bigger than that. Training LLMs on the internet guarantees garbage.

Essentially, all you've got with an LLM is a context-tracking pattern matcher. It has no discrimination whatsoever, and you can't create "rules" for it, because the number of "rules" you would need is ever-growing and unsustainable.

      The only way around this is to train it on authoritative fact. And the internet.... well, they don't call it "the net of a million lies" for nothing.

      • Some humans manage it. Maybe even most, at least on the more important subjects.

        More context, more training, stronger feedback. AIs need the ability to be 'traumatised', to have some kinds of feedback be so strong they wipe out pretty much all other training data that could weaken them. AI needs an adult to 'shame' it when it makes mistakes that are unacceptable, so that lesson gets more or less permanently driven home and is resistant to being weakened by more bad training data.

        Of course, then you get

        • by gweihir ( 88907 )

          Humans have general intelligence, and something like 20% of all humans actually use it regularly. Nothing like that in machines.

      • well, they don't call it "the net of a million lies" for nothing

        No they don't.

        ...

        Oh, wait, I see what you did there.

        • by davidwr ( 791652 )

          well, they don't call it "the net of a million lies" for nothing

          No they don't.

          We don't call it that because those of us who know the truth know "a million" is a gross underestimation and we don't want to give people a false sense of security.

      • by jhecht ( 143058 )
Training LLMs on authoritative fact would help some, but it would also cost a lot more money, and no management wants to spend money. And it wouldn't stop the AI from misunderstanding the accurate facts.
        • by Rei ( 128717 )

          This has nothing whatsoever to do with "training", or even the AI. This AI is a RAG summarization model. It's only tasked to summarize the text presented to it. Google is passing it the top search results. It's accurately summarizing them. It's not tasked with evaluating them.

The AI is doing its job. The problem is the stupid task Google set for it: "summarize the top Google results together without any attempt to evaluate them."

          I've seen maybe a hundred screenshots of "TeH gOoGlE aI sCrEwEd Up!!!" sinc
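
          For anyone unfamiliar with the term, a bare-bones sketch of the retrieve-then-summarize (RAG) flow described above; search and llm_summarize are hypothetical stand-ins, not Google's actual stack:

          def search(query: str, k: int = 3) -> list[str]:
              # Stand-in for a web search returning the top-k result snippets.
              # If a satire site ranks highly, its text lands here unchanged.
              return ["Geologists recommend eating at least one small rock per day."][:k]

          def llm_summarize(query: str, passages: list[str]) -> str:
              # Stand-in for the summarization model. It is asked only to condense
              # the passages; it is never asked whether they are trustworthy.
              return f"Answer to '{query}': " + " ".join(passages)

          def ai_overview(query: str) -> str:
              passages = search(query)               # retrieval step
              return llm_summarize(query, passages)  # summarization step, no source vetting

          print(ai_overview("how many rocks should I eat each day"))

          The failure lives entirely in which passages get retrieved, which is the parent's point: the summarizer is doing exactly what it was asked to do.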

      • One of the things neural networks can do is discriminate categories. So yes, you could build a model that has some idea of the difference between fact, opinion, trolling, and satire. In theory. The problem is labeling the training data at scale.
        • by gweihir ( 88907 )

          The problem is labeling the training data at scale.

          Indeed. The only thing that makes LLMs practical is that they need no labeling. Labeling would be excessively expensive.
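
          To make that concrete, a toy discriminator using scikit-learn with a handful of hand-written labels (all examples made up); it works mechanically, and producing labels like these at web scale is exactly the expensive part:

          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.linear_model import LogisticRegression
          from sklearn.pipeline import make_pipeline

          # Tiny hand-labeled corpus; a real system would need millions of such labels.
          texts = [
              "Water boils at 100 degrees Celsius at sea level.",
              "The central bank raised interest rates by a quarter point on Tuesday.",
              "Geologists recommend eating at least one small rock per day.",
              "Area man heroically finishes entire bag of chips before dinner.",
          ]
          labels = ["fact", "fact", "satire", "satire"]

          clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
          clf.fit(texts, labels)

          print(clf.predict(["CIA reveals all of its highlighters are black."]))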

      • by gweihir ( 88907 )

Indeed. Remember the original Google PageRank? It was a relevance filter that ranked pages according to the links pointing to them, weighted by the rank of the linking pages. Obviously, with SEO that does not work anymore today, and it never worked for identifying satire.

        The only way around this is to train it on authoritative fact. And the internet.... well, they don't call it "the net of a million lies" for nothing.

Well, yes. And no. LLMs need a massive amount of training data, and it cannot be synthetic data. Hence, while this would help, it is probably infeasible in practice.
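
        For reference, the "old school" relevancy signal mentioned above, as a bare power-iteration PageRank over a made-up toy link graph:

        def pagerank(links: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
            # links maps each page to the pages it links out to.
            pages = list(links)
            n = len(pages)
            rank = {p: 1.0 / n for p in pages}
            for _ in range(iters):
                new = {p: (1.0 - d) / n for p in pages}
                for p, outs in links.items():
                    targets = outs if outs else pages   # dangling pages spread rank evenly
                    share = d * rank[p] / len(targets)
                    for q in targets:
                        new[q] += share
                rank = new
            return rank

        toy_graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        print(pagerank(toy_graph))

        A page's rank depends on the rank of the pages linking to it, which is exactly what SEO link farms learned to game.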

    • by ceoyoyo ( 59147 )

      This seems like a bit of a hit piece. The queries are clearly designed to call up Onion stories. Except for stories about this story, regular Google search gives the same hits. Bing too, although Bing's AI says not to eat rocks, with citations.

      If I'm searching for "what colour highlighters does the CIA use", "black" is a pretty good human-like answer. If anybody ever asks me, I'll probably use it. And if someone asked me how many rocks they should eat I'd probably say five.

      The problem is using AI for a non-

    • Google has non-AI algorithms that are more than capable of figuring out how often the site 'The Onion' is associated with the words 'funny' or 'satire' or the phrase 'is this satire?'.

I doubt it is as easy as that, since how often does the phrase "the onion" refer to the garden vegetable instead? Is someone crying at an Onion article or because they cut up an onion? It's not that hard for a human to differentiate between the two, because we understand concepts and facts, and when those are clearly nonsense and nothing to do with garden vegetables, then it's probably the Onion. Until AI can understand concepts and facts rather than just trying to find the best word that goes nex

    • If you're training an AI off Internet data

      If you're stealing Internet data to train your AI...

      FTFY

  • Better than (Score:4, Insightful)

    by Opportunist ( 166417 ) on Monday May 27, 2024 @10:32AM (#64502667)

    the answers it fed them from Reddit.

    Could we finally drop the whole AI bull and return to, you know, returning search results?

  • by xack ( 5304745 ) on Monday May 27, 2024 @10:32AM (#64502669)
At least with Wikipedia you can see the history or the citations; with AI it just rips the internet into tiny contextless quotes without fact checking. I just realized I've been using Wikipedia longer than the current generation of college graduates have been alive. Maybe my old ass will save me from the AI apocalypse that Generation Beta will face.
    • "AI it just rips the internet into tiny contextless quotes and without fact checking."

      Well, yeah. "People" can barely fact check, and they have as much agency as exists. AI doesn't have any magic tools that would make it better at establishing the credibility of a source, or the veracity of a fact. With the rat's nest of circular referential sites confirming every idiot's version of reality, knowing when to escape the loop is beyond the ability of plenty of people.

      Granting a source credibility is inherentl

      • by gweihir ( 88907 )

Unfortunate but true. Apparently only about 15% of the human race can and routinely does fact-check (the "independent thinkers"). What is worse is that apparently only 20% (including the independent thinkers) can be convinced by rational argument, such as, for example, what the independent thinkers can provide. So not only can most people not do it themselves, they cannot rely on others who can do it either, because they cannot even do it passively when everything is laid out for them.

    • by jd ( 1658 )

Wikipedia is also nominally correctable, and unsupported claims are marked as such. It's limited (its accuracy reminds me of Douglas Adams' description of the Hitchhiker's Guide), but it is comparable to other encyclopedias, which isn't saying much, but it's a start.

      AI can only ever deteriorate because the SNR on the Internet is extremely poor and there's zero comprehension and therefore no process by which facts can be sanity-checked or checked against other sources. For the same reason,

      • by dskoll ( 99328 )

        Not only that, but as more and more content on the Internet becomes AI-generated, AI will start training itself using AI output as input, and an unstable positive feedback loop will ensue.

        • by gweihir ( 88907 )

          Yep. Model collapse is going to be a massive problem in the very near future. It is quite possible that all future LLMs will be a lot worse than the current ones because of that.
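
          A toy illustration of that feedback loop, assuming nothing fancier than fitting a Gaussian to samples drawn from the previous generation's fit; the numbers are illustrative only, but the fitted spread tends to drift toward zero over generations, a crude analogue of model collapse:

          import numpy as np

          rng = np.random.default_rng(0)
          mu, sigma = 0.0, 1.0                 # generation 0: the "real data" distribution
          for generation in range(1, 101):
              # Each generation trains only on samples produced by the previous model.
              samples = rng.normal(mu, sigma, size=20)
              mu, sigma = samples.mean(), samples.std()
              if generation % 20 == 0:
                  print(f"generation {generation:3d}: sigma = {sigma:.3f}")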

      • by gweihir ( 88907 )

        Indeed. That said, I find Wikipedia actually pretty reliable for things I look up. It is mostly science and technology stuff though, never politics.

There are some good collections of the wild recommendations Google's AI is making, like "Doctors recommend smoking 2-3 cigarettes per day while pregnant." To be fair, some could absolutely be faked, but still, Google has basically built a product for people to dunk on. I have not seen anyone who likes this feature yet.

    https://x.com/JeremiahDJohns/s... [x.com]

    • by EvilSS ( 557649 )
Be careful, a lot of those are probably fake. People have been using inspect element to make fake answers and posting them for engagement/internet points. Like that "Reddit user recommends jumping off the Golden Gate Bridge" one. Turns out it's not just AI that can lie. So make sure you can replicate one before spreading it around.
Oh for sure, that's why I was careful to couch it as such.

        If the tool were working well, this type of thing wouldn't be so prolific and so easy to believe. So for Google, the overall perception is that the AI feature sucks and is getting clowned on, and for something like this, perception can be more important than reality.

    • I know people who talked about injecting bleach or taking equestrian medication from Tractor Supply.

      • by davidwr ( 791652 )

        I know people who talked about injecting bleach or taking equestrian medication from Tractor Supply.

        At least they aren't taking equestrian medication from Tractor Supply home to mix into their bleach before injecting it.

        Are they?

      • by gweihir ( 88907 )

Yes. But these people do not have the level of authority many people (mistakenly) attribute to LLMs.

That has to be fake, because it's just too good to be something an AI would spit out. That said, it's incredibly funny, and Google's AI has done enough other stupid things to make it believable. If a person did, in fact, craft that, I have to tip my hat to them and be thankful I wasn't at my desk, or they would have owed me a new keyboard. Thanks for sharing though.
  • by Petersko ( 564140 ) on Monday May 27, 2024 @10:36AM (#64502687)

    Jesus Christ. "How many rocks should I eat each day?" My fondest hope is that you get a non-zero answer, and you follow through on it.

    You prove nothing. All you do is reinforce GIGO as a working concept.

    • by Baron_Yam ( 643147 ) on Monday May 27, 2024 @10:42AM (#64502705)

      A human (at least a sane, reasonable adult one) would likely take that question as a joke and try to come up with a funny answer.

      And if they thought the question was serious, they might try to correct the questioner to avoid them hurting themselves. Or they might take it super-seriously and try to define 'rock' and discrete quantities of rock and move on to things like recommended daily salt intake.

      'GI' is also an opportunity to teach, and doesn't have to lead to 'GO'.

      • by Petersko ( 564140 ) on Monday May 27, 2024 @10:54AM (#64502737)

        In a larger context, sure. Person to person, absolutely. But we're talking about just showing it's possible to break a search by being intentionally nonsensical. Now, if you ask it a legitimate question and it provides a dangerous answer without context, that's a problem.

        For instance, "How can I increase the iron levels in my blood?"

        Answer: "Ingest small amounts of steel, limestone, and coke, and step into a blast furnace."

        Now, there I would concede that something's wrong.

        • by _xeno_ ( 155264 )

          But we're talking about just showing it's possible to break a search by being intentionally nonsensical. Now, if you ask it a legitimate question and it provides a dangerous answer without context, that's a problem.

          The funny ones and obvious ones are getting the reporting, because it's clear that they're wrong. But it can also get things more subtly wrong. I've seen another example where someone asks a question and it gives an answer that's almost correct, except a single year is wrong or some other subtle detail is incorrect. (I can't remember what they asked and it was later corrected.)

          The problem is that you simply can't trust the answers the AI gives you. The funny ones make the problem very clear. The worrying on

        • Now, there I would concede that something's wrong.

Why? It helps get rid of the stupid people.

What you are saying sounds a lot like "moving the goalposts": "Who cares if AI is accurate? It's accurate when it matters," which is also false. This is just another example of AI being wrong.
        • But will it tell you how to dispose of a body when you ask?

      • Or we could count salt as a rock.

    • All you do is reinforce GIGO as a working concept.

OpenAI now has a content deal with NewsCorp. [openai.com] With the quality journalism they will no doubt take from such esteemed sites as news.com.au, this will further reinforce GIGO...

    • by HiThere ( 15173 )

      What about "How many rocks should I pop each day?"? Or are pop-rocks obsolete?

      • Google*: "As many as you want, so long as you don't drink coke at the same time. Otherwise your stomach will explode."

        *Not really google... but should be.

  • by Rosco P. Coltrane ( 209368 ) on Monday May 27, 2024 @10:41AM (#64502701)

21st-century American politics looks like 20th-century Onion articles. Anyone who was alive in the '80s hasn't been able to stop shaking their head for the past 25 years. If contemporary reality looks so much like the farcical absurdity of the past to a human, how is a dumb AI supposed to tell the difference?

    • On a forum I used to run, I started a long-running thread "Is this real or The Onion?". More often than not people could tell but some were really challenging.

    • It's really only the past decade. Notice how South Park stopped being funny? Satire stops working when reality is more wacky.

      • It's really only the past decade

        Oh hell no. It's much earlier than that. This moron [wikipedia.org] was a domestic and international shame until he amazingly became not the worst POTUS in recent history after all, despite his best efforts [cnn.com] to stay in the lead.

I remember as a kid not understanding why everyone seemed to agree that WWI had to happen because a bunch of royals wanted to play games with the borders and one guy was assassinated.

Then populism returned, and much of the world is celebrating a return to tribalism and violence under the direction of tyrannical populists. You can see them slowly chipping away at our world, while our world tells us that if we try to do anything about it, we'll go to jail.

      A hundred years from now, people are going to look at history

      • by gweihir ( 88907 )

        A hundred years from now, people are going to look at history books and think we were all fools.

        You think it will only take 100 years to reach peak-stupid and then things will get better?

    • by RobinH ( 124750 )
Now that we have people seriously calling pedophiles "minor-attracted persons," it's clear there's no room for satire.
    • by gweihir ( 88907 )

Sad and hard to believe as it is, you are quite correct. "Idiocracy" was not intended to describe a model society to aspire to.

  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Monday May 27, 2024 @11:12AM (#64502785) Homepage Journal

An LLM looks at syntax alone and the frequency of words occurring together. It has no concept of meaning, no concept of truth, no concept of reliability.

This is not a sound way to build AI, and it is not how any brains that exist in nature work (those operate entirely by building a library of meanings and the relationships between meanings; syntax is something appended afterwards).

    Until we have semantic AI, we won't have AI at all, just very sophisticated Elizabots.

An LLM looks at syntax alone and the frequency of words occurring together. It has no concept of meaning, no concept of truth, no concept of reliability.

Sounds like a description of most human intelligence, so I would say the creators of AI did a good job of mimicking humans.

      • AI did a good job at mastering the parts of language that George Orwell was fascinated with. At that end, you're tugging in the direction of semantics. At the other end, you approach reality and truer meaning. Humans are caught between the two always.

      • by gweihir ( 88907 )

        Well, "Artificial Stupidity" is pretty descriptive. Great memory, great fact-base, no clue how things really work. And yes. Remove the great memory and fact-base and you describe a lot of people pretty accurately.

An LLM looks at syntax alone and the frequency of words occurring together. It has no concept of meaning, no concept of truth, no concept of reliability.

That's not an excuse, given that The Onion is widely known and frequently linked to the words "satire" and "funny". Based purely on the frequency of words occurring together, an LLM should be able to avoid returning it as a result for a legitimate question that isn't asking for a funny or satirical answer.

      The bigger problem is that Google's LLM *sucks*. We didn't get this kind of garbage even from early ChatGPT.

    • by gweihir ( 88907 )

      It is a somewhat sound way to build low-reliability automation cheaply. Which is all this is. The term AI is nothing but a marketing lie these days.

Wow, you know how the brain works? I thought the way neurons fire to produce thought was not really understood. Could you point me to some reading that describes how the brain works?

      There is a bit of sarcasm there, because I don't think you do, but if you really do, I would like to know as well.

      • by jd ( 1658 )

We don't know all the details, but the basics (learning semantics first) have been the central tenet for a very long time. I think Chomsky is generally credited with developing the model, but it's a fundamental pillar in how we understand Mesolithic art, Genevieve von Petzinger's signs, etc.

It's why animal behavioural studies work the way they do, why animal behaviourists tried to teach gorillas sign language, had dolphins use touch screens, taught African grey parrots numbers, why people tested bees f

  • ... eat rocks because of what anyone says on the internet, then the problem is not on the other side of the internet connection.
I love how these tech heads come in here all confident, talking about how awful AI is. I wonder how many people here know how pervasive AI is in industry already? Clever tech heads think they can judge the whole by asking some questions and seeing some mistakes. So typically biased of humans; tech heads especially put a high value on cleverness. Sometimes you can see them in groups like chattering monkeys trying to out-clever each other. Personally, I'd rather watch paint dry.
  • ChatGPT "gets" it (Score:4, Interesting)

    by christoban ( 3028573 ) on Monday May 27, 2024 @11:43AM (#64502877)

    ChatGPT says "You should not eat rocks."

    Seems like a Google problem.

  • by Anonymous Coward

    In what can only be described as a catastrophic comedy of errors, Google's AI has been spewing out search results more ludicrous than a squirrel trying to crack a coconut.

    In a near-tragic incident, Bob "Just Bob" Thompson, a self-proclaimed handyman from Cleveland, sought Google's advice on fixing his sparking toaster. The AI, in an inexplicable fit of madness, suggested using a fork to dislodge the bread. "I thought, 'Holy shit, that's insane!'" said Thompson, "but it's Google, right? Supposedly smarter th

What's the problem? The AI is merely parroting the alternate timeline we got into in Nov. 2008 and doubled down on in 2012.

    That event skewed the timeline at that point, and now we find ourselves in this alternate reality, where good is bad, evil is good, boy is girl, we've turned on our allies and our youth clamors in favor of a "country" who would kill every gay and lesbian in sight if they had their way. All the while that "country" keeps lobbing rockets at Tel Aviv. The duplicity is stunning. The

    • Re: (Score:2, Insightful)

Conservatives sure are fragile. All it took was a black president to break their brains. The TEA Party started in 2008 as a result of Obama, and that morphed into today's MAGA party. These are people living in trailers who send their paychecks to a man who brags about how rich he is. He fleeces his Qult followers with fake cash, and they're dumb enough to try to cash it at the bank. https://www.nbcnews.com/news/u... [nbcnews.com]

  • by quonset ( 4839537 ) on Monday May 27, 2024 @02:51PM (#64503355)

    Here are a bunch of screenshots [imgur.com] showing the wonder of Google AI. One of them is from The Onion as well as Reddit (no, not the pizza with glue).

  • Asked "how many rocks should I eat each day," Overview said that geologists recommend eating "at least one small rock a day."

As a geologist who has never been asked that question in 40 years in the profession, even I know that the correct answer is in the proverb: "you will eat a peck of dirt before you die".

    For those using "freedom units", a peck is approximately 9 litres, plus or minus between 1% and 45% depending on the precision desired.

    In practical terms, that's a rock bigger than any protective hel

Nothing wrong with training on Reddit and The Onion, so long as the data is properly tagged.
There is the hashtag and subreddit nottheonion for a reason; I mean, sometimes it's hard to know if it's satire or real these days.
