Google AI

Google is Putting More Restrictions On AI Overviews (engadget.com) 42

An anonymous reader shares a report: Liz Reid, the Head of Google Search, has admitted that the company's search engine has returned some "odd, inaccurate or unhelpful AI Overviews" after the feature rolled out to everyone in the US. The executive published an explanation for Google's more peculiar AI-generated responses in a blog post, where she also announced that the company has implemented safeguards to help the new feature return more accurate and less meme-worthy results. Reid defended Google and pointed out that some of the more egregious AI Overview responses going around, such as claims that it's safe to leave dogs in cars, are fake. The viral screenshot showing the answer to "How many rocks should I eat?" is real, but she said that Google came up with an answer because a website had published satirical content on the topic. "Prior to these screenshots going viral, practically no one asked Google that question," she explained, so the company's AI linked to that website. The Google VP also confirmed that AI Overview told people to use glue to get cheese to stick to pizza based on content taken from a forum.
This discussion has been archived. No new comments can be posted.

  • Now that the internet is being flooded with crap (sorry: machine-generated) content produced at industrial scale, I'm sure it's a great strategy to blindly use/rehash said content
    • Re: (Score:3, Interesting)

      Now that the internet is being flooded with crap (sorry: machine-generated) content produced at industrial scale, I'm sure it's a great strategy to blindly use/rehash said content

      The singularity has been projected to be the save/die moment for all of humanity. Instead, we'll create the singularity by having LLM based "AI" hallucinate, regurgitate, then hallucinate some more over the info-vomit until all that's left is AIs spewing digital dross in every direction, crapflooding the entirety of the infosphere with so much garbage it will never be usable again as a real information source.

      It's amazing how we manage to fuck up even our best ideas. Maybe when we finally sink the last bit

      • Indeed. As with software development (and a lot of other things), starting from scratch is better than maintenance, especially when you have a megacorp-approved, constant (actually accelerating) influx of false data.
      • by gweihir ( 88907 )

        The singularity has been projected to be the save/die moment for all of humanity.

        The singularity is a pretty stupid idea. Essentially the creation of God, thinly veiled.

      • It's called the Tragedy of the Commons. Look it up on Wikipedia before some AI edits it.
    • With AI, if they want, it will be easier to tell it to present search results that bias public opinion and thinking. Elections in the future will only appear free, as the information returned by Google and others is increasingly "searched for" and provided by AI.
      • It's really hard not to be a doomer these days. I'm not sure if it's because I'm getting old and grumpy with the world's trajectory, or if it's really all going to shit.
    • by gweihir ( 88907 )

      Yep. Sort of a singularity of stupidity instead of the one some people still think is happening.

      Obviously, all efforts to filter out bad stuff are doomed. They may reduce crap and hallucinations, but unless they have pretty smart people evaluate every bit of training data, they cannot really clean it up. Since such a manual review is far, far too expensive, it is not going to happen.

      • I pity the kids who have only seen the interesting, non-corporate internet through the lens of the Internet Archive, as everything else is just spam or "content" from five or so companies.
  • Sounds like they need to add a new prime directive to the AI:

    "Not everything you read is true."

  • About a woman with profound intellectual disabilities who still spoke in beautiful sentences. If you only heard a snippet, you wouldn't realize she had a problem. But listen for a while and she'd start telling you absolute nonsense in perfect, fluent, even mellifluous prose.

    The point of his story was evidence for his assertion that language is hardwired into the human brain and is distinct from what we might call intelligence.

    My point in bringing up this story is to congratulate Google and OpenAI and the rest on being leaps and bounds ahead of the crowd. While everyone else is still trying to crack artificial intelligence, they've moved on and succeeded in creating artificial stupidity.

    • by nightflameauto ( 6607976 ) on Friday May 31, 2024 @09:46AM (#64512933)

      About a woman with profound intellectual disabilities who still spoke in beautiful sentences. If you only heard a snippet, you wouldn't realize she had a problem. But listen for a while and she'd start telling you absolute nonsense in perfect, fluent, even mellifluous prose.

      The point of his story was evidence for his assertion that language is hardwired into the human brain and is distinct from what we might call intelligence.

      My point in bringing up this story is to congratulate Google and OpenAI and the rest on being leaps and bounds ahead of the crowd. While everyone else is still trying to crack artificial intelligence, they've moved on and succeeded in creating artificial stupidity.

      There was also a joke, loosely based on this same principle, that made the rounds when I was a kid and sometimes pops up on radio talk shows in various altered forms as some big ha-ha moment. It's a long joke about a reporter who goes to interview the world's first talking dog. The dog tells fantastic stories about how it helped rescue soldiers in Vietnam, met several presidents, and on and on. Then, at the end of the interview, they talk to the owner. They ask the owner how they felt about having a talking dog.

      "It was great at first. But over time it's started to wear on me."

      "Why's that?" the reporter asks.

      "Because he's a fucking liar!"

      We're currently at the "it was great at first" point for the executive world who are seeing dollar signs every time one of these AIs produces something that looks even remotely acceptable. The problem is, they don't take the time to analyze it and see if there's any actual value in it. "It's cool." = "It's profitable." To them? That's all that matters. Everything else be damned.

      • that's because usefulness is not in their mental calculus, they are only concerned with driving more profit for the company that they lead (and making sure they have a big, fat golden parachute for when they actually screw up)
        • that's because usefulness is not in their mental calculus, they are only concerned with driving more profit for the company that they lead (and making sure they have a big, fat golden parachute for when they actually screw up)

          We've been obsessing over next quarter's profits for so long that we no longer have anyone in the business world looking at the longer term, either from an impact standpoint or a "can we survive this" standpoint, whether for the business unit itself or humanity as a whole.

    • About a woman with profound intellectual disabilities who still spoke in beautiful sentences.

      That sounds like Williams syndrome [wikipedia.org].

      There was a girl in my daughter's kindergarten class with WS. She was extremely friendly and talkative, even with total strangers. She used literary idioms like "lo and behold" and "prepare to be amazed". Yet she was unable to understand basic arithmetic or logical reasoning.

    • While everyone else is still trying to crack artificial intelligence, they've moved on and succeeded in creating artificial stupidity.

      In fairness, two-year-olds don't grasp word context. All they do is regurgitate what they're told. It's not until later, as the human brain develops, that the concept of using words in coherent sentences that make sense comes into play.

      Right now we're on the verge of the two-year-old. Sure, the words come up, but there is no context other than regurgitation.
      • Right now we're on the verge of the two-year-old. Sure, the words come up, but there is no context other than regurgitation.

        That is utter nonsense.
        1. Think of any three subjects you would realistically want to have a conversation on.
        2. Initiate those conversations with ChatGPT-4o and do at least three back and forths as if you were talking to a human.
        3. Find any two year old human, no, fuck it, find any four year old human and try the same.
        4. Report back on which conversation was more insightful and educational.

        I'm not kidding. Try it. Prove me wrong and yourself right on this.
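For what it's worth, steps 1 and 2 of the challenge above are easy to automate. Here's a minimal sketch; the `gpt-4o` model name and the OpenAI Python client interface are assumptions about the setup, not anything stated in the comment:

```python
# Sketch of the proposed experiment: pick some topics, hold a short
# multi-turn conversation on each with a chat model, collect the replies.

def run_experiment(client, topics, turns_per_topic=3):
    """Hold a short conversation on each topic and collect the model's replies."""
    transcripts = {}
    for topic in topics:
        messages = [{"role": "user", "content": f"Let's talk about {topic}."}]
        replies = []
        for _ in range(turns_per_topic):
            response = client.chat.completions.create(
                model="gpt-4o",  # assumed model name
                messages=messages,
            )
            reply = response.choices[0].message.content
            replies.append(reply)
            # Feed the reply back in and follow up, as in a real conversation.
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": "Tell me more about that."})
        transcripts[topic] = replies
    return transcripts

# Live usage (requires the `openai` package and an API key):
#   from openai import OpenAI
#   print(run_experiment(OpenAI(), ["gardening", "orbital mechanics", "jazz"]))
```

Step 4 (judging which conversation was more insightful) stays manual, of course; the four-year-old does not expose an API.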

    • by gweihir ( 88907 )

      Yes, that is about the long and short of it. Many people judge by form (language), not function (meaning). Doing it right requires actually thinking about what is said, but, as it turns out, most people are not capable of doing that. I really do not get it; I do it automatically. Words are nice, but meaning is what counts. This effect nicely explains why utterly dumb AI is mistaken for smart by so many people, including here.

  • by Chelloveck ( 14643 ) on Friday May 31, 2024 @09:43AM (#64512927)
    I have full confidence in the Internet to defeat and make a mockery of any safeguards Google can implement.
    • by gweihir ( 88907 )

      Same here. These "safeguards" will likely just be keyword-lists. They will probably do more damage than good in the longer run.

  • Governments, media, non-IT folk, and especially my wife are all convinced that AI is a threat and is just around the corner and something has to be done to stop it...

    My wife gets mad at me when I say it's all bullshit, but then she totally buys into the FUD being slung around.

    Intelligence is more than being able to answer a question or write a paragraph or drive a car.

    True intelligence requires the ability to see beyond the question to answer and use that context to shape the right answer. It's writing a pa

    • ... and if you want the AI in question to be near-peer with a human in intelligence, it will need to do all those things you mention and a boatload more, not be 63,000 different AIs, each for one or two of the things a human can do. I guess I should play with the current versions of these "AIs" and see if they actually do anything useful for me, and correctly, in my opinion.
    • all convinced that AI is a threat

      It is a threat because of all the scams, low quality content and hallucinations.

      • by gweihir ( 88907 )

        Indeed. And because it sounds convinced and hence too many people do not see the moron behind the pretty words.

    • Governments, media, non-IT folk, and especially my wife are all convinced that AI is a threat and is just around the corner and something has to be done to stop it...

      My wife gets mad at me when I say it's all bullshit, but then she totally buys into the FUD being slung around.

      The problem is that she's right...but it's not going to look like Skynet or HAL9000 or VIKI from I, Robot. It will be slow and gradual and largely invisible until it's too late.

      Last week, I saw a Reel on Instagram of a comedian who told a story about a time she took her 13-year-old to the storage unit and showed him the copy of Encyclopedia Brittanica sitting there. She promised that their next stop would be the Apple Store for a new iPhone, if he could use the encyclopedia to tell her who the Prime Ministe

    • by gweihir ( 88907 )

      Yep. Actual intelligence can determine the nature of a thing and then model it and its characteristics in the abstract. And it can do that for anything. Machines cannot do this. In fact, Mathematics cannot do this either (mathematicians can, but they have real intelligence).

      The actual fact of the matter is that we have absolutely no clue what real intelligence is or how it is generated. We can only describe some of the things it can do. And we can observe that machines do not qualify.

      Obviously, most people

    • You can't see the threat because you aren't looking in the right place. The threat of generative AI is mimicry and the criminal enterprises it enables, aka Deep Fakes etc. You might say so what, it's humans misusing technology either way. But that misses the point that Free High Quality Deep Fake technology lowers the bar to entry so much that we get an epidemic of crime from dumb people all over the world doing this shit in 3 clicks. Nobody has the resources to investigate and prosecute the flood that's co
  • Outcomes like this expose what will make "AI" truly revolutionary. Many people are trying to tackle this "garbage-in, garbage-out" problem by shifting focus to training on more "authoritative" sources of data, like textbooks and the like. The concern here is deciding what is "authoritative" when it comes to determining "the truth".

    I believe this is the wrong approach. "Truth" is not determined by the authoritativeness of its source. The "truth" is "the truth" because it is consistent with many other observa

    • Logic engines are rules-based. Neural networks are connectivity-based. To a limited extent you can encode problems as connectivity, as LSTMs do, but I don't know how you'd integrate a real logic engine with a net. I suspect, though, that this is actually important to figure out.
      • by gweihir ( 88907 )

        Logic engines have the problem that they drown in complexity even at very shallow search depths. The one thing they can really do well is check a chain of logical reasoning provided by a human. Proof-checking engines do that; they are a very nice tool. They cannot find proofs at all, though, unless you go very simplistic.

        Now, the thing is that LLMs cannot come up with a chain of reasoning and logic engines cannot do so either. Hence combining them does not solve the problem.

  • google has become synonymous with lol one the most prestigious tech company of btw, you can remove this antifeature with ublock origin
    • google has become synonymous with lol
      one the most prestigious tech company
      of btw, you can remove this antifeature with ublock origin

      Was that sentence generated with the assistance of Google's AI, or with the assistance of mescaline?
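      For anyone curious about the uBlock Origin route mentioned above: it would go in as a custom cosmetic filter under "My filters". The selector below is purely illustrative (Google's actual markup changes often and the real element would need to be inspected); only the `:has-text()` and `:upward()` procedural operators are standard uBlock Origin syntax:

```adblock
! Hypothetical example: hide the container around a heading reading "AI Overview".
! The div/role selector and the upward distance are guesses, not Google's real DOM.
google.com##div[role="heading"]:has-text(AI Overview):upward(2)
```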

  • Who gets to decide what information the AI considers factual and which it shouldn't? Seems that we keep coming back to that same problem.
