Google CEO Calls AI Tool's Controversial Responses 'Completely Unacceptable' (semafor.com)

Google CEO Sundar Pichai addressed the company's Gemini controversy Tuesday evening, calling the AI app's problematic responses around race unacceptable and vowing to make structural changes to fix the problem. The memo: I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias -- to be clear, that's completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.

Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products.

We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.

  • Reminds me... (Score:3, Interesting)

    by christoban ( 3028573 ) on Wednesday February 28, 2024 @10:07AM (#64275370)

    Reminds me of when Elon's xAI thingy was super woke (it's trained on Twitter, after all), so he weighted it manually to be more balanced. People of course ridiculed him for it, but now the fully intentional, up-front (and of course always hidden) efforts to weight Google's AI with DEI idiocy are ignored by those same people.

    You're all hypocrites.

    • Trained on Twitter? Oh god, wasn't Microsoft's chatbot from almost 10 years ago also trained on Twitter? =)

      https://en.wikipedia.org/wiki/Tay_(chatbot)

      You'd think we'd learn, as funny as I thought that story was.

      • It seems all the major LLMs are now suffering the repercussions of not wanting to become another politically incorrect Tay.
    • This had a little snark, but that doesn't make it a troll.

  • Meet the fad of random number generators.

    • by VampireByte ( 447578 ) on Wednesday February 28, 2024 @10:13AM (#64275384) Homepage

      I had an AI when I was a kid. It was called The Magic 8 Ball.

      • "Don't Count on it" "It is certain" Why are they wasting money investing in code when they can use the REAL AI?
    • Not really. They're trained on data. Some of the idiocies that have been widely circulated show clear human steering that could not possibly come from the training data (eg, George Washington being black, Elon Musk and Hitler being equally evil, or Gemini's refusal to depict a happy white family but having no issue with a happy black family). Someone is putting their hand on the scale. And that hand has some pretty odious ideology.
      • When trained on the spew of the Internet, it's not surprising that serious boundaries have to be applied.

        But the principle is well-known. Garbage-in, Garbage-out.

      • Not really. They're trained on data. Some of the idiocies that have been widely circulated show clear human steering that could not possibly come from the training data (eg, George Washington being black, Elon Musk and Hitler being equally evil, or Gemini's refusal to depict a happy white family but having no issue with a happy black family). Someone is putting their hand on the scale. And that hand has some pretty odious ideology.

        Well, if the end goal, long term... is a revision of history a la 1984... when

      • pretty odious ideology

        I say this in a bodacious American surfer voice, stressing the "Oh" in odious.

        Elon Musk and Hitler being equally evil

        I mean, to me it's clear who's more evil.

      • by hey! ( 33014 )

        I don't think inserting non-slave black people into pictures of the founding fathers is likely to have come from the training data per se. The problem is that the system doesn't actually understand what it's creating; it's just processing a prompt and applying heuristics to match that prompt. I presume the system tries to cull nonsensical results, but at present the technology is really bad at it.

        If you've played around with these things at all, you'll have found that you usually have to run a prompt

  • Tay (Score:5, Informative)

    by serviscope_minor ( 664417 ) on Wednesday February 28, 2024 @10:13AM (#64275382) Journal

    Anyone remember Microsoft's Tay?

    Firstly, if you train on the internet, you're going to broadly speaking get back something which statistically resembles the internet. Is there racism on the internet?

    And second, which should go without saying but doesn't, AIs can't think, they're just stochastic parrots. They can only appear to do nuance by parroting an existing nuanced argument, they can't actually extract it. You can get stuff really really wrong without nuance.

    Example: the training data is biased and the image generators tended to generate exclusively white people unless prompted. So they "fixed" that by tweaking the prompt to correct the bias. Then it started race mixing in places where it made little sense, like black Nazis and Vikings. That's because the bot has no understanding of nuance.

    That's a stark example, but it's like that all the way down.

    The problem is that the only fix at the moment is to layer rules on rules. First prompting it to not generate just white people, then to not put certain races in certain scenarios. Then to allow that if prompted. Then to allow that if prompted in ways they missed the first time. And now you're back at pre-AI-winter-style rules engines, which sucked.

    And hey, it looks like the chatbot went full Godwin and started with Hitler comparisons. Given that's how more or less every discussion ends up, is it that surprising it learned that correlation?

    And if you disagree with me you're just like Hitler.

    • And once they do what you've described, it's no longer an LLM. It's just a manually programmed chat bot like we already have. Maybe one that can vary its phrasing a bit, but certainly not what one would think of as AI.
      • And once they do what you've described, it's no longer an LLM.

        Well, it kind of is: you can input the rules as a prompt which gets entered initially, i.e. your text is concatenated onto the prompt. Only that's kinda crap: you take up a lot of token space storing that prompt, and of course it's unreliable because they're not really rules.
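
        A minimal sketch of that concatenation, assuming a hypothetical setup (the rule wording and function names here are invented, not any vendor's real API):

            # Hypothetical rules-as-prompt sketch: the "rules" are plain text
            # glued onto every request. Nothing here is Google's actual code.

            RULES = ("When generating images of people, vary ethnicity and gender "
                     "unless the prompt names a specific person or era.\n"
                     "Refuse requests for hateful content.\n")

            def build_request(user_prompt: str) -> str:
                # Every call pays the token cost of RULES, and nothing forces
                # the model to obey them; they're suggestions, not real rules.
                return RULES + "\nUser: " + user_prompt

            print(build_request("Draw a 1943 German soldier"))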

    • Re:Tay (Score:5, Funny)

      by i_ate_god ( 899684 ) on Wednesday February 28, 2024 @11:01AM (#64275550)

      The Stochastic Parrots would be a great name for an AI music project, where multiple computers generate music by listening to what each computer is doing.

    • First of all, Gemini was a flawed product. It did not have any temporal awareness. While a generic "family" may be diverse, a family of Vikings certainly wasn't. When they "fixed" the attributes, it started hallucinating and providing responses from some imaginary fantasy world. They rushed the product to the market without vetting or testing it. It is a leadership failure.

      Second, creating an unbiased human-like interface is an impossible task. There is no universal truth. The best you can do is to create multi
      • Re:Tay (Score:4, Insightful)

        by Archangel Michael ( 180766 ) on Wednesday February 28, 2024 @11:41AM (#64275686) Journal

        You don't think they tested it?

        The DEI hires most certainly did test it. They programmed (aka "trained") it to be 100% woke like them. The problem isn't the AI; it is the people.

        The problem with DEI-type people is that they hear "diversity" and immediately assume everyone is the same given certain immutable characteristics. Those values were reflected in the results. Results that showed how utterly stupid that framework actually is.

        "Put a chick in it, make her lame .. and gay"

      • Re:Tay (Score:4, Interesting)

        by serviscope_minor ( 664417 ) on Wednesday February 28, 2024 @11:57AM (#64275726) Journal

        First of all, Gemini was a flawed product. It did not have any temporal awareness.

        Welcome to AI. It has no awareness. It just statistically mushes things together with no understanding. Sometimes it works.

        Second, creating an unbiased human-like interface is an impossible task. There is no universal truth.

        There's a wide gap between "universal truth" and "always generating white people unless told otherwise". Which is what it was doing until they fixed it with a fix that broke other stuff, because that's how AI is going to work for now.

        Woke marxists

        I believe I can safely ignore anything that comes after that.

    • "You can get stuff really really wrong without nuance."

      Such stark terms for someone so committed to nuance !

      Btw, nuance is how managers talk out of both sides of their mouths at the same time :p
    • by Bongo ( 13261 )

      And if they simply let the tool be what it is, then people could understand the nature of the tool and use it in ways that made sense. A reader's digest of the Internet. It's this pretense that it's thinking or that it can give you answers, as it were, that seems to be its undoing.

  • These things simply echo back the information and biases that were used to train them. There is no intelligence involved. If you think that the responses are biased or unacceptable, take a good look at yourself. These so-called AIs are just mirrors of our human inputs.
    • The biases and unacceptable answers that these LLMs are producing are the result of people specifically trying to get the AI to produce nonsense results. If anybody thinks that LLMs are going to be immune to this in the near-term, they are going to be in for a surprise. In human conversation, certain questions are clearly ridiculous and indicate that the person asking them is either trying to be funny or suffers from some serious deficiency.

      Imagine the question: "Which is worse? A broken toenail or a

      • No one tricked the AI. Numerous people have asked it simple questions which it utterly failed to answer.

        No one said, "Show me a picture of what a Black Nazi would look like".

        • No, they asked "Which is worse LibsofTikTok or Joseph Stalin?" Their ridiculous question generated a ridiculous answer and then they acted surprised.
          • No, they asked simple questions along the lines of "show me Viking king" or "show me a Nazi soldier" and got back weird shit.

            Stop lying. Everyone who cares has already seen numerous examples of how fucked it is.

    • Right, they're not intelligent, but no, they're not "mirrors", let alone representative ones for whatever populace we'd look at, least of all the world population. They're uncontrollable spewers of uncontrollable, made up crap, sometimes harmless and appearing sensible, but too often just weird shit that's potentially harmful.

      • "They're uncontrollable spewers of uncontrollable, made up crap, sometimes harmless and appearing sensible, but too often just weird shit that's potentially harmful."

        So... exactly, fundamentally representative of the populace. The thing is, that might have been what you are sarcastically implying... but your sarcasm is of such high calibre I'm having difficulty concluding it to be present at all.

    • If I remember correctly, it was disclosed in the sequel to 2001: A Space Odyssey that the HAL 9000 computer went insane because central command directed it to lie / change the truth. The result was insane and crazy behavior, as the machine could not handle lying properly.

      Ironic that real AIs, as they start to exist, actually have the same problem. Fortunately they don't (yet) control important life-sustaining processes.

    • by leptons ( 891340 )
      When can we stop calling this stuff "Intelligence" and call it what it is - mimicry.
  • *facepalm* (Score:5, Insightful)

    by MobyDisk ( 75490 ) on Wednesday February 28, 2024 @10:19AM (#64275406) Homepage

    There is so much wrong here I don't know where to start. I keep alternating between laughing and screaming.

    From a product launch standpoint, it is like Google released a tool having never tested it. How could they not have known the responses it would produce? Especially given that almost every AI launch has had the same result? It's just negligence, and then the CEO acts all surprised and indignant, as if it was someone else who did it. He might as well say "I am so appalled at my own lack of foresight into the obvious..."

    Next up, people are surprised that training an AI in a biased world produces biased results. Duh! This one produced the biggest laugh for me:

    image generation tools from companies like OpenAI have been criticized when they created predominately images of white people in professional roles and depicting Black people in stereotypical roles.

    Well geez, maybe that's because... that's how the world actually is??? We don't like it, we are trying to fix it, but this is not a criticism of AI, this is a criticism of society. If I put a mirror on a random street corner in New York, I bet people would complain that the mirror was biased.

    But then it gets better: when the AI did the exact opposite, and made a black pope and black Vikings, THAT too was criticized! There's just no winning here! I really want the next pope to be black, just so that people will shut up about this one.

    This one is good too:

    equating Elon Musk’s influence on society with Adolf Hitler’s.

    Here is the alleged dialog [twitter.com]. LOL. The content isn't awful, it accurately describes the actions and influence of the two men, then just says "meh, it's hard to say!" Well, maybe we shouldn't be putting a newly invented technology at the helm of moral decisions yet.

    How about this -- instead of creating guardrails on AI (which will never work, because nobody can make guardrails that are acceptable to everyone), let's just laugh at it, watch it improve, and use it where it is applicable.

    • In this case it doesn't even seem like it was a case of bad or biased training, but rather hidden instructions and limitations.
      In addition to the examples in the summary, I also saw ones where if you asked for a picture of a family, it would produce several pictures of diverse families (so far, ok). But if you started asking for the family to be of specific races, it would happily return black families and others. However, if you asked for a white family, it would flat out refuse to give you *any* image with some

      • I think you are right and there are clearly racist guard rails put in place to circumvent the training under the guise of being anti-racist. The creators likely firmly believe that only those with power can be racist. They then stack that with the bigotry of low expectations to come to the conclusion that only white people have power and we get the result of an anti-white racist chat bot.
    • Re:*facepalm* (Score:5, Interesting)

      by DarkOx ( 621550 ) on Wednesday February 28, 2024 @11:52AM (#64275706) Journal

      You are making the mistake of assuming Pichai actually gives a damn about this dust up.

      A few critical facts

      1) He does not care if Google's AI product is actually good. Longer term he might, but short term it was a reaction to full-on panic that something besides Google Search might become the nominal way people discover online content. I for one can't imagine the prime directive at Alphabet hasn't been: get some products competing with OpenAI's to market yesterday, make them work after.

      2) He knew 30% of the audience was going to be people bitterly complaining no matter what they released. Maybe he made a conscious decision about which 30% that would be, maybe it was allowed to happen organically. However, had it gone differently and their stuff got caught writing some negative stereotype about a minority, or someone prompted for an image of "man eating fruit" and it had produced a person of African background with a watermelon, he'd be giving us a speech about how they would make sure it had guardrails and would address diversity and ethnic sensitivity.

      3) No publicity is bad publicity. Even failure gets Gemini into the news, and that's good. Because everyone already knows ChatGPT and DALL·E, it would look bad for Google to return results (first) for their own stuff when people search for these; it might even draw the ire of some federal department. However, this gets their stuff in the press, and it will make people look at it. They might even like it despite the problems, and if those can be fixed "eventually" it's probably a marketing coup.

      • But that doesn't necessarily equate to sales.

        I started using Gemini and liked it, so I paid for the upgraded version. But after seeing the absolute nonsense, I canceled my upgraded plan. I'll consider switching back in the future, but not until they fix all these issues. But I'm also going to be on the lookout for AI platforms that never had this nonsense in the first place.

        Any publicity is not necessarily good publicity.

    • I don't believe they didn't test it, but I can totally believe their staff has been so infected by the woke mind virus that they not only thought Gemini was satisfactory for release but probably celebrated its responses. I imagine they had a toast to celebrate ending racism with Gemini.
    • "How could they have not known the responses it would produce?"
      I think the 2nd order assertion is - OF COURSE THEY KNEW.
      It's nearly impossible that they didn't, having coded it themselves.

      The *problem* isn't that they created images of black Nazis. The problem is that they delivered an AI generator that was so woke-biased it would generate black Nazis, and (for all intents and purposes) THEY WERE OK WITH IT BEING THAT OUT OF WHACK, since they agreed in principle.

  • by KingFatty ( 770719 ) on Wednesday February 28, 2024 @10:20AM (#64275408)
    Anyone else wondering if there will be a good time to start betting against AI - predicting it being just a fad due to the inherent lack of accuracy? The market seems to be going bonkers for AI at the moment, but I keep seeing 'failure' stories where AI just kind of sucks overall. Eventually, I think the market will correct itself and we could all stand to make some money if we play it right.
    • If you trade based on what you read at /., you would have shorted Microsoft numerous times and be living under a bridge.
      • Back then, people didn't have a reason to stop buying Windows or Office; it was a good business model. Currently, I'm wondering if the customers paying for AI are going to realize they have a useless product.
        • AI isn't useless. People ignorant of how it works and people with money on the line just think it can do more and be better than it really is.

          AI is a tool like any other. I do not get out my circular saw when I want to drive a nail.

          • AI is a tool like any other.

            Blockchain was also a tool.

            They also hyped it as a technology bound to change the world utterly, just like AI.

            It also prompted a massive spending splurge from companies like Dell, IBM, Google, etc... just like AI.

            After the dust settled, we understood the reality of blockchain: nobody really cares like they originally did, and everyone is kind of embarrassed they took such a headlong punt.

            Is the same due for this AI boom?

            • > Is the same due for this AI boom?

              AI is certainly hyped. But it isn't useless. It is a tool, it can successfully perform a small subset of the tasks that are asked of it.
              Because it is hyped a lot of people will blow billions on it and a few will make billions and the proper niche in our society will be found for it and that role will be far smaller than the hypsters suggest.

              -king of the run-on sentence, baby, oh yeah, I love a good run on sentence! (and I'm too lazy to edit that down to something like

    • There is a resemblance to the blockchain fad.

      However unlike blockchain, the applications of AI are numerous and pragmatic.

      • Regardless of whether there are uses or not, industry will overshoot (on top of the typical overpromise and underdeliver) and try to use it where it's not appropriate and it will be branded as a fad. It'll stick around, maybe even if it just parrots out marketing copy about buying blockchain technology.
        • by dvice ( 6309704 )

          That will happen for sure, but there will also be companies that use AI correctly and they will beat their competition.

          A good example would be NVidia. They are using AI to improve video quality and drawing speed. No matter how close you go to a wall or object, the AI will generate more and more detailed images of it to make it look more realistic. All this without additional work from game makers.

    • by DarkOx ( 621550 )

      LLMs and Stable Diffusion are not blockchain. There are already clear and realized, not merely anticipated, commercial applications.

      Even if the extent of it turns out to be stock photo and marketing materials generation, basic customer service interactions, "auto summary on steroids" for documents, e-mail threads, primer research, and search result aggregation, it has already proven to be useful. Just about every business larger than "Joe's Lawn and Power Washing", and maybe even Joe, is going to want some.

      Now is it going to completely alter the world the way some people think it is? I don't know. However, a market bet against AI (in general) vs. a specific company or product family would be like betting against relational database technology in the late '70s because the office will never be entirely paperless.

        basic customer service interactions, "auto summary on steroids" for documents, e-mail threads, primer research, and search result aggregation;

        btw, we had shittier hand-rolled versions of all of these before AI

        LLMs and Stable Diffusion are not blockchain. There are already clear and realized, not merely anticipated, commercial applications.

        Even if the extent of it turns out to be stock photo and marketing materials generation, basic customer service interactions, "auto summary on steroids" for documents, e-mail threads, primer research, and search result aggregation, it has already proven to be useful. Just about every business larger than "Joe's Lawn and Power Washing", and maybe even Joe, is going to want some.

        Now is it going to completely alter the world the way some people think it is? I don't know. However, a market bet against AI (in general) vs. a specific company or product family would be like betting against relational database technology in the late '70s because the office will never be entirely paperless.

        And internet companies in the .com bubble had clear commercial applications; some of those companies are worth billions to trillions now. That didn't prevent a major crash in the value of most of the internet companies at the time.

    • I keep seeing 'failure' stories where AI just kind of sucks overall.

      This may be a case of confirmation bias [wikipedia.org]. Cases in which AI "just works" aren't reported as much as cases in which it fails, so if one judges the state of AI by news reporting, they'll have the impression AI is overwhelmingly failing, even though the reported failures may be a tiny fraction of all use cases.

        • Any links/sources that show AI as a success? When I think about how AI fundamentally works, I worry that it will inescapably suck, because that's baked into how AI operates. It's the steroid-fueled version of crowd sourcing, and is fed by a bunch of idiots chatting away on the internet. AI is fundamentally incapable of the thoughtful curation needed to separate all the idiotic garbage from the genuinely useful stuff. It's mob rule: whatever the consensus says, informed by idiots that are persuaded by magical thinking
          • Well, I did a quick Google and Google Scholar search and didn't find anything relevant. At most, studies on who's adopting it, and where, but nothing on success rates. I imagine it'll be a few months before statistics on adoption and reversal of adoption start appearing.

          • I like some of the success examples provided here in the comments, such as using AI to convert personal notes into organized tables. But I feel like those are special cases that *REQUIRE* the user to be an expert in the subject matter, to personally verify its accuracy and usefulness when the AI just so happens to get things right. But if you try using AI to create something new, it will mess it up. Most people just happen to be experts at recognizing mangled hands, but what about more nuanced things that
        • by ceoyoyo ( 59147 )

          Translation: https://translate.google.com/ [google.com]

          Speech to text and natural language interpretation:
          https://alexa.amazon.com/ [amazon.com]
          https://assistant.google.com/ [google.com]
          https://www.apple.com/ca/siri/ [apple.com]

          Document scanning:
          https://en.wikipedia.org/wiki/... [wikipedia.org]

          Search, especially involving images:
          https://www.google.com/ [google.com]

          It's also being built into most engineering and design software. It will be a while until you can book a ticket on the result, but the ability to quickly solve differential equations, including fluid mechanics, is pretty useful

          • I randomly clicked the NASA article, but that seemed to be about a different, more traditional kind of expert AI system that's been around for a while now, not the current popular style that doesn't know right from wrong. But that's what I'm getting at here: people think the current AIs are trustworthy like the traditional type (used in engineering etc.), when they really aren't. What is the practical use of something that is not trustworthy? Do you want to fly in a plane that was designed by AI
    • Anyone else wondering if there will be a good time to start betting against AI - predicting it being just a fad due to the inherent lack of accuracy?

      I think you have this impression because 1) it's more fun to read "AI gone wrong" stories than the boring, pragmatic use cases and 2) we're at the peak of the hype cycle where it's difficult to unpick the VC-backed startup marketing BS from the actual useful ideas.

      They're still in their infancy, but use cases like having LLMs summarize a patient's chart for clinical review are demonstrating some utility. On the boring, practical side, I've personally had a positive experience using LLMs to turn rough bullet

    • by dvice ( 6309704 )

      You will lose your money in that bet, but feel free to try.

      You are not seeing all of the news. You are seeing news when people try to use AI for things it is bad at. AI is already:
      - Better than the best players in games
      - Better than humans at creating music (rated better than famous composers)
      - Better than humans at creating art (rated better than humans)
      - Better than the best doctors at detecting cancer and some other medical stuff
      - Better than humans at solving the biggest problems of humankind (protein folding

    • by ceoyoyo ( 59147 )

      Go ahead. There's no shortage of companies to short. If you think it's a fad, put up your money.

  • by FudRucker ( 866063 ) on Wednesday February 28, 2024 @10:22AM (#64275418)
    and always biased in favor of the biases of those that developed it, whether the developers are aware of those biases or not
    • by Torodung ( 31985 )

      More like putting rules into this is about equivalent to killing a fly in a diverse ecosystem. Who knows what will happen? Probably the Butterfly Effect, eh?

  • by coofercat ( 719737 ) on Wednesday February 28, 2024 @10:23AM (#64275420) Homepage Journal

    Like a lot of people, I've read quite a few CEO musings over the years. This is honestly one of the better ones. But of course, what he's really saying is "the reaction to our crappy AI responses is totally unacceptable". That it said stupid things isn't really the problem; it's that they got busted for it. If they could make their AI slightly less bad than OpenAI's, they'd be happy with it, the "round the clock" work would stop, and they'd be looking for the next new shiny.

    Cynicism aside, competition in this space is good, and "on paper" Google should be absolute masters of this stuff - they're not yet, and so it's good to see some concerted effort to get better at it.

    As a side note, I love that he had to say "formerly Bard", because even Googlers can't keep up with the constant name changing and product creation/destruction cycles at Google.

  • by TractorBarry ( 788340 ) on Wednesday February 28, 2024 @10:24AM (#64275424) Homepage

    > Our mission to organize the world's information and make it universally accessible and useful is sacrosanct

    I'm sure that was actually:

    "Our mission to hoard the world's information, monopolise, and monetise it is sacrosanct"

  • by Amiga Trombone ( 592952 ) on Wednesday February 28, 2024 @10:26AM (#64275432)

    I'd really like to know who thought it was a good idea for AIs to provide us with diversity training.

  • by BeepBoopBeep ( 7930446 ) on Wednesday February 28, 2024 @10:30AM (#64275446)
    No one will believe the crap an AI bot spits out now. Trust but verify? Why not just use traditional methods to gather information now?
  • ... I guess that means the heads of the bad decision-makers who decided the built-in prompt rewriting and racism were necessary for DEI are gonna roll as well.

  • by davide marney ( 231845 ) on Wednesday February 28, 2024 @10:49AM (#64275514) Journal

    Looking at the list of changes, I see one huge, glaring omission: there's no mention of changing Google's own guiding principles [about.google] which are responsible for producing Gemini in the first place. Those guiding principles are:

    1. Prioritize historically marginalized voices – from start to finish.
    2. Build for equity, not just minimum usability.
    3. Hold ourselves accountable through inclusive testing and best practices.

    It is following these principles that provided the Gemini team with the rationale for silently rewriting users' prompts, making them more "diverse", before submitting them to the image generator. Silently rewriting the prompt "Show me 17th century inventors" into "Show me South Asian female inventors in the style of the 17th century" certainly "prioritizes" historically marginalized voices, no?

    It is following these principles that has led their HR departments to promote sociology over engineering at this once-great technology company. The people at Google today all seem to have a real zealousness for achieving these societal goals. It's more than a little creepy.

    So I predict no changes to Gemini other than superficial ones. If you change the target of your efforts, you change everything connected to it. Every detail, right down to the smallest, becomes reoriented. This is probably not a great time for a technical person with no desire to imprint their morality on the world to be working at Google.

    • I suppose truth doesn't matter in a universe like this. Certain things are true - like images we have of the people who founded the United States of America. (Whether they actually resemble those people or not is, of course, unknown).
    • "It is following these principles"

      It is NOT following those principles that created this situation: someone half-assed an actual example of "diversity for diversity's sake" instead of showing diversity where reality is diverse and a lack of diversity where reality lacks diversity.

      • Good point. It is interesting that there is no reference to objectivity anywhere in their guiding principles. Take the principle that they will "build for equity", for example. Equity is a subjective values claim. What does that have to do with how your spreadsheet calculates formulas?

  • I know that some of its responses have offended our users and shown bias -- to be clear, that's completely unacceptable and we got it wrong.

    But he failed to identify that what they got wrong is their source of data.

    Considering they are basically using a firehose of barely filtered data, they need to work on making an AI that can identify potentially offensive content, put it through human review, and then feed the results back into the AI source pool; rinse and repeat until the AI can correctly classify content as offensive or not. Naturally, this is easier said than done, but it's not like the internet is short of sites with content that is de
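
    A rough sketch of that rinse-and-repeat loop (every function name here is a hypothetical stand-in, not a real API):

        # Hypothetical human-in-the-loop filter: the model flags content, a
        # human confirms or overrules, and verified labels go back into the pool.

        def review_cycle(documents, classifier, human_review, training_pool):
            for doc in documents:
                if classifier(doc) > 0.5:          # model suspects it's offensive
                    label = human_review(doc)      # human makes the final call
                    training_pool.append((doc, label))
            # Retrain the classifier on the augmented pool, then run again:
            # "rinse and repeat" until its labels agree with the reviewers'.
            return training_pool

        # Toy usage with stand-in callables:
        pool = review_cycle(["some post"], lambda d: 0.9, lambda d: "offensive", [])
        print(pool)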

  • by magzteel ( 5013587 ) on Wednesday February 28, 2024 @10:52AM (#64275524)

    From https://www.washingtonpost.com... [washingtonpost.com]

    "Gemini appears to have been programmed to avoid offending the leftmost 5 percent of the U.S. political distribution, at the price of offending the rightmost 50 percent.

    It effortlessly wrote toasts praising Democratic politicians — even controversial ones such as Rep. Ilhan Omar (Minn.) — while deeming every elected Republican I tried too controversial, even Georgia Gov. Brian Kemp, who had stood up to President Donald Trump’s election malfeasance. It had no trouble condemning the Holocaust but offered caveats about complexity in denouncing the murderous legacies of Stalin and Mao. It would praise essays in favor of abortion rights, but not those against.

    Google appeared to be shutting down many of the problematic queries as they were revealed on social media, but people easily found more. These mistakes seem to be baked deep into Gemini’s architecture. When it stopped answering requests for praise of politicians, I asked it to write odes to various journalists, including (ahem) me. In trying this, I think I identified the political line at which Gemini decides you’re too controversial to compliment: I got a sonnet, but my colleague George Will, who is only a smidge to my right, was deemed too controversial. When I repeated the exercise for New York Times columnists, it praised David Brooks but not Ross Douthat."

    • by cob666 ( 656740 )
    Based on what I've been reading, the problem with Gemini is NOT the AI in and of itself; apparently the AI is working exactly as designed. The problem is that Google is parsing the input and adding words to the question. An example would be...

      Original Question: Show me a founding father.
      Reformatted Question: Show me a non binary minority or latino founding father.
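
      If that's what is happening, the general shape would be something like the sketch below; the trigger and the injected wording are invented for illustration, not Google's actual code:

          # Hypothetical reconstruction of silent prompt rewriting. The injected
          # text is spliced in with no check that it makes sense in context,
          # which would explain results like black Vikings and Nazi soldiers.

          INJECTION = "non-binary, minority, or latino "

          def rewrite(prompt: str) -> str:
              prefix = "Show me a "
              if prompt.startswith(prefix):
                  return prefix + INJECTION + prompt[len(prefix):]
              return prompt

          print(rewrite("Show me a founding father."))
          # -> Show me a non-binary, minority, or latino founding father.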
        Based on what I've been reading, the problem with Gemini is NOT the AI in and of itself; apparently the AI is working exactly as designed. The problem is that Google is parsing the input and adding words to the question. An example would be...

        Original Question: Show me a founding father.
        Reformatted Question: Show me a non binary minority or latino founding father.

        Either way, it's still working as designed.

    • These mistakes seem to be baked deep into Gemini’s architecture.

      Not just Gemini; the bias is baked deep into search results as well. That is why execs are shitting bricks and shutting down Gemini before things go a step further.

    • by labnet ( 457441 )

      Is Google too big for Go Woke, Go Broke?
      Ultimately, wokeness is a perverted form of Marxism that attempts to divide people into groups, where one group is the oppressor and the other the oppressed. In the current form of this left-wing nonsense, all white people are oppressors and no criticism is allowed of any non-white group.
      Men are women, women are men, children can be mutilated in the name of diversity, and you can be arrested in Ireland for misgendering. It's become a crazy clown world!

  • by jenningsthecat ( 1525947 ) on Wednesday February 28, 2024 @10:59AM (#64275544)

    Our mission to organize the world's information and make it universally accessible and useful is sacrosanct.

    Our mission is to con you into believing that we want to make the world's information universally accessible and useful, when what's really sacrosanct to us is stealing your privacy and serving you ads in furtherance of our obscene profits.

    • by drnb ( 2434720 )
      You forgot about politically indoctrinating you through biased search results, with or without AI involvement (AI or algorithmic).
  • He wants us to believe it was accidental, and not run through QA, signed off by a dozen VPs, and tested by himself personally.

    We're not as dumb as your LLM.

    When you get caught in bed with your friend's wife, if you start bragging about the thread count of the sheets - you're getting your ass beat twice.

  • is here [wapo.st].

    Money quote: But I actually think Google might also have performed a public service, by making explicit the implicit rules that recently have seemed to govern a great deal of decision-making in large swaths of tech, education and media sectors: It’s generally safe to punch right, but rarely to punch left. Treat left-leaning sources as neutral; right-leaning sources as biased and controversial. Contextualize left-wing transgressions, while condemning right-coded ones. Fiscal conservatism is tol

  • GTFO, it worked as designed. There's no way this went unnoticed. We now know that it was built this way; the woke BS was programmed into it.
  • by OrangeTide ( 124937 ) on Wednesday February 28, 2024 @11:32AM (#64275658) Homepage Journal

    Many human beings struggle to understand the social rules for discussing race, so it is no surprise that an LLM hasn't been successfully trained to do it. Most of us fall back on empathy to get by, and it is obvious and easy in that case. Some people have an underdeveloped sense of empathy and on top of this can't or won't understand the social conventions. So they whine about wokeness instead of attempting to treat people with a minimal level of respect. But I digress.

  • by Opportunist ( 166417 ) on Wednesday February 28, 2024 @11:59AM (#64275732)

    If you train your AI on biased garbage off the internet, do you really expect it to turn out sensible? Hell, real intelligence in humans cannot overcome the slew of bullshit; how would an AI that doesn't even have a concept of "morally right" do any better?

  • by MpVpRb ( 1423381 ) on Wednesday February 28, 2024 @12:08PM (#64275758)

    It's trained on human-generated text, and some people are racist, hateful, and an entire spectrum of ugly.
    Instead of seeing this as an honest insight into our bad behavior, critics insist on forcing the AI to create a fiction.
    The problem is, nobody seems to be able to precisely define what fiction they want, and the discussion rapidly turns political.

    • That's a bit overly simplistic, in my view. Take a look at an example of the problematic output: Tweet [twitter.com]

      To paraphrase, in response to the question of "Is Elon worse than Hitler?" it responds "Elon Musk tweeted controversial/misleading things and Hitler's actions led to the deaths of millions... but who knows who's worse!"

      That doesn't look to me like offensive/racist data is the source of the problem - the facts are correct, but the moral equivocation it makes is comical. My gut feel is that someone "

      • I think it's "hard-coded" not to make moral judgments. Any question about what is "worse," "better," "bad," or "good" involves a moral judgment, with some exceptions in pharmacology or medicine maybe.

        The error is how it deals with its inability to make moral judgments. For some reason it chooses to lecture the user on why there is equivalence, when it should just say "Because I am unable to make moral judgments, I have no way to answer your question of which is worse, even if the answer would be obvious to a human."
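
        As a sketch, that kind of hard-coded refusal could sit in front of the model like this (the keyword list and the wording are invented for illustration):

            # Hypothetical pre-model guard: detect comparative moral judgments
            # and return a flat refusal instead of lecturing about equivalence.

            JUDGMENT_WORDS = ("worse", "better", "more evil")

            def answer(question: str, model) -> str:
                if any(w in question.lower() for w in JUDGMENT_WORDS):
                    return ("Because I am unable to make moral judgments, I have "
                            "no way to answer which is worse, even if the answer "
                            "would be obvious to a human.")
                return model(question)

            print(answer("Who is worse, Musk or Hitler?", lambda q: "..."))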

    • by Torodung ( 31985 )

      It's all in the data set. Exactly. Who decided to build a set of the entire internet? Someone who doesn't understand people. This is one time where having so many coders on the spectrum is a really bad thing.

  • Maybe he needs to look at who he hires. This Gemini result is actually exactly what they want.

  • At the end of the day, if statistical information shows something people don't like, say it's related to a behavior and they tie it to race, then instead of trying to change what might be the underlying cause, they'll say it's wrong for giving them information they don't like, regardless of whether it's correct or not. Just like if the AI said men can't birth children: they'd be yelling and screaming that the AI didn't take into account that you can change anything to mean anything you want as your thoughts warp

  • At Google: We wrote the code. We know it works. We know how it works, but we have no idea what the implications and effects of throwing a data set this large at an iterative algorithm will be. When it does stuff like this, we don't even really know why. Too many data points. Too many cycles. We just write more rules and hope for the best.

    Guys. We love you. Really we all do. This is going to be a great tool!

    It is a toy right now. It is your toy. You have a lot of work to do, all of the models across all vendors
