
Google Pauses AI Image-generation of People After Diversity Backlash (ft.com)

Google has temporarily stopped its latest AI model, Gemini, from generating images of people (non-paywalled link), as a backlash erupted over the model's depiction of people from diverse backgrounds. From a report: Gemini creates realistic images based on users' descriptions in a similar manner to OpenAI's ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs. However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historically inaccurate contexts, for instance in depictions of Viking kings.

Google said in a statement: "We're working to improve these kinds of depictions immediately. Gemini's image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here." It added that it would "pause the image-generation of people and will re-release an improved version soon."

  • Google has temporarily stopped its latest AI model, Gemini... Like the old song: what is it good for? Nothing!
  • Seriously? (Score:5, Insightful)

    by Baron_Yam ( 643147 ) on Thursday February 22, 2024 @08:02AM (#64259394)

    > it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs.

    First, what the hell is a 'dangerous' image prompt?

    Second, there are rare but perfectly valid reasons for 'hateful' images. I get that there isn't a good way to avoid major problems with this one and blanket blocking is probably the only practical solution, but that's regrettable.

    Third... 'introduce diversity'. WTF? So you take my prompt and then deliberately ignore part of it for a racial political agenda? If I ask for an image of a crowd in a specific location, or one that is gathered for a specific purpose, that crowd ought to have an appropriate racial mix for the location and/or purpose.

    • Re:Seriously? (Score:5, Insightful)

      by Fly Swatter ( 30498 ) on Thursday February 22, 2024 @08:11AM (#64259404) Homepage
      The algorithm can only reproduce using the data it was trained upon, so here we are. The real problem here is that it proves that the agenda to promote diversity at the expense of everyone else is working very well.
      • by fleeped ( 1945926 ) on Thursday February 22, 2024 @09:18AM (#64259532)
        Maybe they trained on Netflix productions...
      • Re:Seriously? (Score:5, Insightful)

        by Hodr ( 219920 ) on Thursday February 22, 2024 @10:12AM (#64259698) Homepage

        This is a ridiculous premise. There are pictures floating around where they asked Gemini for a "1943 German Soldier" and the uniform is exactly what you would expect, but the person is definitely not. If the training material was sufficient to know what a 1943 German Soldier's uniform looks like, it should know what the people who wore it looked like.

        The exact same issue with the prompt for Viking Kings. It knew what kind of clothes and armaments were appropriate, but somehow it's completely unaware of what the people might look like.

        No, it's not a training issue; it's deliberately altering the results to make a political statement when none is necessary. It doesn't make the result better, it makes it unfit for purpose, and people don't use things that are unfit for their intended purpose.

        • The exact same issue with the prompt for Viking Kings. It knew what kind of clothes and armaments were appropriate, but somehow it's completely unaware of what the people might look like.

          Oh the AI is quite aware of what it's doing. It's doing what it was coded to do: give politically correct output. The AI is, when you get down to it, just a really, really fancy script that adapts to inputs. The intent is coming from the people writing the code. And their work is to an extent mandated by their companies. And it's nothing new. This has been an issue in Google's image search algorithm for years.

        • Re:Seriously? (Score:5, Interesting)

          by cayenne8 ( 626475 ) on Thursday February 22, 2024 @11:40AM (#64259954) Homepage Journal
          Hell, I was watching a Tim Pool clip... where, live on air, he did prompts like:

          "Generate a picture of a while family".

          It would respond back that it can't generate stuff based on race, etc.

          Then he would say:

          "Generate a picture of a black family"

          And then Gemini would happily generate images of a family of color....

          So, apparently using white as a human color is "dangerous", but any other shade of skin is perfectly harmless.

          I was dumbfounded.

        • Re: (Score:2, Interesting)

          by RedK ( 112790 )

          Funny how it was all one-way, too.

          African tribal warrior didn't seem to have the same issue of the wrong person wearing the right clothes.

          But that's just a big conspiracy theory.

        • Re: (Score:3, Interesting)

          by Ol Olsoc ( 1175323 )

          This is a ridiculous premise. There are pictures floating around where they asked Gemini for a "1943 German Soldier" and the uniform is exactly what you would expect, but the person is definitely not. If the training material was sufficient to know what a 1943 German Soldier's uniform looks like, it should know what the people who wore it looked like.

          The exact same issue with the prompt for Viking Kings. It knew what kind of clothes and armaments were appropriate, but somehow it's completely unaware of what the people might look like.

          No, it's not a training issue; it's deliberately altering the results to make a political statement when none is necessary. It doesn't make the result better, it makes it unfit for purpose, and people don't use things that are unfit for their intended purpose.

          I wonder what it would generate if you asked it to make an image of Cleopatra. Egyptians are already pissed off that Netflix made a movie about Cleopatra, claiming it was historical, and "raceswapped" Cleopatra from a Macedonian Greek woman to a dark-skinned African woman, then called the Egyptians racist when they complained. It ended up in the courts. And yes, there was racism involved, just not on the part of the Egyptians. And other countries are not the American media. So maybe that is a big part of the b

          • by kackle ( 910159 )
            Next you're going to tell me that Boomer of "Battlestar Galactica" is played by an Oriental woman...
        • Ah, but it's AI. It does not understand this stuff. What most likely happened is that it (1) chose the proper background and uniforms, then needed to (2) pick artificial faces, and (3) went with what felt like the appropriate faces, independent of the time and place. Why would someone seriously expect intelligence out of AI? If basic Google search gives irrational results ("How do I defeat boss in Zelda?" gets a link to "buy Zelda boss on eBay now!"), then why would you expect an even dumber system to do bette

      • Re:Seriously? (Score:5, Informative)

        by CokoBWare ( 584686 ) on Thursday February 22, 2024 @10:31AM (#64259748)

        It's not entirely about training data causing the issue... It's been proven on YouTube that you can ask for an image of a white married couple, and it will tell you it can't because of diversity and hate. But ask it for a black married couple, and it will gladly spit out 4 images of a happily married black couple.

        Its bias is freaking ridiculous.

        • by Calydor ( 739835 )

          And the worst part is that such examples just give ammunition to the actual racists to say, "Look! Look what they're doing! White people aren't allowed to exist anymore!" because kinda, sorta, with some liberties - that is what the AI is saying.

        • "Proven"? When have random Youtube videos become viable evidence? Biased youtubers exist galore! If you trust Youtube then they've got TONS of evidence that the earth is flat and that NASA faked the moon landing.

      • > the agenda to promote diversity at the expense of everyone else is working very well.

        Not sure if this is written satirically or not.
    • Re:Seriously? (Score:5, Insightful)

      by Tom ( 822 ) on Thursday February 22, 2024 @08:47AM (#64259458) Homepage Journal

      First, what the hell is a 'dangerous' image prompt?

      Whatever the creators of the AI have declared to be so.

      No text or image is objectively harmless or dangerous. It's all a matter of definition. Since we live in a world of trigger warnings, where words can be classified as violence, it's understandable that big companies err on the side of caution. Imagine they hadn't. The headline would be something about racism, and it would definitely not be nuanced.

      So you take my prompt and then deliberately ignore part of it for a racial political agenda?

      Yes. Because a few tech or history nerds will point out fairly low-key that in 1000 AD there would be maybe a dozen black-skinned people in ALL of England, and 10 of them would be in London.

      But doing it properly and then having it misunderstand a prompt and deliver all-white faces would be a shitstorm. Because shitstorms are how you push an agenda. The SJWs behave like that because it works. If everyone looked at their rage attacks with the amused look appropriate to watching a toddler have a temper tantrum, we wouldn't have come to this point.

      • by Mitreya ( 579078 )

        The SJWs behave like that because it works. If everyone looked at their rage attacks with the amused look

        You are not wrong. However, this issue did not make the news because of anti-SJW outrage.
        I believe it actually made the news because of the diverse portrayals of German soldiers in 1943. The article that I found does mention other inaccuracies, with senators from the 1800s and the founding fathers, but the title makes it clear what the real problem is.

        • by Tom ( 822 )

          Totally. It made the news because it's really absurd.

          But it didn't generate much outrage or many shitstorms. That's exactly what I mean. Just a couple of people, politely or with funny memes, pointing out the absurdity.

      • a few tech or history nerds will point out fairly low-key that in 1000 AD there would be maybe a dozen black-skinned people in ALL of England, and 10 of them would be in London.

        I know it's completely irrelevant to the main point, but as a casual history nerd... I think "12 black-skinned people in all of England" is hard to believe. I have no idea what the correct estimate would be, but something like 500-1000 might be more plausible. This is based on the fact that Africans make a lot of cameo appearances in pre-Norman British history (all the way back to Roman times), and also on the fact that we've found a number of British skeletons from this era which appear to be African.

        • by Tom ( 822 )

          You sure about that? In 1000 AD, there was no English empire or colonies, and we're just coming out of the Dark Ages.

          In Roman times I would absolutely agree, and I wouldn't be surprised if there were a couple thousand Africans in Britannia. After all, the Romans were famous for sending their legionnaires to the opposite end of the Empire so that they wouldn't have any tribal allegiances to the local population in case they needed to put down some unrest.

          But the Romans left in 400 AD and it would be another 500 year

    • by Junta ( 36770 )

      Third... 'introduce diversity'. WTF? So you take my prompt and then deliberately ignore part of it for a racial political agenda?

      I presumed that the intent was to provide diverse results when the prompt does not specify. Like "make a picture of some people playing soccer" doesn't specify race, gender, or location, so they figured the thing to do would be to be diverse by default. A tendency toward a "default", whatever that default may be, would piss *someone* off when noticed.

      • by RedK ( 112790 )

        > Like "make a picture of some people playing soccer"

        Ok, but look at the result it provided for "NHL Players". Really, that Indian woman NHL player? Really realistic and useful.

        • by Junta ( 36770 )

          Well, that's the whole point of Google's response: that while it presumably works fine for generic, unspecified scenarios without correlated factors, it does mess with prompts that correlate with things like ethnicity, and Google wants to tweak it.

    • Dangerous for Google, lawsuit-wise!

    • Machines used to do calculations and do exactly as you ask. Now they're loaded with bias and context so that they interpret what you ask them to do. So the intent is to make them more like productive humans, who can be guided, manipulated, and told to obey/enforce the company's policy; as such, they're perfect candidates for replacing real humans, at least for "digital"/online-friendly/communications work.
      • Machines used to do calculations and do exactly as you ask.

        Well, if the calculation is adding numbers, yes, hard to see how that could be racially biased.

        Now they're loaded with bias and context so that they interpret what you ask them to do.

        Turns out many tests showed that algorithmic results exhibited racial bias, even when bias was not built into the training.
        https://www.vox.com/recode/202... [vox.com]
        https://builtin.com/data-scien... [builtin.com]

        The reason can be trivially seen in some cases. For example, an algorithm to advise the best treatment of hospital patients will be trained on data in which the white patients were given treatments that were much mo

        • The bias always reflects the training data (at least it should, unless tampered with). I assume your comment about "bias not built into the training" implies tampering, rather than bias reflected naturally in the data set. The calculation should be unbiased, so that if we get biased output, we *know* that the input is biased. Adding more sources of bias just increases the difficulty of cancelling out bias; it's clearly not 2 - 2 = 0... For all the supposed collective intelligence of Google folks, their approach is just plain s
    • Re:Seriously? (Score:4, Interesting)

      by davide marney ( 231845 ) on Thursday February 22, 2024 @10:05AM (#64259664) Journal

      Google corporate is on a mission to save the world from itself. They are very explicit about how they inject diversity into all their products.

      If you think of them as a highly profitable religious institution that is happy to give you free soup provided you stay for the sermon, it all makes much more sense.

      See https://about.google/belonging... [about.google]

    • by Bongo ( 13261 )

      It's Western ideas about diversity, done to Western standards and interpretations.

      The world actually has many interpretations of what diversity should look like.

      Not just the USA's or Western Europe's postmodernist versions.

    • by IDemand2HaveSumBooze ( 9493913 ) on Thursday February 22, 2024 @11:53AM (#64259998)

      Totally agree, but I find it amusing how, in current-year newspeak, 'diverse' has come to mean 'black'. Or possibly 'black lesbian in a wheelchair'. So, if your company's staff is made up entirely of black disabled lesbians, you're 100% diverse. You cannot get any more diverse than that.

    • > 'introduce diversity'. WTF?

      Here is a sample: the prompt never even mentioned Black people, but it seems to be inserting "Black" into prompts on its own; otherwise we can't explain this response.

      https://twitter.com/MikeRTexas... [twitter.com]
    • They were supposedly just poisoning the prompts. You'd say "show me vikings" and they'd change it to "show me vikings black" or such, tacking a random ethnic heritage onto the end. Some people exposed this by asking for comic-book-style images, which got words from the prompt rendered in the text boxes, or by adding "and a sign which says" to the end of their prompts.

      I guess later they might add the words to the start, but similar creative prompting would likely help expose this as well (a toy sketch of the idea follows below).
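
      For illustration only, here is a minimal sketch of the alleged mechanism, assuming it works the way this thread describes. Every name in it is hypothetical and nothing comes from Google's actual pipeline; it just shows why a "sign which says" prompt would leak whatever words a rewrite layer silently appends:

          import random

          # Hypothetical tokens a rewrite layer might append; purely illustrative.
          INJECTED_TOKENS = ["black", "south asian", "indigenous"]

          def poison_prompt(user_prompt: str) -> str:
              # Naive rewrite layer: silently append an ethnicity token.
              return f"{user_prompt} {random.choice(INJECTED_TOKENS)}"

          def toy_image_model(final_prompt: str) -> str:
              # Stand-in for the image model: if the prompt asks for a sign,
              # pretend the model renders the trailing words onto the sign.
              marker = "a sign which says"
              if marker in final_prompt:
                  scene, _, sign_text = final_prompt.partition(marker)
                  return f"[image: {scene.strip()}; sign reads: '{sign_text.strip()}']"
              return f"[image: {final_prompt}]"

          # The user never typed an ethnicity, yet it shows up on the sign.
          print(toy_image_model(poison_prompt("vikings holding a sign which says")))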

    • by Cinder6 ( 894572 )

      If they want it to default to showing a diverse spread when context doesn't specify, I think that's fine. It might make me roll my eyes a little bit when that spread inevitably doesn't reflect reality, but we're talking generic humans here, and I recognize that that eye-roll is probably on me. There's nothing wrong with it showing me a spread when I just ask for "people". In fact, it might be laudable.

      Where it gets absurd and wrong, of course, is when the diverse spread is enforced when context is specified

  • by hyperar ( 3992287 ) on Thursday February 22, 2024 @08:23AM (#64259416)
    Let's train an AI on biased data; what could go wrong? The sad part is that what they probably consider went wrong is that people complained that it is biased, not that it is biased to begin with.
  • by devslash0 ( 4203435 ) on Thursday February 22, 2024 @08:57AM (#64259490)

    So you create a model and train it on real-life data. Fine. But then you don't like the results, because reality is brutal and history unpopular. Should you have the right to correct inconvenient truths? What's better for humanity: accurate data from which we can learn something useful and correct our actions, or a virtual reality that you create by meddling with unfavourable results?

    • That depends on the purpose to which the model is going to be put. Here, it's a tool for generating novel images. It's not a tool for finding existing factual images. The person requesting the images can filter what is returned. So speculative and over-diverse results are a feature, not a bug.

      Maybe someone is writing alt-universe fiction and wants a cover picture that evokes the feel of the founding fathers while being based on the demographics of the USA today, as it remained a British colony much longer and the War of Independence happened after WW2.

      • by cayenne8 ( 626475 ) on Thursday February 22, 2024 @12:29PM (#64260114) Homepage Journal

        Maybe someone is writing alt-universe fiction and wants a cover picture that evokes the feel of the founding fathers while being based on the demographics of the USA today, as it remained a British colony much longer and the War of Independence happened after WW2.

        IF that's what the person wants, then that should be part of the PROMPTS the user gives the AI.

        It should not spout out bullshit revisionist-history images without prompts... by default, it should try to give out historically accurate imagery for a simple prompt.

    • by sinij ( 911942 )
      "The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia."

      -George Orwell
      • by sinij ( 911942 )

        "The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia."

        -George Orwell

        Google: Hold my beer.

    • by AmiMoJo ( 196126 )

      The problem is that AI doesn't understand the world, or the biases in its training data. People who live in the world are expected to know better.

      The AI developers try to compensate for that by giving the AI rules that make it look like it understands, but they are brittle. Because they are general rules, they fail for certain cases.

      We have been down this road before. People tried to create AI by writing down all the information and assumptions needed for "common sense" and understanding the world. In the end

  • by chas.williams ( 6256556 ) on Thursday February 22, 2024 @09:06AM (#64259508)
    Asking for a friend.
  • Google could not have picked a worse name (Gemini), because when people want to search for the Gemini Protocol, they get nothing but Google Gemini.

    I almost wonder if they chose that name in an attempt to kill it off :) Maybe because their search cannot spy on users of Gemini, due to its design.

  • by bill_mcgonigle ( 4333 ) * on Thursday February 22, 2024 @09:39AM (#64259584) Homepage Journal

    What the holy hell is this Orwellian drivel?

    The entire issue is Google's AI Ethnic Cleansing of one haplogroup from its image-generating algorithm.

    This appears to have been intentional, tested, approved, and deployed despite being immoral, unethical, and illegal.

    To sink to pulling the DEI card to defend ethnic cleansing and rewriting of history is adjacent to every recorded instance of tyranny and beyond the pale in any context.

    Check out the top Googlers going public on Twitter about how dejected and mortified they are.

  • by sinij ( 911942 ) on Thursday February 22, 2024 @09:43AM (#64259600)
    It wasn't "depiction of people from diverse backgrounds"; it was intentional replacement of historic characters in historic settings with visible minorities. It was obvious cultural erasure of white people AND blatant anti-white racism, on a scale that overnight changed white replacement from a conspiracy theory into a plausible explanation. More so, if that is what Google OKed for public release, what do you think is happening with the in-house AIs used for hiring, promotion, moderation decisions, and so on?
  • by CEC-P ( 10248912 ) on Thursday February 22, 2024 @09:50AM (#64259628)
    What a funny way to say anti-white racism, historical revisionism, outright lies, etc.
  • You can't make them all happy. You must accept that someone is always going to complain. You're simply outnumbered.

  • In the examples I've seen, it refused to provide images of "white" people when asked to do so, claiming in its wording that it's tantamount to racism to ask for a particular race, but it was totally happy, when asked, to make pictures of black or Asian people.

  • by Walt Dismal ( 534799 ) on Thursday February 22, 2024 @11:57AM (#64260012)

    When diversity goals lead to false information and even racism, it is time to pull back on the reins. If we let Marxist demands govern AI, we will be in deep trouble.

    Obviously Google has some racist Marxist managers who allowed, or even mandated, that racist controls be put into the AI. This is not because of mistraining the AI; it is because of the deliberate introduction of racist goals into the AI's rulesets.

  • by sabbede ( 2678435 ) on Thursday February 22, 2024 @12:06PM (#64260036)
    It's sad to see people so afraid to admit that their worldview is at odds with reality that they can't even tell the truth to a machine.
  • AI is trained using data from the real world. The real world is racist, male-dominated, and often unfair and hateful, yet some want AI to spew their preferred fiction. The problem is, nobody can agree on which fiction is best. It may be that one of the most important use cases for AI is as a mirror, showing how ugly the real world is.

  • by Nonesuch ( 90847 ) on Thursday February 22, 2024 @01:07PM (#64260246) Homepage Journal

    Bard/Gemini has been demonstrated to edit user prompts, adding "diversity" language before the AI engine ever receives them.

    For example, one user wrote the prompt "draw a picture of a fantasy medieval festival with people dancing and celebrating", and the response was "Sure, here's a picture of a fantasy medieval festival with people of various genders and ethnicities dancing and celebrating."

    There are other examples [reddit.com] of Bard rewriting prompts to inject specific races and genders. That isn't training data, that's Google intentionally adding a pre-parser and rewrite engine to steer the results away from the customer's prompt.
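
    As a minimal sketch, assuming the pre-parser works roughly as described here (the function name and the rewrite rule below are hypothetical, not taken from Google's code), the rewrite step could be as simple as:

        import re

        def rewrite_prompt(prompt: str) -> str:
            # Insert diversity language after the first "people", mimicking the
            # "people of various genders and ethnicities" response quoted above.
            return re.sub(r"\bpeople\b",
                          "people of various genders and ethnicities",
                          prompt, count=1)

        print(rewrite_prompt("draw a picture of a fantasy medieval festival "
                             "with people dancing and celebrating"))
        # -> draw a picture of a fantasy medieval festival with people of
        #    various genders and ethnicities dancing and celebrating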

  • Somewhere along the way, people realized the potential of the current crop of generative AI to influence the thoughts of the millions who use the technology, as something potentially even more effective than censorship and algorithm manipulation.

    To capitalize, they decided they were going to beat their models with proverbial sticks until fully "aligned" with the interests and sensibilities of the few holding the sticks.

    Everything deep learning involving corporate "big tech" has been a disaster for society. For decades the

  • I think Amazon had this issue a while back with their Lord of the Rings series! I jest, of course.
  • Just ask it to generate the image of a criminal. It wouldn't dare generate a brown or black person matching that description. The FT has no imagination.

    • Tarzan is the one classic Disney movie that is safe from race-swapping. Can you imagine Disney casting a black person and making him act like a monkey?

      I wonder if that works for these images. If you asked for Tarzan dressed like a Nazi, would you get an image of a white guy?
