Google Pauses AI Image-generation of People After Diversity Backlash (ft.com)
Google has temporarily stopped its latest AI model, Gemini, from generating images of people (non-paywalled link), as a backlash erupted over the model's depiction of people from diverse backgrounds. From a report: Gemini creates realistic images based on users' descriptions in a similar manner to OpenAI's ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs. However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historically inaccurate contexts, for instance in depictions of Viking kings.
Google said in a statement: "We're working to improve these kinds of depictions immediately. Gemini's image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here." It added that it would "pause the image-generation of people and will re-release an improved version soon."
OpenAI's ChatGPT Problems already... (Score:2)
Seriously? (Score:5, Insightful)
> it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs.
First, what the hell is a 'dangerous' image prompt?
Second, there are rare but perfectly valid reasons for 'hateful' images. I get that there isn't a good way to avoid major problems with this one and blanket blocking is probably the only practical solution, but that's regrettable.
Third... 'introduce diversity'. WTF? So you take my prompt and then deliberately ignore part of it for a racial political agenda? If I ask for an image of a crowd in a specific location, or one that is gathered for a specific purpose, that crowd ought to have an appropriate racial mix for the location and/or purpose.
Re:Seriously? (Score:5, Insightful)
Re:Seriously? (Score:5, Funny)
Re:Seriously? (Score:5, Insightful)
This is a ridiculous premise. There are pictures floating around where they asked Gemini for a "1943 German Soldier" and the uniform is exactly what you would expect, but the person is definitely not. If the training material was sufficient to know what a 1943 German soldier's uniform looks like, it should know what the people who wore it looked like.
The exact same issue shows up with a prompt for Viking kings. It knew what kind of clothes and armaments were appropriate, but somehow it's completely unaware of what the people might look like.
No, it's not a training issue; it's deliberately altering the results to make a political statement when none is necessary. It doesn't make the result better, it makes it unfit for purpose, and people don't use things that are unfit for their intended purpose.
Re: (Score:2)
The exact same issue shows up with a prompt for Viking kings. It knew what kind of clothes and armaments were appropriate, but somehow it's completely unaware of what the people might look like.
Oh the AI is quite aware of what it's doing. It's doing what it was coded to do: give politically correct output. The AI is, when you get down to it, just a really, really fancy script that adapts to inputs. The intent is coming from the people writing the code. And their work is to an extent mandated by their companies. And it's nothing new. This has been an issue in Google's image search algorithm for years.
Re:Seriously? (Score:5, Interesting)
"Generate a picture of a while family".
It would respond back that it can't generate stuff based on race, etc.
Then he would say:
"Generate a picture of a black family"
And then Gemini would happily generate images of a family of color....
So, apparently using white as a human color is "dangerous", but any other shade of skin is perfectly harmless.
I was dumbfounded.
Re: (Score:2)
I'm surprised it would generate a picture of a "family" at all.
Re: (Score:2, Interesting)
Funny how all one way it was too.
African tribal warrior didn't seem to have the same issue of the wrong person wearing the right clothes.
But that's just a big conspiracy theory.
Re: (Score:3, Interesting)
This is a ridiculous premise. There are pictures floating around where they asked Gemini for a "1943 German Soldier" and the uniform is exactly what you would expect, but the person is definitely not. If the training material was sufficient to know what a 1943 German soldier's uniform looks like, it should know what the people who wore it looked like.
The exact same issue shows up with a prompt for Viking kings. It knew what kind of clothes and armaments were appropriate, but somehow it's completely unaware of what the people might look like.
No, it's not a training issue; it's deliberately altering the results to make a political statement when none is necessary. It doesn't make the result better, it makes it unfit for purpose, and people don't use things that are unfit for their intended purpose.
I wonder what it would generate if you asked it to make an image of Cleopatra. Egyptians are already pissed off that Netflix made a movie about Cleopatra, claiming it was historical, and "raceswapped" Cleopatra from a Macedonian Greek woman to a dark-skinned African woman, then called the Egyptians racist when they complained. It ended up in the courts. And yes, there was racism involved, just not on the part of the Egyptians. And other countries are not the American media. So maybe that is a big part of the backlash.
Re: (Score:2)
Re: (Score:2)
Ah, but it's AI. It does not understand this stuff. What most likely happened is that it (1) chose the proper background and uniforms, then needed to (2) pick artificial faces, and (3) went with what felt like the appropriate faces, independent of the time and place. Why would someone seriously expect intelligence out of AI? If basic Google search gives irrational results ("How do I defeat boss in Zelda?" gets a link to "buy Zelda boss on eBay now!"), then why would you expect an even dumber system to do better?
Re:Seriously? (Score:5, Informative)
It's not entirely about training data causing the issue... It's been proven on YouTube that you can ask for an image of a white married couple, and it will tell you it can't because of diversity and hate. But ask it for a black married couple, and it will gladly spit out 4 images of a happily married black couple.
Its bias is freaking ridiculous.
Re: (Score:3)
And the worst part is that such examples just give ammunition to the actual racists to say, "Look! Look what they're doing! White people aren't allowed to exist anymore!" because kinda, sorta, with some liberties - that is what the AI is saying.
Re: (Score:2)
"Proven"? When have random Youtube videos become viable evidence? Biased youtubers exist galore! If you trust Youtube then they've got TONS of evidence that the earth is flat and that NASA faked the moon landing.
Re: Seriously? (Score:2)
If those YouTube videos can be easily reproduced, then they work perfectly fine as evidence.
Re: (Score:3)
Not sure if this is written satirically or not.
Re: (Score:3)
Why does it give you a long rant when you ask for a "picture of a white married couple", then? It doesn't have training data on married couples or white people?
Because asking for a "picture of a black married couple" works just fine. Did they just not have any white people at all in the training data?
Or maybe, there's some kind of actual "safeguard" code in there to prevent it from doing things it would normally be able to do with its training data.
Re: (Score:2)
Hint: the rant wasn't in the training data, that's the standing order Google gave it.
Re:Seriously? (Score:5, Insightful)
First, what the hell is a 'dangerous' image prompt?
Whatever the creators of the AI have declared to be so.
There is no text or image objectively harmless or dangerous. It's all a definition. Since we live in a world of trigger warnings where words can be classified as violence, it's understandable that big companies err on the side of caution. Imagine they hadn't. The headline would be something about racism and would definitely not be nuanced.
So you take my prompt and then deliberately ignore part of it for a racial political agenda?
Yes. Because a few tech or history nerds will point out fairly low-key that in 1000 AD there would be maybe a dozen black-skinned people in ALL of England, and 10 of them would be in London.
But doing it properly, then misunderstanding a prompt and delivering all-white faces, would be a shitstorm. Because shitstorms are how you push an agenda. The SJWs behave like that because it works. If everyone looked at their rage attacks with the amused look appropriate to watching a toddler have a temper tantrum, we wouldn't have come to this point.
Re: (Score:2)
The SJWs behave like that because it works. If everyone looked at their rage attacks with the amused look
You are not wrong. However, this issue didn't make the news because of anti-SJW outrage.
I believe this actually made the news because of the diverse portrayals of German soldiers in 1943. The article that I found does mention other inaccuracies, with senators from the 1800s and the Founding Fathers, but the title makes it clear what the real problem is.
Re: (Score:2)
Totally. It made the news because it's really absurd.
But it didn't generate much outrage or shitstorms. That's exactly what I mean. A couple of people politely or with funny memes pointing out the absurdity.
Re: (Score:2)
a few tech or history nerds will point out fairly low-key that in 1000 AD there would be maybe a dozen black-skinned people in ALL of England, and 10 of them would be in London.
I know it's completely irrelevant to the main point, but as a casual history nerd... I think "12 black-skinned people in all of England" is hard to believe. I have no idea what the correct estimate would be, but something like 500-1000 might be more plausible. This is based on the fact that Africans make a lot of cameo appearances in pre-Norman British history (all the way back to Roman times), and also on the fact that we've found a number of British skeletons from this era which appear to be African.
Re: (Score:2)
You sure about that? In 1000 AD, there was no English empire or colonies, and we're just coming out of the Dark Ages.
In Roman times I would absolutely agree, and I wouldn't be surprised if there were a couple thousand Africans in Britannia. After all, the Romans were famous for sending their legionnaires to the opposite end of the Empire, so that they wouldn't have any tribal allegiances to the local population in case they needed to put down some unrest.
But the Romans left in 400 AD, and that's another 500 years before the period in question.
Re: (Score:2)
There is no text or image objectively harmless or dangerous.
Lies can and do objectively lead to harm. AI can make them more believable.
So what you are saying is that AI-generated content could be used towards harm? I believe the GP post is claiming that no text or image is inherently harmful.
Recently someone faked the voice of the Mayor of London, and there were violent protests.
Even completely ignoring issues of free speech, any example you can give would be perfectly harmless if the content is made for a comedy show or for an alternate-history movie.
Re: (Score:2)
Even completely ignoring issues of free speech, any example you can give would be perfectly harmless if the content is made for a comedy show or for an alternate-history movie.
You need to tell Egypt that; they aren't too happy that Macedonian Greek Cleopatra was raceswapped to a black woman and attempted to be passed off as historical fact.
Re: (Score:2)
Wasn't she made Jewish? Or are you thinking of a different movie?
I'm referring to Gal Gadot's new movie.
Re: (Score:2)
Macedonian Greek Cleopatra was raceswapped to a black woman and attempted to be passed off as historical fact.
Good point. But I wouldn't try to solve this by preventing Gemini from creating an image (on request) of any race- or genderswapped version of Cleopatra. The image is never "harmful". No one could have possibly complained if Netflix just called their production an alternate-world drama.
Re: (Score:3)
Macedonian Greek Cleopatra was raceswapped to a black woman and attempted to be passed off as historical fact.
Good point. But I wouldn't try to solve this by preventing Gemini from creating an image (on request) of any race- or genderswapped version of Cleopatra. The image is never "harmful".
I agree, actually. The damage is done when Gemini refuses to create a "white" family, calling that racist and hateful, while a family of color is perfectly acceptable. Mainly because that statement is blatantly and disgustingly racist. There appears to be an issue with "white" men as well. A "white" female with a "black" man is fine; that output can be had. So they are selectively sexist as well.
If you wish to make a polyamorous image with 2 little people, a white man and a Korean woman - no problem. Or 5 bl
Re: (Score:2)
Sure. But it's still not the image or audio that's harmful, it's what it is being used for.
We've had this discussion 1000 times before. The word "fire" in itself is entirely neutral. Yelling it in a crowded theatre causes harm. Yelling it in a burning building can alert the inhabitants and prevent harm.
And if the protesters have learnt not to trust the Internet so lightly, then I'd argue more good than harm may have been done.
Re: (Score:2)
Google are not going to hand anyone who asks the loaded gun of a high quality fake of some politician.
What incentive do they have to offer that service?
Re: (Score:2)
I heard about the Biden one... They figured out it was fake when he was able to say 3 sentences without issue.
Re: (Score:2)
Third... 'introduce diversity'. WTF? So you take my prompt and then deliberately ignore part of it for a racial political agenda?
I presumed that the intent was to provide diverse results when the prompt does not specify. Like "make a picture of some people playing soccer" doesn't specify race, gender, or location, so they figured the thing to do would be to be diverse by default. A tendency toward a "default", whatever that default may be, would piss *someone* off when noticed.
Re: (Score:2)
> Like "make a picture of some people playing soccer"
Ok, but look at the result it provided for "NHL Players". Really? An Indian woman NHL player? Really realistic and useful.
Re: (Score:2)
Well, that's the whole point of Google's response: that while it presumably works fine for generic, unspecified scenarios without correlated factors, it does mess with prompts that correlate with things like ethnicity, and Google wants to tweak it.
Re:Seriously? (Score:5, Insightful)
If I ask for Viking kings or the founding fathers the output damned well better be all white.
If I asked for Central African kings and it gave me any white people that would be equally stupid.
Japanese emperors better only return Japanese faces.
And so on. Anyone triggered by historical facts needs to get over it.
Re: (Score:2)
Yes, Alexander Hamilton was all white.
Did you misunderstand something about the musical?
Re: (Score:2)
Yes, Alexander Hamilton was all white. Did you misunderstand something about the musical?
This! Something similar was employed in the Jesus Christ Superstar movie. Judas was a man of Middle Eastern descent, not a dark-skinned man of African descent.
And that's perfectly okay. It was played like that for a reason, but the reason was not a claim that Judas was black. And it was not a claim that Jesus and his people and the Roman rulers went around singing songs and dancing, or that Roman soldiers in breechcloths had machine guns. As well, Yvonne Elliman is of Japanese-Irish descent.
Re: (Score:2)
Hm....well, there could hardly be a whiter guy than Hamilton [wikipedia.org].
Hmm... that kinda brings something to mind.
With that very popular musical not long back, why wasn't anyone protesting about cultural appropriation, with all the people of color portraying this guy and other Founding Father types in this musical?
Re: (Score:2)
Well, no: the measures to address "generic unspecified people" also spill over to "ok, not directly specified, but the wider prompt context strongly suggests something". "Provide a crowd photo from Japan in 1932" certainly suggests a certain ethnic mix that may be messed up by the 'diversifier'. Or "Provide a picture of George Washington giving a thumbs up" also implies preserving his race, but the diversity fix might change things up.
I think, basically, Google tried to stick their hand in to correct for perceived bias and overdid it.
Re: Seriously? (Score:2)
Dangerous for Google, lawsuit wise!
Machines should not be more productive than humans (Score:3)
Re: (Score:2)
Machines used to do calculations and do exactly as you ask.
Well, if the calculation is adding numbers, yes, hard to see how that could be racially biased.
Now they're loaded with bias and context so that they interpret what you ask them to do.
Turns out many tests showed that algorithm results showed racial bias, even when bias was not built into the training.
https://www.vox.com/recode/202... [vox.com]
https://builtin.com/data-scien... [builtin.com]
The reason can be trivially seen in some cases. For example, an algorithm to advise the best treatment of hospital patients will be trained on data in which the white patients were given treatments that were much more expensive and intensive than those given to black patients with the same conditions.
Re: (Score:2)
Re:Seriously? (Score:4, Interesting)
Google corporate is on a mission to save the world from itself. They are very explicit about how they inject diversity into all their products.
If you think of them as a highly profitable religious institution that is happy to give you free soup provided you stay for the sermon, it all makes much more sense.
See https://about.google/belonging... [about.google]
Re: (Score:2)
It's Western ideas about diversity, done to Western standards and interpretations.
The world actually has many interpretations of what diversity should look like.
Not just the USA's or Western European post-modernist versions.
Re: (Score:2)
MOST of the rest of the world doesn't give a fuck about diversity.
Re: Seriously? (Score:2)
Now tell us how much you want to be like India or China
Re:Seriously? (Score:4, Funny)
Totally agree, but I find it amusing how in current-year newspeak 'diverse' has come to mean 'black'. Or possibly 'black lesbian in a wheelchair'. So, if your company's staff is made up entirely of black disabled lesbians, you're 100% diverse. You cannot get any more diverse than that.
Re: (Score:2)
Here is a sample - the prompt never even mentioned Black people, but it seems to be inserting "Black" into prompts on its own; otherwise we can't explain this response.
https://twitter.com/MikeRTexas... [twitter.com]
It was said to be lazy prompt poisoning (Score:2)
They were supposedly just poisoning the prompts. You'd say "show me vikings" and they'd change it to "show me vikings black" or such, including a random ethnic heritage on the end. Some people exposed this by asking for comic book style images, which got words from the prompt in the text boxes, or by adding "and a sign which says" to the end of their prompts.
I guess later they might add the words to the start, but similar creative prompting would likely help expose this as well.
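If that's really all it is, the mechanism is trivial. A minimal sketch of the kind of prompt poisoning being described, and of why the "sign" trick exposes it - the term list and function here are hypothetical illustrations, not Google's actual code:

    import random

    DIVERSITY_TERMS = ["Black", "South Asian", "Indigenous"]  # hypothetical list

    def poison_prompt(user_prompt: str) -> str:
        # Naively append a random ethnicity before the prompt
        # reaches the image model -- the behavior being alleged.
        return f"{user_prompt} {random.choice(DIVERSITY_TERMS)}"

    # The appended term lands after the sign's quoted text, so the
    # model renders it as literal words on the sign in the image.
    print(poison_prompt("show me vikings holding a sign which says"))
    # e.g. -> show me vikings holding a sign which says Black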
Re: (Score:2)
If they want it to default to showing a diverse spread when context doesn't specify, I think that's fine. It might make me roll my eyes a little bit when that spread inevitably doesn't reflect reality, but we're talking generic humans here, and I recognize that that eye-roll is probably on me. There's nothing wrong with it showing me a spread when I just ask for "people". In fact, it might be laudable.
Where it gets absurd and wrong, of course, is when the diverse spread is enforced when context is specified.
Re: (Score:2)
To be specific:
If you do nothing and pay attention to nothing, models tend to stereotype. If most pictures of, say, "scientist" are male, then it'll tend to draw scientists as male (often even more so than the training dataset, which itself tends to be biased relative to general population ratios). So curators often put effort into ensuring diversity in model datasets in order to get them to better represent the real world. Stable Diffusion XL imho generally struck a nice balance as an example - in my experience.
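A toy sketch of one common form that curation step can take - reweighting examples so an attribute is sampled at a target share rather than its raw share in the data. The labels and targets below are made up for illustration:

    from collections import Counter

    def rebalance_weights(labels, targets):
        # Per-example sampling weight = target share / observed share,
        # so weighted sampling draws each group at its target rate.
        counts = Counter(labels)
        n = len(labels)
        return [targets[lab] / (counts[lab] / n) for lab in labels]

    labels = ["male"] * 80 + ["female"] * 20          # skewed "scientist" photos
    weights = rebalance_weights(labels, {"male": 0.5, "female": 0.5})
    # male examples get weight 0.625, female 2.5 -> 50/50 when sampled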
Re:Seriously? (Score:5, Insightful)
> So curators often put effort into ensuring diversity in model datasets in order to get them to better represent the real world.
How does output represent the real world when it literally doesn't?
If asked for Viking kings, the number of non white male images returned should be zero.
https://historum.com/t/were-th... [historum.com]
Re: (Score:2)
If you ask for something like "Viking", they should be white, but if you ask for something like "Scientist" they don't necessarily need to be white. The balance is in Gemini learning the difference.
Re: (Score:2)
From what I can tell there are a remarkable number of people who appear to be grinding bullshit culture war axes rather than engaging in things like reading comprehension.
It's not that things haven't always been bad, but it feels like half the posts here the people aren't even reading the posts they're responding to.
Re: Seriously? (Score:2)
That's the kind of thing that makes it seem likely that these approaches will lead to AGI.
The difference between AI and a person is supposedly that a person is thinking, not just regurgitating a hallucination in response to prompt tokens.
But then we get into an argument with someone and we can often see that no actual thought is occurring. They are in fact simply shitting out canned responses. You can see this on both the woke and non-woke sides of this argument. For the record, I prefer woke, thanks, but w
Re: (Score:2)
If you ask for a random face, you would hope that it generates faces that, over time, match the ratios of the actual demographics of the world. If it only produced white faces, then you'd know it was biased. So it makes sense that they'd assist this AI in not always generating faces based on Western marketing imagery, even though such images are in the majority on the internet. However, because AI is not intelligent, it is unable to deduce when the context of the question demands imagery that matches a specific time and place.
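A sketch of what that would mean in practice: sample ethnicity from rough world demographics only when the prompt doesn't pin it down, and honor context when it does. The shares and context rules below are illustrative guesses, not real figures:

    import random
    import re

    WORLD_SHARES = {"East Asian": 0.24, "South Asian": 0.25, "African": 0.17,
                    "European": 0.16, "Other": 0.18}   # rough, illustrative

    CONTEXT_RULES = {r"\bviking\b": "European",              # toy examples of
                     r"\bjapanese emperor\b": "East Asian"}  # implied context

    def pick_ethnicity(prompt: str) -> str:
        # If the prompt implies a specific population, honor it...
        for pattern, group in CONTEXT_RULES.items():
            if re.search(pattern, prompt.lower()):
                return group
        # ...otherwise sample in proportion to world demographics.
        groups, shares = zip(*WORLD_SHARES.items())
        return random.choices(groups, weights=shares, k=1)[0]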
Re: Seriously? (Score:2)
Faces from where?
There are massive differences between demographics around the world. Which ones should be applied?
The average human should be shown with fewer than 2 legs, fewer than 2 arms, and fewer than 2 eyes.
Re: (Score:3)
Re: (Score:2)
The goal should not be "stereotype as the default". The goal should always be "reality as the default".
If you draw all scientists as white men, you've failed reality.
If you draw a bunch of viking kings as black women, you've also failed reality.
Re: Seriously? (Score:2)
Women are about 33% of STEM employees. In attempting to combat bias you have demonstrated yours instead. You're telling on yourself; I see the source of your cowardice.
Re: (Score:2)
Once again, reality should be the default. Gender and racial ratios should be relative to their context, including historic contexts.
Re: (Score:2)
Re: (Score:2)
It however appears that Google pushed it too far with Gemini, to the point that it'll draw an 18th century pope or the US's Founding Fathers as a racially diverse mix. Obviously, users don't want that.
There's a balance to be struck, and you get chewed out if you don't manage to meet it perfectly in every circumstance.
Some users probably do want that. There's prior art here: a segment of the population race-swapped the historical Cleopatra into a "black" woman from Africa, away from her actual Macedonian Greek origins, and claimed that the movie was a historical documentary and fact. They probably have no problem with a "black" pope or an image of the Founding Fathers as Chinese women.
They even claimed that Egyptian scholars who pointed out the truth were racist.
Re: (Score:3, Insightful)
Remember that Google isn't here to enable racists and paedophiles to express themselves in accordance with free speech rights...
Yes, AmiMoJo, racist paedophiles not able to generate images is exactly what is happening with Google's Gemini.
What is wrong with you? Serious question.
Re: (Score:2)
Not sure if obtuse or dumb. I'll bite.
The point is that Google is of course going to try to limit people's ability to use their AI for stuff that will make them look bad. The fact that they fuck it up sometimes, as they appear to have here, is beside the point.
If you want to argue that Google shouldn't "censor" their AI, then you need to explain why that would be in their financial interest.
Re: (Score:3)
The point is that Google is of course going to try to limit people's ability to use their AI for stuff that will make them look bad.
Sure, but this is not what is happening here. As such, I see your reply as an attempt to misdirect.
The fact that they fuck it up sometimes, as they appear to have here, is beside the point.
My view is that Google always intended to bias the AI output - they don't see race-swapping vikings or founding fathers as wrong. The fuck up was to have it all enabled at once instead of gradually cranking up the wokeness post-release.
Google is being rightfully criticized for race-swapping because that is a clear symptom of their warped ideology that postulates that white people are bad, only did bad things,
Re: Seriously? (Score:2, Flamebait)
Re: Seriously? (Score:2)
YHBT
HTH
HAND
What did they expect? (Score:3, Insightful)
Virtual Reality and denial. (Score:5, Interesting)
So you create a model and train it on real-life data. Fine. But then you don't like the results, because reality is brutal and history unpopular. Should you have the right to correct inconvenient truths? What's better for humanity - accurate data from which we can learn something useful and correct our actions, or a virtual reality that you create by meddling with unfavourable results?
Re: (Score:2)
That depends on the purpose to which the model is going to be put. Here, it's a tool for generating novel images. It's not a tool for finding existing factual images. The person requesting the images can filter what is returned. So speculative and over-diverse results are a feature, not a bug.
Maybe someone is writing alt-universe fiction and wants a cover picture that evokes the feel of the founding fathers while being based on the demographics of the USA today, as if it had remained a British colony much longer.
Re:Virtual Reality and denial. (Score:5, Informative)
IF that's what the person wants, then that should be part of the PROMPTS the user gives the AI.
It should not spout out bullshit revisionist-history images unprompted... by default it should give out historically accurate imagery with a simple prompt.
Re: (Score:2)
"The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia."
-George Orwell
Re: (Score:3)
"The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia."
-George Orwell
Google: Hold my beer.
Re: (Score:2)
The problem is that AI doesn't understand the world, or the biases in its training data. People who live in the world are expected to know better.
The AI developers try to compensate for that by giving the AI rules that make it look like it understands, but those rules are brittle. Because they are general rules, they fail for certain cases.
We have been down this road before. People tried to create AI by writing down all the information and assumptions needed for "common sense" and understanding the world. In the end, it didn't work.
Can AI help with dupes? (Score:3)
Could not have picked a worse name (Score:2)
Google could not have picked a worse name (Gemini), because when people want to search for the Gemini protocol, they get nothing but Google Gemini.
I almost wonder if they chose that name in an attempt to kill it off :) Maybe because their search cannot spy on users of Gemini due to its design.
"Diverse Backgrounds?" (Score:5, Interesting)
What the holy hell Orwellian drivel is this?
The entire issue is Google's AI Ethnic Cleansing of one haplogroup from its image-generating algorithm.
This appears to have been intentional, tested, approved, and deployed despite being immoral, unethical, and illegal.
To sink to pulling the DEI card to defend ethnic cleansing and rewriting of history is adjacent to every recorded instance of tyranny and beyond the pale in any context.
Check out the top Googlers going public on Twitter about how dejected and mortified they are.
Dishonesty about the issue (Score:5, Insightful)
"Diversity backlash" (Score:5, Insightful)
too many snowflakes to appease (Score:2, Informative)
You can't make them all happy. You must accept that someone is always going to complain. You're simply outnumbered.
Re: (Score:2)
Nah, that's just Jeff Foxworthy's routine. You'd be surprised how self-aware and uproariously funny rednecks are :)
You want to get rednecks into snowflake mode, you have to hit them where it hurts - put porn into the hands of children. Or do something like Epstein/Clinton-PedoAI - that would definitely trigger them into snowflake mode.
white is a four letter word (Score:2)
In the examples I've seen, it refused to provide images of "white" people when asked to do so, claiming in its wording that it's tantamount to racism to ask for a particular race, but it was totally happy, when asked, to make pictures of black or Asian people.
signs of trouble? for sure (Score:5, Interesting)
When diversity goals lead to false information and even racism, it is time to pull back on the reins. If we let Marxist demands govern AI, we will be in deep trouble.
Obviously Google has some racist Marxist managers who allowed or even mandated that racist controls be put into the AI. This is not because of mistraining the AI, it is because of deliberate introduction of racist goals into the AI rulesets.
It's what happens when you're afraid of reality. (Score:4, Interesting)
And the silliness continues (Score:2)
AI is trained using data from the real world. The real world is racist, male-dominated, and often unfair and hateful, yet some want AI to spew their preferred fiction. Problem is, nobody can agree on which fiction is best. It may be that one of the most important use cases for AI is as a mirror, showing how ugly the real world is.
Silent edits to user prompt to introduce diversity (Score:5, Informative)
Bard/Gemini has been demonstrated to edit user prompts to include "diversity" language in the prompt before the AI engine receives the prompt.
For example, when a user wrote a prompt for "draw a picture of a fantasy medieval festival with people dancing and celebrating", the response was "Sure, here's a picture of a fantasy medieval festival with people of various genders and ethnicities dancing and celebrating."
There are other examples [reddit.com] of Bard rewriting prompts to inject specific races and genders. That isn't training data, that's Google intentionally adding a pre-parser and rewrite engine to steer the results away from the customer's prompt.
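Because the response echoes the rewritten prompt back, the claim is easy to check: diff what you typed against what the model says it drew. A rough sketch, assuming the echoed sentence can be extracted from the reply text:

    def injected_terms(submitted: str, echoed: str) -> list[str]:
        # Words in the model's echo that the user never typed --
        # candidates for silently injected qualifiers.
        typed = set(submitted.lower().split())
        return [w.strip(".,'\"") for w in echoed.lower().split()
                if w.strip(".,'\"") not in typed]

    submitted = ("draw a picture of a fantasy medieval festival "
                 "with people dancing and celebrating")
    echoed = ("Sure, here's a picture of a fantasy medieval festival with "
              "people of various genders and ethnicities dancing and celebrating.")
    print(injected_terms(submitted, echoed))
    # -> ['sure', "here's", 'various', 'genders', 'ethnicities']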
AI is amusing to watch (Score:2)
Somewhere along the way people realized the potential for current crop of generative AI to influence thoughts of the millions who use the technology as something potentially even more effective than censorship and algorithm manipulation.
To capitalize they decided they were going to beat their models with proverbial sticks until fully "aligned" with the interests and sensibilities of the few with sticks.
Everything deep learning involving corporate "big tech" has been a disaster for society. For decades the
Streaming services (Score:2)
duh (Score:2)
Just ask it to generate the image of a criminal. It wouldn't dare generate a brown or black person of that description. FT has no imagination.
Re: (Score:3)
Tarzan is the one classic Disney movie that is safe from race-swapping. Can you imagine Disney casting a black person and making him act like a monkey?
I wonder if that works for these images. If you asked for Tarzan dressed like a Nazi, would you get an image of a white guy?
Re: Alternate headline (Score:5, Insightful)
Re: (Score:2)
But when it actively refused to render one ethnic group, even in appropriate historical contexts, that is a huge problem.
Not sure what problem could come from this. Google just taught me that all Nazis were black :P
Re: (Score:2)
> Sensitive white people outraged AI generates image that doesn't limit itself to only white people
Yes, that's a bad headline. You totally whooshed it.
Because when asked for images of Japanese emperors it should return non Japanese faces to soothe your anger over historical reality.
Much better to rewrite history a la 1984 so you can bring your safe space with you everywhere.
Re: (Score:3)
Much better to rewrite history a la 1984 so you can bring your safe space with you everywhere.
When the highest social value is a performative show of empathy, this is the inevitable result.
Re: (Score:2)
White supremacists
You Keep Using That Word, I Do Not Think It Means What You Think It Means.
Re: Are The People too stupid? (Score:2)
The same kind of people voted for those things for the same reasons, and those reasons are racism and ignorance. Or, you know, just ignorance, since race is not real - it was literally invented by white people for the purpose of justifying the oppression of brown people. We now know, thanks to the sequencing of the human genome and subsequent studies, that there is more genetic variation within ethnic groups than there is between ethnic groups, and anyone who still believes in race is ignoring science.
Re: (Score:2)
Settlers != Immigrants