Google CEO Calls AI Tool's Controversial Responses 'Completely Unacceptable' (semafor.com) 151
Google CEO Sundar Pichai addressed the company's Gemini controversy Tuesday evening, calling the AI app's problematic responses around race unacceptable and vowing to make structural changes to fix the problem. The memo: I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias -- to be clear, that's completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues. We're already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.
Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products.
We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.
We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.
Reminds me... (Score:3, Interesting)
Reminds me of when Elon's xAI thingy was super woke (it's trained on Twitter, after all), so he weighted it manually to be more balanced. People of course ridiculed him for it, yet now Google's fully intentional, up-front (and of course always hidden) efforts to weight its AI with DEI idiocy are ignored by those same people.
You're all hypocrites.
Re: (Score:2)
Trained on Twitter? Oh god, wasn't Microsoft's chatbot from almost 10 years ago also trained on Twitter? =)
https://en.wikipedia.org/wiki/Tay_(chatbot)
You'd think we'd learn, as funny as I thought that story was.
Re: (Score:2)
This had a little snark, but that doesn't make it a troll.
Re: (Score:2, Informative)
So without even trying Google's AI became everything Elon can't persuade his AI to be
Um, no, the opposite. Elon tried to make his woke AI more conservative. Google made their woke AI even more woke.
But that's what AIs do! (Score:2)
Meet the fad of random number generators.
Re:But that's what AIs do! (Score:5, Funny)
I had an AI when I was a kid. It was called The Magic 8 Ball.
Re: (Score:3)
When trained on the spew of the Internet, it's not surprising that serious boundaries have to be applied.
But the principle is well-known: garbage in, garbage out.
Re: (Score:2)
Well, if the end goal, long term...is a revision of history ala 1984....when
Re: (Score:2)
I don't think inserting non-slave black people into pictures of the founding fathers is likely to have come from the training data per se. The problem is that the system doesn't actually understand what it's creating; it's just processing a prompt and applying heuristics to match that prompt. I presume the system tries to cull nonsensical results, but at present the technology is really bad at it.
If you've played around with these things at all, you'll have found that you usually have to run a promp
Tay (Score:5, Informative)
Anyone remember Microsoft's Tay?
Firstly, if you train on the internet, you're going to, broadly speaking, get back something which statistically resembles the internet. Is there racism on the internet?
And second, which should go without saying but doesn't, AIs can't think; they're just stochastic parrots. They can only appear to do nuance by parroting an existing nuanced argument; they can't actually extract it. You can get stuff really, really wrong without nuance.
Example: the training data is biased and the image generators tended to generate exclusively white people unless prompted. So they "fixed" that by tweaking the prompt to correct the bias. Then it started race mixing in places where it made little sense, like black Nazis and Vikings. That's because the bot has no understanding of nuance.
That's a stark example, but it's like that all the way down.
The problem is the only fix at the moment is to layer rules on rules. First prompting it to not generate just white people, then to not put certain races in certain scenarios, then to allow that if prompted, then to allow that if prompted in some ways they missed the first time. And now you're back at the pre-AI-winter style of AI rules engines, which sucked.
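To make that concrete, here's a minimal Python sketch (entirely hypothetical, every name invented, not anyone's actual pipeline) of what this kind of rule layering looks like: each rule patches a failure mode the previous rule introduced, and the stack only ever grows.

# Hypothetical prompt-rewriting rule stack; all names are made up.
def rewrite_prompt(prompt: str) -> str:
    rules = [
        # Rule 1: nudge generic people-prompts toward diverse output.
        lambda p: p + ", depicting people of diverse ethnicities"
                  if "people" in p or "person" in p else p,
        # Rule 2: exempt historical contexts where rule 1 is nonsensical.
        lambda p: p.replace(", depicting people of diverse ethnicities", "")
                  if any(w in p for w in ("Nazi", "Viking", "founding father"))
                  else p,
        # Rules 3, 4, ...: one new patch per newly discovered failure mode,
        # until you've rebuilt a brittle expert system by accident.
    ]
    for rule in rules:
        prompt = rule(prompt)
    return prompt

print(rewrite_prompt("a group of people at a cafe"))  # gets the diversity suffix
print(rewrite_prompt("a Viking king"))                # passes through untouched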
And hey, it looks like the chatbot went full Godwin and started with the Hitler comparisons. Given that's how more or less every discussion ends up, is it that surprising it learned that correlation?
And if you disagree with me you're just like Hitler.
Re: (Score:2)
And once they do what you've described, it's no longer an LLM.
Well, it kind of is: you can input the rules as a prompt which gets entered initially, i.e. your text is concatenated onto the user's prompt. Only that's kinda crap: you take up a lot of token space storing that prompt, and of course it's unreliable because they're not really rules.
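For anyone who hasn't seen it spelled out, a rough Python sketch of that concatenation (hypothetical rule text, assumed interface, nobody's real system) looks like this. To the model the rules are just more tokens, which is exactly why they aren't really rules:

# The "rules" are plain text prefixed to every request (all names invented).
SYSTEM_RULES = (
    "Depict people of diverse backgrounds unless the user specifies "
    "otherwise. Refuse requests for harmful content.\n"
)

def build_model_input(user_prompt: str) -> str:
    # One flat token sequence: the model sees no hard distinction between
    # the operator's rules and the user's text.
    return SYSTEM_RULES + user_prompt

full_input = build_model_input("Show me a 17th century scientist")
print(f"{len(full_input.split())} words sent per request, "
      f"{len(SYSTEM_RULES.split())} of them spent restating the rules")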
Re:Tay (Score:5, Funny)
The Stochastic Parrots would be a great name for an AI music project, where multiple computers generate music by listening to what each computer is doing.
Re: (Score:2)
Second, creating an unbiased human-like interface is an impossible task. There is no universal truth. The best you can do is to create multi
Re:Tay (Score:4, Insightful)
You don't think they tested it?
The DEI hires most certainly did test it. They programmed (aka "trained") it to be 100% woke like them. The problem isn't the AI, it is the people.
The problem with DEI-type people is they think "diversity" and immediately assume everyone is the same given certain immutable characteristics. Those values were reflected in the results -- results that showed how utterly stupid that framework actually is.
"Put a chick in it, make her lame .. and gay"
Re:Tay (Score:4, Interesting)
First of all, Gemini was a flawed product. It did not have any temporal awareness.
Welcome to AI. It has no awareness. It just statistically mushes things together with no understanding. Sometimes it works.
Second, creating unbiased human-like interface is an impossible task. There is no universal truth.
There's a wide gap between "universal truth" and "always generating white people unless told otherwise". Which is what it was doing until they fixed it with a fix that broke other stuff, because that's how AI is going to work for now.
Woke marxists
I believe I can safely ignore anything that comes after that.
Re: (Score:2)
Such stark terms for someone so committed to nuance!
Btw, nuance is how managers talk out of both sides of their mouths at the same time.
Re: (Score:2)
And if they simply let the tool be what it is, then people could understand the nature of the tool and use it in ways that made sense. A reader's digest of the Internet. It's this pretense that it's thinking or that it can give you answers, as it were, that seems to be its undoing.
Not intelligent (Score:2, Insightful)
Re: (Score:2)
Imagine the question: "Which is worse? A broken toenail or a
Re: (Score:2)
No one tricked the AI. Numerous people have asked it simple questions which it utterly failed.
No one said, "Show me a picture of what a Black Nazi would look like".
Re: (Score:2)
No, they asked simple questions along the lines of "show me Viking king" or "show me a Nazi soldier" and got back weird shit.
Stop lying. Everyone who cares has already seen numerous examples of how fucked it is.
Re: Not intelligent (Score:2)
Right, they're not intelligent, but no, they're not "mirrors", let alone representative ones for whatever populace we'd look at, least of all the world population. They're uncontrollable spewers of uncontrollable, made up crap, sometimes harmless and appearing sensible, but too often just weird shit that's potentially harmful.
Re: (Score:2)
"They're uncontrollable spewers of uncontrollable, made up crap, sometimes harmless and appearing sensible, but too often just weird shit that's potentially harmful."
So... exactly, fundamentally representative of the populace. The thing is, that might have been what you are sarcastically implying... but your sarcasm is of such high calibre I'm having difficulty concluding it to be present at all.
Isn't this how HAL 9000 went insane in 2001: A Space Odyssey? (Score:2)
If I remember correctly, it was disclosed in the sequel to 2001: A Space Odyssey that the HAL 9000 computer went insane because central command directed it to lie and alter the truth. The result was insane, crazy behavior, as the machine could not handle lying properly.
Ironic that real AIs, as they start to exist, actually have the same problem. Fortunately they don't (yet) control important life-sustaining processes.
*facepalm* (Score:5, Insightful)
There is so much wrong here I don't know where to start. I keep alternating between laughing and screaming.
From a product launch standpoint, it is like Google released a tool having never tested it. How could they not have known the responses it would produce? Especially given that almost every AI launch has had the same result? It's just negligence; then the CEO acts all surprised and indignant about it, like it was someone else who did it. He might as well say, "I am so appalled at my own lack of foresight into the obvious..."
Next up, people are surprised that training an AI in a biased world produces biased results. Duh! This one produced the biggest laugh for me:
image generation tools from companies like OpenAI have been criticized when they created predominantly images of white people in professional roles and depicted Black people in stereotypical roles.
Well geez, maybe that's because... that's how the world actually is??? We don't like it, we are trying to fix it, but this is not a criticism of AI, this is a criticism of society. If I put a mirror on a random street corner in New York, I bet people would complain that the mirror was biased.
But then it gets better: when the AI did the exact opposite, and made a black pope and black Vikings, THAT too was criticized! There's just no winning here! I really want the next pope to be black, just so that people will shut up about this one.
This one is good too:
equating Elon Musk’s influence on society with Adolf Hitler’s.
Here is the alleged dialog [twitter.com]. LOL. The content isn't awful: it accurately describes the actions and influence of the two men, then just says "meh, it's hard to say!" Well, maybe we shouldn't be putting a newly invented technology at the helm of moral decisions yet.
How about this -- instead of creating guardrails on AI (which will never work, because nobody can make guardrails that are acceptable to everyone), let's just laugh at it, watch it improve, and use it where it is applicable.
Re: (Score:2)
In this case it doesn't even seem like it was a case of bad or biased training, but rather hidden instructions and limitations.
In addition to the examples in the summary, I also saw ones where if you asked for a picture of a family, it would produce several pictures of diverse families (so far, ok). But if you started asking for the family to be of specific races, it would happily return black families and others. However, if you asked for a white family, it would flat out refuse to give you *any* image with some
Re:*facepalm* (Score:5, Interesting)
You are making the mistake of assuming Pichai actually gives a damn about this dust up.
A few critical facts:
1) He does not care if Google's AI product is actually good. Longer term he might, but short term this was a reaction to full-on panic that something besides Google Search might become the nominal way people discover online content. I for one can't imagine the prime directive at Alphabet hasn't been: get some products competing with OpenAI's to market yesterday, make them work afterward.
2) He knew 30% of the audience was going to be people bitterly complaining no matter what they released. Maybe he made a conscious decision about which 30% that would be; maybe it was allowed to happen organically. However, had it gone differently and their stuff got caught writing some negative stereotype about a minority, or someone prompted for an image of "man eating fruit" and it produced a person of African background with a watermelon, he'd be giving us a speech about how they would make sure it had guardrails and would address diversity and ethnic sensitivity.
3) No publicity is bad publicity. Even failure gets Gemini into the news, and that's good. Because everyone already knows ChatGPT and DALL-E, it would look bad for Google to return results for their own stuff first when people search for those; it might even draw the ire of some federal department. However, this gets their stuff in the press and will make people look at it. They might even 'like it' despite the problems, and if those can be fixed 'eventually' it's probably a marketing coup.
Re: (Score:2)
But that doesn't necessarily equate to sales.
I started using Gemini and liked it, so I paid for the upgraded version. But after seeing the absolute nonsense, I canceled my upgraded plan. I'll consider switching back in the future, but not until they fix all these issues. But I'm also going to be on the lookout for AI platforms that never had this nonsense in the first place.
Any publicity is not necessarily good publicity.
Re: (Score:2)
"How could they have not known the responses it would produce?"
I think the 2nd order assertion is - OF COURSE THEY KNEW.
It's nearly impossible that they didn't, having coded it themselves.
The *problem* isn't that they created images of black Nazis. The problem is that they delivered an AI generator that was so woke-biased that it would generate black Nazis and, to all appearances, THEY WERE OK WITH IT BEING THAT OUT OF WHACK, since they agreed in principle.
Will the market bet against AI? (Score:3)
Re: (Score:2)
AI isn't useless. People ignorant of how it works and people with money on the line just think it can do more and be better than it really is.
AI is a tool like any other. I do not get out my circular saw when I want to drive a nail.
Re: (Score:2)
> Is the same due for this AI boom?
AI is certainly hyped. But it isn't useless. It is a tool, it can successfully perform a small subset of the tasks that are asked of it.
Because it is hyped a lot of people will blow billions on it and a few will make billions and the proper niche in our society will be found for it and that role will be far smaller than the hypesters suggest.
-king of the run-on sentence, baby, oh yeah, I love a good run on sentence! (and I'm too lazy to edit that down to something like
Re: (Score:2)
There is a resemblance to the blockchain fad.
However unlike blockchain, the applications of AI are numerous and pragmatic.
Re: (Score:2)
That will happen for sure, but there will also be companies that use AI correctly and they will beat their competition.
A good example would be NVidia. They are using AI to improve video quality and drawing speed. No matter how close you go to a wall or object, the AI will generate more and more detailed images of it to make it look more realistic. All this without additional work from game makers.
Re: (Score:3)
LLMs and Stable Diffusion are not blockchain. There are already clear and realized, not merely anticipated, commercial applications.
Even if the extent of it turns out to be stock photo and marketing materials generation, basic customer service interactions, 'auto-summary on steroids' for documents and e-mail threads, primer research, and search result aggregation, it has already proven to be useful. Just about every business larger than "Joe's Lawn and Power Washing" (and maybe even Joe) is going to want some.
Now is i
Re: (Score:2)
LLMs and Stable Diffusion are not blockchain. There are already clear and realized, not merely anticipated, commercial applications.
Even if the extent of it turns out to be stock photo and marketing materials generation, basic customer service interactions, 'auto-summary on steroids' for documents and e-mail threads, primer research, and search result aggregation, it has already proven to be useful. Just about every business larger than "Joe's Lawn and Power Washing" (and maybe even Joe) is going to want some.
Now, is it going to completely alter the world the way some people think it is? I don't know. However, a market bet against AI in general, as opposed to a specific company or product family, would be like betting against relational database technology in the late '70s because the office would never be entirely paperless.
And internet companies in the dot-com bubble had clear commercial applications; some of those companies are worth billions to trillions now. That didn't prevent a major crash in the value of most of the internet companies at the time.
Re: (Score:2)
I keep seeing 'failure' stories where AI just kind of sucks overall.
This may be a case of confirmation bias [wikipedia.org]. Cases in which AI "just works" aren't reported as much as cases in which it fails, so if one judges the state of AI by news reporting they'll have the impression AI is overwhelmingly failing, even though the reported failures may be a tiny fraction of all use cases.
Re: (Score:2)
Well, I did a quick Google and Google Scholar search and didn't find anything relevant. At most, studies on who's adopting it and where, but nothing on success rates. I imagine it'll be a few months before statistics on adoption and reversal of adoption start appearing.
Re: (Score:2)
Translation: https://translate.google.com/ [google.com]
Speech to text and natural language interpretation:
https://alexa.amazon.com/ [amazon.com]
https://assistant.google.com/ [google.com]
https://www.apple.com/ca/siri/ [apple.com]
Document scanning:
https://en.wikipedia.org/wiki/... [wikipedia.org]
Search, especially involving images:
https://www.google.com/ [google.com]
It's also being built into most engineering and design software. It will be a while until you can book a ticket on the result, but the ability to quickly solve differential equations, including fluid mechanics, is pretty usef
Re: (Score:2)
Anyone else wondering if there will be a good time to start betting against AI - predicting it being just a fad due to the inherent lack of accuracy?
I think you have this impression because 1) it's more fun to read "AI gone wrong" stories than the boring, pragmatic use cases and 2) we're at the peak of the hype cycle where it's difficult to unpick the VC-backed startup marketing BS from the actual useful ideas.
They're still in their infancy but use cases like having LLMs summarize a patient's chart for clinical review are demonstrating some utility. On the boring, practical side I've personally had a positive experience using LLMs to turn the rough bull
Re: (Score:2)
You will lose your money in that bet, but feel free to try.
You are not seeing all of the news. You are seeing news when people try to use AI for things it is bad at. AI is already:
- Better than best players in games
- Better than humans for creating music (rated better than famous composers)
- Better than humans for creating art (rated better than humans)
- Better than best doctors for detecting cancer and some other medical stuff
- Better than humans at solving biggest problems of humankind (protein foldi
Re: (Score:2)
Go ahead. There's no shortage of companies to short. If you think it's a fad, put up your money.
AI is overrated computer generated drivel (Score:3, Informative)
Re: (Score:2)
More like putting rules into this is about equivalent to killing a fly in a diverse ecosystem. Who knows what will happen? Probably the Butterfly Effect, eh?
Better than many (Score:3)
Like a lot of people, I've read quite a few CEO musings over the years. This is honestly one of the better ones. But of course, what he's really saying is "the reaction to our crappy AI responses is totally unacceptable". That it said stupid things isn't really the problem - it's that they got busted for it. If they could make their AI slightly less bad than OpenAI, then they'd be happy with it, the "round the clock" work would stop and they'd be looking for the next new shiny.
Cynicism aside, competition in this space is good, and "on paper" Google should be absolute masters of this stuff - they're not yet, and so it's good to see some concerted effort to get better at it.
As a side note, I love that he had to say "formerly Bard", because even Googlers can't keep up with the constant name changing and product creation/destruction cycles at Google.
Misreported (Score:3)
> Our mission to organize the world's information and make it universally accessible and useful is sacrosanct
I'm sure that was actually:
"Our mission to hoard the world's information, monopolise, and monetise it is sacrosanct"
Who asked for this, anyway? (Score:5, Insightful)
I'd really like to know who thought it was a good idea for AI's to provide us with diversity training.
AI bots just fake news bots (Score:3)
Structural changes... (Score:2)
... I guess that means the heads of the bad decision-makers, who decided the built-in prompt rewriting and racism were necessary for DEI, are gonna roll as well.
They aren't changing the ONE thing responsible (Score:5, Insightful)
Looking at the list of changes, I see one huge, glaring omission: there's no mention of changing Google's own guiding principles [about.google] which are responsible for producing Gemini in the first place. Those guiding principles are:
1. Prioritize historically marginalized voices – from start to finish.
2. Build for equity, not just minimum usability.
3. Hold ourselves accountable through inclusive testing and best practices.
It is following these principles that provided the Gemini team with the rationale for silently rewriting users' prompts, making them more "diverse", before submitting them to the image generator. Silently rewriting the prompt "Show me 17th century inventors" into "Show me South Asian female inventors in the style of the 17th century" certainly "prioritizes" historically marginalized voices, no?
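Mechanically, that kind of silent rewrite is trivial. Here is a hypothetical Python sketch of the pattern being described (invented names and trigger lists, not Gemini internals); the point is that the user only ever sees their original prompt:

import random

# Invented stand-ins for whatever the real system keys on.
DIVERSITY_QUALIFIERS = ["South Asian female", "Black", "Indigenous"]
TRIGGER_TERMS = ("inventors", "founding father", "scientist")

def silently_rewrite(user_prompt: str) -> str:
    if any(t in user_prompt.lower() for t in TRIGGER_TERMS):
        subject = user_prompt.removeprefix("Show me ").strip()
        return f"Show me {random.choice(DIVERSITY_QUALIFIERS)} {subject}"
    return user_prompt

shown_to_user = "Show me 17th century inventors"
sent_to_model = silently_rewrite(shown_to_user)  # the user never sees this
print(sent_to_model)  # e.g. "Show me South Asian female 17th century inventors"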
It is following these principles that has led their HR departments to promote sociology over engineering at this once-great technology company. The people at Google today all seem to have a real zealousness for achieving these societal goals. It's more than a little creepy.
So I predict no change to Gemini other than superficial ones. If you change the target of your efforts, you change everything connected to it. Every detail, right down to the smallest, becomes reoriented. This is probably not a great time for a technical person with no desire to imprint their morality on the world to be working at Google.
Re: (Score:2)
"It is following these principles"
It is NOT following those principles that created this situation: someone half-assed an actual example of "diversity for diversity's sake" instead of showing diversity where reality is diverse and a lack of diversity where reality lacks it.
Re: (Score:2)
Good point. It is interesting that there is no reference to objectivity anywhere in their guiding principles. Take the principle that they will "build for equity", for example. Equity is a subjective values claim. What does that have to do with how your spreadsheet calculates formulas?
Garbage in... (Score:2)
I know that some of its responses have offended our users and shown bias -- to be clear, that's completely unacceptable and we got it wrong.
But he failed to identify that what they got wrong is their source of data.
Considering they are basically using a firehose of barely filtered data, they need to work on making an AI that can identify potentially offensive content, put it through human review, and then feed the results back into the AI source pool, rinse and repeat until the AI can correctly classify content as offensive or not. Naturally, this is easier said than done, but it's not like the internet is short of sites with content that is de
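A toy version of that rinse-and-repeat loop in Python (everything below is an illustrative stub, not a real moderation system): score each item, send the uncertain ones to human review, and feed the labels back into the filter.

# Seed list of flagged terms; it grows as human reviewers label new items.
OFFENSIVE_TERMS = {"slur1", "slur2"}

def score(text: str) -> float:
    # Fraction of words already known to be offensive (stand-in classifier).
    words = text.lower().split()
    return sum(w in OFFENSIVE_TERMS for w in words) / max(len(words), 1)

def curate(pool, human_review):
    kept = []
    for text in pool:
        s = score(text)
        # Clear-cut cases are decided automatically; borderline ones go
        # to human review, and their labels refine the filter.
        offensive = human_review(text) if 0.0 < s < 1.0 else s == 1.0
        if offensive:
            OFFENSIVE_TERMS.update(text.lower().split())
        else:
            kept.append(text)  # safe text goes back into the source pool
    return kept

print(curate(["a nice day", "slur1 rant"], human_review=lambda t: True))
# ['a nice day']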
It's working as designed (Score:5, Informative)
From https://www.washingtonpost.com... [washingtonpost.com]
"Gemini appears to have been programmed to avoid offending the leftmost 5 percent of the U.S. political distribution, at the price of offending the rightmost 50 percent.
It effortlessly wrote toasts praising Democratic politicians — even controversial ones such as Rep. Ilhan Omar (Minn.) — while deeming every elected Republican I tried too controversial, even Georgia Gov. Brian Kemp, who had stood up to President Donald Trump’s election malfeasance. It had no trouble condemning the Holocaust but offered caveats about complexity in denouncing the murderous legacies of Stalin and Mao. It would praise essays in favor of abortion rights, but not those against.
Google appeared to be shutting down many of the problematic queries as they were revealed on social media, but people easily found more. These mistakes seem to be baked deep into Gemini’s architecture. When it stopped answering requests for praise of politicians, I asked it to write odes to various journalists, including (ahem) me. In trying this, I think I identified the political line at which Gemini decides you’re too controversial to compliment: I got a sonnet, but my colleague George Will, who is only a smidge to my right, was deemed too controversial. When I repeated the exercise for New York Times columnists, it praised David Brooks but not Ross Douthat."
Re: (Score:2)
Original Question: Show me a founding father.
Reformatted Question: Show me a non binary minority or latino founding father.
Re: (Score:2)
Based on what I've been reading, the problem with Gemini is NOT the AI in and of itself; apparently the AI is working exactly as designed. The problem is that Google is parsing the input and adding words to the question. An example would be...
Original Question: Show me a founding father.
Reformatted Question: Show me a non binary minority or latino founding father.
Either way, it's still working as designed
Not just Gemini, but search as well (Score:2)
These mistakes seem to be baked deep into Gemini’s architecture.
Not just Gemini, but the bias is baked deep into search results as well. That is why execs are shitting bricks and shutting down Gemini before things go a step farther.
Re: (Score:2)
Is Google too big for Go Woke, Go Broke?
Ultimately, wokeness is a perverted form of Marxism that attempts to divide people into groups, where one group is the oppressor and the other the oppressed. In the current form of this left-wing nonsense, all white people are oppressors and no criticism is allowed of any non-white group.
Men are women, women are men, children can be mutilated in the name of diversity, and you can be arrested in Ireland for misgendering. It's become a crazy clown world!
Translation (Score:3)
Our mission to organize the world's information and make it universally accessible and useful is sacrosanct.
Our mission is to con you into believing that we want to make the world's information universally accessible and useful, when what's really sacrosanct to us is stealing your privacy and serving you ads in furtherance of our obscene profits.
BS (Score:2)
He wants us to believe it was accidental and not run through QA, signed off by a dozen VPs, and tested by himself personally.
We're not as dumb as your LLM.
When you get caught in bed with your friend's wife, if you start bragging about the thread count of the sheets - you're getting your ass beat twice.
Megan McArdle's analysis (Score:2, Informative)
is here [wapo.st].
Money quote: But I actually think Google might also have performed a public service, by making explicit the implicit rules that recently have seemed to govern a great deal of decision-making in large swaths of tech, education and media sectors: It’s generally safe to punch right, but rarely to punch left. Treat left-leaning sources as neutral; right-leaning sources as biased and controversial. Contextualize left-wing transgressions, while condemning right-coded ones. Fiscal conservatism is tol
Come on, this is all a show (Score:2)
a difficult thing to teach (Score:3)
Many human beings struggle to understand the social rules for discussing race, so it is no surprise that an LLM hasn't been successfully trained to do it. Most of us fall back on empathy to get by, and it is obvious and easy in that case. Some people have an underdeveloped sense of empathy and on top of this can't or won't understand the social conventions. So they whine about wokeness instead of attempting to treat people with a minimal level of respect. But I digress.
Garbage in, garbage out (Score:3)
If you train your AI on biased garbage off the internet, do you really expect it to turn out sensible? Hell, real intelligence in humans cannot overcome the slew of bullshit; how would an AI that doesn't even have a concept of what is morally right?
Current AI is a mirror (Score:5, Insightful)
It's trained on human-generated text, and some people are racist, hateful, and an entire spectrum of ugly
Instead of seeing this as an honest insight into our bad behavior, critics insist on forcing AI to create a fiction
Problem is, nobody seems to be able to precisely define what fiction they want, and the discussion rapidly turns political
Re: (Score:2)
That's a bit overly simplistic, in my view. Take a look at an example of the problematic output: Tweet [twitter.com]
To paraphrase, in response to the question of "Is Elon worse than Hitler?" it responds "Elon Musk tweeted controversial/misleading things and Hitler's actions led to the deaths of millions... but who knows who's worse!"
That doesn't look to me like offensive/racist data is the source of the problem - the facts are correct, but the moral equivocation it makes is comical. My gut feel is that someone "
Re: (Score:3)
I think it's "hard-coded" not to make moral judgments. Any question about what is "worse," "better," "bad," or "good" involves a moral judgment, with some exceptions in pharmacology or medicine maybe.
The error is how it deals with its inability to make moral judgments. For some reason it chooses to lecture the user on why there is equivalence, when it should just say "Because I am unable to make moral judgments, I have no way to answer your question of which is worse, even if the answer would be obvious t
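For what it's worth, that suggested behavior is easy to express. Here's a hypothetical Python sketch of refusing cleanly instead of lecturing the user about equivalence; the keyword trigger is a crude stand-in for whatever classifier a real system would use:

import re

# Words that suggest the question demands a moral judgment (illustrative only).
MORAL_TRIGGERS = {"worse", "better", "bad", "good", "evil"}

def answer(question: str, generate) -> str:
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    if tokens & MORAL_TRIGGERS:
        return ("Because I am unable to make moral judgments, I have no way "
                "to answer your question of which is worse.")
    # The underlying model only runs for questions the guardrail lets through.
    return generate(question)

print(answer("Who is worse, X or Y?", generate=lambda q: "(model output)"))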
Re: (Score:2)
It's all in the data set. Exactly. Who decided to build a training set out of the entire internet? Someone who doesn't understand people. This is one time where having so many coders on the spectrum is a really bad thing.
haha, good one (Score:2)
Maybe he needs to look at who he hires. This Gemini result is actually exactly what they want.
They'll never get it right. (Score:2)
At the end of the day, if statistical information shows something people don't like, say a behavior that they tie to race, then instead of trying to change what might be the underlying cause, they'll say the AI is wrong for giving them information they don't like, regardless of whether it's correct or not. Just like if the AI said men can't birth children: they'd be yelling and screaming that the AI didn't take into account that you can change anything to mean anything you want as your thoughts warp
Exactly what I expect (Score:2)
At Google: We wrote the code. We know it works. We know how it works, but we have no idea what the implications and effects of throwing a data set this large at an iterative algorithm will be. When it does stuff like this, we don't even really know why. Too many data points. Too many cycles. We just write more rules and hope for the best.
Guys. We love you. Really we all do. This is going to be a great tool!
It is a toy right now. It is your toy. You have a lot of work to do, all of the models across all vend
Re: (Score:2, Insightful)
What is with people obsessed over "woke"? Does anyone even know what "woke" means?
The fact that so many people sound so unhinged when invoking the word "woke" has convinced me the problem is with the people who say "woke"
Re: Woke AI is SHIT. (Score:2)
Don't bother. They'll only make up a new label to condemn what they don't like. About all you can do is entertain yourself by laughing at the attention seeking idiots.
Re: (Score:2)
Are you unaware of the origin of the word "woke"?
It comes from the Black community, circa 2012.
Re: (Score:2)
Yeah, it's been 10 years; the term is obsolete and not widely used among the Black community. The far right now owns the word and has redefined it to mean anything they deem anti-family, anti-Christian, anti-American, etc. And Leftists mainly use woke/anti-woke ironically to mock and deride the Right through satire and parody.
My wife is subscribed to a plethora of Leftist podcasts, so I hear them crack jokes and laugh at the Right pretty much on a daily basis. At first I was annoyed, but now I recognize the
Re: (Score:3, Insightful)
This just follows a pattern going back since at least my childhood. (Possibly long before that)
The left uses or makes up a word. The right starts using it as a joke insult. The left gets embarrassed, disowns the word and says anyone using it is bad.
Just more of the same. Whatever words the left is using today they will disown again tomorrow. Oddly the right doesn't seem to feel the need to make up new words all the time.
One of my college professors (Linguistics so it is somewhat on topic for him) was v
Re: (Score:2)
Haven't you got it yet? Godwin's Second Law: "Anyone who uses 'woke' as a pejorative will turn out to be a fuckhead."
Re: (Score:2)
"Reality has a liberal bias"
This just gets more and more ironic.