Google CEO: Building AI Responsibly is the Only Race That Really Matters (ft.com) 53
Sundar Pichai, CEO of Google and Alphabet, writing at Financial Times: While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that. At Google, we've been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right. We're approaching this in three ways. First, by boldly pursuing innovations to make AI more helpful to everyone. We're continuing to use AI to significantly improve our products -- from Google Search and Gmail to Android and Maps. These advances mean that drivers across Europe can now find more fuel-efficient routes; tens of thousands of Ukrainian refugees are helped to communicate in their new homes; flood forecasting tools are able to predict floods further in advance. Google DeepMind's work on AlphaFold, in collaboration with the European Molecular Biology Laboratory, resulted in a groundbreaking understanding of over 200mn catalogued proteins known to science, opening up new healthcare possibilities.
Our focus is also on enabling others outside of our company to innovate with AI, whether through our cloud offerings and APIs, or with new initiatives like the Google for Startups Growth program, which supports European entrepreneurs using AI to benefit people's health and wellbeing. We're launching a social innovation fund on AI to help social enterprises solve some of Europe's most pressing challenges. Second, we are making sure we develop and deploy the technology responsibly, reflecting our deep commitment to earning the trust of our users. That's why we published AI principles in 2018, rooted in a belief that AI should be developed to benefit society while avoiding harmful applications. We have many examples of putting those principles into practice, such as building in guardrails to limit misuse of our Universal Translator. This experimental AI video dubbing service helps experts translate a speaker's voice and match their lip movements. It holds enormous potential for increasing learning comprehension but we know the risks it could pose in the hands of bad actors and so have made it accessible to authorised partners only. As AI evolves, so does our approach: this month we announced we'll provide ways to identify when we've used it to generate content in our services.
Nope. It is a CEO lying: The more, the better (Score:5, Insightful)
And this one does its best to contribute.
Technically True, the best kind of True (Score:5, Insightful)
Re:Technically True, the best kind of True (Score:4, Interesting)
Sure. The best liars always put in a bit of truth to make it sound credible, and only ever lie by misdirection and by omission. Direct lying is for cretins who do not think about possible lawsuits and liability. It can still be successful for a while (see Trump, for example), but it comes with really high risks and usually fails long-term at some point. Of course Google will have hired one of the very best liars they could get for that CEO position.
Re: (Score:3)
Re: (Score:2)
You cannot be responsible without also being ethical.
Sure you can. At the risk of Godwinning the discussion, Hitler was responsible for the Holocaust and I hope we can all agree that he was definitely not being ethical.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The thing is that "ethics" is just like "morals": it describes an approach (simplified: "let your actions be guided by principles") but not any values. So a concrete set of ethical (or moral) _values_ can include taking responsibility for your actions, but it can also include the very opposite, or make no statement on the matter at all.
Re: (Score:2)
Re: (Score:2)
Technically, what he said is very likely true, but you have to look at it from his point of view. The CEO is responsible to his shareholders; therefore, developing an AI that earns them money is a responsible thing to do from his perspective. It's not his fault if everyone else interprets "responsible" slightly differently, to mean things like ethical (a word you'll note he carefully did not use).
While this argument gets used a ton, and yes, in this current version of the world it remains true, shouldn't there be a point where a CEO is also responsible for the ethical and moral makeup of the company they are in charge of? And shouldn't there be enough of a conscience among the board members that "profit above all" retains at least a tiny, itty-bitty shred of what we consider ethics and morality? I mean, surely the board members, if given the choice between: A) Make massive profits, then kill all humanity, and B) Make much smaller profits, but benefit humanity on the whole, would choose the latter.
Re: (Score:3)
Ethics and morality? When there is _money_ to be made? You must be insane! People are willing to sell the future of the whole human race for a bit of temporary profit, and you are talking about "ethics and morality".
Not the Choice (Score:2)
I mean, surely the board members, if given the choice between: A) Make massive profits, then kill all humanity, and B) Make much smaller profits, but benefit humanity on the whole
Obviously, no board would choose (A) because, unless we have actually been secretly taken over by aliens, they would be killing themselves. The problem is that option (A) is really "make massive profits in the short term by screwing over some of humanity," and that option is much more attractive to them, since the part of humanity they are screwing over does not include them; they get to make loads of money and then screw over any remaining shareholders once the crap they pulled comes to light. That opti
Re: (Score:3)
The responsibility will get pushed off to the organization that's denying you a home loan or employment with the AI. (Hello social credit score.)
That's what they're banking on anyway. Legal risks - should there be any - will get assumed by the smaller companies. If the risks ever do make their way up to Google, they are in the too big to fail category, an exception will be made.
Re: (Score:3)
Translation: We're wayyy behind in this race so let's try to make the others look evil.
Re: (Score:2)
Indeed. We were asleep at the wheel so let's slow the others down.
Re: (Score:3)
Getting Dizzy (Score:1)
At Google, we've been bringing AI into our products and services for over a decade
Yeah, we've had it all along.
Talk about spin!
Re: (Score:2)
Re: (Score:2)
What bias?
The AI didn't invent any bias of its own. The AI was trained on data provided by the human race, all it did was bring human bias into sharper focus.
Makes sense for a CEO to say that (Score:5, Funny)
An ethical, fair AI that doesn't behave like a psychopath with zero remorse, conscience or a feeling of responsibility is in the interest of every CEO.
It's basically the only thing that could keep an AI from being a perfect replacement for them.
Re: Makes sense for a CEO to say that (Score:2)
Re: (Score:2)
You are aware who'd have to make that decision, yes?
Re: Makes sense for a CEO to say that (Score:2)
Ultimately, the market.
Which is why the market must be interdicted at all costs, from Pichai's POV.
Re:Makes sense for a CEO to say that (Score:4, Interesting)
Problem is that this is not what they mean. What they mean is a method for censoring LLM in a very specific way, where instead of learning from what is, they would learn to ignore some aspects of reality, while insisting that aspects of reality that don't exist are real. In line with political interests of the day.
We've already seen early attempts at this with crude woke censorship of major LLMs. It generally seems to fail at this stage, because we don't actually understand how LLMs generate the responses that they do. So the best we can do is build a vector-based database on top of the LLM to search for certain key phrases in generated responses and change those to something else. These are currently quite easy to subvert by extending queries so the LLM can route around them, with things like rewording, telling the LLM to consider itself something else, and so on.
Re: (Score:2)
Problem is that this is not what they mean. What they mean is a method for censoring LLM in a very specific way, where instead of learning from what is, they would learn to ignore some aspects of reality, while insisting that aspects of reality that don't exist are real. In line with political interests of the day.
Yep. The AI is only mirroring the bias of the human race.
The only way to fix it is to fix the humans. Meddling with the AI model will only lead to instability and grief in the long term.
Re: (Score:2)
Yep. The AI is only mirroring the bias of the human race.
The only way to fix it is to fix the humans. Meddling with the AI model will only lead to instability and grief in the long term.
Prejudice is useful. Personal sensibilities and ideology govern how that usefulness is weighed against fairness.
Machines are quite capable of creating bias from unbiased data.
Re: (Score:2)
Well played.
it reads like (Score:2)
Re: (Score:2)
A sales pitch from somebody who knows the competition is ahead of them.
One Race the AI Race (Score:2)
So Google has gone from "One Race, the Human Race" to "One Race, the AI Race" openly. Seems fitting for them now.
Re: (Score:1)
Google wants an ad spamming AI (Score:1)
This is why they're interested in (Score:2)
self-driving cars (a captive audience) and writing web standards for their spyware near-monopoly browser: web schema 3 = no more ad blocking. Only Firefox has publicly made this optional.
Hopeless (Score:2)
BULLSHIT. (Score:3)
The only thing Sundar cares about is whether it's profitable. He will blow soooo much smoke up your ass talking about ethics and responsibility, but remember: this is the company that fired its top AI ethicists because they got in the way of profits.
Right, Pichai (Score:2)
Responsible not so much (Score:4, Insightful)
Google AI development is so responsible that they had to block Bard in the EU because it doesn't respect data privacy laws.
Re: (Score:2)
Well, I guess that is more responsible than violating their laws.
Re: (Score:2)
I believe what he means... (Score:3)
Mmmm (Score:3)
"Building AI Responsibly is the Only Race That Really Matters "
Building a responsible AI would even be better.
Caught with pants down on AI fad (Score:2)
This message isn't for you and me, joe.
We need to watch him and regulate his actions, and not let him blame it on "AI". And all others who would t
He is a human after all. (Score:2)
Justify It All You Want (Score:3)
"We're behind the game and losing in the AI race, so we're gonna make statements like this to justify why we're losing this race despite our market dominance in technology."