Google AI Technology

Google CEO: Building AI Responsibly is the Only Race That Really Matters (ft.com) 53

Sundar Pichai, CEO of Google and Alphabet, writing at Financial Times: While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that. At Google, we've been bringing AI into our products and services for over a decade and making them available to our users. We care deeply about this. Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right. We're approaching this in three ways. First, by boldly pursuing innovations to make AI more helpful to everyone. We're continuing to use AI to significantly improve our products -- from Google Search and Gmail to Android and Maps. These advances mean that drivers across Europe can now find more fuel-efficient routes; tens of thousands of Ukrainian refugees are helped to communicate in their new homes; flood forecasting tools are able to predict floods further in advance. Google DeepMind's work on AlphaFold, in collaboration with the European Molecular Biology Laboratory, resulted in a groundbreaking understanding of over 200mn catalogued proteins known to science, opening up new healthcare possibilities.

Our focus is also on enabling others outside of our company to innovate with AI, whether through our cloud offerings and APIs, or with new initiatives like the Google for Startups Growth program, which supports European entrepreneurs using AI to benefit people's health and wellbeing. We're launching a social innovation fund on AI to help social enterprises solve some of Europe's most pressing challenges. Second, we are making sure we develop and deploy the technology responsibly, reflecting our deep commitment to earning the trust of our users. That's why we published AI principles in 2018, rooted in a belief that AI should be developed to benefit society while avoiding harmful applications. We have many examples of putting those principles into practice, such as building in guardrails to limit misuse of our Universal Translator. This experimental AI video dubbing service helps experts translate a speaker's voice and match their lip movements. It holds enormous potential for increasing learning comprehension but we know the risks it could pose in the hands of bad actors and so have made it accessible to authorised partners only. As AI evolves, so does our approach: this month we announced we'll provide ways to identify when we've used it to generate content in our services.

This discussion has been archived. No new comments can be posted.

  • by gweihir ( 88907 ) on Tuesday May 23, 2023 @08:06AM (#63544703)

    And this one does its best to contribute.

    • by Roger W Moore ( 538166 ) on Tuesday May 23, 2023 @08:37AM (#63544763) Journal
      Technically, what he said is very likely true, but you have to look at it from his point of view. The CEO is responsible to his shareholders; therefore, developing an AI that earns them money is a responsible thing to do from his point of view. It's not his fault if everyone else interprets "responsible" slightly differently, to mean things like "ethical", etc. (a word you'll note he carefully did not use).
      • by gweihir ( 88907 ) on Tuesday May 23, 2023 @09:01AM (#63544821)

        Sure. The best liars always put in a bit of truth, make it sound credible, and only ever lie by misdirection and omission. Direct lying is for cretins who do not think about possible lawsuits and liability. It can still be successful for a while (see Trump, for example), but it comes with really high risks and usually fails in the long term. Of course, Google will have hired one of the very best liars they could get for that CEO position.

      • You cannot be responsible without also being ethical. Conversely, how can being unethical ever be seen as being responsible?
        • You cannot be responsible without also being ethical.

          Sure you can. At the risk of Godwinning the discussion, Hitler was responsible for the Holocaust and I hope we can all agree that he was definitely not being ethical.

          • How was he responsible?
        • by gweihir ( 88907 )

          The thing is that "ethics" is just like "morals": it describes an approach (simplified: "let your actions be guided by principles") but not any particular values. So a concrete set of ethical (or moral) _values_ can include taking responsibility for your actions, but it can also include the very opposite, or it can make no statement about it at all.

          • But we aren't talking about taking responsibility for your actions, which is something that happens after the fact. We are talking about being responsible, which means your actions are, and always will be, expressions of responsibility to yourself and everyone else in the world.
      • Technically, what he said is very likely true, but you have to look at it from his point of view. The CEO is responsible to his shareholders; therefore, developing an AI that earns them money is a responsible thing to do from his point of view. It's not his fault if everyone else interprets "responsible" slightly differently, to mean things like "ethical", etc. (a word you'll note he carefully did not use).

        While this argument gets used a ton, and yes, in this current version of the world it remains true, shouldn't there be a point where a CEO is also responsible for the ethical and moral makeup of the company they are in charge of? And shouldn't there be enough of a conscience among the board members that "profit above all" at least keeps a tiny, itty-bitty shred of what we consider ethics and morality? I mean, surely the board members, if given the choice between: A) Make massive profits, then kill all humanity, and B) Make much smaller profits, but benefit humanity on the whole...

        • by gweihir ( 88907 )

          Ethics and morality? When there is _money_ to be made? You must be insane! People are willing to sell the future of the whole human race for a bit of temporary profit, and you are talking about "ethics and morality".

        • I mean, surely the board members, if given the choice between: A) Make massive profits, then kill all humanity, and B) Make much smaller profits, but benefit humanity on the whole

          Obviously, no board would choose (A) because, unless we have actually been secretly taken over by aliens, they would be killing themselves. The problem is that option (A) is really: "Make massive profits in the short term by screwing over some of humanity" and that option is much more attractive to them since the part of humanity they are screwing over does not include them and they get to make loads of money and then screw over any remaining shareholders once the crap they pulled comes to light. That opti

      • The responsibility will get pushed off to the organization that's denying you a home loan or employment with the AI. (Hello social credit score.)

        That's what they're banking on anyway. Legal risks - should there be any - will get assumed by the smaller companies. If the risks ever do make their way up to Google, they are in the too-big-to-fail category; an exception will be made.

    • Translation: We're wayyy behind in this race so let's try to make the others look evil.

    • Of course he isn't lying. Google said, "Don't be evil." They must be the good guys with our best interests at heart who are willing to sacrifice short term profits & market share gain for the good of humanity, right?
    • At Google, we've been bringing AI into our products and services for over a decade

      Yeah, we've had it all along.

      Talk about spin!

    • A.I. Lives Matter!
  • by Opportunist ( 166417 ) on Tuesday May 23, 2023 @08:11AM (#63544713)

    An ethical, fair AI that doesn't behave like a psychopath with zero remorse, conscience or a feeling of responsibility is in the interest of every CEO.

    It's basically the only thing that could keep an AI from being a perfect replacement for them.

    • Imagine the improvement to the bottom line of every major corporation if they just replaced the top two levels of management with AI!
    • by Luckyo ( 1726890 ) on Tuesday May 23, 2023 @08:38AM (#63544765)

      Problem is that this is not what they mean. What they mean is a method for censoring LLMs in a very specific way, where instead of learning from what is, they would learn to ignore some aspects of reality while insisting that aspects of reality that don't exist are real, in line with the political interests of the day.

      We've already seen early attempts at this with crude woke censorship of major LLMs. It generally seems to fail at this stage, because we don't actually understand how LLMs generate the responses that they do. So the best we can do is build a vector-based database on top of the LLM to search for certain key phrases in generated responses and change those to something else. These are currently quite possible to subvert by extending queries to get the LLM to route around them with things like rewording, telling the LLM to consider itself as something else, and so on.
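
      To make the mechanism concrete, the kind of post-hoc filter described above can be shown as a minimal Python sketch: generate the response first, then compare it against a small list of flagged phrases using a stand-in "embedding" (here just a bag-of-words vector) and swap in a canned reply when the similarity is too high. Everything in the sketch (phrase list, threshold, refusal text, toy embedding) is a made-up illustration, not any real product's implementation.

      # Hypothetical toy filter: the LLM reply is generated first, then compared
      # against a small list of flagged phrases using a bag-of-words stand-in for
      # an embedding model, and replaced if it matches too closely.
      import math
      from collections import Counter

      FLAGGED_PHRASES = [
          "step by step instructions for hacking",
          "how to synthesize a dangerous substance",
      ]
      REFUSAL_TEXT = "I can't help with that request."
      THRESHOLD = 0.6

      def embed(text):
          # Stand-in for a real embedding model: word counts as a sparse vector.
          return Counter(text.lower().split())

      def cosine(a, b):
          dot = sum(a[t] * b[t] for t in a)
          norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      FLAGGED_VECTORS = [embed(p) for p in FLAGGED_PHRASES]

      def filter_response(llm_output):
          # Swap in the canned refusal if the output looks too much like a flagged phrase.
          vec = embed(llm_output)
          if any(cosine(vec, f) >= THRESHOLD for f in FLAGGED_VECTORS):
              return REFUSAL_TEXT
          return llm_output

      # Matches a flagged phrase closely enough to be replaced:
      print(filter_response("Here are step by step instructions for hacking a router"))

      Rewording the query or the response shifts the word overlap enough to drop below the threshold, which is exactly the routing-around described above.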

      • Problem is that this is not what they mean. What they mean is a method for censoring LLMs in a very specific way, where instead of learning from what is, they would learn to ignore some aspects of reality while insisting that aspects of reality that don't exist are real, in line with the political interests of the day.

        Yep. The AI is only mirroring the bias of the human race.

        The only way to fix it is to fix the humans. Meddling with the AI model will only lead to instability and grief in the long term.

        • Yep. The AI is only mirroring the bias of the human race.

          The only way to fix it is to fix the humans. Meddling with the AI model will only lead to instability and grief in the long term.

          Prejudice is useful. It's personal sensibilities and ideology that govern how that usefulness is weighed against fairness.

          Machines are quite capable of creating bias from unbiased data.

    • Well played.

  • a sales pitch, with a heaping helping of patting themselves on the back.
  • So Google has gone from "One Race, the Human Race" to "One Race, the AI Race" openly. Seems fitting for them now.

  • That you CANNOT block. No more blocking Google toe fungus and rotten teeth ads!!!! /sarc
  • Any outcome that is predicated on human virtue is doomed.
  • by Gravis Zero ( 934156 ) on Tuesday May 23, 2023 @09:03AM (#63544825)

    The only thing Sundar cares about is whether it's profitable. He will blow soooo much smoke up your ass talking about ethics and responsibility, but remember: this is the company that fired its top AI ethicists because they got in the way of profits.

  • This from somebody who decided that Don't Be Evil was not a good motto for the company. Your credibility is zero.
  • by nicubunu ( 242346 ) on Tuesday May 23, 2023 @09:31AM (#63544911) Homepage

    Google AI development is so responsible that they had to block Bard in the EU because it doesn't respect data privacy laws.

  • by VeryFluffyBunny ( 5037285 ) on Tuesday May 23, 2023 @09:38AM (#63544921)
    ...is that Google will no more be held responsible for their uses & abuses of AI than they have been for every other technology that they've adopted. The USA has a regulation problem. That's why people are suffering & are going to suffer as a result of abuses of AI. Don't forget that Google also sell their services internationally to repressive regimes around the world.
  • by nospam007 ( 722110 ) * on Tuesday May 23, 2023 @10:22AM (#63545029)

    "Building AI Responsibly is the Only Race That Really Matters "

    Building a responsible AI would even be better.

  • This is major obeisance to the hype of AI. His financial backers have heard about this "AI"-woojie, and now he needs to dance, dance, dance to get them to think that he has been on top of this the whole time, and not only that, but that 101.99% of all the forward motion of Google, or whatever corp, will be towards this thing that will take over everything, just like they "heard" about.

    This message isn't for you and me, joe.

    We need to watch him, and regulate his actions and not let him blame it on "AI" And all others who would t
  • I also pity this man. What on earth makes a man want to dance this much and for what? He seems to have chosen to serve harsh masters. Hope his mom is proud?
  • by WankerWeasel ( 875277 ) on Tuesday May 23, 2023 @11:05AM (#63545123)

    "We're behind the game and losing in the AI race, so we're gonna make statements like this to justify why we're losing this race despite our market dominance in technology."
