Google's Sundar Pichai Doesn't Want You To Be Clear-Eyed About AI's Dangers (techcrunch.com)

Alphabet and Google CEO Sundar Pichai is the latest tech-giant kingpin to make a public call for AI to be regulated while simultaneously steering lawmakers toward a diluted enabling framework that puts no hard limits on what can be done with AI technologies. From a report: In an op-ed published in today's Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk to humanity of not letting technologists get on with business as usual and apply AI at population scale -- with the Google chief claiming: "AI has the potential to improve billions of lives, and the biggest risk may be failing to do so" -- thereby seeking to frame 'no hard limits' as actually the safest option for humanity. Simultaneously, the pitch downplays any negatives that might cloud the greater good Pichai implies AI will unlock, presenting "potential negative consequences" as simply the inevitable and necessary price of technological progress. The leading suggestion is that it's all about managing the level of risk, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.
  • https://en.wikipedia.org/wiki/... [wikipedia.org] There's nothing else that can describe him in the position he is in.
    • by mrops ( 927562 )

      It's not like the turtle can't do anything from the post; he can very well shit from the post.

  • What am I missing? Isn't modern AI largely just pattern recognition? Is there anything I should be scared of that we don't have existing laws for?

    Are we talking AI like what I've played with using TensorFlow...or does Google have some new capability that none of us know about?

    Sure...an AI bot scans my X-Ray and misses a tumor a radiologist would have found...there's already liability laws for that...same for when self-driving cars kill someone. I don't see what regulation loopholes they have for pu
    • Re: (Score:2, Informative)

      The thing is...the current AI techniques aren't even really new. It is just that there is more data available now in proper datasets and computing power is more accessible and powerful. The infrastructure around it has gotten a lot better. We just started renaming anything with an algorithm that is running against a trained data set "AI".
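      To make the point concrete, here is a hedged, toy sketch of what "an algorithm running against a trained data set" often amounts to: a nearest-centroid classifier that "learns" centroids from labeled points and then labels new ones. All data and names here are illustrative, not from any real system.

```python
# Toy nearest-centroid classifier: "training" is just averaging,
# "prediction" is just distance comparison -- pattern recognition.

def train(samples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist2(centroids[label]))

data = [([1.0, 1.0], "a"), ([1.2, 0.9], "a"),
        ([5.0, 5.0], "b"), ([4.8, 5.2], "b")]
model = train(data)
print(predict(model, [1.1, 1.0]))  # -> a
print(predict(model, [5.1, 4.9]))  # -> b
```

      Nothing here is "intelligent" in any folk sense; scaling the same idea up to millions of weights does not change its character, only its capacity.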

      • We are nowhere near the point of actual intelligence. The only real worry is whether what we have can be applied to put people out of jobs.

      • by HiThere ( 15173 )

        Intelligence is poorly defined. At one point someone defined it as "things we can do, but don't know how we can do them". This was back around the time when chess-playing programs were thought of as intelligent. And they were.

        Now it sounds as if you have decided that driving a car isn't intelligent. This sounds like you've modified the definition into "things we can do, but don't expect to know how we can do them". But I might be misreading the thrust of your argument, because I don't really understand

    • If you wouldn't want Hitler to have a technology, be concerned. Because eventually the equivalent of Hitler will have it.
    • What am I missing?

      That's easy: ignorance. The term "artificial intelligence" is a lot scarier when your only understanding of computers is gleaned from watching the occasional SciFi show where some artificial intelligent robot or computer is bent on world domination.

      As you say, "artificial intelligence" is just pattern recognition, and it is no scarier than any other computer algorithm. Of course, that being said, any computer algorithm can either cause harm or benefit to society depending on how it is used and deployed an

      • by Layzej ( 1976930 )

        What am I missing? Isn't modern AI largely just pattern recognition? Is there anything I should be scared of that we don't have existing laws for?

        Misuse of the technology in areas of public surveillance comes to mind. Something Google is likely keen to exploit.

      • by AK Marc ( 707885 )
        When the definition of AI is "anything you can do by brute force, but done faster than brute force", then we have AI but don't need it. The Turing test isn't satisfied by a brute-force answer, which is why it's a good test of AI.

        Chess AI doesn't do anything brute force can't, but it does so significantly more efficiently than brute force. Most machine learning is also like that: its uses could be replaced by making brute-force correlation checks against exhau
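        As a rough, toy illustration of that brute-force framing: an exhaustive correlation scan over every candidate feature surfaces the same "signal" a simple learner would, just by checking everything. The data and feature names below are made up for the sketch.

```python
# Brute-force scan: compute Pearson correlation of every feature
# against the outcome and keep the strongest one.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Three candidate features and an outcome column (toy data).
features = {
    "noise_a": [0.2, 0.9, 0.4, 0.7, 0.1],
    "signal":  [1.0, 2.0, 3.0, 4.0, 5.0],
    "noise_b": [0.5, 0.4, 0.6, 0.5, 0.4],
}
outcome = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly 2 * signal

best = max(features, key=lambda k: abs(pearson(features[k], outcome)))
print(best)  # -> signal
```

        The difference with trained models is efficiency and generalization at scale, not a different kind of insight.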
    • by Rick Schumann ( 4662797 ) on Monday January 20, 2020 @02:11PM (#59638272) Journal
      The danger of the so-called 'AI' crapware they keep trotting out is people trusting too much the data, results, and actions that are its output.
      There is an entire mythology built up around these computer algorithms that are inappropriately termed 'artificial intelligence', a mythology that goes back more than 100 years, that started in books and movies, and later in television and popular culture. People hear 'artificial intelligence' and they think there's something, someone, 'alive' in that box, something that has actual thoughts, is conscious, and so on. Nothing could be farther from the truth. Companies like Google/Alphabet have so much money invested in things like so-called 'AI' that they'll do or say just about anything to see it proliferate; otherwise they'll never make enough profit off it to keep their heads off the stockholders' chopping block.
      But if it's allowed to proliferate, people will trust it way, way too much, thinking that it's somehow infallible; nothing could be farther from the truth. Because it can't 'think', because it's not 'conscious', because there is no one 'alive' in that box, you really can't trust it. You can't question it when what it does or what it tells you doesn't make sense. If it's in control of a vehicle (as in so-called 'self-driving vehicles'), you won't even have time to hit the 'Emergency Stop' button before it causes some sort of tragedy. If its medical diagnosis is wrong but the human doctors just accept it anyway, your life could be adversely affected (or you might die). If you believe its investment 'advice', you might lose everything (whereas a human investment advisor would have told you something different). If police believe its Minority Report-esque [imdb.com] finger-pointing, innocent people might be arrested and convicted (and the guilty get away with their crimes).
      We are poised at a crossroads of sorts, where this half-assed software they call 'AI' could get into way too many things, and the damage it could cause would be devastating. That's why it needs to be regulated now, and why companies like Google/Alphabet and others need to be told 'no'.
    • by Kjella ( 173770 ) on Monday January 20, 2020 @02:34PM (#59638378) Homepage

      What am I missing? Isn't modern AI largely just pattern recognition? Is there anything I should be scared of that we don't have existing laws for?

      When it also gets tied into decision-making systems, it also becomes a heartless manipulator. Take one of those "freemium" games: the AI won't care that it's trying to get you addicted; in fact, the goal function will try to mindfuck you through rewards and punishments to get you addicted, because that makes the most money. For sure that's what they do already, but having a personalized devil pushing and pulling your strings, experimenting on the users like lab rats, won't make it any better.

      So you might say, well, let's not play then. But what about when your employer starts using AI to predict who should be hired and get raises, promotions, and bonuses? What happens when your insurance company starts using AI to tailor rates? The current methodology is mainly to throw all the data in there and let the computer work out patterns: millions of weights that can't be explained in human terms, and you don't really know what aspect of your life/work it'll latch on to. Maybe it'll figure out this guy is a stayer, we don't need to give him a raise, but that guy is a flight risk so let's bump him - maybe independently of work performance.

      It'll also have no problems throwing you under the bus for relatively arbitrary reasons, like this person has a name associated with lower socio-economic classes who have an increased risk of not paying their loans so application denied or bump the risk premium. It doesn't care if there's a feedback loop so it can take a butterfly flapping and turn it into a storm. It all has a very strong aura of precrime, the computer will find you a high risk of something and then get rid of that risk without any real recourse on your end. It's not going to be pretty working for an AI.

      • People worry that AI will one day possess general intelligence -- be as smart as we are. But long before that, it has already become quite powerful.

        Your bank loan application and insurance quote are already processed by an "AI". A pretty dumb, rule-based one, but not a human.

        Soon we will have semi-intelligent software that can make quite important decisions. And people are lazy and like objectivity -- "Computer says no". If a USA immigration computer does not like you, good luck trying to have a human l

      • When it also gets tied into the decision making systems it also becomes a heartless manipulator. But what about when your employer starts using AI to predict who should be hired and get raises, promotions and bonuses? What happens when your insurance company starts using AI to tailor rates? The current methodology is mainly to throw all the data in there and let the computer work out patterns, millions of weights that can't be explained in human terms and you don't really know what aspect of your life/work it'll latch on to. Maybe it'll figure out this guy is a stayer, we don't need to give him a raise but that guy is a flight risk so let's bump him - maybe independently of work performance.

        Don't we already have laws for this? These are all terrible things, but if an insurance company or HR system discriminates against you unfairly, that is illegal. I only know of AI/ML as a tool... and really just one for classification. Everything I fear it doing is already illegal, and if it were discovered to be making poor decisions, it would run afoul of existing legislation and regulations. I don't want an AI discriminating against me unfairly, nor do I want a human doing the same things. I can see requiring

        • by ceoyoyo ( 59147 )

          What is unfair discrimination? If I decide you should have a high car insurance rate because you're young and young people get in a lot of crashes, is that unfair? How about if it's because you're male? Black? Short?

          What if it's because an algorithm decides based on one thousand, four hundred and thirty six interacting characteristics that are known to predict your personalized risk of being in a car crash?

          You're absolutely right, it has nothing to do with AI. But AI does make the question more complicated,

    • by ceoyoyo ( 59147 )

      He's talking about AI, as you describe. The big name application is facial recognition, but there are lots of other things too. Crime prediction algorithms. Recidivism risk assessments. Insurance and actuarial algorithms.

      The solution isn't regulating "AI", it's regulating these applications, no matter how they are accomplished.

    • Machine learning is used heavily by Chinese and American Social Credit scoring agencies.

  • Did he steal them from the jailer in Shawshank Redemption?
  • by Gravis Zero ( 934156 ) on Monday January 20, 2020 @12:37PM (#59637876)

    One thing that should be made clear is that corporations should be held liable for the actions of their machinations. That is to say, if it breaks the law (even and especially inadvertently), then the corporation is held liable just as if it were an employee instructed by management. This means that if your machination creates a red-lining scheme, the corporation faces the same punishment as if it were a completely and entirely willful action. I say this because if we do not do this, then "oh, we didn't mean to, the AI did it" will be a cop-out for all their crimes when they knowingly feed a neural network data crafted for the desired outcome.

    It's real simple: if an "AI" does it, then the corporation fully intended to do it and cannot deny responsibility.

  • AI is a technology that requires a significant amount of computing power, and a lot of infrastructure to enable whatever functionality it's used to improve. Facial recognition requires a grid of surveillance cameras. Other applications require large arrays of appropriate sensors (microphones etc), and vast feeds of YOUR data.

    Those are all expensive infrastructure, built and owned by big business and big government. Will it be controlled by the average citizen? Will it benefit the average citizen? Hell no! I

  • by sinij ( 911942 ) on Monday January 20, 2020 @12:42PM (#59637896)
    While Pichai is essentially correct in his "the inevitable and necessary price of technological progress" point, he should clarify that such progress does not necessarily depend on the survival of our species.
  • Artificial intelligence at a fundamental level does not pose any particular new dangers beyond those that natural intelligence already does.

    Human beings are quite capable of doing positively atrocious and destructive things all on their own. The only significant difference that I can think of is that of scale. AI can tirelessly perform tasks which are automated, while humans may get bored or tired of the repetition, which may demotivate them from continuing.

    One argument that I categorically do *NOT* buy is tha

    • Artificial intelligence at a fundamental level does not pose any particular new dangers beyond those that natural intelligence already does.

      AI fundamentally changes the cost of labor. AGI fundamentally changes everything.

      why would it have any different values than humans do when, like any one of us, it is trained by the experiences it has in this world and in a human society, which could conceivably lend itself to the emergence of a notion of a "conscience", which may guide their direction and motives, again, just as any human being may experience.

      Human minds are not featureless gate arrays. Half of our DNA is dedicated to structure of mind. Human behavior is not governed only by environment but from specific physical structure and genetic memory accumulated over billions of years of evolution.

      • by mark-t ( 151149 )
        Human minds are not featureless gate arrays. Half of our DNA is dedicated to structure of mind. Human behavior is not governed only by environment but from specific physical structure and genetic memory accumulated over billions of years of evolution.

        Considering that we don't actually really know what consciousness is yet, I'm going to call that speculative. There is nothing "magic" about natural intelligence, so there should be no fundamental reason it cannot be simulated entirely within a sufficiently com
        • Considering that we don't actually really know what consciousness is yet, I'm going to call that speculative.

          One constant you can always count on is people taking your remarks out of context. What I was responding to was the following assertion:

          "why would it have any different values than humans do when. like any one of us, it is trained by the experiences it has in this world and in a human society, which could conceivably lend itself to the emergence of a notion of a "conscience", which may guide their direction and motives, again, just as any human being may experience."

          I was answering the question by pointing

      • AI fundamentally changes the cost of labor.

        So do power drills, steam engines, and the wooden shipping pallet.

        AGI fundamentally changes everything.

        General intelligence is general. It's just more people.

        Human minds are not featureless gate arrays. Half of our DNA is dedicated to structure of mind. Human behavior is not governed only by environment but from specific physical structure and genetic memory accumulated over billions of years of evolution.

        General intelligence needs to be non-specific. It can't be an engineering or architecture or writing tool that does a single thing; it has to be able to discover new ideas on its own. To do anything on a human level of thinking, it has to be able to think on a human level--that is: it can't be something akin to a chimpanzee, but rather must be able to run abstract analytical considerations unrelated to any particular task.

        • So do power drills, steam engines, and the wooden shipping pallet.

          This is what I deserve for failing to be specific.

          General intelligence is general. It's just more people.

          What it does is close the loop, giving "dead labor" immortality.

          General intelligence needs to be non-specific. It can't be an engineering or architecture or writing tool that does a single thing; it has to be able to discover new ideas on its own. To do anything on a human level of thinking, it has to be able to think on a human level--that is: it can't be something akin to a chimpanzee, but rather must be able to run abstract analytical considerations unrelated to any particular task.

          You are anthropomorphizing featureless gate arrays.

            The human mind is quite highly structured. Assuming some futuristic featureless array of gates is able to do all of the above, I see no reason to assume the sensibilities and outcomes of such devices would be at all similar to the human mind. Some of human character is evolutionary in nature, not exclusively a result of self-discovery nor environmental

          • The human mind is quite highly structured.

            The human brain is a chimpanzee brain that traded off enormously-powerful short-term memory for enormously-powerful long-term memory.

            Also, Hopfield networks tend to behave like human long-term memory. In 2004, someone managed to use fMRI to identify connections between neurons involved in human memory by...predicting the connections that would be used in a Hopfield network. It worked. Turns out some AI programmer had inadvertently figured out how an entire component of the human brain physically works.
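            For readers unfamiliar with the term: a Hopfield network is a content-addressable memory, where a noisy or partial cue settles back to the nearest stored pattern. A minimal pure-Python sketch of that behavior (toy example, not the method of the fMRI study mentioned above):

```python
# Toy Hopfield network: store a +/-1 pattern via Hebbian weights,
# then recover it from a corrupted cue by repeated sign updates.

def train(patterns):
    """Hebbian weights: w[i][j] = sum over patterns of p[i]*p[j], no self-connections."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=10):
    """Synchronously update each unit to the sign of its weighted input."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

stored = [1, -1, 1, -1, 1, -1]
w = train([stored])
noisy = [1, -1, 1, -1, -1, -1]      # one unit flipped
print(recall(w, noisy) == stored)   # noisy cue recovers the stored pattern
```

            The "attractor" dynamic shown here, where partial input pulls the state toward a complete stored memory, is the property the comment likens to human long-term recall.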

            I see no reason to assume sensibilities and outcomes of such devices would be at all similar to human mind

            Because genera

    • by AK Marc ( 707885 )

      Artificial intelligence at a fundamental level does not pose any particlular new dangers than natural intelligence already does.

      NI comes with billions of years of evolution imprinting genetic predispositions to self-preservation and socialization. AI comes with no possible conscience, unless one is "implanted".

      AI doesn't require feelings at all. NI does, in almost all cases.

    • I reject the distinction "artificial intelligence". There is only intelligence, which denotes an actor's ability to manipulate its environment to achieve its goals. The actor may be biological. Or it may not.

      Some biological intelligence is innate. We call that "instincts". Newborn mammals "know" how to nurse when they get to the teat. Soon after, they learn that crying for attention will get them stress relief. Still later, they learn what actions produce other desired responses in their environment. Eventu

  • Google can say what they want. No limits on AI? Ok, let the defense spend all their money on AI, and let the killing commence. No? Oh so there are limits.
  • Big companies like Alphabet/Google love loose regulation that is written to cater to their business model because they have the resources to deal with it and aren't really limited by it. It reduces the risk of being blindsided by a faster moving startup. The technical term for a situation where a regulating agency ends up doing the bidding of the entities which it's supposed to regulate in the public interest is "regulatory capture" [wikipedia.org].
  • by OldMugwump ( 4760237 ) on Monday January 20, 2020 @12:47PM (#59637920) Homepage
    Sounds like a classic anti-competitive move. Get the government to kneecap your competitors in the name of "the public interest".

    Google can afford the costs of any imaginable regulatory bureaucracy - lawyers filing for licenses and permissions, constant interaction with regulators, etc.

    Small startups attempting to compete with Google can't afford these things. So regulation shuts out the competition - at a cost, but a cost that only the largest players can afford.
  • by barius ( 1224526 ) on Monday January 20, 2020 @01:49PM (#59638174)
    Massive corporations buying regulations that add a veneer of legitimacy to their incredibly anti-humanist implementations is as old a trick as there is. I can't wait for the politicians to scramble over each other to write it into law, for the usual fee.
  • Although the tendency was already there, Pichai has enthusiastically been building up momentum for it. Google is rapidly becoming just as despicable as Microsoft.
