Google Cofounder Sergey Brin Warns of AI's Dark Side (wired.com) 79
Google co-founder Sergey Brin has warned that the current boom in artificial intelligence has created a "technology renaissance" that contains many potential threats. In the company's annual Founders' Letter, the Alphabet president struck a note of caution. "The new spring in artificial intelligence is the most significant development in computing in my lifetime," writes Brin. "Every month, there are stunning new applications and transformative new techniques." But, he adds, "such powerful tools also bring with them new questions and responsibilities." From a report: When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was "a forgotten footnote in computer science." Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.
Brin nods to the gains in computing power that have made this possible. He says the custom AI chip running inside some Google servers is more than a million times more powerful than the Pentium II chips in Google's first servers. In a flash of math humor, he says that Google's quantum computing chips might one day offer jumps in speed over existing computers that can only be described with the number that gave Google its name: a googol, or a 1 followed by 100 zeroes.
As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. "Such powerful tools also bring with them new questions and responsibilities," he writes. AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says -- a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from "fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars," Brin writes.
Re:Weird Al (Score:5, Insightful)
Re: Weird Al (Score:1)
It's most likely a disclaimer, so they can say when things turn to shit 'See, I told you so.'
Re: (Score:1)
Said people have a lot of the AI technology all locked up in patents they own or control. If we 'slow down the research' a lot, they can sit on their portfolio and rake in the cash.
Re: (Score:2)
Has anyone else noticed that the people that are touting the ills of AI are the same ones that are using AI to take your money?
I would reword that.
Has anyone else noticed that the people who work with AI directly have some concerns, but the media blows them into a full-blown apocalypse, which trivializes the concerns being raised?
It is almost like the slide is being greased before we are pushed down.
Re: (Score:2)
less of a warning (Score:1)
More like a virtue signal. A warning would come with specifics of things to look out for, details about what needs to be done to prevent bad stuff within his own company, etc.
He's probably betting on it (Score:1)
Shifty that one.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
While they are interesting stories, actually putting them into practice and coding them would be nightmarishly hard. We as human beings can deal with it because we can see the metaphorical grey; an AI only sees as well as it was coded, and inputs are simplified more and more so that it knows what to do if X, Y, and Z are seen (a vastly simplified version of an AI driving a car). I don't think AI is anywhere near human-level consciousness, but trends in AI car driving and use in math and money make it forefront to
Re: (Score:2)
Given that almost all the stories featuring those laws were about exceptions to or problems with the laws, I got the impression that the overarching message was that simplistic expression of morality and/or ethics is _hard_ and likely to cause more problems than it solves.
It's been a long while since I read Asimov, so I may be adding later commentary to my recollection.
Just as 1984 was supposed to be a warning, not a blueprint (joking) so too Asimov's laws (more seriously).
Today's AI is not magic (Score:1)
" loosely inspired by studies of the brain"
Understatement of the year. There are many programmers today (and a lot more non-programmers) who think that neural nets are "smart" or have "thought processes" or are even conscious. Most neural nets are just a cascading if-then directed acyclic graph with a weight assigned to each node. It's a deterministic feature finder. We haven't figured out "thought" processes for computers, and our neurons operate in a much more complicated way than backprop neural nets do.
Re: (Score:2)
Most neural nets are just a cascading if-then directed acyclic-graph and a weight assigned to each.
True. But the important point is that they learn those weights on their own, from examples, rather than being explicitly told.
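The "learns those weights on its own, from examples" point can be seen in a minimal sketch, a single perceptron-style neuron trained in plain Python. The data, learning rate, and function names here are invented for illustration; real networks have many layers and use gradient descent, but the principle is the same: the rule is never written down, only the labeled examples are.

```python
def train_neuron(examples, lr=0.1, epochs=100):
    """Learn a weight w and bias b so that (w*x + b > 0) matches each label."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w * x + b > 0 else 0
            error = label - pred          # 0 when the prediction is right
            w += lr * error * x           # nudge the weight toward the answer
            b += lr * error
    return w, b

# Labels follow the rule "x > 5", but that rule appears nowhere in the code:
data = [(1, 0), (2, 0), (4, 0), (6, 1), (8, 1), (9, 1)]
w, b = train_neuron(data)
print(all((1 if w * x + b > 0 else 0) == y for x, y in data))  # True
```

The final `w` and `b` are exactly the "tensor of weights is data" idea from elsewhere in this thread: the program is fixed, and only the numbers change during training.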
Re: (Score:2)
So the neural net seems to be able to learn some of the underlying concepts, and we don't understand the algorithm that was synthesized.
That said, we still fully control these AIs: they don't evolve while solving a task, and they can be saved to disk and restored
Re: (Score:3)
Maybe it would be better to call it automatic programming than artificial intelligence.
A tensor of weights is data, not "programming".
Researchers have been using the term "artificial intelligence" since 1956. They shouldn't have to stop just because Hollywood later started using the term to mean something else.
If you get confused when terms you first heard in movies are used differently in real life, perhaps you should just accept that your opinion doesn't really matter very much. Why should the research community change their terminology to accommodate you?
Re: Today's AI is not magic (Score:2)
In many cases it's very much like magic; that is exactly why there are many in our industry who are raising warnings -- not about what AI is today, but about what it has the capacity to become with ongoing advances in technology.
It's important to recognize that the primary virtue of machine learning is that, in many knowledge domains, it can vastly outperform software hand-crafted by humans.
We may write the 'interface' and we may write the training software but, in many cases we don
Re: (Score:1)
Re: (Score:2)
My layperson notion is that humans have two main modes of doing stuff. There's reason, where we split things into objects and use logic to connect them up, and this is a very powerful intelligence, as it basically sets us apart from other animals. But as those whacky old Greek philosophers knew, there are things which are not thing-enough, so they escape logic. That's the point of Zeno's Paradoxes, I guess.
Meanwhile, there is the rest of the brain, which constructs our perceptions, and it does that in a way
Re: (Score:1)
Natural Intelligence is not magic either (Score:2)
We haven't entirely figured out "thought" processes for natural intelligence, and biological neurons are themselves much more complicated than a 'unit' in a backprop-trained multi-layer perceptron. But in the end, do we know what there is in natural intelligence that can't ever be re-implemented? Is there evidence this is so?
To the contrary, there is increasing evidence that certain behaviors and perceptual capa
Be afraid of natural intelligences! (Score:2)
Not that AI-driven cyborgs will rebel against humans, but that powerful billionaire *humans* -- who have known intelligence and, unlike AI, drive and will -- will create cybernetic slaves who will *never* rebel. These slaves further increase their masters' wealth and power to the harm of the billions of other plebeians on Earth.
The Roman Republic and Empire had this problem -- the slaveowners were so vastly richer than the average person. (Gaius Julius Caesar supported restricting slavery to
Overstating what "AI" can do (Score:5, Insightful)
As always, it's the 80% of the features that take 20% of the work. The remaining 20% is the hard part.
Re:Overstating what "AI" can do (Score:5, Insightful)
It's not really artificial intelligence yet.
Yes it is. When researchers and practitioners say "AI" they don't mean human-level Hollywood AI. Machine learning is a subset of AI.
That makes it a bit smarter than a chimp perhaps, but "intelligent"?
State-of-the-art AI is nowhere near the intelligence of a chimp. Not even close.
Re: (Score:3)
Try to teach a chimp to translate Chinese to English...
Try to teach a computer to peel a banana.
Re: (Score:2)
Try to motivate a computer to want to peel a banana.
Re: (Score:2)
Profit!
Re:Overstating what "AI" can do (Score:5, Insightful)
It's when the accuracy becomes better than a human's that it becomes a problem. Some people, like oncology radiologists, make their living looking at X-ray, MRI, and CAT scans and identifying whether a fuzzy white blob is cancer or not. There were people who made their living creating books for the blind by reciting the words. They lost that living when smartphones and home computers could do it automatically.
Re: (Score:3)
The ORs have it: identify objects OR listen OR translate OR drive...
Yes, one machine might assemble iPhones; another might navigate a vehicle; another might calculate your tax payment - but none will do ALL those things for a very long time. That requires intelligence, among other things like various appendages.
Re: (Score:2)
another might navigate a vehicle; (...) but none will do ALL those things for a very long time.
You think a self-driving car is one AI? I'm guessing it's at a minimum two: one image-recognition AI translating sensor data to objects and one driving AI working out the route. I think both because of resource limitations and to upgrade components we will have sub-AIs that deal with their little specialty, not so different from human brain centers and how, when you learn to ride a bike, it's stored somewhere; you don't figure it out from scratch. Like a chef AI would have a small "fillet a fish" sub-AI and t
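The sub-AI idea above can be sketched as a pipeline of narrow specialists wired together. Everything here is invented for illustration (the components, their inputs, and their outputs are toy stand-ins, not a real driving stack), but it shows the composition: no single piece is "the AI", the system is.

```python
def perception(sensor_frame):
    """Narrow specialist #1: turn raw sensor data into labeled objects."""
    return [obj for obj in sensor_frame if obj in ("car", "pedestrian", "sign")]

def planner(objects, destination):
    """Narrow specialist #2: pick an action given the objects and a goal."""
    if "pedestrian" in objects:
        return "brake"
    return f"drive toward {destination}"

def drive(sensor_frame, destination):
    """The 'whole AI' is just the specialists composed together."""
    return planner(perception(sensor_frame), destination)

print(drive(["car", "tree", "pedestrian"], "home"))  # brake
print(drive(["car", "sign"], "home"))                # drive toward home
```

One practical upside of this structure, as the comment suggests, is that each specialist can be retrained or replaced without touching the others.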
Re: (Score:2)
Agreed. AI used to mean a computer that could "think". At least that's what people have been led to believe. Over the years the term "AI" has been hijacked by companies who are clearly taking advantage of the misconception with advertisements for systems that talk to people about finding viruses and "healing" networks and other such nonsense. These programs are not "thinking" like a person and I don't believe they ever will, simply because they do not have human experiences. They are simply sophisticated al
Re: (Score:2)
Hello, what is your name?
John
Hello John. How are you?
Re: (Score:2)
Lead by example? (Score:4, Insightful)
Watch what we say (Score:1)
Barney Google and Snuffy Smith (Score:2)
Pffft. Everybody knows that Google's name really comes from Barney Google with his goo-goo-googley eyes.
==========
Who's the most important man this country ever knew?
Do you know what politician I have reference to?
Well, it isn't Mr. Bryan, and it isn't Mr. Hughes.
I've got a hunch that to that bunch I'm going to introduce:
(Again you're wrong and to this throng I'm going to Introduce:)
Barney Google, with the goo-goo-googley eyes.
Barney Google bet his horse would win the prize.
When the horses ran that day, Spa
Re: (Score:2)
...or a force for good (Score:1)
Re: (Score:1)
Re: (Score:1)
And a number of corporations, through inaction of various forms, have caused many multiples of people using their products to die.
Re: ...or a force for good (Score:1)
Re: (Score:2)
No mass murder has ever been committed by a corporation.
I guess you have never heard of a city in India called "Bhopal".
Relevant (Score:1)
Re: (Score:2)
How would you prove a natural intelligence generated score was not engaging in illegal generation?
With computed scores, one does not explicitly use such improper inputs. Next, the specific technology used for the score, and analytica
The Brain is Complex (Score:2)
The brain is a lot more complex than people give it credit for. The body and brain together are even more complex, a system that works as a whole.
Neural networks of today are nothing like how the brain works. (They are loosely similar to how a single neuron works.)
There are a few architectures used to simulate how the brain works on a computer (cognitive architectures), but they don't cover all aspects of it (i.e. speech, cognition, emotion, creativity, reactive planning, complex situations, being able to
AI Advanced Tecnology (Score:1)
OHNOEZ! MY TERMINATORZ! (Score:2)
Seriously.
We're already informed about the possible problems with supremely advanced AI.
Does this mean we should just throw our wooden shoes into the gears and kick off the Butlerian Jihad now?
Of course not.
We still need to do as much research as possible on AI. So we can actually understand the delineation point between "Assistive software" and "Crazy, kill everything AI."