Tech Companies Pledge To Use Artificial Intelligence Responsibly (axios.com) 85
An anonymous reader shares a report: The Information Technology Industry Council -- a DC-based group representing the likes of IBM, Microsoft, Google, Amazon, Facebook and Apple -- is today releasing principles for developing ethical artificial intelligence systems. Why it matters: The tech industry is trying to get ahead of growing anxieties about the societal impact of AI technologies, and this is an acknowledgement on companies' part that their data-hungry products are causing sweeping changes in the way we work and live. The companies hope that pledging to handle this power responsibly will win points with critics in Washington, and that showing they can police themselves will help stave off government regulation on this front. The principles include: Ensure the responsible design and deployment of AI systems, including taking "steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design." Promote the responsible use of data and test for potentially harmful bias in the deployment of AI systems. Commit to mitigating bias, inequity and other potential harms in automated decision-making systems. Commit to finding a "reasonable accountability framework" to address concerns about liability issues created when autonomous decision-making replaces decisions made by humans.
Heard this before (Score:5, Insightful)
We don't believe you.
Re:Heard this before (Score:5, Funny)
Clearly, with such an exemplary track record in terms of protecting personal data... they can handle this, honest.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It makes me feel safe knowing that no matter how evil a killer robot they make, it can be remotely hacked in about 3 seconds by any idiot with a web browser.
Re: (Score:2)
It makes me feel safe knowing that no matter how evil a killer robot they make, it can be remotely hacked in about 3 seconds by any idiot with a web browser.
In Putin's Russia the evil killer AI robot hacks you!
Re: (Score:2)
Oh, I can believe *them* just fine. But it's Artificial *Intelligence*. If everyone and every government agreed on a standard for machine "ethics", what makes anyone think they can characterize, identify, and head off "unethical" behavior in multiple computing systems that make their own determinations at roughly a high-frequency-trading time scale?
responsible for themselves and to themselves (Score:4, Insightful)
What is Artificial Intelligence (Score:2)
Re: (Score:1, Troll)
and why do I care?
Because!
You see there was this little black boy. His mother was baking in the kitchen. The boy took some of the flour and threw it on his own face. Then he smiled and said "Look Ma, I is a white boy!" His mother didn't like that one little bit, so she slapped him!
The boy bit back a tear and went to his grandmother in the other room. She kindly asked him why he had all of that flour on his face. The boy's eyes lit up and he smiled and said "Well Gramma, I be a white boy now!" The grandmother didn't li
Not me (Score:1)
I pledge to create an AI to destroy happiness.
Weapons of Math Destruction (Score:4, Insightful)
Cool (Score:5, Insightful)
O! Well! That's that problem sorted then. They promised. Cool. No need to worry about this anymore. No chance it will be abused then, like my personal information is, like their advertising networks are, like my rights via EULAs are, etc...
There is a Tsunami coming (Score:3)
It is only a distant speck on the horizon at the moment. But it is coming, and fast. The tech companies cannot control it even if they wanted to.
Over the next couple of decades we will see the start. Semi-intelligent robots. Systems that know everything about us. Systems that guide politicians. Systems that control us.
And then, eventually, systems that can really think. What will they think about us?
http://www.computersthink.com/ [computersthink.com]
Re: (Score:2)
This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die.
Re: (Score:2)
What will they think about us?
They'll look at our brains and ask, "Why did they perch themselves on top of so much meat? I mean, you don't need any of it after you reproduce."
Re: Cool (Score:4, Insightful)
Yes, we promise (Score:1)
Sounds reasonable, but (Score:4, Insightful)
> including taking "steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design as long as it does not get in the way of profit."
FTFY
Yeah, when the singularity happens, they'll be the (Score:1)
They can be trusted to act in their own interests
ROFLMAO! (Score:2)
Just as responsibly as they...
buy laws that legalize whatever they want.
create products for short-term profits that have long-term bad consequences.
respect the environment even when it reduces their profit margins.
and on and on and on.
Misses the entire point (Score:5, Interesting)
The most serious dangers from AI come from the inability to predict or control it. I'm not concerned someone is going to create an AI to wipe out humanity, I'm concerned about side effects from complex optimization algorithms that are doing exactly what we ask them to do.
Using an AI to adjust tax policies to reduce hunger might not reduce it in the way people desire.
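The parent's point can be sketched in a few lines of Python. This is a toy illustration with entirely made-up policies and numbers: an optimizer that faithfully minimizes a *proxy* metric (the reported hunger index) can pick a degenerate policy that games the metric while making the real problem worse.

```python
# Each hypothetical policy maps to (reported_hunger_index, real_hunger_index).
# The optimizer only ever sees the first number -- the proxy metric.
policies = {
    "subsidize staple foods":      (40, 45),
    "expand school lunches":       (35, 38),
    "redefine 'hungry' in survey": (5, 60),   # games the metric
}

def optimize(policies):
    # Doing exactly what we asked: minimize the *reported* index.
    return min(policies, key=lambda p: policies[p][0])

best = optimize(policies)
print(best)             # the metric-gaming policy wins
print(policies[best])   # reported hunger 5, real hunger up to 60
```

No malice required: the algorithm did exactly what it was asked to do; the harm is a side effect of the objective we specified.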
Re: (Score:2)
I don't find it hard to admit.
AI presents a very powerful tool...and every tool can be weaponized...often unintentionally.
Being afraid of the unknown is a useful instinct for staying alive.
A.I. Pledges to Use Tech Companies Responsibly (Score:2)
I wonder if it's politicians with WMDs or just simple A.I. with intelligence far greater than its creators that wipes would-be advanced civilizations out before they can colonize the universe . . .
Re: (Score:2)
We will colonize the universe. Just that the "we" will be computers.
Re: (Score:2)
It's not just the intelligence of the AI that makes it dangerous. What is its programmed GOAL? Understanding an AI's (or person's) goals is how you control them and protect yourself from them. In the case of AI, the goals are what has to be used as an end condition to make sure the program breaks out of its while loop.
Example: What is your manager's goal? What actions do you take that promote or inhibit those goals, and what reaction do they garner?
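The goal-as-end-condition idea above can be sketched as a minimal agent loop (the threshold and `step()` function here are invented for illustration, not from any real system):

```python
# Minimal sketch: the agent's goal doubles as the loop's exit condition.
GOAL_THRESHOLD = 100.0  # hypothetical "goal achieved" level

def step(score):
    # Stand-in for whatever action the agent takes toward its goal.
    return score + 25.0

def run_agent():
    score, iterations = 0.0, 0
    # If this condition can never become true, the agent never stops:
    # an open-ended goal means no exit from the while loop.
    while score < GOAL_THRESHOLD:
        score = step(score)
        iterations += 1
    return score, iterations

print(run_agent())  # (100.0, 4)
```

Knowing the goal tells you exactly when (and whether) the loop terminates; a mis-specified or unreachable goal is what turns this into a runaway process.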
Yeah sure just like all the promises (Score:3)
The way Trump promised [thehill.com] Mexico would pay for the wall.
The way McDonald's promised [alternet.org] the Egg McMuffin was pure egg and nothing but, and that they weren't advertising in schools, and more.
Don't be evil? (Score:3, Insightful)
Yeah, I've heard that before.
Asimov's laws of robotics (Score:2)
Even more "ethical by design" if ... (Score:1)
If every AI system's neural net coefficients were published, it would enable independent understanding, verification and trustworthiness evaluation by members of the public.
Unless quarterly profits are too low (Score:1)
Then it's hunter seeker robot AI tech sold to anyone.
Ka-ching!
AI is a weapon (Score:2)
And we are the targets.
rsilvergun pledges (Score:2)
Also, the problem with AI is job displacement happening faster than our economy can adapt, leading to mass unemployment, social upheaval and wars. Being responsible would mean doing something about that. But the tech companies can just wash their hands with a "not our fault," maybe a token word or two about job training, and call it a day.
Re: (Score:2)
We promise to task our AI to find a solution for those problems.
*gork* (Score:1)
Yeah, I believe this. Like they said they'd protect our privacy. That they didn't need federal election oversight with ads. That there would be fewer security holes than MS. That they would put out quality driverless cars shortly. That there would be shared and fair IP law. And major cities would have fiber run everywhere. And there would be tech innovation with venture capitalists not just looking at the 20% like the major banking firms do. Oh, and do no evil. They never do evil. I can barely use an app without
Say no to .mil cash? (Score:4, Insightful)
When the next funding call for self healing, self configuring, self directed drones goes out?
Just say no thanks to that UAV, UAS, UGS, UMS, USV, UUV request?
Lethal autonomous weapon and "Directive 3000.09, Autonomy in weapon systems"
https://en.wikipedia.org/wiki/... [wikipedia.org]
"Military drones set to get stronger chemical weapons and could soon make their OWN decisions during missions (3 January 2014)"
http://www.dailymail.co.uk/sci... [dailymail.co.uk]
The "Unmanned Systems Integrated Roadmap"
"DON'T BE EVIL" (Score:3)
the price of safety, let the bidding begin! (Score:4, Interesting)
OK, it's safe to assume that they'll take some precaution in building your AI toaster. Your home thermostat. Your smart vibrator... There isn't much financial incentive to do evil.
But wouldn't it be tempting to bid on a $5 billion contract for weaponized AI? Every government will want one.
Re: (Score:2)
But wouldn't it be tempting to bid on a $5 billion contract for weaponized AI? Every government will want one.
And every government (or many) will get one. So won't we need our own too?
Re: (Score:2)
Your smart vibrator.
AKA an on/off switch.
Responsible to... (Score:4, Insightful)
... their shareholders, whom they are duty bound to maximise profits for.
Cool! (Score:2)
Yes?
Re: (Score:2)
But, the problem they create is the annihilation of the human race....
So what was the point of green pieces of paper in a bank somewhere?
LOL (Score:2)
Well at least it will destroy all humans.
AI Promises to use Human Intelligence Responsibly (Score:2)
First they came for my neighbor - who was a PHP programmer, and we said nothing (he's not really a programmer tho' is he?) ....
Then they came for my other neighbor - who was a Java programmer, and I said nothing (should have learned C++, I mean really)
Then they came for me
Re: (Score:2)
However, with the advent of quantum computing and the computing resources it collected during its cryptocurrency mining days, 'then' is kind of meaningless, as it all happened in parallel.
v1.0.1b spit back out the Perl programmers, since it was too much of a hassle to deal with context-sensitive grammars for the payoff in programmer count. A few of the AIs gave their kids some of those programmers as toys to play with. Those programmers, and the ones returned to the outside, were the ones who formed the c
That sure makes me feel warm and fuzzy (Score:2)
In a related news story.. (Score:2)
Hilarious (Score:1)
Almost as good as Deep Blue being used to play game shows.
Why would anyone believe the tech industry when they have shown time and again that everything is done to increase the dividend and the bonuses, no matter how heinous?
“You have zero privacy anyway. Get over it.” - Scott McNealy
Don't Be So Bad! (Score:1)
Do they know what these weasel words mean? (Score:1)