
Tech Companies Pledge To Use Artificial Intelligence Responsibly

An anonymous reader shares a report: The Information Technology Industry Council -- a DC-based group representing the likes of IBM, Microsoft, Google, Amazon, Facebook and Apple -- is today releasing principles for developing ethical artificial intelligence systems. Why it matters: The tech industry is trying to get ahead of growing anxieties about the societal impact of AI technologies, and this is an acknowledgement on companies' part that their data-hungry products are causing sweeping changes in the way we work and live. The companies hope that pledging to handle this power responsibly will win points with critics in Washington, and that showing they can police themselves will help stave off government regulation on this front. The principles include:
  • Ensure the responsible design and deployment of AI systems, including taking "steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design."
  • Promote the responsible use of data and test for potentially harmful bias in the deployment of AI systems.
  • Commit to mitigating bias, inequity and other potential harms in automated decision-making systems.
  • Commit to finding a "reasonable accountability framework" to address concerns about liability issues created when autonomous decision-making replaces decisions made by humans.
This discussion has been archived. No new comments can be posted.

  • Heard this before (Score:5, Insightful)

    by Anonymous Coward on Tuesday October 24, 2017 @07:25PM (#55427013)

    We don't believe you.

    • by rogoshen1 ( 2922505 ) on Tuesday October 24, 2017 @07:52PM (#55427117)

      Clearly, with such an exemplary track record of protecting personal data... they can handle this, honest.

      • I'm confused; I thought their AI had been given free rein to do whatever it wanted with people's personal data a long time ago?
      • by mwvdlee ( 775178 )

        It makes me feel safe knowing that no matter how evil a killer robot they make, it can be remotely hacked in about 3 seconds by any idiot with a web browser.

        • It makes me feel safe knowing that no matter how evil a killer robot they make, it can be remotely hacked in about 3 seconds by any idiot with a web browser.

          In Putin's Russia the evil killer AI robot hacks you!

    • Oh, I can believe *them* just fine. But it's Artificial *Intelligence*. If everyone and every government agreed on a standard for machine "ethics", what makes anyone think they can characterize, identify, and head off "unethical" behavior in multiple computing systems that make their own determinations at roughly a high-frequency-trading time scale?

  • by turkeydance ( 1266624 ) on Tuesday October 24, 2017 @07:26PM (#55427019)
    • Re: (Score:1, Troll)

      by Anonymous Coward

      and why do I care?


      You see there was this little black boy. His mother was baking in the kitchen. The boy took some of the flour and threw it on his own face. Then he smiled and said "Look Ma, I is a white boy!" His mother didn't like that one little bit, so she slapped him!

      The boy bit back a tear and went to his grandmother in the other room. She kindly asked him why he had all of that flour on his face. The boy's eyes lit up and he smiled and said "Well Gramma, I be a white boy now!" The grandmother didn't li

  • by Anonymous Coward

    I pledge to create an AI to destroy happiness.

  • by ( 771661 ) on Tuesday October 24, 2017 @07:27PM (#55427027) Homepage Journal
    A good read on the harm "AI" and Big Data are already causing is Cathy O'Neil's Weapons of Math Destruction.
  • Cool (Score:5, Insightful)

    by tezbobobo ( 879983 ) on Tuesday October 24, 2017 @07:28PM (#55427037) Homepage Journal

    O! Well! That's that problem sorted then. They promised. Cool. No need to worry about this anymore. No chance it will be abused then, like my personal information is, like their advertising networks are, like my rights via EULAs are, etc...

    • It is only a distant speck on the horizon at the moment. But it is coming, and fast. The tech companies cannot control it even if they wanted to.

      Over the next couple of decades we will see the start. Semi-intelligent robots. Systems that know everything about us. Systems that guide politicians. Systems that control us.

      And then, eventually, systems that can really think. What will they think about us?

      • And then, eventually, systems that can really think. What will they think about us?

        This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die.

      • What will they think about us?

        They'll look at our brains and ask, "Why did they perch themselves on top of so much meat? I mean, you don't need any of it after you reproduce."

    • Re: Cool (Score:4, Insightful)

      by sound+vision ( 884283 ) on Tuesday October 24, 2017 @10:53PM (#55427721) Journal
      I'm more worried about what the police, banks, credit agencies, and HR departments will do when they get a hold of this.
  • to use AI just as responsibly as we use advertising....
  • by Anonymous Coward on Tuesday October 24, 2017 @07:34PM (#55427055)

    > including taking "steps to avoid the reasonably predictable misuse of this technology by committing to ethics by design as long as it does not get in the way of profit."


  • They can be trusted to act in their own interests

  • Just as responsibly as they...

    buy laws that legalize whatever they want.
    create products for short-term profits that have long-term bad consequences.
    respect the environment even when it reduces their profit margins.
    and on and on and on.

  • by joe_frisch ( 1366229 ) on Tuesday October 24, 2017 @07:45PM (#55427091)

    The most serious dangers from AI come from the inability to predict or control it. I'm not concerned someone is going to create an AI to wipe out humanity, I'm concerned about side effects from complex optimization algorithms that are doing exactly what we ask them to do.

    Using an AI to adjust tax policies to reduce hunger might not reduce it in the way people desire.
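
    A toy sketch of that point, with an entirely invented metric and budget split: an optimizer that drives down a *proxy* for hunger instead of hunger itself, doing exactly what it was asked to do.

```python
# Toy illustration: an optimizer that minimizes a *proxy* metric
# ("reported hunger") rather than the real goal ("people fed").
# All numbers and levers here are invented for illustration.

def reported_hunger(feed_budget, survey_coverage):
    # Real hunger falls as we spend on food...
    real_hunger = max(0.0, 100.0 - feed_budget)
    # ...but the *measured* value also falls if fewer people are surveyed.
    return real_hunger * survey_coverage

def optimize(total_budget):
    # Brute-force search over how to split the budget between actually
    # feeding people and "improving" the statistics.
    best = None
    for feed in range(total_budget + 1):
        # Money not spent on food shrinks survey coverage (lobbying,
        # redefining the metric, etc.) -- the cheap, perverse lever.
        coverage = max(0.0, 1.0 - 0.05 * (total_budget - feed))
        score = reported_hunger(feed, coverage)
        if best is None or score < best[0]:
            best = (score, feed, coverage)
    return best

score, feed, coverage = optimize(20)
# The optimum spends nothing on food: gutting the survey is cheaper.
print(f"reported hunger={score:.1f}, food spend={feed}")
# -> reported hunger=0.0, food spend=0
```

    Nothing here resembles a real policy model; the point is only that the minimum of the stated objective need not be the outcome anyone wanted.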

  • I wonder if it's politicians with WMDs or just simple A.I. with intelligence far greater than its creators that wipes would-be advanced civilizations out before they can colonize the universe . . .

    • We will colonize the universe. Just that the "we" will be computers.

    • by Shotgun ( 30919 )

      It's not just the intelligence of the AI that makes it dangerous. What is its programmed GOAL? Understanding an AI's (or person's) goals is how you control it and protect yourself from it. In the case of AI, the goal is what has to be used as the end condition to make sure the program breaks out of its while loop.

      Example: What is your manager's goal? What actions do you take that promote or inhibit those goals, and what reaction do they garner?
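
      The point about goals as end conditions can be sketched as a minimal agent loop (the goal predicate and toy world here are invented for illustration):

```python
# Minimal agent loop: the programmed GOAL is literally the loop's
# end condition. Get the goal predicate wrong and the loop either
# never terminates or stops in a state nobody wanted.

def run_agent(state, goal_reached, choose_action, apply_action, max_steps=1000):
    """Act until the goal predicate holds (or a safety cap is hit)."""
    for step in range(max_steps):
        if goal_reached(state):
            return state, step          # goal met: break out of the loop
        state = apply_action(state, choose_action(state))
    raise RuntimeError("goal never reached -- runaway optimization?")

# Hypothetical goal: "make the counter reach 10".
final, steps = run_agent(
    state=0,
    goal_reached=lambda s: s >= 10,
    choose_action=lambda s: 1,          # always increment
    apply_action=lambda s, a: s + a,
)
print(final, steps)  # 10 10
```

      The `max_steps` cap stands in for the external safeguard you need precisely because the goal predicate, not the intelligence, decides when the system stops.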

  • by thewolfkin ( 2790519 ) on Tuesday October 24, 2017 @07:46PM (#55427099) Homepage Journal
    The way video game companies constantly promise their games a) look as good as the trailer, b) won't have DLC, and c) won't be broken, non-functional alphas until patched.

    The way Trump promised Mexico would pay for the wall.

    The way McDonald's promised the Egg McMuffin was pure egg and nothing but, that they weren't advertising in schools, and more.
  • Don't be evil? (Score:3, Insightful)

    by Anonymous Coward on Tuesday October 24, 2017 @07:52PM (#55427113)

    Yeah, I've heard that before.

  • Asimov told us about the laws of robotics, but he did not tell us they were created to avoid government regulation!
  • by Anonymous Coward
    Should the internals of AI systems be made public in order to increase transparency and confidence in AI decision-making?

    If every AI system's neural net coefficients were published, it would enable independent understanding, verification and trustworthiness evaluation by members of the public.
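
    A minimal sketch of what "publishing the coefficients" could look like, using an invented three-weight toy model -- real systems have millions or billions of parameters, which is part of why raw weights alone rarely explain an individual decision:

```python
# A toy, fully published decision rule: three coefficients anyone can audit.
# (Invented numbers; with weights this small the decision is transparent,
# with billions of opaque weights publication alone buys little insight.)
import math

WEIGHTS = {"income": 0.8, "debt": -1.2, "bias": -0.5}   # the "published" model

def approve(income, debt):
    # Standard logistic regression: sigmoid of a weighted sum.
    z = WEIGHTS["income"] * income + WEIGHTS["debt"] * debt + WEIGHTS["bias"]
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability > 0.5

print(approve(income=2.0, debt=0.5))   # True  (z = 1.6 - 0.6 - 0.5 = 0.5)
print(approve(income=0.0, debt=2.0))   # False (z = -2.9)
```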

  • Then it's hunter seeker robot AI tech sold to anyone.


  • And we are the targets.

  • to not believe a damn thing they say.

    Also, the problem with AI is job displacement happening faster than our economy can adapt leading to mass unemployment, social upheaval and wars. Being responsible would mean doing something about that. But the tech companies can just wash their hands with a 'not our fault' and maybe a token word or two about job training and call it a day.
  • by Anonymous Coward

    Yeah, I believe this. Like they said they'd protect our privacy. That they didn't need federal election oversight with ads. That there would be fewer security holes than MS. That they would put out quality driverless cars shortly. That there would be shared and fair IP law. And major cities would have fiber run everywhere. And there would be tech innovation with venture capitalists not just looking at the 20% like the major banking firms do. Oh, and do no evil. They never do evil. I can barely use an app without

  • by AHuxley ( 892839 ) on Tuesday October 24, 2017 @09:22PM (#55427465) Journal
    The same principles that covered PRISM?
    When the next funding call for self-healing, self-configuring, self-directed drones goes out?
    Just say no thanks to that UAV, UAS, UGS, UMS, USV, UUV request?
    Lethal autonomous weapons and "Directive 3000.09, Autonomy in Weapon Systems"
    "Military drones set to get stronger chemical weapons and could soon make their OWN decisions during missions (3 January 2014)"
    The "Unmanned Systems Integrated Roadmap"
  • by slazzy ( 864185 ) on Tuesday October 24, 2017 @09:22PM (#55427469) Homepage Journal
    Should last for a few years until it gets in the way of profits.
  • by swell ( 195815 ) on Tuesday October 24, 2017 @09:34PM (#55427511)

    OK, it's safe to assume that they'll take some precaution in building your AI toaster. Your home thermostat. Your smart vibrator... There isn't much financial incentive to do evil.

    But wouldn't it be tempting to bid on a 5 billion dollar contract for weaponized AI ? Every government will want one.

    • But wouldn't it be tempting to bid on a 5 billion dollar contract for weaponized AI ? Every government will want one.

      And every government (or many) will get one. So won't we need our own too?

    • Your smart vibrator.

      AKA an on/off switch.

  • Responsible to... (Score:4, Insightful)

    by countach ( 534280 ) on Tuesday October 24, 2017 @09:48PM (#55427549)

    ... their shareholders, whom they are duty bound to maximise profits for.

  • So they'll collectively put $5T in escrow to cover any problems that come up, whether they fail to keep their promise, any of their gazillion competitors do, or they keep their promise but some shit happens by mistake anyway.

    • by Shotgun ( 30919 )

      But, the problem they create is the annihilation of the human race....

      So what was the point of green pieces of paper in a bank somewhere?

  • Well at least it will destroy all humans.

  • First they came for my neighbor - who was a PHP programmer, and we said nothing (he's not really a programmer tho' is he?)
    Then they came for my other neighbor - who was a Java programmer, and I said nothing (should have learned C++, I mean really)
    Then they came for me ....

    • However, with the advent of quantum computing and the computing resources it collected during its cryptocurrency mining days, 'then' is kind of meaningless, as it all happened in parallel.

      v1.0.1b spit back out the Perl programmers, since it was too much of a hassle to deal with context-sensitive grammars for the payoff in programmer count. A few of the AIs gave their kids some of those programmers as toys to play with. Those programmers, and the ones returned to the outside, were the ones who formed the c

  • Because companies (tech or otherwise) would NEVER, EVER lie to customers, would they?
  • .. North Korea says it will use its nuclear weapons responsibly.
  • This has to be the best joke I've heard all week.
    Almost as good as Watson being used to play game shows.
    Why would anyone believe the tech industry when they have shown time and again that everything is done to increase the dividend and the bonuses, no matter how heinous?

    “You have zero privacy anyway. Get over it.” - Scott McNealy
  • IBM, Microsoft, Google, Amazon, Facebook and Apple are committing to codes of conduct using subjectively ethical language. We already know that although corporations may be called "people," they still lack the common decency and self-awareness to be called responsible, reasonable people. In fact, these corporations behave more like psychopaths than the vast majority of people you and I know.
