
Europe To Pilot AI Ethics Rules, Calls For Participants (techcrunch.com) 51

The European Commission has launched a pilot project intended to test draft ethical rules for developing and applying AI technologies, to ensure they can be implemented in practice. It is also aiming to gather feedback and encourage international consensus-building around what it dubs "human-centric AI" -- targeting, among other talking shops, the forthcoming G7 and G20 meetings to raise discussion of the topic. From a report: The Commission's High Level Group on AI -- a body of 52 experts from across industry, academia and civil society, announced last summer -- published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March. It boils the expert consultation down to a set of seven "key requirements" for trustworthy AI -- in addition to machine learning technologies needing to respect existing laws and regulations -- namely:

Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."
Robustness and safety: "Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems."
Privacy and data governance: "Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them."
Transparency: "The traceability of AI systems should be ensured."
Diversity, non-discrimination and fairness: "AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility."
Societal and environmental well-being: "AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility."
Accountability: "Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes."

  • ...Isaac Asimov up from the grave?

  • to fly the Boeing 737 Max. I would trust an AI to fly a plane, wouldn't you?
  • Translation (Score:4, Insightful)

    by DNS-and-BIND ( 461968 ) on Monday April 08, 2019 @01:41PM (#58405166) Homepage
    "Don't let impartial algorithms with no preconceptions come to conclusions we don't like. Instead, massage them until they agree with what we've already decided coincides with our pre-existing political biases."
    • by AmiMoJo ( 196126 )

      Yeah, that's called "morality", and it tends not to follow purely logical lines.

  • by DavenH ( 1065780 ) on Monday April 08, 2019 @02:04PM (#58405350)
    And I think it ought to, but those expectations are mediated by people using and/or governing the technology -- like it's not Xerox's responsibility to prevent a fax machine sending hate mail. Expecting intrinsic adherence to all these social desires is a bit ridiculous.
    Transparency in particular; what does this even mean? The ability to inspect vast inscrutable matrices? It may be that they mean an AI has to intrinsically explain all its outputs, though that would severely limit its capabilities. Can you explain the mechanism by which you recognise a face or have an idea?
    • by AmiMoJo ( 196126 )

      Here's an example of how similar rules already apply under GDPR.

      You apply for a mortgage online. You are declined by the bank's computer. You have the right to ask why you were declined (transparency) and to have the decision reviewed by a human.

      For facial recognition, transparency would mean disclosing things like how reliable the system is and whether it has known limitations (e.g. less reliable with dark skin), and having a process in place to handle and correct errors. Explaining how it works would involve explain...
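      The mortgage example above — a declined application whose reasons can be queried — can be sketched in code: a transparent decision system is one whose output carries its own justification. This is a toy, hand-weighted linear scorer; the feature names, weights, and threshold are all invented for illustration and do not correspond to any real bank's model.

      ```python
      # Toy "transparent" credit decision: a hand-weighted linear score whose
      # per-feature contributions can be reported back to the applicant.
      # WEIGHTS, THRESHOLD, and the feature names are illustrative assumptions.

      WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "missed_payments": -1.2}
      THRESHOLD = 0.0

      def decide(applicant: dict) -> tuple[bool, list[tuple[str, float]]]:
          """Return (approved, per-feature contributions, worst first)."""
          contributions = [(name, WEIGHTS[name] * applicant[name]) for name in WEIGHTS]
          score = sum(c for _, c in contributions)
          # The "explanation" is just the ranked per-feature contributions:
          # the applicant can see exactly which factor sank the score.
          reasons = sorted(contributions, key=lambda kv: kv[1])
          return score >= THRESHOLD, reasons

      approved, reasons = decide({"income": 1.0, "debt_ratio": 0.9, "missed_payments": 2})
      # approved is False; reasons[0] names the biggest negative factor.
      ```

      A deep model offers no such ready-made decomposition, which is the commenter's point: GDPR-style transparency is easy for simple scorers and genuinely hard for inscrutable ones.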

  • If so, William Gibson will be a happy camper then. I, for one, welcome the Turing Police overlords!

    • I want to know what the penalties are for an AI deliberately disobeying an order, and if the penalty is applied to the AI, its coders, the statisticians who chew the data before it's fed in, or the legal owners. And if there's a penalty for setting an AI loose.

      That'd be a fun job. AI finder general.

  • by sdinfoserv ( 1793266 ) on Monday April 08, 2019 @02:51PM (#58405652)
    "Citizens should have full control over their own data" - now show me one single US entity (Corporation or Government for that matter) that will abide by such a concept.
    If we can't have anything close to this before AI, what dream world do you live in to apply this TO AI?
    • by Anonymous Coward

      What part of "European Commission" did you miss?

    • "Citizens should have full control over their own data" - now show me one single US entity (Corporation or Government for that matter) that will abide by such a concept. If we can't have anything close to this before AI, what dream world do you live in to apply this TO AI?

      You forgot to mention the rest of the friggin world, dood!

      Or is it just the US that triggers your spittle-flecked rage?

  • by DickBreath ( 207180 ) on Monday April 08, 2019 @02:54PM (#58405668) Homepage
    If Google doesn't need any AI Ethics, then why does the EU need them?

    Ethics, like anything of value, should be sold to the highest bidder.
  • by sfcat ( 872532 ) on Monday April 08, 2019 @03:29PM (#58405908)
    So no AI then, I guess... let's go over the list:

    Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."

    This seems to be a weird EU version of don't decrease human liberty. Whether a specific AI system obeys this is something you can never get everyone to agree upon. To some, self-driving cars without steering wheels violate this principle. Not sure I agree with that but some will see it that way.

    Robustness and safety: "Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems."

    Most software systems don't achieve this even for predictable, deterministic tasks. All ML algorithms have a built-in error rate; it's part of the math. Hell, most humans aren't capable of this when performing many AI tasks.

    Privacy and data governance: "Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them."

    Would be nice. Needs open standards to work. Won't happen because politicians, lawyers and CEOs don't understand software.

    Transparency: "The traceability of AI systems should be ensured."

    No ML or RL algorithm in common use does this.

    Diversity, non-discrimination and fairness: "AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility."

    If the dataset is biased, then the predictions will be biased. It's math. Don't blame AI when it's just reflecting human society. And AI can't fix human society either; that's our job. Quit blaming technology for human failings.

    Societal and environmental well-being: "AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility."

    Eh, so no use of AI by companies in finance, then. I'm sure that will have a positive effect on your banks in the global financial markets.

    Accountability: "Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes."

    Finally, a good one. But it should be extended: anyone who sells or rents software should be responsible for the quality of that software and for losses incurred from its defects. That's a better one...

    This entire framework judges AI by the wrong standard. The standard isn't perfection; it's doing better than the humans hired to do the job. It would be better if the same standards applied to AI systems and to humans. That reaches the EU's goal with fewer lawyers.
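    The "biased dataset, biased prediction" point above is easy to demonstrate: even a trivially neutral learning rule, trained on skewed data, produces very different error rates across groups. Everything here — the groups, labels, and counts — is made-up toy data:

    ```python
    # Toy demo: a majority-label "classifier" trained on a skewed dataset
    # ends up with very different error rates for the two groups, even
    # though the rule itself never looks at group membership.

    from collections import Counter

    # (group, true_label) pairs: group "b" is underrepresented and mostly
    # labelled 0, so the overall majority label is driven by group "a".
    train = [("a", 1)] * 80 + [("a", 0)] * 20 + [("b", 1)] * 2 + [("b", 0)] * 8

    # "Training": always predict the most common label in the data.
    majority = Counter(label for _, label in train).most_common(1)[0][0]

    def error_rate(examples):
        """Fraction of examples the majority rule gets wrong."""
        return sum(1 for _, y in examples if y != majority) / len(examples)

    test_a = [("a", 1)] * 8 + [("a", 0)] * 2   # error rate 0.2
    test_b = [("b", 1)] * 2 + [("b", 0)] * 8   # error rate 0.8
    ```

    The disparity comes entirely from the data distribution, not from the algorithm — which is exactly the commenter's argument, and also why the guidelines' fairness requirement ends up being a requirement on the data pipeline as much as on the model.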

    • Just reward the RL algorithm for being traceable.
    • It's mainly a crappy wishlist. Let me make another:
      1. Pigs should fly.
      2. Gravity should never lead to humans dying.
      3. Shit should smell like flowers and should be sterile.

      Having said that, the idea was that it is a starting point, i.e. a very, very high bar. FTFA:
      "(i) Starting in June 2019, all stakeholders and individuals will be invited to test the assessment list and provide feedback on how to improve it. In addition, the AI high-level expert group will set up an in-depth review with stakeholders from...

  • Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."

    Exactly what I've said before: Robots, so-called 'AI' (such as it is, LOL), and similar are tools, fundamentally no different than a shovel or a hammer. As with all tools, we create them to serve us, not the other way around. If these tools are somehow misused or perverted in such a way that violates that fundamental truth, then something must be done to rectify the situation; either establish rules concerning the tool or class of tools in question, or get rid of the tool or class of tools entirely. In a ve...

  • by Anonymous Coward

    Replace "AI systems" with "a person" and re-read. Once people can do that, there won't be a need for another set of rules just for AI. But, yeah, that's not gonna happen.

  • I guess this will have to come in 5 different levels of explanation of the AI decision:

    Level 1: Explain it to me like I'm 5 or wear a MAGA hat.

    Level 2: Explain it to me like I'm the average regulatory enforcement bureaucrat.
    ...
    Level 5: Explain it to me like I'm Geoffrey Hinton
  • by AHuxley ( 892839 ) on Monday April 08, 2019 @07:17PM (#58407110) Journal
    A Spanish AI will detect any mention of Catalonia.
    A German AI will report on all interest in German art, culture and German history.
    A French AI will be interested in political memes, cartoons and protests.

    Robustness: EU police spyware to keep working on tracking users.
    Data governance: Censorship.
    Transparency: EU nations can see all your data use.
    Diversity: Lots of illegal immigrants.
    Societal and environmental well-being: Reporting of EU users to their nations police when they use the internet in the wrong political way.
    Accountability: The resulting police interviews after an AI reports a user to the gov.
    Trustworthy: No using an EU funded AI project to support any nation's attempt to exit the EU.
    Consultancy: More tax payers money.
  • That's some majorly fluffy bullshit. Vague, undefined, unenforceable, mystic...

    How about we swap out "AI" for "decisions made by politicians and CEOs" and see how far that gets us?
