Europe To Pilot AI Ethics Rules, Calls For Participants (techcrunch.com)
The European Commission has launched a pilot project intended to test draft ethical rules for developing and applying AI technologies, to ensure they can be implemented in practice. It's also aiming to garner feedback and encourage international consensus-building for what it dubs "human-centric AI" -- targeting, among other talking shops, the forthcoming G7 and G20 meetings to increase discussion on the topic. From a report: The Commission's High Level Group on AI -- a body composed of 52 experts from across industry, academia and civil society, announced last summer -- published its draft ethics guidelines for trustworthy AI in December. A revised version of the document was submitted to the Commission in March. It boils the expert consultancy down to a set of seven "key requirements" for trustworthy AI, in addition to machine learning technologies needing to respect existing laws and regulations -- namely:
Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."
Robustness and safety: "Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems."
Privacy and data governance: "Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them."
Transparency: "The traceability of AI systems should be ensured."
Diversity, non-discrimination and fairness: "AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility."
Societal and environmental well-being: "AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility."
Accountability: "Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes."
Can we dig (Score:2)
...Isaac Asimov up from the grave?
Re:Can we dig (Score:5, Informative)
That's what they're going for here, but they should read the rest of the extract:
Trevize frowned. "How do you decide what is injurious, or not injurious, to humanity as a whole?"
"Precisely, sir," said Daneel. "In theory, the Zeroth Law was the answer to our problems. In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."
Re: (Score:1)
Didn't HAL 9000 assassinate the astronauts to save humanity? Problems with doing Violence X to prevent Violence Y get sticky, putting bots in the position of making large moral tradeoffs. Dictators we call "cruel" were allegedly doing just that: "I have to punish a large group to send a message so that another group doesn't rise up, creating even more violence."
Good, we need pilots (Score:1)
Translation (Score:4, Insightful)
Yeah that's called "morality" and it tends not to follow along purely logical lines.
Could be applied to technology as a whole (Score:3)
Transparency in particular; what does this even mean? The ability to inspect vast inscrutable matrices? Perhaps they mean an AI has to intrinsically explain all its outputs, though that would severely limit its capabilities. Can you explain the mechanism of how you recognise a face or have an idea?
Re: (Score:3)
Here's an example of how similar rules already apply under GDPR.
You apply for a mortgage online. You are declined by the bank's computer. You have the right to ask why you were declined (transparency) and to have the decision reviewed by a human.
For facial recognition, transparency would mean disclosing things like how reliable the system is and whether it has known limitations (e.g. less reliable with dark skin), and having a system in place to handle and correct errors. Explaining how it works would involve explain
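That kind of explanation is feasible for simple models. Here's a minimal sketch of a linear scorer that emits "reason codes" alongside its decision -- all feature names, weights, and the threshold are invented for illustration, not any bank's actual model:

```python
# Sketch of a linear credit scorer that reports which features pushed
# the decision, so a decline can be explained to the applicant.
# All feature names, weights, and the threshold are invented.

WEIGHTS = {"income": 0.00004, "years_employed": 0.3, "missed_payments": -0.9}
BIAS = -1.0

def score(applicant):
    # Each feature's contribution is weight * value; their sum is the score.
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    approved = BIAS + sum(contributions.values()) >= 0
    # Reason codes: features ordered from most harmful to most helpful.
    reasons = sorted(contributions, key=contributions.get)
    return approved, reasons

approved, reasons = score(
    {"income": 20000, "years_employed": 1, "missed_payments": 3}
)
# Declined, with "missed_payments" as the leading reason code.
```

Deep networks don't decompose this cleanly, so they need post-hoc explanation tools instead -- which is exactly the gap the parent comment points at.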
Turing Police? (Score:2)
If so, William Gibson will be a happy camper then. I, for one, welcome the Turing Police overlords!
Re: (Score:3)
I want to know what the penalties are for an AI deliberately disobeying an order, and if the penalty is applied to the AI, its coders, the statisticians who chew the data before it's fed in, or the legal owners. And if there's a penalty for setting an AI loose.
That'd be a fun job. AI finder general.
Re: (Score:2)
You mean Rick Deckard, right?
(Eyeing you suspiciously) and how do YOU know so much about Replicants, eh? Hands where I can see 'em, pal!
awww, isn't that cute (Score:3, Insightful)
If we can't have anything close to this before AI, what dream world do you live in to apply this TO AI?
Reading Comprehension Failure (Score:1)
What part of "European Commission" did you miss?
Re: (Score:2)
"Citizens should have full control over their own data" - now show me one single US entity (Corporation or Government for that matter) that will abide by such a concept. If we can't have anything close to this before AI, what dream world do you live in to apply this TO AI?
You forgot to mention the rest of the friggin world, dood!
Or is it just the US that triggers your spittle flecked rage?
Unnecessary (Score:3)
Ethics, like anything of value, should be sold to the highest bidder.
So No AI in the EU then? (Score:3)
Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."
This seems to be a weird EU version of don't decrease human liberty. Whether a specific AI system obeys this is something you can never get everyone to agree upon. To some, self-driving cars without steering wheels violate this principle. Not sure I agree with that but some will see it that way.
Robustness and safety: "Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems."
Most software systems don't do this even for predictable, deterministic tasks. All ML algorithms have a built-in error rate. It's part of the math. Hell, most humans aren't capable of this when performing many AI tasks.
Privacy and data governance: "Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them."
Would be nice. Needs open standards to work. Won't happen because politicians, lawyers and CEOs don't understand software.
Transparency: "The traceability of AI systems should be ensured."
No ML or RL algorithm in common use does this.
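In practice, traceability is approximated with an audit trail rather than explainable internals. A minimal sketch (field names are invented; a real system would use append-only, durable storage) of logging every prediction with enough context to trace it later:

```python
import hashlib
import json
import time

AUDIT_LOG = []  # stand-in for append-only, durable storage

def predict_with_audit(model_version, features, predict_fn):
    # Record the model version, an input fingerprint, and the output,
    # so any individual decision can later be traced and reproduced.
    output = predict_fn(features)
    AUDIT_LOG.append({
        "ts": time.time(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    })
    return output

result = predict_with_audit("v1.2", {"x": 3}, lambda f: f["x"] * 2)
```

This gives you "which model, which input, which output" -- not "why" -- which may be all the traceability requirement can realistically mean for today's models.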
Diversity, non-discrimination and fairness: "AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility."
If the dataset is biased, then the prediction will be biased. It's math. Don't blame AI when it's just reflecting human society. And AI can't fix human society either; that's our job. Quit blaming technology for human failings.
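As a toy illustration of "biased data in, biased predictions out": a quick check of historical selection rates per group (records and group labels invented). A model trained to mimic these approvals would inherit the gap:

```python
# Toy labelled history: (group, approved). Invented for illustration.
records = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),   # group A: 3/4 approved
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),   # group B: 1/4 approved
]

def selection_rates(rows):
    # Approval rate per group = approvals / total applications.
    totals, approvals = {}, {}
    for group, approved in rows:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    return {g: approvals[g] / totals[g] for g in totals}

rates = selection_rates(records)
# "Four-fifths rule" style check: ratio of lowest to highest rate.
disparity = min(rates.values()) / max(rates.values())
```

The point stands either way: the disparity is in the records before any model is trained on them.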
Societal and environmental well-being: "AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility."
Eh, so no use of AI for companies in finance. I'm sure that will have a positive effect on your banks in the global financial markets.
Accountability: "Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes."
Finally, a good one. But it should be extended: anyone who sells or rents software should be responsible for its quality and for losses incurred from its problems. That's a better one...
This entire framework judges AI incorrectly. The standard isn't perfection; it's doing better than the humans hired to do the job. It would be better if the same standards applied to AI systems and to humans alike. That reaches the EU's goal with fewer lawyers.
Re: (Score:2)
It's mainly a crappy wishlist. Let me make another:
1. Pigs should fly.
2. Gravity should never lead to humans dying.
3. Shit should smell like flowers and should be sterile.
Having said that, the idea was that it is a starting point, i.e. a very, very high bar. FTFA:
"(i) Starting in June 2019, all stakeholders and individuals will be invited to test the assessment list and provide feedback on how to improve it. In addition, the AI high- level expert group will set up an in-depth review with stakeholders from
Are you using the tool or is it using you? (Score:2)
Human agency and oversight: "AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy."
Exactly what I've said before: Robots, so-called 'AI' (such as it is, LOL), and similar are tools, fundamentally no different than a shovel or a hammer. As with all tools, we create them to serve us, not the other way around. If these tools are somehow misused or perverted in such a way that violates that fundamental truth, then something must be done to rectify the situation; either establish rules concerning the tool or class of tools in question, or get rid of the tool or class of tools entirely. In a ve
Replace "AI Systems" with "A person" (Score:1)
Replace "AI Systems" with "A person" and re-read. Once people can do that there won't be a need for another set of rules just for AI. But, yeah, that's not gonna happen.
Re: AI Transparency requirement (Score:2)
Level 1: Explain it to me like I'm 5 or wear a MAGA hat.
Level 2: Explain it to me like I'm the average regulatory enforcement bureaucrat.
Level 5: Explain it to me like I'm Geoffrey Hinton
Fundamental rights in the EU (Score:4, Insightful)
A German AI will report on all interest in German art, culture and German history.
That French AI will be interested in political memes, cartoons and protests.
Robustness: EU police spyware to keep working on tracking users.
Data governance: Censorship.
Transparency: EU nations can see all your data use.
Diversity: Lots of illegal immigrants.
Societal and environmental well-being: Reporting of EU users to their nations police when they use the internet in the wrong political way.
Accountability: The resulting police interviews after an AI reports a user to the gov.
Trustworthy: No using an EU funded AI project to support any nation's attempt to exit the EU.
Consultancy: More tax payers money.
Feel good Fluff (Score:2)
That's some majorly fluffy bullshit. Vague, undefined, unenforceable, mystic...
How about we swap out "AI" for "decisions made by politicians and CEOs" and see how far that gets us?