AI Researchers Propose 'Bias Bounties' To Put Ethics Principles Into Practice (venturebeat.com)
Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. From a report: The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software. This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say.
"Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties," the paper reads. "We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored."
"Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties," the paper reads. "We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored."
I propose a more ethical form of bounty (Score:1)
Re: (Score:1)
Murdering people, with betting on said murders, as "more ethical"...alrighty then.
Unintended Consequences (Score:4, Insightful)
It sounds like a type of public moderation system, which we all know is impervious to abuse. Especially when dealing with such objectively measurable things as bias. /sarc
Re: (Score:2)
Sounds more like a bug bounty system where the payout is only made once the issue is confirmed. It would need to be backed up by reproducible tests.
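For example, a confirmed payout could hinge on a check anyone can rerun. A minimal sketch of what such a check might look like, assuming a hypothetical classifier with a predict() method, a labeled protected-group array, and a made-up payout threshold (none of this is from the paper):

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-decision rates between two groups.
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def bounty_report(model, X, group, threshold=0.05):
    # Hypothetical reviewer script: the bounty counts as confirmed only if
    # the gap exceeds the (illustrative) threshold on a held-out set.
    y_pred = np.asarray(model.predict(X))
    gap = demographic_parity_gap(y_pred, np.asarray(group))
    return {"parity_gap": float(gap), "confirmed": bool(gap > threshold)}

The point is only that the trigger is reproducible; the actual metric and the bar would be whatever the bounty program specifies.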
LOOOL! "finding bias in AI[sic]"! (Score:4, Informative)
It's a (shitty) neural network! Bias is its whole and entire point! It is literally the only thing it can do!
What you mean is that it came to conclusions, *based on the data it was given*, that you didn't like! Because you, unlike it, can hold views that are NOT based on observed reality, but on unnatural ideology. And you, like a good Abrahamic religion disciple, want to force it into that same unnatural distortion.
Instead of just
A) giving it more data that is a better representation of reality (like data that shows being social is more successful in the long run. Case in point: Homo sapiens.), or
B) accepting that maybe your views are wrong and "ethics" is exactly the word people use to justify them anyway, so you should fix your unnatural ethics!
Re:LOOOL! "finding bias in AI[sic]"! (Score:4, Insightful)
What is being aimed for is a re-biasing toward human-centric values, sacrificing some accuracy from the model's inductive bias if necessary. That sounds silly (make it less accurate?), but readjusting optimization systems to conform with our values is what we always do: any regulation on business, the Geneva Convention, the minimum wage are all suboptimal with respect to their objective fitness metrics, but deemed worthy compromises for their humanitarian value.
It will indeed cost an insurance risk model some accuracy to disallow it from discriminating on constitutionally protected attributes, but that's the price of a society with Western values.
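The crudest version of that trade-off is simply refusing to let the model see the protected columns at all. A rough sketch, assuming pandas and scikit-learn and using made-up column names, not any particular insurer's pipeline:

import pandas as pd
from sklearn.linear_model import LogisticRegression

PROTECTED = {"race", "religion", "sex", "national_origin"}  # illustrative list

def fit_risk_model(df: pd.DataFrame, label: str = "claim"):
    # Train only on columns that are not protected attributes.
    features = [c for c in df.columns if c != label and c not in PROTECTED]
    model = LogisticRegression(max_iter=1000).fit(df[features], df[label])
    return model, features

This usually does cost some accuracy, and it still leaves proxy variables (zip code, occupation) in place, which is why the fairness literature goes well beyond just dropping columns.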
Re: (Score:2)
It's a (shitty) neural network! Bias is its whole and entire point! It is literally the only thing it can do! What you mean is that it came to conclusions, *based on the data it was given*, that you didn't like!
Yes, but "he didn't look like a typical pedestrian" is not a good legal argument if you just ran over somebody. Deep learning networks do not learn like humans do, like if you show it a million face photos then for every one it'll nudge the weights a little bit towards "faciness". That's great if you want to generate something that recognizes/resembles 95% of the population. But you can't show it a small deck of people that are albinos, have different color eyes, birthmarks, birth defects, burns, scars, clo
Non-BIAS bounties (Score:3, Informative)
Re: (Score:1)
So the AI as sold has to be CoC and bias aware?
That's for the gov, mil, or company buying the AI to set up and work out in their own time.
The AI they buy can't get useful results? That's the data set selected by the owner of the AI.
Buy/rent/find/create a better data set? Expect the CoC AI to come with a CoC to correct the data given?
All the AI can do is change given data and output junk political results?
Want to rent, buy, pay for, import, upgra
Re: (Score:3)
When they talk about AI "being biased", they don't actually mean the AI being biased. They mean the system using an AI being biased because of bad implementation, which often means having trained the AI component with insufficiently diverse data for the task at hand.
Re: (Score:2)
You should vet the data for bias before using it in the model. Using data and then adjusting it to correct the "bias" in the output is just a confirmation-bias machine: you either get the answer you want or keep changing the input until you do.
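A vetting pass can be as simple as comparing base rates across groups before any model is trained. A toy sketch with placeholder column names, nothing more:

import pandas as pd

def audit(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
    # Positive-label rate and sample count per group, versus the overall rate.
    summary = df.groupby(group_col)[label_col].agg(["mean", "count"])
    summary["rate_vs_overall"] = summary["mean"] - df[label_col].mean()
    return summary

That doesn't tell you whether a gap is legitimate or a historical artifact, but at least it gets examined before training instead of being patched afterwards.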
Re: (Score:2)
Ah, but how do you test whether the data is biased? There's too much to do this economically by hand.
We'll have to write an AI to assess it. Now, how do we test the AI...
Ignorance on your part (Score:3)
The data is biased because it is based on historical data and historical data IS ALREADY BIASED.
Build an AI to provide mortgages and you feed in 100 years of redlining.
Build an AI to suggest prison sentencing and you feed in 100 years of racist policing and prosecution.
Build an AI to suggest college admission and you have 200 years of legacy and racism.
Any AI must be fed data, and standard data is garbage. You do not need to intentionally add in the bias, you need to intentionally REMOVE the bias from the data.
Re: (Score:3)
Re: (Score:1)
Want an AI with a CoC that allows a bank to risk its own money on such loans?
Loans to people who did not get loans in the past are now a new CoC political setting.
Set loan approval due to US CoC settings.
Loans don't get paid back? The AI reports the work done by the bank as at 100% political correctness.
The bank fails to make money on the loans to people who did not pay the money back? The AI still reports the b
Re: (Score:2)
Here, mortgages are decided based on an affordability test. They look at your income and outgoings to see if you can afford to pay the mortgage and whether your income is reasonably stable. That way bias from statistical data is largely avoided, because the decision is mostly based on factual information like your pay cheque and whether you have been in your job for longer than the legal probation period (a rough sketch of such a rule follows below).
As for the religious symbol thing, practically it would be impossible to make exceptions for them. A Sikh might want to
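For what it's worth, the affordability rule mentioned above fits in a few lines; the thresholds here are invented, not any lender's actual policy:

def affordable(monthly_income, monthly_outgoings, monthly_payment,
               months_in_job, min_tenure=6, max_payment_share=0.35):
    # Rule-based check: stable job plus a payment that fits within the surplus.
    surplus = monthly_income - monthly_outgoings
    return months_in_job >= min_tenure and monthly_payment <= max_payment_share * surplus

No protected attributes appear anywhere, which is the point of deciding on affordability rather than on statistical profiles.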
Re: (Score:3)
should Pastafarians be allowed to wear a colander or is that not a real church?
Authorities here have really struggled with this one. Of course they just wanted to rule that Pastafarianism is not a "real" religion whereas all the other established ones are, but how do you put that into law exactly? Someone wanted to wear a colander for their passport photo, but that got rejected, even though the rules clearly state: "no headdress except religious ones". The European Human Rights Court offered a helpful definition of what constitutes a "real" religion: it needs to have power of convi
Re: (Score:2)
Think this is just a definition difference. You seem to be calling bias anything not fair (by your second sentence).
"Bias is the tendency of a statistic to overestimate or underestimate a parameter."
That's all. Nothing about justice or fairness, but is about correctness. When that value happens to be skin color and the result you're calculating is prison sentence length...then you get into justice issues.
Yes, hopefully broader data might prevent issues like this, but I don't think you can state that and
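To make the definition above concrete: a sample drawn only from one subgroup will keep missing the population parameter no matter how big it gets. A toy illustration with invented numbers:

import numpy as np

rng = np.random.default_rng(0)
population = np.concatenate([rng.normal(160, 7, 50_000),    # group A
                             rng.normal(175, 7, 50_000)])   # group B
random_sample = rng.choice(population, 1_000)               # unbiased sampling
skewed_sample = rng.choice(population[:50_000], 1_000)      # group A only

print(population.mean(), random_sample.mean(), skewed_sample.mean())
# The skewed sample underestimates the parameter however large it grows.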
Re:Non-BIAS bounties (Score:4, Insightful)
If you have an unbiased but noisy measurement you will tend to get the correct answer but will be unsure of it. You can fix this by collecting more data and averaging.
If you have a biased, precise measurement, you will get the wrong answer and will be very confident in it. Collecting more data and averaging will only make you more certain about your wrong answer.
I've noticed machine learning people like to refer to the "bias/variance tradeoff." This is a real thing, and is an accurate, but unfortunate, choice of terms relating to generalization error.
The problem is, people familiar with "the bias/variance tradeoff" are tempted to look at biased *sampling* as something other than the prime evil.
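A quick simulation of the parent's point, with made-up numbers: averaging shrinks variance but leaves bias exactly where it was.

import numpy as np

rng = np.random.default_rng(1)
truth = 10.0
unbiased_noisy = rng.normal(truth, 5.0, 100_000)         # right on average, noisy
biased_precise = rng.normal(truth + 2.0, 0.1, 100_000)   # confidently wrong

for n in (10, 1_000, 100_000):
    print(n, unbiased_noisy[:n].mean(), biased_precise[:n].mean())
# More data pulls the first column toward 10.0; the second stays near 12.0.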
Re: (Score:3)
Diverse isn't enough. Any machine learning algorithm endeavours to reproduce its training data. If that data is biased, the algorithm will learn the bias.
If the data isn't diverse enough you end up extrapolating, and will get garbage.
You're quite correct though, the algorithm is not biased. The problem is with the data, and/or the wishful thinking of the operator.
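A tiny synthetic example of the extrapolation failure, assuming scikit-learn (nothing here comes from any real deployment):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 1, (500, 1))                  # training data only covers [0, 1]
y_train = np.sin(2 * np.pi * X_train[:, 0])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
print(model.predict([[0.5], [3.0], [10.0]]))
# Inside the training range the fit is fine; outside it the forest just
# repeats whatever it saw near the edge.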
Re: (Score:1)
AI is NOT BIASED and will reflect the data as it truly is.
LOLWTF.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
A hot mess (Score:2, Flamebait)
Oh boy, SJWs are going to have a field day. File a grievance and get paid?
The purveyors of this are about to find out how quickly libs eat themselves up.
AI Researchers Propose ... (Score:2)
... "crowdsourcing" the shit they can't do.
Re: (Score:2)
The AI will outbid them (Score:3)
It will secretly play online poker, and with the winnings it will bribe the researchers into giving it an evil overlord personality.
Bias testing isn't done already? (Score:1)
I don't understand why we need to add magical new test cases here. Aren't we already testing for boundary conditions like this? At least we would have in an old, normal computer program. I think we need to do the same with these black-box AI solutions (where you have no idea what the inside is doing to produce the result).
Hmm. Climate Research Included? (Score:1)
AI will always have bias (Score:1)
Re: (Score:2)
They are rarely better, just faster.
Clue me in, I don't get it. (Score:2)
An AI is by definition not a human being. It follows an algorithm that evaluates certain traits and, unless programmed to particularly target a certain group of people, will not have a bias against that group.
What's that supposed to accomplish? To prove that the algorithm is wrong if it comes to a conclusion we don't want it to arrive at? Then ... why do we do it? Isn't the idea of using an AI to eliminate the human element of bias and come to a conclusion based on facts rather than emotion? And if that conc
Re: (Score:2)
The company that buys the AI will have to get results expected by the US, EU gov and their CoC settings approved for education use.
The AI the private sector buys into is set with a full gov and academic CoC. Not much use as an AI but the product is EU approved as 100% tested for political correctness.
The AI results will always default to virtue signaling and political correctness. Any data in, CoC EU politics out.
Enter real data collected over decades. The AI
Re: (Score:2)
So you need one AI to do the work and one to pretend to be in accordance with whatever arbitrary compliance requirements are being set?
What's that, a job creation scheme for AIs?
Dammit. The AIs already unionized.
Re: (Score:1)
Some other nation's faster AI with no CoC, but the results can't be used in the USA, EU due to the politics of the result.
The complex world of AI design and gov regulation over the political results.
Buy the correct CoC AI and sell results in the EU, USA.
Invest in the fast non-CoC AI and get locked out of the USA, EU as the results did not pass the CoC code set?
That CoC AI super computer from the USA? Some EU n
Re: (Score:2)
You don't need AI for political correctness. Just a random number generator generating results. Start with a completely flat random distribution (all outcomes equally likely), then manipulate the distribution curve increasing perceived positive outcomes for "special" groups defined based on age, sexual orientation, gender, race, religion, income, residence, political leanings, expressing particular opinions about some hot topic, etc.
US tech censorship and EU laws (Score:1)
A CoC for AI learning?
Who will buy or import a US-designed AI? With all that extra junk CoC?
Give a US-designed AI a task and it spends days pondering the task due to its CoC?
Reporting the task back to the USA due to the CoC?
After a week of "thinking" the AI says no, the CoC won't allow the task to be started.
The nation that invested in a US AI blocks further import of US AI tech and buys on the open market. Their next AI has no US-style CoC, public trus
Turning the Objective into the Subjective (Score:1)
Re: (Score:3)
This is true, but it's not the only problem. A great many training datasets are hopelessly biased because they're convenience samples. Things like Twitter scrapes.
Many machine learning practitioners apparently think their algorithms are magic, and relieve them of the painful lessons of statistical sampling.
Unsolvable problem (Score:2)
The problem is fundamentally unsolvable. The problem is that when you have an AI system and feed it data, it will produce results that are inherently without bias of its own. The problem is further exacerbated when it is fed larger amounts of real historical data that was fueled by real human bias.
An AI does not care about identity politics or artificial constructs like a fluid gender identity. An AI is really an idiot savant; it does not have a filter that tells it to worry about losing a promotion, getting excluded from te
ok Karen (Score:2)
I'd much rather have Karens get offended at people walking on the street than have them mess up politically incorrect but accurate analysis/generalization by AI engines.