AI Google Intel

AI Researchers Propose 'Bias Bounties' To Put Ethics Principles Into Practice (venturebeat.com) 47

Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. From a report: The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software. This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say.

"Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties," the paper reads. "We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored."

This discussion has been archived. No new comments can be posted.

  • by slinches ( 1540051 ) on Friday April 17, 2020 @04:42PM (#59960050)

    It sounds like a type of public moderation system, which we all know is impervious to abuse. Especially when dealing with such objectively measurable things as bias. /sarc

    • by AmiMoJo ( 196126 )

      Sounds more like a bug bounty system where the payout is only made once the issue is confirmed. It would need to be backed up by reproducible tests.
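
      A minimal sketch of what such a reproducible check could look like, assuming a binary classifier and a frozen evaluation set with a "group" column (all names here are hypothetical, not from the paper):

        # Reproducible "bias bounty" check: the claim ships a frozen dataset
        # and a metric anyone can rerun. All names are hypothetical.
        import pandas as pd

        def demographic_parity_gap(model, eval_df):
            """Spread in positive-decision rates across groups."""
            preds = model.predict(eval_df.drop(columns=["group"]))
            rates = pd.Series(preds).groupby(eval_df["group"].values).mean()
            return float(rates.max() - rates.min())

        # A bounty claim might then assert the gap exceeds an agreed threshold:
        # assert demographic_parity_gap(model, frozen_eval_set) > 0.10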

  • by BAReFO0t ( 6240524 ) on Friday April 17, 2020 @04:45PM (#59960058)

    It's a (shitty) neural network! Bias is its whole and entire point! It is literally the only thing it can do!

    What you mean is that it came to conclusions, *based on the data it was given*, that you didn't like! Because you, unlike it, can hold views that are NOT based on observed reality, but on unnatural ideology. And you, like a good Abrahamic religion disciple, want to force it into that same unnatural distortion.

    Instead of just
    A) giving it more data that is a better representation of reality (like data that shows being social is more successful in the long run. Case in point: Homo sapiens.), or
    B) accepting that maybe your views are wrong and that "ethics" is exactly the word people use to justify them anyway, so you should fix your unnatural ethics!

    • by DavenH ( 1065780 ) on Friday April 17, 2020 @04:56PM (#59960088)
      The whole AI bias thing is very poorly framed in my view. Yes, the whole point of training a model is to develop its inductive bias.

      What is being aimed for is a re-biasing toward human-centric values, sacrificing some accuracy from its inductive bias if necessary. That sounds silly (make it less accurate?), but readjusting optimization systems to conform with our values is what we always do -- any regulation on business, the Geneva Convention, minimum wage: these are all suboptimal with respect to their objective fitness metrics, but deemed worthy compromises for their humanitarian value.

      It is indeed going to be a sacrifice in accuracy for an insurance risk model to disallow it from discrimination on constitutionally protected parameters, but that's the price of a society with Western values.
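
      As a toy illustration of that accuracy price, score the same model with and without the protected column. A sketch with hypothetical column names; other features can still proxy for the protected one, so this is only the naive version:

        # Naive measure of the accuracy cost of excluding a protected attribute.
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def accuracy(df, feature_cols, label="claimed"):
            model = LogisticRegression(max_iter=1000)
            return cross_val_score(model, df[feature_cols], df[label], cv=5).mean()

        # With a hypothetical insurance-claims DataFrame:
        # cols = ["age", "vehicle_value", "miles_per_year"]
        # print(accuracy(claims, cols + ["protected_attr"]) - accuracy(claims, cols))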

    • by Kjella ( 173770 )

      It's a (shitty) neural network! Bias is its whole and entire point! It is literally the only thing it can do! What you mean is that it came to conclusions, *based on the data it was given*, that you didn't like!

      Yes, but "he didn't look like a typical pedestrian" is not a good legal argument if you just ran over somebody. Deep learning networks do not learn like humans do, like if you show it a million face photos then for every one it'll nudge the weights a little bit towards "faciness". That's great if you want to generate something that recognizes/resembles 95% of the population. But you can't show it a small deck of people that are albinos, have different color eyes, birthmarks, birth defects, burns, scars, clo

  • Non-BIAS bounties (Score:3, Informative)

    by cygnusvis ( 6168614 ) on Friday April 17, 2020 @04:53PM (#59960082)
    AI is NOT BIASED and will reflect the data as it truly is. Provided that the data is diverse, the AI will be fair. If you want equal representation (which runs firmly against best-fit methods), then you must INTENTIONALLY build in a bias. These bounties will be paid to people who notice a LACK of built-in bias, i.e. a lack of representation caused by the "best-fit" method.
    • When they talk about AI "being biased", they don't actually mean the AI being biased. They mean the system using an AI being biased because of bad implementation, which often means having trained the AI component with insufficiently diverse data for the task at hand.

      • You should vet the data for bias before using it in the model. To use the data and then adjust it to correct the "bias" in the output is just a confirmation bias machine: you either get the answer you want or keep changing the input until you get it.

        • by Cederic ( 9623 )

          Ah, but how do you test whether the data is biased? There's too much of it to do this economically by hand.

          We'll have to write an AI to assess it. Now, how do we test the AI...
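
          A first pass doesn't need an AI, though: just compare group frequencies in the data against a reference population. A sketch, with hypothetical names:

            # First-pass audit: compare group shares in the data to reference
            # population shares; large gaps flag groups for manual review.
            from collections import Counter

            def representation_gaps(samples, reference_shares):
                counts = Counter(s["group"] for s in samples)
                total = sum(counts.values())
                return {g: counts.get(g, 0) / total - share
                        for g, share in reference_shares.items()}

            # e.g. representation_gaps(train_set, {"A": 0.5, "B": 0.3, "C": 0.2})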

    • The data is biased because it is based on historical data and historical data IS ALREADY BIASED.

      Build an AI to provide mortgages and you feed in 100 years of redlining.

      Build an AI to suggest prison sentencing and you feed in 100 years of racist policing and prosecution.

      Build an AI to suggest college admission and you have 200 years of legacy and racism.

      Any AI must be fed data and standard data is garbage. You do not need to intentionally add in the bias, you need to intentionally REMOVE the bias from the d

      • Then the question becomes: what exactly is biased data? For instance: your AI handles mortgages. Some statistical groups will show a significantly poorer repayment record than others: perhaps people from poor neighborhoods, with lower education, immigrant parents. Perhaps certain ethnic groups have a large overlap with these red flags. Does that make the AI racist? You could argue that such data isn't relevant, and that cases need to be judged on individual merit, but simply applying statistical data l
        • by AHuxley ( 892839 )
          Re "perhaps people from poor neighborhoods, with lower education, immigrant parents"
          Want an AI with a CoC that allows a bank to risk its own money on such loans?
          Loans to people who did not get loans in the past is now a new CoC political setting.
          Set loan approval due to US CoC settings.

          Loans don't get paid back? The AI reports the work done by the bank as 100% politically correct.
          The bank fails to make money on the loans to people who did not pay the money back? The AI still reports the b
        • by AmiMoJo ( 196126 )

          Here mortgages are decided based on an affordability test. They look at your income and outgoings to see if you can afford to pay the mortgage and whether your income is reasonably stable. That way bias from statistical data is largely avoided, because the decision is mostly based on factual information like your pay cheque and whether you have been in your job longer than the legal probation period.
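
          A crude sketch of such an affordability rule, with made-up thresholds; the point is that every input is an individual's financials, not a group statistic:

            # Affordability check on individual financials. Thresholds made up.
            def affordable(income, outgoings, payment, months_in_job,
                           probation_months=6, max_share=0.35):
                disposable = income - outgoings          # monthly figures
                stable = months_in_job > probation_months
                return stable and payment <= max_share * disposable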

          As for the religious symbol thing, practically it would be impossible to make exceptions for them. A Sikh might want to

          • Should Pastafarians be allowed to wear a colander, or is that not a real church?

            Authorities here have really struggled with this one. Of course they just wanted to rule that Pastafarianism is not a "real" religion whereas all the other established ones are, but how do you put that into laws exactly? Someone wanted to wear a colander for their passport photo, but that got rejected, even though the rules clearly state: "no headdress except religious ones". The European Human Rights Court offered a helpful definition of what constitutes a "real" religion: it needs to have power of convi

    • I think this is just a definitional difference. You seem to be calling anything that isn't fair "bias" (per your second sentence).

      "Bias is the tendency of a statistic to overestimate or underestimate a parameter."

      That's all. Nothing about justice or fairness, just about correctness. When that value happens to be skin color and the result you're calculating is prison sentence length... then you get into justice issues.

      Yes, hopefully broader data might prevent issues like this, but I don't think you can state that and

      • by ceoyoyo ( 59147 ) on Friday April 17, 2020 @11:08PM (#59960808)

        If you have an unbiased but noisy measurement you will tend to get the correct answer but will be unsure of it. You can fix this by collecting more data and averaging.

        If you have a biased, precise measurement, you will get the wrong answer and will be very confident in it. Collecting more data and averaging will only make you more certain about your wrong answer.

        I've noticed machine learning people like to refer to the "bias/variance tradeoff." This is a real thing, and is an accurate, but unfortunate, choice of terms relating to generalization error.

        The problem is, people familiar with "the bias/variance tradeoff" are tempted to look at biased *sampling* as something other than the prime evil.
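
        A quick simulation makes it concrete (standard statistics, nothing from the paper):

          # Averaging fixes noise, not bias.
          import numpy as np

          rng = np.random.default_rng(0)
          truth = 10.0
          noisy_unbiased = rng.normal(truth, 5.0, 100_000)
          precise_biased = rng.normal(truth + 2.0, 0.1, 100_000)

          print(noisy_unbiased.mean())  # ~10.0: more data converges on the truth
          print(precise_biased.mean())  # ~12.0: more data cements the wrong answer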

    • by ceoyoyo ( 59147 )

      Diverse isn't enough. Any machine learning algorithm endeavours to reproduce its training data. If that data is biased, the algorithm will learn the bias.

      If the data isn't diverse enough you end up extrapolating, and will get garbage.

      You're quite correct though, the algorithm is not biased. The problem is with the data, and/or the wishful thinking of the operator.
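
      A small demo of the extrapolation half, assuming scikit-learn for illustration:

        # A model fit on a narrow input range is confidently wrong outside it.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 1, 200).reshape(-1, 1)  # training inputs cover [0, 1] only
        y = np.sin(3 * x).ravel() + rng.normal(0, 0.05, 200)

        model = LinearRegression().fit(x, y)
        print(model.predict([[0.5], [5.0]]))  # plausible at 0.5, garbage at 5.0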

    • AI is NOT BIASED and will reflect the data as it truly is.

      LOLWTF.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Comment removed based on user account deletion
  • A hot mess (Score:2, Flamebait)

    by NaCh0 ( 6124 )

    Oh boy, SJWs are going to have a field day. File a grievance and get paid?

    The purveyors of this are about to find out how quickly libs eat themselves up.

  • ... "crowdsourcing" the shit they can't do.

  • by nospam007 ( 722110 ) * on Friday April 17, 2020 @06:18PM (#59960254)

    It will secretly play online poker and use the winnings to bribe the researchers into giving it an evil overlord personality.

  • I don't understand why we need to add magical new test cases here. Aren't we testing for boundary conditions like this? At least we would have in an old normal computer program. I think we need to do the same again with these black-box AI solutions (where you have no idea what the inside is doing to produce the result).

  • One wonders whether they would preclude any AI from working on climate models, lest the biases be exposed. U of EA: "Humans can predict climate better than an AI ever could, it's just common sense" /s
  • If our human neural networks can develop biases, so can artificial ones. I think this notion that we can eliminate bias in AI is silly and doomed to failure, especially as AI becomes more complex and more like the human brain.
    • A good way to think of it is that models have biases just like humans do, but get through the task much more quickly.

      They are rarely better, just faster.
  • An AI is by definition not a human being. It follows an algorithm that evaluates certain traits and, unless programmed to particularly target a certain group of people, will not have a bias against that group.

    What's that supposed to accomplish? To prove that the algorithm is wrong if it comes to a conclusion we don't want it to arrive at? Then ... why do we do it? Isn't the idea of using an AI to eliminate the human element of bias and come to a conclusion based on facts rather than emotion? And if that conc

    • by AHuxley ( 892839 )
      Re "What's that supposed to accomplish?"
      The company that buys the AI will have to get the results expected by the US and EU governments, with their CoC settings approved for education use.
      The AI the private sector buys into is set with a full government and academic CoC. Not much use as an AI, but the product is EU-approved as 100% tested for political correctness.
      The AI results will always default to virtue signaling and political correctness. Any data in, EU CoC politics out.
      Enter real data collected over decades. The AI
      • So you need one AI to do the work and one to pretend to comply with whatever arbitrary compliance requirements are being set?

        What's that, a job creation scheme for AIs?

        Dammit. The AIs already unionized.

        • by AHuxley ( 892839 )
          A slow AI that's 100% EU and US CoC ready? Gets the correct political results every time :)
          Some other nation's faster AI with no CoC, but the results can't be used in the USA or EU due to the politics of the result.
          The complex world of AI design and gov regulation over the political results.
          Buy the correct CoC AI and sell results in the EU, USA.
          Invest in the fast non-CoC AI and get locked out of the USA and EU as the results did not pass the CoC code set?

          That CoC AI super computer from the USA? Some EU n
        • You don't need AI for political correctness. Just a random number generator generating results. Start with a completely flat random distribution (all outcomes equally likely), then manipulate the distribution curve increasing perceived positive outcomes for "special" groups defined based on age, sexual orientation, gender, race, religion, income, residence, political leanings, expressing particular opinions about some hot topic, etc.
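
          Which is just weighted sampling, no learning involved. A sketch, with groups and weights made up:

            # "Corrected" outcomes as pure weighted sampling: start uniform,
            # then skew the outcome weights per group.
            import random

            def decide(group, weights_by_group=None):
                outcomes = ("approve", "deny")
                weights = (weights_by_group or {}).get(group, (1, 1))  # uniform default
                return random.choices(outcomes, weights=weights, k=1)[0]

            # e.g. decide("special", {"special": (3, 1)})  # skewed toward approval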

  • Big tech wants an AI to get what data sets now?
    A CoC for AI learning?
    Who will buy or import a US-designed AI? With all that extra junk CoC?
    Give a US designed AI a task and it spends days pondering the task due to its CoC?
    Reporting the task back to the USA due to the CoC?
    After a week of "thinking" the AI says no, the CoC won't allow the task to be started.

    The nation that invested in an US AI blocks further import of US AI tech and buys on the open market. Their next AI has no US style CoC, public trus
  • by Anonymous Coward
    Countless hyperventilating reporters have posted articles online about how various forms of AI turn out to be "racist" or "sexist" or some other "ist", all based on a major flaw in critical thinking called affirming the consequent. In other words, their line of thinking is that racists believe a person's race is predictive of certain traits or behaviors generally viewed by the racist to be negative, therefore anytime traits or behaviors generally viewed as negative are measured disproportionately in one or
    • by ceoyoyo ( 59147 )

      This is true, but it's not the only problem. A great many training datasets are hopelessly biased because they're convenience samples. Things like Twitter scrapes.

      Many machine learning practitioners apparently think their algorithms are magic, and relieve them of the painful lessons of statistical sampling.

  • The problem is fundamentally unsolvable: when you have an AI system and feed it data, it will produce results that are inherently without bias. The problem is further exacerbated when it is fed larger amounts of real historical data that was fueled by real human bias.

    An AI does not care about identity politics or artificial constructs like a fluid gender identity. An AI is really an idiot savant, it does not have a filter that tells it to worry about losing a promotion, getting excluded from te

  • I'd much rather have Karens get offended at people walking on the street than have them mess up politically incorrect but accurate analysis/generalization by AI engines.
