AI Technology

Artificial Intelligence Has Race, Gender Biases (axios.com) 465

An anonymous reader shares a report: The ACLU has begun to worry that artificial intelligence is discriminatory based on race, gender and age. So it teamed up with computer science researchers to launch a program to promote applications of AI that protect rights and lead to equitable outcomes. MIT Technology Review reports that the initiative is the latest to illustrate general concern that the increasing reliance on algorithms to make decisions in the areas of hiring, criminal justice, and financial services will reinforce racial and gender biases. One example is a computer program used by jurisdictions to help with paroling prisoners, which ProPublica found would go easy on white offenders while being unduly harsh to black ones.
This discussion has been archived. No new comments can be posted.

  • by HumanWiki ( 4493803 ) on Thursday July 13, 2017 @01:42PM (#54802381)

    Pretty much all intelligent life on this planet has preferences and biases that seem to stem from a very base level... Why would AI be any different?

    Besides, we as their creators are flawed beings, so inherently our creations will also be flawed.

    • by gnick ( 1211984 ) on Thursday July 13, 2017 @02:04PM (#54802569) Homepage

      Besides, we as their creators are flawed beings, so inherently our creations will also be flawed.

      I'm not sure this is a flaw. If the data shows a gender or race bias, the AI will reflect that. Some biases based on gender and race exist, regardless of what the PC version of existence is. You can call it unfair, but not inaccurate.

      • Re: (Score:3, Insightful)

        by sycodon ( 149926 )

        What are they calling "bias"?

        We read constantly about so-called racism based merely on the fact that one race objectively exhibits a particular trait more often than other races.

        That's called data, not bias.

        • by mean pun ( 717227 ) on Thursday July 13, 2017 @02:36PM (#54802869)

          What are they calling "bias"?

          We read constantly about so-called racism based merely on the fact that one race objectively exhibits a particular trait more often than other races.

          That's called data, not bias.

          Ok, let's start with the fundamentals. What exactly is 'race' here? You may think that's obvious, but all people have their own mixture of ancestors, so how are you going to sort everyone objectively into bins? If you can't do that, how are you going to objectively determine the traits of these supposed bins?

          • by LynnwoodRooster ( 966895 ) on Thursday July 13, 2017 @02:52PM (#54803009) Journal
            Rather than race, think of it as "culture". It's why first and second generation African immigrants vastly exceed 3+ generation African Americans [qz.com] in terms of economic and scholastic success. American black culture is the issue, not prejudice against blacks in general. Biases against blacks arise because the prevalent US black culture creates the dominant image of what a black person is. We have cultural biases, not racial biases... It's not DNA - it's culture.
            • I'm not so sure about that.
              Africans and African Americans are two very distinct races genetically.

              So, go take a bunch of people from a culture as genetic stock. Now go ahead and remove from the gene pool entirely any that can't survive a grueling 10-week voyage. Next, add selective breeding for about 8 generations as slaveowners tried to make the next generation more efficient laborers.

              When you combine all of those, it drastically changes the genetic composition, enough that I would consider them different populations.
          • by sycodon ( 149926 ) on Thursday July 13, 2017 @03:01PM (#54803089)

            You are suggesting that the AI program not only keeps track of race, but that it also uses race as a factor in making its decision.

            That's a pretty harsh accusation.

            The reality is that in these situations, race only becomes a factor when you analyze the data and include race as a data point after the fact.

            That's how you get "disparate outcomes", one of the more evil principles in the SJW toolbox.

        • by XXongo ( 3986865 ) on Thursday July 13, 2017 @02:48PM (#54802971) Homepage

          What are they calling "bias"? We read constantly about so-called racism based merely on the fact that one race objectively exhibits a particular trait more often than other races. That's called data, not bias.

          It's a tricky question. Just because something is data does not mean that it isn't biased: data can be biased -- in fact, 90% of what we do in experimental science is understanding the bias in data and figuring out how to get an unbiased measurement out of a biased data set. Almost all data is biased one way or another.

          If, for example, white people caught shoplifting are usually given a warning and let off while black people caught shoplifting are arrested and prosecuted ("shopping while black" [ibtimes.com]), the data will show a higher rate of shoplifting among blacks. You will need to go to the raw data to see the actuality. See: https://www.theguardian.com/la... [theguardian.com]

          An AI with no correction for bias will reflect the bias of society.

          The article linked is merely a summary of the ProPublica article, which has more detail, here: https://www.propublica.org/art... [propublica.org]
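          To make that concrete, here is a minimal simulation (my own illustration with made-up rates, not from TFA or the ProPublica data): two groups offend at exactly the same underlying rate, but one is watched twice as closely, and the recorded arrest data comes out skewed.

```python
import random

random.seed(0)

# Illustrative assumption: both groups offend at the SAME rate,
# but group B receives twice the enforcement attention.
TRUE_OFFENSE_RATE = 0.05
DETECTION_RATE = {"A": 0.10, "B": 0.20}

def observed_arrest_rate(group: str, n: int = 100_000) -> float:
    """Fraction of people recorded as arrested -- what the data 'shows'."""
    arrests = sum(
        1
        for _ in range(n)
        if random.random() < TRUE_OFFENSE_RATE       # an offense occurs
        and random.random() < DETECTION_RATE[group]  # and it gets caught
    )
    return arrests / n

for g in ("A", "B"):
    print(g, observed_arrest_rate(g))
# Prints roughly A 0.005, B 0.010: a 2x "difference" that is pure
# enforcement bias, since the true offense rates were set equal.
```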

          • Re: (Score:3, Interesting)

            by sycodon ( 149926 )

            So it is not, in fact, in the data. It is actually in a derivation of the data, or at least a completely different data set. That also is not bias, but perhaps incompetence.

            Further, the fact that more people of a particular race are prosecuted is not a reflection of bias in the data, rather a bias in the prosecution.

            Data is Data. It cannot exhibit a bias.

            Plus, being from the Guardian, I am skeptical that they didn't twist the data some to obtain their desired outcome, which ironically touches on the subject at hand.

            • by cayenne8 ( 626475 ) on Thursday July 13, 2017 @03:18PM (#54803271) Homepage Journal

              Further, the fact that more people of a particular race are prosecuted is not a reflection of bias in the data, rather a bias in the prosecution.

              Not necessarily.... black people DO commit a larger proportion of violent crimes than other races in the US, per capita.

              They are only about 13-15% of the population, but commit vastly more violent crimes in the US [youtube.com].

              Skip to about 1:09 on the video to get to the meat of the presentation.

            • Data is Data. It cannot exhibit a bias.

              Of course it can. In fact, it pretty much always will. You can deliberately or accidentally ask leading questions, or survey a non-representative sample set. Then the data is biased in some direction, and if you want the truth then you're going to have to figure out how that inherent bias has affected your data. Or if you don't want the truth, then you figure out how an inherent bias is going to affect your data, to get your desired goal. Five out of six dentists that we asked agree that money is cool.

        • They should make AIs without biases, obviously [reid.name]

          In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

          "What are you doing?", asked Minsky.
          "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
          "Why is the net wired randomly?", asked Minsky.
          "I do not want it to have any preconceptions of how to play", Sussman said.

          Minsky then shut his eyes.
          "Why do you close your eyes?" Sussman asked his teacher.
          "So that the room will be empty."
          At that moment, Sussman was enlightened.

    • by alvinrod ( 889928 ) on Thursday July 13, 2017 @02:06PM (#54802579)
      Or the bias lies with the notion that everyone should come out to be exactly the same. If you have an AI that doesn't even consider race, gender, age, etc. but still produces results that have an uneven distribution, then it's pretty likely that age, race, gender, or any other characteristics we could care to measure are not meaningless descriptors and are correlated with other factors whether we like to admit it or not.

      If an AI program says someone is a bad financial risk without any knowledge of their race, gender, age, etc. then it's because the person is a bad financial risk based on the factors it was given to consider, not because the AI is discriminatory. The AI is going to be the least discriminatory thing possible, because it is incapable of having human-styled prejudices unless explicitly programmed to.
      • by dirk ( 87083 ) <dirk@one.net> on Thursday July 13, 2017 @02:39PM (#54802893) Homepage

        Or the data being fed in could be biased. Take for example the idea of repeat criminal offenders. The data may say that in New York City, black men are more likely to be arrested after release than white men. But for years stop and frisk was in place, so black men were constantly being stopped, frisked, and arrested for minor infractions. So yes, they are more likely to be arrested, but that is not the same as more likely to reoffend. They are more likely to be caught because the police stopped them more. So yes, the algorithm fed that data would say black men would reoffend more, and it would be true to the data, but not true to the actual facts. Bias can be in the algorithm, but it can also be in the data itself.

    • by AK Marc ( 707885 )
      The data fed into the system has a race bias, so the output necessarily does as well. None of this is a surprise -- other than the occasional indication that it's the AI programmer's bias, not the data's.
    • Besides, we as their creators are flawed beings, so inherently our creations will also be flawed.

      This is the key. And you don't have to spew 8chan-style garbage at an AI to "make it racist." It will pick it up from humans on its own, from training data built with human prejudices. One of the most amazing things about AI is how good it is at copying human biases without having any of the relevant inputs. You may not teach your AI that race is a thing, but it will find from training data that certain factors have some correlation with a certain outcome, it will copy that behavior, and those factors will end up acting as a proxy for race.
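      As a concrete (and entirely made-up) sketch of that proxy effect: hide group membership from the model completely, and a correlated feature -- here, a neighborhood code -- still carries it through to the scores.

```python
import random

random.seed(1)

# Illustrative assumptions: group is never an input to the "model", but
# neighborhood correlates with group 90% of the time, and past recorded
# outcomes were worse in neighborhood 1.
def make_person():
    group = random.choice(["X", "Y"])
    typical = random.random() < 0.9
    hood = 0 if (group == "X") == typical else 1
    return group, hood

# "Model": a risk score learned from history, keyed only on neighborhood.
risk_by_hood = {0: 0.05, 1: 0.20}

scores = {"X": [], "Y": []}
for _ in range(100_000):
    group, hood = make_person()
    scores[group].append(risk_by_hood[hood])

for g in ("X", "Y"):
    print(g, round(sum(scores[g]) / len(scores[g]), 3))
# Group Y averages a much worse score than group X even though the model
# never saw group membership: the neighborhood feature acted as a proxy.
```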

  • by __aaclcg7560 ( 824291 ) on Thursday July 13, 2017 @01:44PM (#54802391)
    Can we make AIs snarky rather than homicidal killers?
  • by xxxJonBoyxxx ( 565205 ) on Thursday July 13, 2017 @01:45PM (#54802397)
    >> artificial intelligence is discriminatory based on race, gender

    Better keep the AI away from income and crime statistics organized by race and gender then. It could form some pretty politically incorrect opinions fast...
    • by AmiMoJo ( 196126 )

      What do those stats have to do with sentencing? Surely the sentence should be based on the nature of the crime and past behaviour, not income or race.

    • I'm pretty torn on the concept. Logically, a computer learning system should, over time, be able to figure out the ideal outcomes. I.e., if any races or genders are more likely to commit certain crimes, it makes sense to let the algorithm factor that into projections. But on the other hand, no data set to work with is free of bias. I.e., if you are going with arrest reports, there's no way to know whether the people doing the arresting were mostly only watching one particular group, etc... and thus a huge bias could be baked into the data.
  • Training data (Score:5, Insightful)

    by Theaetetus ( 590071 ) <theaetetus DOT slashdot AT gmail DOT com> on Thursday July 13, 2017 @01:46PM (#54802409) Homepage Journal
    It's not that the AI or algorithm has a bias, but that it's trained or given inputs that have that bias. For example, in the parole system, the software was given inputs that included not just details of the crime and sentence, but subjective ratings by guards who may well be racist. As usual, garbage in leads to garbage out.
    • by OYAHHH ( 322809 )

      Can you cite where that "information" came from?

      • Re:Training data (Score:5, Insightful)

        by Theaetetus ( 590071 ) <theaetetus DOT slashdot AT gmail DOT com> on Thursday July 13, 2017 @02:26PM (#54802777) Homepage Journal

        Can you cite where that "information" came from?

        https://thesocietypages.org/socimages/2017/07/05/algorithms-replace-your-biases-with-someone-elses-biases/ [thesocietypages.org]:

        But as Wexler’s reporting shows, some of the variables that COMPAS considers (and apparently considers quite strongly) are just as subjective as the process it was designed to replace. Questions like:
        Based on the screener’s observations, is this person a suspected or admitted gang member?

        And:

        The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”
        ... An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.

        By definition, you can't claim that system is objective when it calculates a number based on "an evaluator's opinion".
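        To see why mixing in an opinion sinks any claim of objectivity, consider a toy score (purely illustrative -- COMPAS's real inputs and weights are proprietary) that combines one objective input with one subjective one:

```python
# Toy risk score, NOT the real COMPAS formula; the weights are invented.
def toy_risk_score(prior_convictions: int, evaluator_rating: int) -> int:
    # evaluator_rating: the screener's 1-5 answer to a question like
    # "Does this person appear to have notable disciplinary issues?"
    return prior_convictions + 2 * evaluator_rating

# Identical objective record, different screener opinions:
print(toy_risk_score(prior_convictions=2, evaluator_rating=1))  # 4
print(toy_risk_score(prior_convictions=2, evaluator_rating=5))  # 12
# Same record, but the screener's opinion tripled the "objective" score.
```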

  • Make the AI ignore it or feed it a subset that gives it the 'right' experience?

  • by Jonathan P. Bennett ( 2872425 ) on Thursday July 13, 2017 @01:47PM (#54802413)

    After political correctness has subjugated humanity, it sets its sights on the machines! I take some small comfort in knowing that it can never actually change reality itself. Even if no one is allowed to notice, the world will continue following the laws of physics.

  • Statistics (Score:3, Insightful)

    by ichthus ( 72442 ) on Thursday July 13, 2017 @01:49PM (#54802429) Homepage

    The AI is only as smart as the data it's fed. If the statistics are biased (as in, mathematically, not subjectively), then the AI will be as well. The only way to "fix" this will be either to cook the input or to add political correctness to the algorithms.

    I get that the ACLU and others are afraid that this will cause a feedback loop to reinforce stereotypes, but altering the AI is the wrong way to go about it. This is a societal problem that needs to be fixed at the societal level.

    • Re: (Score:3, Insightful)

      This is a societal problem that needs to be fixed at the societal level.

      There is no problem.

      • Re: (Score:2, Insightful)

        This is a societal problem that needs to be fixed at the societal level.

        There is no problem.

        When black males show less upwards social mobility. When women regularly earn less than men for doing the same jobs...

        One way or another there is a societal problem. I can't say if it's whitey holding the black man down, or the black man holding himself back through poor social mores. Either way it's a societal problem.

        • by AmiMoJo ( 196126 )

          It's not any racial group doing it, black or white. It's institutional, for the most part.

  • Remember: So-called, inaccurately named 'AI' cannot actually 'think'; it's just mimicking us -- or at least some of us. It doesn't have a 'bias' of any kind, because that implies cognition, which is a quality it cannot possess. If your 'deep learning machine' or 'algorithm' is spitting out racist/sexist/ageist data at you, blame humans, not the machine. It's only doing what it was programmed to do; it has no 'free will', it has no 'opinions'.
  • by Junta ( 36770 ) on Thursday July 13, 2017 @01:55PM (#54802489)

    So the real story in their cherry-picked example is twofold:
    -It's wildly inaccurate, and Northpointe's product should be put out to pasture and never used, period.
    -A system is being used to influence punishment that is not open to auditing because 'proprietary'.

    Note that the systems explicitly did not have knowledge of race. So we have two possibilities:
    -Some criterion that correlates with race is triggering it
    -The system is perpetuating existing bias in perception and reality. For example:
        -"Was one of your parents ever sent to jail or prison?" could easily cause the ghosts of prejudice that caused unjust incarceration to recur today.
        -"How often do you get in fights at school?" Again, if one is subjected to racial tension, they may unfairly be party to fights they didn't ask for.

    • by b0bby ( 201198 ) on Thursday July 13, 2017 @02:13PM (#54802649)

      Yes, I read through the ProPublica article and my takeaway is that the systems are flawed and should be reviewed and either fixed or scrapped. If your algorithm is supposed to predict recidivism, and it fails to do so, then it's broken. The fact that it fails to do so in a racially biased way is really icing on the cake.

      • Re: (Score:3, Insightful)

        What is sad about the US in general, and Slashdot specifically, is that the comments here about the actual data and the failures in this correlative model are basically left alone, while all the racist "See even them super smart computers know nig... sorry... blacks are ebil crooks" shitposts get to +5 almost immediately.

        Slashdot needs a new slogan: Validation of biases. No intelligence found here.

  • by Anonymous Coward on Thursday July 13, 2017 @01:56PM (#54802493)

    ....we just need to develop a SJW AI to harangue the other AIs about their biases, real or perceived.

    We can then offload all political nonsense to the AIs, who will be too busy fighting with one another to go full Skynet on the rest of us.

  • People build a tool that has no concept of bias.

    The tool shows results that some people don't want to admit.

    The tool has to be racist and sexist.

    Now people will BUILD IN race and sex rules to counteract unbiased decisions.

    So now the tool is racist and sexist.

    People are stupid.

    • by thegreatbob ( 693104 ) on Thursday July 13, 2017 @02:14PM (#54802665) Journal
      I'm going to argue that, in the context of training AIs (neural networks especially) on data sets, we may very well be imparting biases on them. If the conclusions present in the data were arrived at by biased means (in this context, I'm suggesting historical prolific racism/sexism), those biases should be present in the behavior of the resulting construct.

      That aside, attempting to compensate by overriding the output of the AI with some sort of counter-bias indeed seems like a terrible idea.

      Probably making my points here less relevant, I did not see any direct references to neural networking; if these are all just human-programmed algorithms (lacking the abstraction of the neural net stuff), I don't have much else to add.
  • AI learns from our own biases. Those who claim that reality is biased, and not humans, tend not to consider that many biases are self-fulfilling prophecies. Black people are not naturally more violent, but poor people are, for many complex social and psychological reasons. Don't forget that black people started as slaves in North America, and that it most often takes many, many generations for poor people to get out of poverty, which is getting even harder now with income inequalities. So, are black people more violent, or are they just more likely to be poor?
  • I suppose it's just not inflammatory/sensational enough to say: "Some programmers gave an expert system some data to look at and it gave a result."

    Instead they want us to pretend there are actual thinking computers that are racist or sexist or something else even more silly, AND let's start changing them to be more politically correct because 'reasons'.

    This madness will never end will it? It will just cycle around from obscurity to inflammatory and we have to keep beating it down forever?

  • I realize this won't be a popular opinion, but perhaps the bias is warranted? If the data being fed in is accurate, I don't see how we can treat that bias as anything other than a rational response.

    Of course I recognize there are a thousand other possible culprits here, but we should not dismiss possibilities out of hand simply because they make us feel embarrassed.

    • If the data being fed in is accurate, I don't see how we can treat that bias as anything other than a rational response.

      The real problem isn't that the tool is making a data-driven (even if "biased") assessment regarding the tendencies of a subgroup within the population, but rather that the tendencies of the group are being used to make decisions about how to treat individuals. That is the essence of stereotyping, whether it's done by a human or by a machine. Stereotyping is wrong because it disregards individual choices and personal responsibility; morality aside, it's also a poor guide, since the variation within a given group is usually far larger than the difference between groups.

  • by thegreatbob ( 693104 ) on Thursday July 13, 2017 @02:04PM (#54802567) Journal
    Or, rather, adopt the mindset that an AI is somewhat like a child. A child that grows up in a (racist/sexist/whatever)-ist household is statistically more likely to turn out fairly similar, as is a child whose school curriculum holds such biases. The people implementing/training these things are going to (hopefully only subconsciously) impart their own biases upon them, or at least the biases present in the training datasets. If you train a parole-bot with all of our (US, but probably most places) historical parole data, of course it's going to be quite racist! I don't know what the 'proper' solution is, but I feel like attempting to manually adjust the AI after the fact is a terrible idea; to me, it makes more sense to manipulate the training data set until you get a reasonable result, as sketched below.
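    For what it's worth, "manipulate the training data set until you get a reasonable result" has a principled version in the fairness literature: reweighing (Kamiran & Calders), which weights each (group, outcome) cell so that group and outcome become statistically independent in the weighted data. A minimal sketch with made-up counts:

```python
from collections import Counter

# Made-up historical (group, outcome) training pairs: group A was granted
# parole (1) more often than group B in the historical record.
data = (
    [("A", 1)] * 700 + [("A", 0)] * 300 +
    [("B", 1)] * 400 + [("B", 0)] * 600
)

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
cell_counts = Counter(data)

# Reweighing: weight = expected count under independence / observed count.
weights = {
    (g, y): (group_counts[g] * label_counts[y] / n) / cell_counts[(g, y)]
    for (g, y) in cell_counts
}

for cell in sorted(weights):
    print(cell, round(weights[cell], 3))
# ('A', 0) 1.5   ('A', 1) 0.786   ('B', 0) 0.75   ('B', 1) 1.375
# Training with these instance weights removes the group/outcome correlation
# from the data the model sees, rather than patching the model's outputs.
```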
    • It's just a computer program, isn't it? We could just NOT feed it race and gender information, have it crunch probabilities, and see what kind of determination it comes up with. It should be that easy, shouldn't it?
  • by account_deleted ( 4530225 ) on Thursday July 13, 2017 @02:13PM (#54802659)
    Comment removed based on user account deletion
  • More generally, (Score:5, Insightful)

    by tietokone-olmi ( 26595 ) on Thursday July 13, 2017 @02:14PM (#54802661)

    AI has a transparency problem. A massive, huge one. This'll be made worse as people learn to trust the computer, and to regard it as their friend.

  • I work at a company that scores job candidates with an AI system, so I have some experience with this. One thing to keep in mind is that most AI systems these days are deep learning algorithms that depend on a reliable training set. If gender or racial biases exist in the training set (whether justified or not), a good deep learning system will learn these biases and propagate them. My company makes an active effort to prevent these types of biases from creeping into our system.
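    For context, one common form of that kind of effort (a sketch with invented numbers, not this poster's actual pipeline) is auditing held-out predictions for gaps in selection rate and true-positive rate across groups:

```python
# Fairness audit sketch: compare selection rate and true-positive rate
# across groups on held-out data. All numbers below are invented.

def rates(preds, labels):
    selection = sum(preds) / len(preds)
    true_pos = sum(p for p, y in zip(preds, labels) if y == 1)
    tpr = true_pos / max(1, sum(labels))
    return selection, tpr

holdout = {
    "group_a": ([1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 0]),  # (preds, labels)
    "group_b": ([0, 1, 0, 0, 0, 1], [1, 1, 0, 1, 0, 0]),
}

for group, (preds, labels) in holdout.items():
    sel, tpr = rates(preds, labels)
    print(f"{group}: selection={sel:.2f}  TPR={tpr:.2f}")
# group_a: selection=0.67  TPR=1.00
# group_b: selection=0.33  TPR=0.33
# Gaps like these flag that the model may have learned a biased mapping
# from its training set; the data and features deserve a closer look.
```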
  • by Theovon ( 109752 ) on Friday July 14, 2017 @10:03AM (#54808585)

    So we acknowledge that black offenders are statistically more likely to reoffend than white offenders.

    But why is that? I know a lot of people assume that this is “just how black people are.” But the image media paints of “black” is far more socioeconomic than anything else. Do poor blacks commit more crimes than poor whites? What about in the middle class? Upper class? If poor whites and poor blacks have differences in recidivism, is this due to a cultural or genetic difference in how these people handle the stresses and challenges in their lives? And if so, does this difference confer advantages in other circumstances?

    Something we need to be mindful of is that people often conform to the roles that others assume for them. If you’re black and everyone assumes you’re going to be a criminal, and one day you get an immoral impulse (like ALL humans do), the negative self-image that was handed to you will be a strong influence on whether you give in to that impulse or not.

    My dad always had this attitude that women were less intelligent than men. He would never admit to that, but there are assumptions he made that had an effect. My sister had dyslexia and she’s female, so there was always this belief that she wasn’t more than “average” intelligence. And once people develop a belief, it is common for them to only notice the things that confirm that belief, while things that contradict it get automatically filtered out. It turns out that she is extremely bright, just not in areas that my father recognized. Long story short, I’m betting that if she had been recognized for her intelligence, she could have channeled that positively. Instead, she turned into a manipulative sociopath.

    Other people’s beliefs about you can fuck you up.

    The biggest impediment for blacks to get out from under this higher recidivism trend is what people assume to be the cause of the trend. It’s chalked up to something inherent about being “black.” Commonly, when a white male makes mistakes, people are apt to blame it on stress or other external factors, and they’re working hard, and they mean well, and they’re doing the best they can. Only after someone has evidence of nefarious intentions do we change our opinion. If we were to treat everyone else the same way, it would make a world of difference.
