Artificial Intelligence Has Race, Gender Biases (axios.com) 465
An anonymous reader shares a report: The ACLU has begun to worry that artificial intelligence is discriminatory based on race, gender and age. So it teamed up with computer science researchers to launch a program to promote applications of AI that protect rights and lead to equitable outcomes. MIT Technology Review reports that the initiative is the latest to illustrate general concern that the increasing reliance on algorithms to make decisions in the areas of hiring, criminal justice, and financial services will reinforce racial and gender biases. One example is a computer program used by jurisdictions to help with paroling prisoners, which ProPublica found would go easy on white offenders while being unduly harsh to black ones.
Did anyone think it would be otherwise? (Score:5, Insightful)
Pretty much all intelligent life on this planet has preferences and biases that seem to stem from a very base level... Why would AI be any different?
Besides, we as their creators are flawed beings, so inherently our creations will also be flawed.
Re:Did anyone think it would be otherwise? (Score:5, Interesting)
Besides, we as their creators are flawed beings, so inherently our creations will also be flawed.
I'm not sure this is a flaw. If the data shows a gender or race bias, the AI will reflect that. Some biases based on gender and race exist, regardless of what the PC version of existence is. You can call it unfair, but not inaccurate.
Re: (Score:3, Insightful)
What are they calling "bias"?
We read constantly about so-called racism based merely on the fact that one race objectively exhibits a particular trait over other races.
That's called data, not bias.
Re:Did anyone think it would be otherwise? (Score:5, Interesting)
What are they calling "bias"?
We read constantly about so-called racism based merely on the fact that one race objectively exhibits a particular trait over other races.
That's called data, not bias.
Ok, let's start with the fundamentals. What exactly is 'race' here? You may think that's obvious, but all people have their own mixture of ancestors, so how are you going to sort everyone objectively into bins? If you can't do that, how are you going to objectively determine the traits of these supposed bins?
Re: (Score:3)
Africans and African Americans are two very distinct races genetically.
So, go take a bunch of people from a culture as genetic stock. Now go ahead and remove any that can't survive a grueling 10-week voyage from the gene pool entirely. Next, add selective breeding for about 8 generations as slaveowners try to have the next generation be more efficient laborers.
When you combine all of those, it drastically changes the genetic composition, enough that I would consider them different
Re:Did anyone think it would be otherwise? (Score:5, Insightful)
"So there is a genetic reason to have bias about hiring people - some people are just "born lazy and ignorant"?"
Not so much lazy and ignorant as a combination of factors. If you look at performance of individuals in western societies, factors representing success correlate pretty well with IQ, to a point. Generally, we see about 80-85% of performance being innate (genetic), while around 15-20% is environmental. We see the same thing in physical performance - no amount of work will make an Olympic athlete out of someone without the body for it.
Black culture is certainly toxic, but it's also a reflection of genetics. They feed back on each other. There has been a ridiculous amount of money spent over decades trying to solve the black-white achievement gap, yet it doesn't work. It can't work.
https://www1.udel.edu/educ/got... [udel.edu]
There are population differences between the black and white population in the US that are compounded by the effects of poverty, malnourishment, and poor education.
Poor education, culture, and poverty feed back on themselves - it takes only a single student to disrupt an educational environment, so if you have a higher percentage of special needs students (or simply disruptive ones), there will be a greater percentage of classes where it's difficult for children to learn. The ability of a school to fund smaller classrooms is a function of its funding, which is often a function of where it's located and its tax base. Poverty tends to concentrate individuals into areas where mass transit is an option, and so you get a perfect storm of a population that is already dealing with a lower mean IQ coupled with poorer education across the board.
This is also why voluntary busing can help with education, but only to a point. If you bus the non-disruptive students to better schools, they benefit from being removed from their disruptive classmates. If you bus the disruptive classmates as well, you harm the education of wherever they are bussed to.
I went to one of the former schools - black parents with above-average children who wanted their children to receive the best possible education would choose to send their children to my school. They were driven to succeed, and accountable to their families, and it did not adversely affect our education, but it helped theirs significantly.
So, no, it's not that they are born lazy, or ignorant. Those traits may be present as a class as a function of IQ, but like anything else individuals are individuals, who vary greatly. We can draw conclusions about a population, and estimate likelihood based on those conclusions, but you never really know what an individual will do until they are given the chance to do it.
Re:Did anyone think it would be otherwise? (Score:5, Interesting)
You are suggesting that the AI program not only keeps track of race, but that it also uses race as a factor in making its decision.
That's a pretty harsh accusation.
The reality is that in these situations, race only becomes a factor when you analyze the data and include race as a data point after the fact.
That's how you get "disparate outcome", one of the more evil principles in the SJW tool box.
Re:Did anyone think it would be otherwise? (Score:5, Informative)
What are they calling "bias"? We read constantly about so-called racism based merely on the fact that one race objectively exhibits a particular trait over other races. That's called data, not bias.
It's a tricky question. Just because something is data does not mean that it isn't biased: data can be biased-- in fact, 90% of what we do in experimental science is understanding the bias in data and figuring out how to get an unbiased measurement out of a biased data set. Almost all data is biased one way or another.
If, for example, white people caught shoplifting are usually given a warning and let off while black people caught shoplifting are arrested and prosecuted ("shopping while black" [ibtimes.com]), the data will show a higher rate of shoplifting among blacks. You will need to go to the raw data to see the actuality. See: https://www.theguardian.com/la... [theguardian.com]
An AI with no correction for bias will reflect the bias of society.
The article linked is merely a summary of the ProPublica article, which has more detail, here: https://www.propublica.org/art... [propublica.org]
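To make that concrete, here is a minimal sketch with invented rates (nothing to do with the real systems discussed): a model trained on *arrest* records rather than actual offending learns the enforcement pattern, not the behaviour. The fitting step is doing nothing wrong; the labels themselves carry the bias.

```python
# Minimal sketch (invented rates, not the systems from the article): a model
# trained on arrest records rather than actual offending learns the
# enforcement pattern, not the behaviour.
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.10                  # identical for both groups by construction
STOP_RATE = {"A": 0.10, "B": 0.40}        # group B is stopped four times as often

def learned_rate(group, n=100_000):
    arrests = 0
    for _ in range(n):
        offends = random.random() < TRUE_OFFENSE_RATE
        stopped = random.random() < STOP_RATE[group]
        if offends and stopped:           # an offense only enters the data if caught
            arrests += 1
    return arrests / n                    # what a naive learner sees as "risk"

for g in ("A", "B"):
    print(f"group {g}: rate in the data = {learned_rate(g):.3f} "
          f"(true offense rate = {TRUE_OFFENSE_RATE})")
```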
Re: (Score:3, Interesting)
So it is not, in fact, in the data. It is actually in a derivation of the data, or at least a completely different data set. That also is not bias, but perhaps incompetence.
Further, the fact that more people of a particular race are persecuted is not a reflection of bias in the data, rather a bias in the prosecution.
Data is Data. It cannot exhibit a bias.
Plus, being from the Guardian, I am skeptical that they didn't twist the data some to obtain their desired outcome, which ironically touches on the subje
Re:Did anyone think it would be otherwise? (Score:5, Informative)
Not necessarily....black people DO commit a larger proportion of violent crimes than other races in the US, per capita.
They are only about 13-15% of the population, but commit vastly more violent crimes in the US [youtube.com].
Skip to about 1:09 on the video to get to the meat of the presentation.
Re: (Score:3)
Data is Data. It cannot exhibit a bias.
Of course it can. In fact, it pretty much always will. You can deliberately or accidentally ask leading questions, or survey a non-representative sample set. Then the data is biased in some direction, and if you want the truth then you're going to have to figure out how that inherent bias has affected your data. Or if you don't want the truth, then you figure out how an inherent bias is going to affect your data, to get your desired goal. Five out of six dentists that we asked agree that money is cool.
Persecution (Score:5, Informative)
"Further, the fact that more people of a particular race are prosecuted is not a reflection of bias in the data, rather a bias in the prosecution."
In this case, "persecuted" was more accurate.
Data is Data. It cannot exhibit a bias.
I can only surmise that you're not an experimental scientist. Data has bias all the time.
In physics (my field) the bias usually has no social consequence-- astronomical statistics, for example, are biased toward bright stars (since they're much easier to see than faint ones, and hence overrepresented in the data set). In social "sciences," however, the bias very often does have social consequences. Children whose parents spend tens of thousands of dollars on SAT prep courses, for example-- surprise!-- score better on the SAT than children whose parents don't. The data shows a correlation of SAT score with parental income. Is this real? Better correct for the SAT-prep course effect before drawing a conclusion.
Data is biased. All the time. Be ready for it.
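A toy illustration of that bright-star selection effect, with made-up numbers: a "survey" that only detects objects above an apparent-brightness threshold overrepresents intrinsically bright ones, so the naive sample mean comes out well above the true population mean.

```python
# Toy illustration of selection bias (all numbers made up): a brightness-limited
# survey overrepresents intrinsically bright objects, so the detected-sample
# mean is biased relative to the true population mean.
import random

random.seed(1)

luminosities = [random.gauss(100.0, 30.0) for _ in range(100_000)]
distances = [random.uniform(1.0, 10.0) for _ in range(100_000)]

detected = [L for L, d in zip(luminosities, distances)
            if L / d**2 > 4.0]            # crude detection threshold on apparent brightness

print("true mean luminosity:", sum(luminosities) / len(luminosities))
print("detected-sample mean:", sum(detected) / len(detected))   # biased high
```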
...Plus, being from the Guardian, I am skeptical that they didn't twist the data some to obtain their desired outcome, which ironically touches on the subject of this story.
Huh? MIT Technology Review and ProPublica were the source. The link in the summary was this: https://www.axios.com/algorith... [axios.com] which linked here: https://www.propublica.org/art... [propublica.org] and here MIT Technology Review [technologyreview.com]
Re: (Score:3)
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
"What are you doing?", asked Minsky.
"I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
"Why is the net wired randomly?", asked Minsky.
"I do not want it to have any preconceptions of how to play", Sussman said.
Minsky then shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So that the room will be empty."
At that moment, Sussman was enlightened.
Re:Did anyone think it would be otherwise? (Score:5, Informative)
AI, like humans, makes mistakes like "correlation = causation".
AI doesn't care about "correlation == causation". It only cares about "correlation == correlation". Humans may infer causation, but that's not the fault of AI.
Re: (Score:3, Insightful)
The data is incomplete. AI, like humans, makes mistakes like "correlation = causation". The problem is, like some humans, AI doesn't understand this and can't ask for additional information or self-correct.
You're an idiot.
The AI doesn't need to understand anything. Nor does it need to ask for additional information.
It absolutely does self-correct. When it encounters data that doesn't match its model it adjusts the model. If the AI is biased to say that a certain sex is more likely to have a certain trait, then if it encounters data that says otherwise the model is adjusted.
This is why AIs have a "training" data set and a "testing" data set. You train it until it's good, then you test it on data it hasn't
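For what it's worth, here is a minimal sketch of that train/test discipline on synthetic data using scikit-learn (nothing to do with the system in the article): fit on one slice, score on a held-out slice the model has never seen.

```python
# Minimal sketch of the train/test split described above, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Caveat relevant to this thread: if the training and test slices come from
# the same biased source, a good held-out score says nothing about that bias.
```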
Re: (Score:3, Interesting)
The data is incomplete. AI, like humans, makes mistakes like "correlation = causation". The problem is, like some humans, AI doesn't understand this and can't ask for additional information or self-correct.
Very much this. Reading the ProPublica article [propublica.org] (the Axios one in the summary doesn't have anything useful except a couple of links - this being one), it's easy to see that the real complaint is that the sentencing algorithm appears to have problems with accuracy when its predictions are compared to what really happens.
Interestingly, if this article [washingtonpost.com] is correct, race is not one of the inputs into the system in question (Northpointe's Compas system).
Reading the field guide for the system here [northpointeinc.com] I was impressed
Re: (Score:3)
I find it ironic that the people who complain the most about people taking offence and trying to censor over it then use their mod points to do the same to others.
My theory is that if they accuse you of doing something, it's probably because they thought of doing it to you first.
Re:Did anyone think it would be otherwise? (Score:4, Insightful)
If an AI program says someone is a bad financial risk without any knowledge of their race, gender, age, etc. then it's because the person is a bad financial risk based on the factors it was given to consider not that the AI is discriminatory. The AI is going to be the least discriminatory thing possible, because it is incapable of having human-styled prejudices unless explicitly programmed to.
Re:Did anyone think it would be otherwise? (Score:5, Insightful)
Or the data being fed in could be biased. Take for example the idea of repeat criminal offenders. The data may say that in New York City, black men are more likely to be arrested after release than white men. But for years stop and frisk was in place, so black men were constantly being stopped, frisked, and arrested for minor infractions. So yes, they are more likely to be arrested, but that is not the same as more likely to reoffend. They are more likely to be caught because the police stopped them more. So yes, the algorithm fed that data would say black men reoffend more, and it would be true to the data, but not true to the actual facts. Bias can be in the algorithm, but it can also be in the data itself.
Re:Did anyone think it would be otherwise? (Score:5, Informative)
No, he's making a very simple argument.
You have two sets of populations. Say, hypothetically, the exact same percentage of each set carries contraband. Members of one set are stopped and frisked with no probable cause more often than the other. That set will have a higher rate of arrest for that contraband not because they are more likely to have it, but because they are more likely to be searched.
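To put made-up numbers on it: if 10% of each set carries contraband, but set A is stopped 5% of the time and set B 25% of the time, the recorded arrest rates work out to roughly 0.5% of set A and 2.5% of set B. Identical behaviour, a five-fold gap in the data.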
Re: (Score:2)
Besides, we as their creators are flawed beings, so inherently our creations will also be flawed.
This is the key. And you don't have to spew 8chan-style garbage at an AI to "make it racist." It will pick it up from humans on its own, from training data built with human prejudices. One of the most amazing things about AI is how good it is at copying human biases without having any of the relevant inputs. You may not teach your AI that race is a thing, but it will find from training data that certain factors have some correlation with a certain outcome and it will copy that behavior, and those factors wi
Re:Did anyone think it would be otherwise? (Score:5, Insightful)
Not a problem.
OP: You are 100% correct.
People look for patterns in everything, including individual and tribal behaviors and trends.
I can't really think of a stereotype that hasn't been or still is based largely on observable facts.
It makes sense that AI that uses deep learning and other methods will likely see trends too.
I mean, it should be simple for it to notice there aren't a lot of white guys on the floor with NBA teams.
I doubt any human would refute that.
So, why would it not be natural to observe the types and percentages of violent crimes committed by "X" race/gender categories?
Bias...sure, but based on facts.
So, yes...if intelligence is present (natural or artificial), it will observe these trends, and base future trends and behavior upon these observational biases.
If you have no biases, you could not operate in this world very well, as that you would wake up to a brand new world every day.
The key is to keep the biases always in a state of adjustment based on changing trends.
The problem is that the AI gets things wrong (Score:5, Informative)
The problem is not that the data set reflects the reality. The problem is not that the AI makes mistakes, but that the particular mistakes the AI makes reflect the bias of the society that programmed it.
The link in the summary is to an article which is itself a summary. From the original (here: Machine Bias There’s software used across the country to predict future criminals. And it’s biased against blacks. [propublica.org]), the software attempted to predict the probability of future offenses of criminals on probation. It did not, of course, always get it right. But when the actual percentage of re-offenses was compared to the predictions, the AI got it wrong differently for blacks than for whites. Here's what the article said.
We also turned up significant racial disparities, just as Holder feared. In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.
The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants. White defendants were mislabeled as low risk more often than black defendants.
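One way (of several) that pattern can arise, sketched with invented numbers rather than the COMPAS data: a score can be equally well calibrated for two groups and still produce very different false-positive rates when the groups' underlying reoffense rates differ. This doesn't settle whether that is what happened here; it only shows that error-rate parity and calibration can pull apart.

```python
# Invented numbers, not the COMPAS data: a score can be equally well calibrated
# for two groups and still show very different false-positive rates when the
# groups' underlying reoffense rates differ.

def false_positive_rate(n, n_flagged_high, p_reoffend_high=0.6, p_reoffend_low=0.2):
    n_low = n - n_flagged_high
    fp = n_flagged_high * (1 - p_reoffend_high)   # flagged high risk, did not reoffend
    tn = n_low * (1 - p_reoffend_low)             # flagged low risk, did not reoffend
    return fp / (fp + tn)

print("group W FPR:", round(false_positive_rate(1000, 300), 3))   # ~0.176
print("group B FPR:", round(false_positive_rate(1000, 600), 3))   # ~0.429
```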
Re: (Score:3, Informative)
I believe that the newer "deep learning" methods of training AI will address these concerns.
Sounds like just faulty programming in the article you referred to...it said this for the training of their AI:
"Northpointe's core product is a set of scores derived from 137 questions that are eith
Re: (Score:3)
Faulty programming? Clearly you've never actually done this shit, or you'd be talking about faultily curating training datasets.
But then you'd understand that doing so is really hard. The training dataset in this case is police reports, judicial summaries, etc. It reflects the biases of those humans in the system.
You'd also understand that the network has indeed ferreted out those deeper patterns: and those deeper patterns are societal racial biases. Like Dawkins's memes.
But since you clearly have not worke
Re: (Score:3)
But..in the article, it said they were NOT using race as an AI training factor....so, it wasn't racial bias being programmed in.
Re:racial bias is faulty programming (Score:4, Informative)
It's easy to provide AI with data. It's hard to make it understand the limitations and biases of that data. For example, the data shows more black people carrying illegal items, but mostly because the police stop and search them more frequently than white people.
Re:racial bias is faulty programming (Score:4, Interesting)
For example, the data shows more black people carrying illegal items, but mostly because the police stop and search them more frequently than white people.
... which is itself based on the observation that black people are more likely to carry illegal items.
This is a problem that customs deals with all the time. They discriminate in their searches because it's significantly more effective. In Canada, for example, Americans going to Whistler have their electronics searched because there is a high amount of illegal work. Americans going to Alaska are searched for guns (because they found so many).
They have non-profiling days where all selection is random, and they have mandatory times when everyone gets searched. They do this to validate their discrimination models, and waste a lot of time finding very little.
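A hedged sketch of that validation idea, with all rates invented: compare the hit rate of profiled searches against a purely random baseline. If the profiling tracks real risk, its hit rate should beat the random one; the random days are what keep the model honest.

```python
# Sketch of validating a targeted-search model against a random baseline
# (all rates and group shares are invented for illustration).
import random

random.seed(2)

CARRY_RATE = {"x": 0.02, "y": 0.08}     # assumed true contraband rates
SHARE = {"x": 0.8, "y": 0.2}            # share of travellers in each group

def hit_rate(search_prob, n=200_000):
    hits = searches = 0
    for _ in range(n):
        g = "x" if random.random() < SHARE["x"] else "y"
        if random.random() < search_prob[g]:
            searches += 1
            hits += random.random() < CARRY_RATE[g]
    return hits / searches

print("profiled hit rate:", round(hit_rate({"x": 0.05, "y": 0.50}), 3))
print("random-day hit rate:", round(hit_rate({"x": 0.10, "y": 0.10}), 3))
```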
Evidence-based policing is going to end up racist, because reality is racist.
Re:racial bias is faulty programming (Score:5, Insightful)
... which is itself based on the observation that black people are more likely to carry illegal items.
That's a circular argument. We stop more black people so we find them carrying illegal items more often, which must mean they carry more often so we should stop them more often.
Re: (Score:3)
FYI in foot and mouth disease outbreaks they routinely put up roadblocks in strategic areas and any meat is not allowed through, it's kinda li
Re:racial bias is faulty programming (Score:4, Informative)
It was determined that the program gave too much weight to the sheer number of factors counting against the person instead of looking at how bad some of the factors were. It would rather give a white guy with repeated offenses against others' sexuality a good score (because for him, only one factor looked bad, all others were OK, like steady income, no drug use, etc.) than a black man charged with theft, because he might have been a homeless school dropout, with no known siblings or caring parents.
Re:racial bias is faulty programming (Score:4, Insightful)
Indeed, I would consider racial bias to be a subset of "faulty programming."
Far from it. A system that lacked the racial bias reflected in reality would by its very nature be flawed, and racially discriminatory. It would have to be skewed in such a way that it disproportionately benefited specific populations based on their race in the interest of "not being biased".
A simple example to illustrate the point, using something that's not as polarizing as criminality:
Suppose we wanted to estimate cancer risk for individuals. As is often the case in statistics, the goal is to estimate the values of unknown attributes using known attributes.
In this hypothetical scenario, white people have double the cancer risk of black people. We've also decided that for reasons of policy that it's immoral to judge people on the basis of their skin color, whether or not that actually correlates with risk.
If we looked at basketball players (for example), we might see that white people tended to play basketball individually, and focused on activities that could be done by themselves (shooting longer distances), while black individuals tended to grow up in urban environments with busier courts, and that they would focus on shorter shot distances, and skills which would contribute better to 5 on 5 games.
If we train a model using that data, we could easily find ourselves in a situation where the average shot distance ends up correlating with one's risk of cancer, because cancer correlates with race, and race correlates with shot data. This is normal, and expected, because the underlying data itself reflects this reality.
Since blacks have higher criminality rates, and higher recidivism rates, any just risk assessment algorithm is going to end up biased against black individuals. This is true whether their increased crime rates are due to poverty, intelligence, broken families, economic inequality, bad education, increased use of welfare, take your pick.
At the end of the day, the correlation won't tell you why - just that it's there. If the risk is higher for black individuals, and it doesn't assign (on average) a higher risk for black individuals, then the algorithm is a bad algorithm, because it's been weighted in such a way that it will disproportionately favour black individuals. It's social engineering that sends people of other races to prison more often in the interest of political correctness.
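A minimal sketch of the proxy effect in the basketball example above, with all numbers invented: the model only ever sees "shot distance", never group membership, yet its risk estimates still split along group lines because the proxy carries the group information.

```python
# Proxy-variable sketch (all numbers invented): the model never sees the group
# label, but a correlated feature lets the group-linked outcome leak in anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, size=n)                        # hidden from the model

shot_distance = rng.normal(18 + 4 * group, 3, size=n)     # proxy differs by group
risk = rng.random(n) < np.where(group == 1, 0.10, 0.05)   # outcome depends on group only

model = LogisticRegression(max_iter=1000).fit(shot_distance.reshape(-1, 1), risk)
pred = model.predict_proba(shot_distance.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted risk = {pred[group == g].mean():.3f}")
```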
Not reflecting reality. (Score:3)
Indeed, I would consider racial bias to be a subset of "faulty programming."
Far from it. A system that lacked the racial bias reflected in reality would by its very nature be flawed, and racially discriminatory.
Stop right there. We're talking about different things.
You are talking about "racial bias reflected in reality", but the article I am referring to is talking about racial bias that is in the output of the AI but is not reflected in reality. The article compares the AI output with actual results and shows that the AI overpredicts that blacks will commit crimes and underpredicts that whites will commit crimes. The AI is not "reflecting reality".
The whole point is that the AI is inserting r
Re:The problem is that the AI gets things wrong (Score:4, Interesting)
the particular mistakes the AI makes reflect the bias of the society that programmed it
Except that this appears to be just speculation: Imagine if (for whatever reason) black American men in a certain situation (income, neighborhood, etc) have a 10% recidivism rate, while white men in the same situation have a 20% recidivism rate. The AI has to give a single number for both groups (since race is deliberately hidden from it), so it guesses (say) a 15% chance of re-offending. So it over-estimates the chances that a black man will re-offend while underestimating the chances for a white man - without any racial bias whatsoever.
Ironically, giving it race as an input would allow it to make more accurate predictions and appear less biased.
There's a chance I've missed something, but barring that, all this demonstrates is that people don't understand statistics and have a strong urge to explain everything as racism.
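A quick numerical restatement of that hypothetical (the group sizes are invented): with race hidden and the two groups otherwise indistinguishable to the model, the error-minimizing single prediction is the pooled rate, which overestimates the lower-rate group and underestimates the higher-rate group without any racial term anywhere.

```python
# Numerical restatement of the parent's hypothetical; the recidivism rates come
# from the comment above, the group sizes are invented for illustration.
n_black, rate_black = 400, 0.10
n_white, rate_white = 600, 0.20

pooled = (n_black * rate_black + n_white * rate_white) / (n_black + n_white)
print(f"pooled prediction: {pooled:.0%}")                          # 16%
print(f"error for the black group: {pooled - rate_black:+.0%}")    # overestimate
print(f"error for the white group: {pooled - rate_white:+.0%}")    # underestimate
```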
"Mistake" means predictions don't match results (Score:3)
mistakes
This seems unlikely. I figure it's far more likely that the AI is simply solving the wrong problem.
No, the problem is that the input data it used had invisible bias. There is an old saying in the computer industry "Garbage in, garbage out.". If the input is biased, the output will be biased.
If the AI's job is to assess the odds of recidivism, taking into account all available data, then it's neither going to go out of its way to be racist, nor go out of its way not to be racist.
What the heck is wrong with computer engineers? You guys think "oh, the problem can't be bad programming, the computer is never wrong. It has to be the user. Somehow."
No, of course it didn't "go out of its way" to be racist. It just happened that the results were racist. One easy explanation for this is that the r
Re:Did anyone think it would be otherwise? (Score:4, Interesting)
Actually, I am unaware of any women currently on any NBA rosters. Ignoring the small difference in men vs women in the population, about half of random people will have a 100% likelihood of not being on an NBA team, and about half have a 99.999% likelihood of not being on an NBA team. Those probabilities may still add up to the same thing, but practically, if I meet a random woman, black or white, I can still be absolutely certain she is not on an NBA roster.
Saw a TV ad once for a medical show about a man born without a penis getting a "bionic" one. But the blurb said "Andrew is the only person in Britain born without a penis due to a 1 in 20-million condition". I was forced to infer that women in Britain are born with penises.
That, or that people insist on using gender-neutral pronouns even when doing so leads to silliness. Similarly, sportscasters have a checkered history of referring to important "firsts" by "African-Americans" except that they sometimes aren't African-American at all...they may be actual Africans from African countries, or may be dark-skinned people born in Britain or elsewhere in Europe ("European-Africans"?).
E.g.
http://www.gelfmagazine.com/ge... [gelfmagazine.com]
What does Formula One driver Lewis Hamilton have in common with former heavyweight champ Lennox Lewis? They're both famous athletes named "Lewis," of course, but they also have the distinction of being two of the most recognizable African-Britons on the planet. What, you've never heard the term African-Briton before? Perhaps you, like certain media outlets we know, need to learn how to use the term "black."
Here's ESPN's correction after Hamilton won last weekend's Canadian Grand Prix:
"On a June 11 Mike and Mike in the Morning news update on ESPN2, Formula One driver Lewis Hamilton, the first black person to win an F-1 race, was termed an African American. He is from England."
Here's how the Charlotte Observer expressed regret:
"A story in Monday's Sports section misidentified Lewis Hamilton as Formula One's first African American driver. It should have said he is the series' first black driver. Hamilton is British."
Lennox Lewis was also regularly mislabeled, usually by columnists discussing the "African American" dominance of the heavyweight division.
Of course, it's not only athletes who have to deal with this strange combination of political correctness and geographic ignorance from American writers. Brits Naomi Campbell and Thandie Newton have both been referred to as African Americans. (Newton at least has the African part down, as she was born in Zambia.)
Maybe as punishment, the journalists should be forced to listen to a lecture on the differences between African-Americans and black people by Gary Sheffield.
Let's not make AIs too human... (Score:3, Funny)
Re:Let's not make AIs too human... (Score:4, Insightful)
Yes, a race where we attach weights to the good runner so that everybody finishes the same, no matter how hard they trained or how fast they are.
Re: (Score:3)
Harrison Bergeron [wikipedia.org] will become yet another instance of a warning becoming an instruction manual.
Re: (Score:2)
The purpose of a race is to see who is faster.
Soon the purpose will be to see who finishes at the accepted time with the most weight.
Re: (Score:2)
How about making AIs snarky homicidal killers? [theportalwiki.com]
fx(Race,Gender) = {Income, Crime} (Score:5, Insightful)
Better keep the AI away from income and crime statistics organized by race and gender then. It could form some pretty politically incorrect opinions pretty fast...
Re: (Score:3)
What do those stats have to do with sentencing? Surely the sentence should be based on the nature of the crime and past behaviour, not income or race.
Re: fx(Race,Gender) = {Income, Crime} (Score:2, Insightful)
Hey, whatever narrative you got to tell yourself to ignore black crime rates.
Or how about you go live in a random African country, tell us how much better and less oppressed life is there.
Re: fx(Race,Gender) = {Income, Crime} (Score:4, Informative)
The most prosperous parts of Africa are the parts that were the most developed during colonial times.
You better make sure no AI sees that data either.
Training data (Score:5, Insightful)
Re: (Score:3)
Can you cite where that "information" came from?
Re:Training data (Score:5, Insightful)
Can you cite where that "information" came from?
https://thesocietypages.org/socimages/2017/07/05/algorithms-replace-your-biases-with-someone-elses-biases/ [thesocietypages.org]:
But as Wexler’s reporting shows, some of the variables that COMPAS considers (and apparently considers quite strongly) are just as subjective as the process it was designed to replace. Questions like:
Based on the screener’s observations, is this person a suspected or admitted gang member?
And:
The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”
... An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.
By definition, you can't claim that system is objective when it calculates a number based on "an evaluator's opinion".
Re: (Score:2)
but subjective ratings by guards who may well be racist
A whole lot of speculating going on right there.
Well, when the system uses inputs that explicitly include guards' opinions, and then its output just happens to show a huge racial disparity that does not correspond to statistical reality, that speculation may just be right.
What if reality is biased? (Score:2)
Make the AI ignore it or feed it a subset that gives it the 'right' experience?
Political correctness for machines? (Score:3, Insightful)
After political correctness has subjugated humanity, it sets its sights on the machines! I take some small comfort in knowing that it can never actually change reality itself. Even if no one is allowed to notice, the world will continue following the laws of physics.
Re: (Score:3)
They demonstrably do not.
Statistics (Score:3, Insightful)
The AI is only as smart as the data it's fed. If the statistics are biased (as in, mathematically, not subjectively), then the AI will be as well. The only way to "fix" this will be to either cook the input, or add political correctness to the algorithms.
I get that the ACLU and others are afraid that this will cause a feedback loop to reinforce stereotypes, but altering the AI is the wrong way to go about it. This is a societal problem that needs to be fixed at the societal level.
Re: (Score:3, Insightful)
This is a societal problem that needs to be fixed at the societal level.
There is no problem.
Re: (Score:2, Insightful)
This is a societal problem that needs to be fixed at the societal level.
There is no problem.
When black males show less upwards social mobility. When women regularly earn less than men for doing the same jobs...
One way or another there is a societal problem. I can't say if it's whitey holding the black man down, or the black man holding himself back through poor social mores. Either way it's a societal problem.
Re: (Score:2)
It's not any racial group doing it, black or white. It's institutional, for the most part.
It's a reflection on us (Score:2)
Had to read pretty deep... (Score:5, Insightful)
So the real story in their cherry picked example is two fold:
-It's wildly inaccurate, and Northpointe's product should be put out to pasture and never used, period.
-A system is being used to influence punishment that is not open to auditing because 'proprietary'.
Note that the systems explicitly did not have knowledge of race. So we have two possibilities:
-Some criterion that correlates with race is triggering it
-The system is perpetuating existing bias in perception and reality. For example:
-"Was one of your parents ever sent to jail or prison?" could easily cause the ghosts of prejudice that caused unjust incarceration to recur today.
-"How often do you get in fights at school?" Again, if one is subjected to racial tension, they may unfairly be a party to fights they didn't ask for.
Re:Had to read pretty deep... (Score:5, Insightful)
Yes, I read through the ProPublica article and my takeaway is that the systems are flawed and should be reviewed and either fixed or scrapped. If your algorithm is supposed to predict recidivism, and it fails to do so, then it's broken. The fact that it fails to do so in a racially biased way is really icing on the cake.
Re: (Score:3, Insightful)
What is sad about the US in general, and Slashdot specifically, is that the comments here about the actual data and the failures in this correlative model, are basically left alone, while all the racist "See even them super smart computers know nig... sorry... blacks are ebil crooks" shitposts, get to +5 almost immediately.
Slashdot needs a new slogan: Validation of biases. No intelligence found here.
It's simple, really... (Score:5, Funny)
....we just need to develop a SJW AI to harangue the other AIs about their biases, real or perceived.
We can then offload all political nonsense to the AIs, who will be too busy fighting with one another to go full Skynet on the rest of us.
Of course it does snowflakes (Score:2, Insightful)
People build a tool that has no concept of bias.
The tool shows results that some people don't want to admit.
The tool has to be racist and sexist.
Now people will BUILD IN race and sex rules to counteract unbiased decisions.
So now the tool is racist and sexist.
People are stupid.
Re:Of course it does snowflakes (Score:5, Insightful)
That aside, attempting to compensate by overriding the output of the AI with some sort of counter-bias indeed seems like a terrible idea.
Probably making my points here less relevant, I did not see any direct references to neural networking; if these are all just human-programmed algorithms (lacking the abstraction of the neural net stuff), I don't have much else to add.
Humans are irrational, machines should be rational (Score:2)
AI buzzword of 2017 (Score:2)
I suppose it's just not inflammatory/sensational enough to say: "Some programmers gave an expert system some data to look at and it gave a result."
Instead they want us to pretend there are actual thinking computers that are racist or sexist or something else even more silly, AND let's start changing them to be more politically correct because 'reasons'.
This madness will never end will it? It will just cycle around from obscurity to inflammatory and we have to keep beating it down forever?
Warranted, maybe? (Score:2)
I realize this won't be a popular opinion, but perhaps the bias is warranted? If the data being fed in is accurate, I don't see how we can treat that bias as anything other than a rational response.
Of course I recognize there are a thousand other possible culprits here, but we should not dismiss possibilities out of hand simply because they make us feel embarrassed.
Re: (Score:2)
If the data being fed in is accurate, I don't see how we can treat that bias as anything other than a rational response.
The real problem isn't that the tool is making an data-driven (even if "biased") assessment regarding the tendencies of a subgroup within the population, but rather that the tendencies of the group are being used to make decisions about how to treat individuals. That is the essence of stereotyping, whether it's done by a human or by a machine. Stereotyping is wrong because it disregards individual choices and personal responsibility; morality aside, it's also a poor guide since the variation within a given
Think of the children! (Score:5, Insightful)
More generally, (Score:5, Insightful)
AI has a transparency problem. A massive, huge one. This'll be made worse as people learn to trust the computer, and to regard it as their friend.
AI will preserve biases from the training set (Score:2)
Self-reinforcing biases (Score:3)
So we acknowledge that black offenders are statistically more likely to reoffend than white offenders.
But why is that? I know a lot of people assume that this is “just how black people are.” But the image media paints of “black” is far more socioeconomic than anything else. Do poor blacks commit more crimes than poor whites? What about in the middle class? Upper class? If poor whites and poor blacks have differences in recidivism, is this due to a cultural or genetic difference in how these people handle the stresses and challenges in their lives? And if so, does this difference confer advantages in other circumstances?
Something we need to be mindful of is that people often conform to the roles that others assume for them. If you’re black and everyone assumes you’re going to be a criminal, and one day you get an immoral impulse (like ALL humans do), the negative self-image that was handed to you will be a strong influence over whether you give in to that impulse or not.
My dad always had this attitude that women were less intelligent than men. He would never admit to that, but there are assumptions he made that had an effect. My sister had dyslexia and she’s female, so there was always this belief that she wasn’t more than “average” intelligence. And once people develop a belief, it is common for them to only notice the things that confirm that belief, while things that contradict it get automatically filtered out. It turns out that she is extremely bright, just not in areas that my father recognized. Long story short, I’m betting that if she had been recognized for her intelligence, she could have channeled that positively. Instead, she turned into a manipulative sociopath.
Other people’s beliefs about you can fuck you up.
The biggest impediment for blacks to get out from under this higher recidivism trend is what people assume to be the cause of the trend. It’s chalked up to something inherent about being “black.” Commonly, when a white male makes mistakes, people are apt to blame it on stress or other external factors, and they’re working hard, and they mean well, and they’re doing the best they can. Only after someone has evidence of nefarious intentions do we change our opinion. If we were to treat everyone else the same way, it would make a world of difference.
Re: (Score:2, Insightful)
A woman who is good at navigating should not be denied a driving job because most women are bad at it.
We want to be a Just society, so we need a means of ensuring that we do not unfairly punish or limit people because of facts that are true of OTHER people who happen to be similar to them.
Re:Biases are reality based (Score:5, Insightful)
Sure, and that's totally fair. The issue comes when, say, 60% of JobsRequiringNavigatingSkills are men and 40% are women, and people say "this is unfair".
To be honest, though, it depends on the job. Men have, typically, much more upper body strength than women, so are more suited to being things like garbage men. Yet nobody's clamoring for equal numbers of women to be garbage *people*.
Yet they are for firefighters, even though firefighting is basically a job where you turn upper body strength into saved lives, simply because they want to be seen as "equal".
People are different and have different things they're good at and bad at. Most HR people are women even though that's a comfortable, high paid, safe job. And I'm okay with that.
Re:Biases are reality based (Score:4, Insightful)
The problem is making policy targeted at individuals based on statistical correlation of a group. We have this individualistic notion in the US at least that every person can forge their own path in life.
That narrative doesn't work when there are systemic barriers put in place pre-emptively due to statistical analysis.
Very few people deny the hard numbers that black people (in the US) commit more crimes. Or that chinese/japanese/korean (in the US, not all "asians") 1st and perhaps 2nd generation people are more academic. I haven't looked up the women and navigation statistics.
The problem comes when you take that general statistic and start making policy that target individuals. E.g. "Looking for a data analyst? Hire that asian-looking guy!"
Even worse when it comes to measures that perpetuate said statistic. E.g. "he's black, so let's assume he's guilty of a crime until proven otherwise".
Re:Biases are reality based (Score:5, Insightful)
You're jumping to the end too quickly.
Blacks are convicted of crimes more often, certainly. Does that mean they're more violent, or that they get caught more? Or that they live in worse situations than whites? Are Asians particularly good at math, or do Asian parents favour certain qualities that lead to more favourable math outcomes? Are they in more stable communities so their kids have a better opportunity to study math? Is it cultural or innate? Are women actually bad at navigating, or is it that we're less likely to take little girls out to go camping and get experience at navigating? Is that your own bias, since I've always heard that women are better at navigating?
We actually have statistics that white people just aren't convicted as often for drug offences despite having similar or higher rates of use and dealing. Based on conviction data, a machine learning system would internalise the bias that blacks are more likely to have an involvement with drugs, despite that not being true. Garbage in, garbage out, right?
http://www.dailymail.co.uk/new... [dailymail.co.uk]
http://www.huffingtonpost.ca/e... [huffingtonpost.ca]
https://www.washingtonpost.com... [washingtonpost.com]
http://www.cnn.com/2009/CRIME/... [cnn.com]
(Notice that those articles are from 2009, 2011, 2013 and 2014—this is not new data.)
So generalities are not necessarily based in reality. Indeed, your claim that 'Asians are good at math' is particularly bad since Asia is HUGE and there's no way everyone from that area of the world is good at math. And as a half-Chinese guy that's okay at math but much worse than my white partner, and who knows plenty of Chinese people that have no affinity for math at all, I feel like a lot of these generalities are based on folklore and a few selective tests that aren't really representative of ability.
The USA and Canada are not the bastions of equal opportunity that they purport to be, not for everyone. First Nations people in Canada and black people in the USA are consistently disadvantaged through broad government policy.
So all this to say that getting good, clean data for machine learning systems that remove human bias is incredibly difficult, since most humans are unwilling to admit their biases don't necessarily have a basis in reality, or are the wrong conclusions drawn from incomplete knowledge of data.
Re:Biases are reality based (Score:5, Insightful)
Blacks are convicted of crimes more often, certainly. Does that mean they're more violent, or that they get caught more? Or that they live in worse situations than whites?
It means that the first 10 times Johnny White gets caught stealing gum, he gets a warning by the shopkeeper, the next 5 times the shopkeeper calls the cops and he's taken home by the cops, then the 16th time, he's formally warned, having that be the first time there's any formal record of his misdeeds. Tyrone Brown gets charged the first time, and gets 10 years "to make an example of him".
That's why the conviction rate isn't a good statistic, the data shows that the entire system has biases.
Re: (Score:3, Informative)
It's interesting how you redirected the discussion from "violence" to "drug offenses", which are entirely different things. According to the FBI stats in 2013, there were 2,698 murders committed by blacks, and 2,755 committed by whites. When you consider that blacks only comprise 12.2% of the population, yet committed nearly as many murders as whites, who are 63.7% of the population, there is a significant tendency towards violence. Additionally, 83% of the people murdered by blacks were also black,
Re: (Score:3, Informative)
Blacks are vastly more violent per capita than Whites, as shown by the DOJ random surveys asking about crimes one has been a victim of in the past year, then asking particulars about who did it. Blacks are vastly over-represented in assaults and robberies in the US, though all felonies are also committed more often by Blacks per capita. Particularly interracial crime is overwhelmingly Black-on-White rather than the reverse, over a 25-to-1 ratio per capita. For rapes it's 95% certain to be a ratio of hundred
Re: (Score:2)
The problem is that AI needs to learn to ignore those biases like a good human does, not that there may be some statistical validity to them.
Re: (Score:2)
Even if you are right, what is your final solution?
In many areas our society has decided on a requirement for equality of outcome. If the applicant pool is 10% black, then your workforce better be about 10% black. Likewise, a criteria of the probation-recommending-AI could be racial equality, where blacks and whites are equally likely to receive probation. This will likely lead to more crimes, but that is something that many people are willing to accept to avoid discrimination.
You cannot filter on inputs, but just avoiding telling the AI the offender's r
Re: (Score:3)
Your quest for a solution in this context is misguided, and your implication that ShanghaiBill wants blacks to be mistreated is vile.
It is in nobody's best interest to deny reality.
Re: (Score:2)
In East Menlo Park, the solution was to give everyone enough money to buy a house elsewhere. Then that area next to Facebook's HQ now becomes safe enough for middle class homes to be built as well as various shops like Jack-In-The-Box.