Google Research Promotes Equality In Machine Learning, Doesn't Mention Age
An anonymous reader writes: New research from Google Brain examines the problem of 'prejudice by inference' in supervised learning -- the syndrome by which 'fairness through unawareness' can fail; for example, when the information that a loan applicant is female is not included in the data set, but gender can be inferred from other data factors which are included, such as whether the applicant is a single parent. Since 82% of single parents are female, there is a high probability that the applicant is female. The proposed framework shifts the cost of poor predictions to the decision-maker, who is responsible for investing in the accuracy of their prediction systems. Though Google Brain's proposals aim to reduce or eliminate inadvertent prejudice on the basis of race, religion or gender, it is interesting to note that they make no mention of age prejudice -- currently a subject of some interest to Google.
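Roughly, the proxy inference described above can be sketched with Bayes' rule; only the 82% single-parent figure comes from the summary, and the other numbers below are invented for illustration:

    # Sketch of 'prejudice by inference': gender removed, but recoverable from a proxy.
    p_female = 0.50                        # assumed base rate of female applicants
    p_single_given_female = 0.20           # assumed
    p_single_given_male = 0.044            # assumed, chosen so ~82% of single parents are female

    p_single = p_single_given_female * p_female + p_single_given_male * (1 - p_female)
    p_female_given_single = p_single_given_female * p_female / p_single   # Bayes' rule
    print(round(p_female_given_single, 2))   # ~0.82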
single parents != females (Score:4)
If even machines come up with measurable differences between the work performance of males and females, then I think giving them on average the same amount of money or the same promotions is discrimination. I'm all for giving a woman who performs just as well as a man the same money, but if there are additional risk factors like a pregnancy, or when the parent has to raise a child, the person usually prioritizes these things over work. So why should work not be allowed to prioritize others over that person, others who do not raise children and do not drop out for weeks and months for some work-external reason?
Re: (Score:2)
Re: (Score:2)
The problem was, this also meant a career-first woman who doesn't want to start having children is penalized because it's assumed she'll want to marry, have kids, etc.
Yes, if this is happening (and I guess it does), it is an actual bad thing and needs to be fought. Most feminists don't make this distinction though, and claim the pay gap is due to evil men hating women and wanting them to "stay in the kitchen" or something.
Re: (Score:1)
Re: (Score:2)
I'm all for giving a woman who performs just as well as a man the same money, but if there are additional risk factors like a pregnancy, or when the parent has to raise a child, the person usually prioritizes these things over work. So why should work not be allowed to prioritize others over that person, others who do not raise children and do not drop out for weeks and months for some work-external reason?
A few things here. First off, while men obviously can't get pregnant, they can do child care. (Especially after the first few months, when the average woman tends to give up on breastfeeding, if she does it at all.) Men may want to take parental leave. These days, more and more men are "stay at home dads" or interested in "paternity leave" or whatever. It's still a minority, but it's growing.
So, even if you have an anti-child policy at your company, are you going to query men you're hiring on whet
Re: (Score:2)
Personally, I think life's too short, and I have more stuff to do than work.
It's great that you have this position for yourself, which I do as well, but that doesn't mean that everyone who is working harder shouldn't be rewarded for it.
But we currently have moved toward a cutthroat environment that often rewards those who work long hours, never take vacation, sick days, or other leave, etc. Is that really the working environment you prefer?
If those people make these sacrifices, and their overall performance actually does get better, then it should only be natural to reward them. Everything else would be unfair.
Currently, well-educated "career women" tend to have some of the lowest birthrates, likely because of the feedback factors you identify. They prioritize work to get ahead, and then either wait until it's too late to have kids, or only have one or whatever.
There are even many great men who didn't have children because they didn't have the time; Nikola Tesla is an example. But this is simply the deal you have to make, raising chil
Re: (Score:2)
Even if you don't think it's a problem to discriminate based on assumptions and "potential", why should people who don't have kids reap the benefits of having younger generations existing without contributing at all?
Re: (Score:2)
Not if the differences cancel out (women perform less well in some areas and better in others), or pale in significance compared to the variation between individuals of both sexes (eg men score a 5.3 on my made up performance scale, women score 5.1, and the standard deviation for both groups is around 1.8).
Re: (Score:2)
However, what do you do if most of the people around you want a more moderate society, and they expect an economic environment that promotes working and raising a family?
Yes, but most of the reasoning used by those people who want such a society is the claim that women get less for doing the same work, or similar. They either lie about it or don't know better themselves.
And yes, I do think that each couple should decide for themselves whether they want a single-earner household, or one where both parents work, or one parent works only half time, or similar. But I think it's ridiculous to give the parent who works only half time the same money as the full-time parent just to be "fair to w
Let's start charging women more for car insurance (Score:1)
We can use the extra money to subsidise men's insurance premiums. Clearly, "prejudice by inference" is causing men to be charged too much.
I'm sure supporters of gender equality will agree with me.
Why have AI at all? (Score:5, Insightful)
Re: (Score:2)
As someone who's spent the past year studying/applying data science and machine learning, I think this is the most insightful comment posted so far.
It's incredibly effective to discriminate by education level, income level, religion, race, gender, age, home address, credit score, criminal history, and # of children. With these limited dimensions, you have an almost perfectly normally distributed cluster regardless of the topic being studied. If you do unsupervised learning on the raw data, the features will
Re: (Score:2)
That's an interesting question. Having variability in height can add a lot to the dynamics of the play. Personally I think watching high school or college sports is more interesting than watching professionals, which is deadly dull.
Re: (Score:2)
"Does anyone really want to watch a game of Basketball where the height distribution of the players perfectly mirrors the height distribution of the general population?"
No, but I want to watch a basketball game with a smart hoop that immediately adjusts its height to be proportional to the height of the player with possession of the ball.
Re: (Score:2)
It's illegal to discriminate on the basis of race, religion, or gender. We've decided that as a society, so that fewer people get bad treatment for things they can't change about themselves. (As a society, we usually treat religion as effectively unchangeable.) Further, it seems very unlikely that the differences attributed to race are actually a direct result of race, so you're using race as a proxy for other things, assuming, for example, that all blacks have certain (perhaps undefined) undesirable tr
Re: (Score:2)
The AI and its results are perfect because the smart private sector poured all its cash into that product line.
It's a bit like the final decades of East Germany, with the state saying that larger units take over all remaining smaller, dynamic areas of production.
All ability to be dynamic, to change with demand, quality, any slack or ability to ramp up was finally and fully lost.
Capitalism is about cha
You Mean Different Groups are Different?! (Score:2)
Re: (Score:2)
No, Google found out that past human racist decisions are corrupting their data pool.
Judging individuals based on group attributes (Score:4)
The problem Google is describing isn't limited to a subset of arbitrary tribal factors society deems to be off limits.
The entire reason for the existence of these systems is making prejudiced decisions about individuals based on statistical evidence.
You can spend all day filtering out the things that will get you sued or attract bad press, but this doesn't address the core fact that these systems are intended to make prejudiced judgments about individuals based on statistical experience and evidence.
Being prejudiced can be practically helpful in some contexts, but don't pretend that isn't what you're doing, don't confuse it for fairness, and don't bother making up a bunch of mystical bullshit about how your dataset or programmers are biased. Prejudice is the raison d'etre of these systems. It is what they are designed to do.
PC horseshit (Score:4)
How can there be "prejudice" if the system _does not have cognition_? It just approximates a function. If a woman is less (or more likely) to default on a loan, it'll just say so, SJWs be damned. That's why women see ads for shoes even if they never disclosed that they are women to Google. That's also why they see fewer ads for engineering positions (women are statistically much less likely to be interested in engineering fields).
It's a function approximation problem, and this happens to be the function that the real world data seems to support. Now you want to wreck it for some kind of affirmative action, thus decreasing its accuracy and driving an agenda of what you think the world should look like, rather than what it actually is.
Re: (Score:2)
that's not discrimination (Score:2)
It can be, but the concept of "gender" or "race" is meaningless to a machine learning system for loan evaluations, and it has no biases or prejudices. If a properly trained machine learning system disproportionately rejects applications of some gender or race, then that reflects an actual statistical regul
Re: (Score:2)
Except it's not that simple. The statistical reality that people in Philadelphia are less likely to have insurance means it's more likely your insurance company will have to pay money if you're not at fault. Which means that it costs more to insure a car in Philadelphia. Which means fewer
Re: (Score:2)
A bias is something preconceived, i.e., something you believe before taking data into account. If it's "baked into the data", it's not a bias, it's a rational inference based on data.
Just because you can pull an explanation like that out of your ass doesn't mean it's true. In fact, the state's no-fault law combined with the generally shitty state of Philadelphia is more likely respon
Re: (Score:2)
What makes you think I pulled it out of my ass? I got it from the American Economic Review. It's a prestigious publication, peer reviewed. My paraphrase is the accepted explanation, full stop.
Which is why Pittsburgh was also examined. It had twice as bad incidents of major automot
Re: (Score:2)
We aren't talking about what "could be" a bias, we're talking about loan applicants and loan outcomes. There is no reason to believe that racial bias is "baked into that data".
How nice. Nevertheless, the car insurance example you gave is not an example of "bias baked into the data", it's an example of a rea
Prejudices confirmed? (Score:2)
Let's say we put all available data in, sort out the crap data so the input is neutral.
Then we get exactly the prejudices out. This confirms them. Period.
This does not imply that we should support them. This only implies that they are there. People often jump to the conclusion that this implies causation, when it only implies correlation. If some places have a higher crime rate and some places have more black people (another case of ML prejudices) and the data is correct, it's the correct decision for an in
Confirmation (Score:2)
It
Re: (Score:2)
Actually, we already knew about these correlations. It's not like the magic AI found that there's a statistical difference between races that we were unaware of. The difference is not that it's easy to blame prejudice on people and not AIs, the difference is that the people-based criteria were at least supposed to be designed to ignore race, while the AIs and their fanbois just treat the correlations as holy writ.
Are you trying to tell me that someone just like me but black would have a worse chance of
Re: (Score:2)
Um, no.
AIs are largely programmed through something called Machine Learning. Guess where the data comes from that provides the machine learning?
People. Papers, blog posts, databases, written by people.
People who have prejudices.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re:Well... (Score:4)
You are greatly exaggerating the level of "intelligence" of AI. The data going into a machine learning system is typically in the exact same format as what comes out. If you have a loan application application (sorry, couldn't resist) that predicts based on marital status and children, then the only type of data going in is a long table with three columns: married (yes or no), children (yes or no) and repaid (yes or no). The AI is not going to read newspaper articles and infer all kinds of possibilities about what a marriage is. The only thing the AI knows about marital status is that status "yes" has different letters in it from status "no". The problem discussed here is that you cannot completely remove the data for "gender", as the combination of the data for "married" and "children" is not distributed uniformly across genders. Essentially, you cannot remove a bias unless all other data is completely independent of the data you want to remove.
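A minimal sketch of that last point, using invented synthetic data (and assuming numpy and scikit-learn are available): drop the gender column entirely, train only on "married" and "single parent", and the model's approvals still split along gender lines because those columns are correlated with gender.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    gender = rng.integers(0, 2, n)                      # 0 = male, 1 = female (synthetic)
    # Invented correlations: single parenthood skews heavily female, as in the summary.
    single_parent = rng.random(n) < np.where(gender == 1, 0.20, 0.04)
    married = rng.random(n) < np.where(gender == 1, 0.45, 0.55)
    # Invented outcome that depends only on marriage and single parenthood, not gender.
    repaid = rng.random(n) < (0.9 - 0.2 * single_parent + 0.05 * married)

    X = np.column_stack([married, single_parent]).astype(float)   # gender deliberately excluded
    model = LogisticRegression().fit(X, repaid)

    approve = model.predict_proba(X)[:, 1] > 0.8
    print("approval rate, men:  ", approve[gender == 0].mean())
    print("approval rate, women:", approve[gender == 1].mean())
    # The rates differ even though gender never entered the model.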
Re: (Score:2)
Which is why social research often has between 1500 and 5000 measured variables - which AI is starting to use.
I think you greatly overestimate the ability of people to come to logical conclusions.
Re: (Score:2)
Ever hear of "overfitting"? If you feed a thousand input variables into an AI, and don't have an immense amount of learning data, the model will have a lot of accidental noise, such as figuring that left-handedmales in their thirties with BAs who earn $40K-$50K and live in owner-occupied houses in urban Alabama are very bad risks for no reason anyone can discern. As a general rule, if there's many more categories than lines of learning data, there's not going to be any constraint on how it evaluates a lo
Re: (Score:2)
The problem discussed here is that you cannot completely remove the data for "gender", as the combination of the data for "married" and "children" is not distributed uniformly across genders.
Actually, the *effect* of that bias *can* be removed by first removing the bias against men in the judicial system. You don't need to create an exception for single mothers if the number of single mothers is roughly the same as the number of single fathers.
In short, this is only a "problem" in that it is revealing a bias in the data-production system (the courts).
Re: (Score:2)
You're assuming that, in the absence of judicial bias, the children would be awarded equally to father and mother. I see no reason to think this is true. It might be true if society pushed fathers to have as much to do with their kids as mothers, or something like that, but it's entirely possible that it's in the child's interest for the mother to have custody in more (or fewer) than half the individual cases.
Re: (Score:1)
I have to agree. Look at how propaganda was used on adult populations by all sides in WW2. The enemy was demonized and portrayed as sub-human, and that continues today in racist, radical religious, and sexist propaganda. Those that would spread hate put a lot of work into the task, expending much more effort than most peaceful people... who tend to just go about being good.
To assume that programmers don't have bias (e.g. "That old guy is a real dinosaur" and "That person is a diversity hire") conscious or u
Re: (Score:1)
By definition those things are without bias. You have no idea what you are talking about.
If your algorithm decides that women are less likely to repay loans and thus should be less likely to get one, or that men under the age of 30 should not be granted car insurance, it is not a success; it's a news story waiting to ruin your reputation. Irrespective of what the data says, it is a bias to any outward observer.
Re: (Score:2)
By definition those things are without bias. You have no idea what you are talking about.
If your algorithm decides that women are less likely to repay loans and thus should be less likely to get one, or that men under the age of 30 should not be granted car insurance, it is not a success; it's a news story waiting to ruin your reputation. Irrespective of what the data says, it is a bias to any outward observer.
If the algorithm makes an initial decision based on statistics, then it's doing its job correctly -- however, if it's based *solely* on those statistics and fails to account for the specific individual, then it fails. In general, men under 30 have higher rates of car accidents, but not all men do. Generalizations are not absolutes. As K said, "A person is smart. People are dumb, panicky, dangerous animals..."
Re: (Score:2)
It comes down to fact-based AI decisions clashing with society's lies.
Also, don't confuse micro vs macro. Comparing one person to their group is probably the biggest logical fallacy out there.
Re: (Score:2)
We're not talking about lies here, we're talking about decisions about how to treat people. If the AI decides that women in general are too dangerous to lend to, then no woman will get a loan, no matter how reliable and deserving, and we consider that unacceptable. I don't know what you mean by micro vs. macro, since all an AI can do is apply rough categories and determine the likely characteristics of a group.
Re: (Score:2)
Re: (Score:1)
The problem, in general, is detecting the discrimination in the first place. The article keeps the explanation on the simplistic (and legally significant) terms by framing the issue as discrimination against "protected classes".
But the AI problem of 'prejudice by inference' is not limited to the socially negative connotation of prejudice as mentioned in the article. Your AI may be discriminating in unsuspected ways that cost your hypothetical insurance company profit by overcharging a customer category that
What happens if (Score:3)
the machine learning algorithm infers a difference which is real, but uncomfortable for us socially.
Let's assume that we can prove that the detected difference was in this case NOT introduced by human-created input-data bias.
I'll give an example: I'm left handed so I think I'm allowed to talk about this.
What if the system learns that left handed people in North America die a little earlier than right handed people.
And specifically that they die with higher frequency in car accidents.
(I'm pretty sure both sta
Re: (Score:2)
So does that mean its ok to increase life insurance premiums and automobile insurance premiums for left-handed people?
Handedness is not a legally protected class, so yes, it is "ok" to charge them more, if by "ok" you mean legal.
What kind of statistically valid discrimination IS ok? Any?
Plenty of forms of discrimination are legal. There are only a few that are prohibited. For instance, my company refuses to hire tobacco smokers. That is perfectly legal. Smokers have no rights.
Re: (Score:1)
Re: (Score:2)
Unfair discrimination has proven to be very stable. Something like sixty years ago, lots of establishments got more customers by excluding the black ones than they would have if they served blacks. Empirically, allowing people to discriminate at will causes more injustice than restricting some forms of discrimination.
Re: (Score:2)
What if the system learns that left handed people in North America die a little earlier than right handed people. And specifically that they die with higher frequency in car accidents.
Honestly, I feel that I at least, as a computer scientist, am unqualified to answer this question. My scientifically oriented mind wants to yell "It's not biased, it's just data," but I understand this is an awfully naive and simplistic answer.
I'd rather leave the decision and therefore the consequences of that decision to someone who studies something more relevant like social sciences.
It would be nice, but your argument is flawed (Score:2)
There are some systems that are so complex (people going about their lives and having a chance of dying, for example) that you will never be able to predict the particular outcome for a particular individual, no matter if your computer brain is the size of a planet.
The best info we can ever get in advance about these complex systems is statistics about populations of individuals with similar characteristics in similar environments.
Re: (Score:2)
AIs do not currently have conscious or emotional biases. It is definitely possible for one to come up with an AI that has suboptimal calculations that wind up performing illegal discrimination, or just favoring one group over another with no basis.
The traditional definition of AI is the field that covers stuff we really don't know how to do. If we come up with an algorithm and apply it, it's not an AI. If it does its own learning, we're not going to be able to predict what it will come up with.
Re: (Score:2)
AIs of course have bias; they are made by biased humans. What one human considers neutral, another will call biased. For example, "affirmative action" is unfair and racist, says me.
Re: (Score:3)
I think that AIs, by definition, cannot have bias.
No. There is nothing in the "definition" of AI that prevents bias. AIs will be biased if the training data supports the bias. For instance, if the AI looks at loan default rates, it will conclude that blacks and Hispanics are worse credit risks than whites ... because they are. But discrimination in lending is still illegal even if it is supported by the facts, and even if it is determined indirectly by, say, zipcode, or given name.
Re: (Score:2)
Well, they actually do. It is not because of hatred, but because the programmers put their biases into the programs, as well as correlations with no connection to the root causation.
For example. For age discrimination.
Say you are trying to find a workforce with the longest retention rate.
So it looks at the big data and finds that people with skills in COBOL have a strong correlation to recent job losses, while C# doesn't have any strong correlation.
So this experienced developer who was working at a job fixing legac
Re: (Score:2)
Auto insurance is really cheap for 16yr olds here... you have to be 17 to drive.
Re: (Score:2)
"promoting equality" is euphemism for promoting an agenda using racism or ageism or other discrimination.
Re: (Score:2)
Re: (Score:2)
Because Donald Trump isn't president.
Re: (Score:2)
I think they gave her a medical pass due to her advancing Parkinson's Disease.....
Re: (Score:2)
Unless of course you will try to make the argument that there are loan managers out there willing to lose their job/raise/bonus/promotion in order to deny women loans and not meet their targets
Yes, that's really what the SJWs think.
Re: (Score:2)
To be fair, there used to be a practice called redlining [wikipedia.org], which was an indirect but highly effective means of overt discrimination. Now as to whether the cause for discrimination was supported by statistical history of creditworthiness (or was born of just plain hatred/bigotry/etc) is another story.
Re: (Score:2)
So if women are 2x as likely as men to default on a loan (MADE UP NUMBER, NOT BASED ON FACTS), damn right that is important to know when considering whether to give the loan and at what interest rate. This would not be sexism or bigotry or whatever else regressive fascist feminazis and SJWs would have you believe. It would be an important variable when measuring risk.
What if black people were more likely to default on a loan? Would you be OK with charging black people more than white people?
I understand what you're saying, and I understand why people might take various demographic information into account, but you (presumably) wouldn't support making random searches of black men legal, just because one in three end up in jail at some point in their life. We understand at a fundamental level that THAT is wrong.
People should be judged on their worthiness based on what they've done, not how they were born. A loan shouldn't be based on sex or colour.
Re: (Score:2)
Isn't that why the FICO score (and credit rating) was formed (that is, to provide a more objective means of reporting the creditworthiness of an individual)?
Re: (Score:3)
Yes, but being a single parent is a risk factor. You usually don't have as much time to focus on your job, etc. Or it can be the opposite: if you have a child, you want the best for them and maybe make extra sure you keep your current job, etc.
And about skin color, blacks have a higher unemployment rate than whites:
http://www.theatlantic.com/bus... [theatlantic.com]
So you are not supposed to look at employment status because from it you might infer skin color and apply racist bias? This is just totally nuts. Of c
Re: (Score:2)
Sure, you should look at employment status. It's relevant. What would not be OK is to give unemployment status undue weight because it differs between races. This is becoming more important now that we're not designing the loan criteria ourselves, but are using powerful statistical techniques to come up with predictor functions; these aren't going to be perfect, and we can't reason about the functions. If the predictor function is biased against blacks in similar situations as whites, for example, t
Rights and rationality (Score:1)
If police were not a privileged monopoly, they would owe restitution for bad searches, just like a trespasser does. But given that it is a monopoly, we try to rein its power in with rules.
The idea that the world is better or more rational by ignoring rational inferences is mistaken. Take for example the effort to "ban the box" (which means employers don't get
Re: (Score:2)
What if black people were more likely to default on a loan?
They are.
Would you be OK with charging black people more than white people?
No. Our society's top priority should not be maximizing profit for the financial industry.
Re: (Score:2)
Of course, that's not actually the issue. What actually happens is that the financial industry raises the "normal rate" enough for them to make their money. Which means that Asian-Americans (best loan risk around, in general) pay more to allow African-Americans (arguably the worst right now. Could be Hispanic-Americans are worse, though) & Anglo-A
Re: (Score:2)
I would be okay with companies charging blacks more. If we as a society consider it important that the average black gets loans at the same cost as the average white, regardless of the fact that they on average default more, then it's the government's responsibility to make up the difference.
We shouldn't force the companies into pretending insane decisions are sane, insanity is not something we should strive for.
Re: (Score:2)
I'm also perfectly alright with people who dress like thugs getting hassled more by the police BTW. Even if that is on average racist.
Just don't dress like a thug.
Re: (Score:2)
It's similar to crime statistics. If you look at the raw figures you see something like a 300% disparity based on ethnicity for certain crimes, but once y
Re: (Score:2)
What if black people were more likely to default on a loan? Would you be OK with charging black people more than white people?
I understand what you're saying, and I understand why people might take various demographic information into account, but you (presumably) wouldn't support making random searches of black men legal, just because one in three end up in jail at some point in their life. We understand at a fundamental level that THAT is wrong.
People should be judged on their worthiness based on what they've done, not how they were born. A loan shouldn't be based on sex or colour.
On a related note, why is it ok for auto insurance companies to charge men more for policies than women?
Re: (Score:2)
Re:As long as they're still allowed to use data... (Score:5, Informative)
You seem to work from the assumption that women and minorities are more likely to skip out of their bills.
You don't need to "assume" anything. You can just google the data.
Women are less likely to default on their mortgages.
Women are more likely to default on their student loans, partly because their degrees are more likely to be worthless so they earn less.
Blacks and Hispanics are more likely than whites to default on all types of loans.
Asians are less likely than whites to default.
Re: (Score:2)
Crying it's not fair doesn't help anything. You have
Re: (Score:2)
If that is the case then the AI should have that data to parse objectively and make decisions on. There is no benefit to forcing loans to be given to people who don't pay them.
No benefit? The last time banks did that, they got $1.6 trillion in bailouts from taxpayers. Ka-ching!
Re: (Score:2)
Re: (Score:2)
Speaking as someone who does know something about mathematics, statistics, and AI, I have FAR less faith than you do in the ability of the AI to magically come up with an accurate model. If we could enter every relevant variable, and the AI could know how each of these affects things, you'd have a much better argument.
Re: (Score:2)
It does actually seem like a solid application for AI, since you could work out a solid model for loan applications in a spreadsheet, containing the most relevant variables, in an afternoon. The AI is really just for fuzzy pattern matching of indicators that aren't obvious... like say any correlation with race, gender, age, and some types o
Re: (Score:2)
The model will be as objective as the training data. If the training data is loan applications and whether they were granted or denied, it will reflect the biases of the people or algorithms who made the decisions. If it is performance on loans granted, it will generally reflect those biases in reverse, since if (say) it's harder for blacks to get a loan, the loans that are granted to blacks will be on a more sound basis, and blacks will look like less of a risk. I don't see how to get unbiased training
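To make that "reversed bias" point concrete, here's a small made-up simulation (numpy assumed): both groups have identical underlying creditworthiness, but a stricter approval bar for one group means the loans it does get look safer in the resulting training data.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    group_b = rng.random(n) < 0.3                        # invented share of group B applicants
    credit = rng.normal(600, 50, n)                      # same creditworthiness distribution for both groups
    # Invented biased decision rule: group B needs a higher score to be approved at all.
    approved = credit > np.where(group_b, 650, 600)
    # Defaults depend only on creditworthiness, never on group.
    default = rng.random(n) < 1 / (1 + np.exp((credit - 600) / 25))

    for name, mask in [("group A", approved & ~group_b), ("group B", approved & group_b)]:
        print(name, "default rate among approved loans:", round(default[mask].mean(), 3))
    # Group B looks like the *better* risk in this data, purely because the biased
    # approval rule only let its strongest applicants through.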
Re: (Score:2)
You don't need to "assume" anything. You can just google the data.
The question is whether these distinctions are the best way of dividing up the data. From a basic stats standpoint, we need to be aware of confounding variables. And if our goal is trying to model something or assess risk or whatever, we need to choose the best metric to tell us what we want.
Just to throw out a few ideas:
Women are less likely to default on their mortgages.
Is this really about men vs. women, or is it about the type of woman likely to have her name on a mortgage? Traditionally, a lot of times a man in a relationship would tend to buy a ho
Re:As long as they're still allowed to use data... (Score:5, Informative)
Re: (Score:1)
well if the data backs up the claims, it's not sexist or racist
I am not sure I agree. If the data says that $minority group is more violent than $non-minority, it may be statistically true for a given set of statistics, but we all (should) know that correlation is not causation, and it may be that $minority group on average lives in a more dangerous place. Higher insurance rates for $minority group members would be racist, but charging higher rates for people (without regard to race) living in a dangerous place would not be racist.
The trick of course is to be careful abo
Re:As long as they're still allowed to use data... (Score:4)
I am not sure I agree. If the data says that $minority group is more violent than $non-minority, it may be statistically true for a given set of statistics, but we all (should) know that correlation is not causation, and it may be that $minority group on average lives in a more dangerous place. Higher insurance rates for $minority group members would be racist, but charging higher rates for people (without regard to race) living in a dangerous place would not be racist.
Causation is irrelevant in terms of insurance. The only thing that matters is accurately modeling risk. An algorithm doesn't have to know the reasons why kids are more likely to smash up their parents cars. It is only relevant that kids smash up their parents cars.
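For what it's worth, a back-of-the-envelope version of that risk pricing, with entirely made-up claim rates and costs; the premium comes straight from observed frequency, with no interest in why the groups differ.

    # Invented numbers: price premiums from observed claim frequency alone.
    claim_rate = {"under_25": 0.18, "25_and_over": 0.08}   # assumed annual claim probabilities
    avg_claim_cost = 6000                                   # assumed average payout
    loading = 1.25                                          # assumed overhead/profit margin

    for group, rate in claim_rate.items():
        premium = rate * avg_claim_cost * loading
        print(group, "annual premium:", round(premium))     # 1350 vs 600; causes never considered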
Re: (Score:2)
Causation is irrelevant in terms of insurance. The only thing that matters is accurately modeling risk.
"Causation" may be irrelevant, but confounding variables are definitely relevant to accurate modeling. If you get one correlation by looking at minority vs. non-minority, that might give you one model with a certain level of accuracy.
But if what's really going on is less a function of race than of location or socioeconomic status, then tracking those latter factors may give you stronger correlations and thus a better model (which increases profit).
For example, black people have higher incidents of car
Re: As long as they're still allowed to use data.. (Score:2)
Re: (Score:2)
Bigotry in general is more about the systems that society has in place that combine to make it so that people with certain backgrounds are disadvantaged with respect to others. These systems are extremely varied and reinforced by a variety of societal traditions, personal prejudices, business practices, government practices, and more.
At an individual level, bigotry involves supporting and continuing those systems of oppression, whether consciously or unconsciously.
Re: (Score:2)
Bigotry in general is more about the systems that society has in place that combine to make it so that people with certain backgrounds are disadvantaged with respect to others. These systems are extremely varied and reinforced by a variety of societal traditions, personal prejudices, business practices, government practices, and more.
At an individual level, bigotry involves supporting and continuing those systems of oppression, whether consciously or unconsciously.
I will agree with that. But sometimes it feels like in the effort to remove bigotry (which I'm all for), some legitimate differences between groups of people (which aren't in place due to society) are getting covered over, even to our detriment.
Re: (Score:2)
There is generally far more variation within groups of people than between them, though. For the most part, measured differences between different groups have proven to be due to research that didn't fully account for researchers' and society's biases.
Simple example: there's a stereotype that girls are bad at math. It's been demonstrated that merely reminding girls of the existence of that stereotype causes them to do worse on math tests. This is an example of stereotype threat, where the existence of th
Re: (Score:2)
Sure it can be. It depends upon the data and the questions being asked.
Learning algorithms match input data to output variables. They are trained by using a set of "known" relationships between the input data and the output variables (e.g. images that have already been classified as containing a dog or a cat or neither). If the training data is skewed as a result of prejudice, then the learning model will reflect that prejudice.
For example, there is today copious evidence that police are far more likely
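Picking up that policing example, here's a minimal made-up sketch (numpy and scikit-learn assumed) of how prejudice in the labels, rather than in the algorithm, ends up in the model:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 50_000
    group = rng.integers(0, 2, n)                     # invented demographic flag
    offended = rng.random(n) < 0.10                   # true underlying rate, identical for both groups
    # Invented label bias: offences by group 1 are recorded twice as often.
    detection_rate = np.where(group == 1, 0.6, 0.3)
    arrested = offended & (rng.random(n) < detection_rate)

    model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
    print(model.predict_proba([[0], [1]])[:, 1])      # group 1 scored roughly twice as "risky"
    # The behaviour was generated identically for both groups; the prejudice lives in the labels.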
Re: (Score:2)
"forcing the algorithm to be "fair" their accuracy and hypothetical profit goes down."
At least it's mimicking the real world.
Re: (Score:2)
Coding affirmative action into a system may actually make it much more fair, since if there is a dispute about whether you were biased against someone, you can show the calculation that the person was indeed not equally or nearly as qualified as the person hired.
Your affirmative action code may be as simple as a sort by race:
SELECT TOP 1 name
FROM applicants
ORDER BY score DESC, race
Unlike in Star Trek, computers can rather easily have simple choices to
Re:affirmative action (Score:4, Insightful)
If you want to be fair, instead of "order by score, race", you should "order by score, random". Ordering by race is racism plain and simple. Why not sort by shoe size? The answer is simple: shoe size (for most jobs) does not apply when analyzing for job qualifications. Your job qualifications are (mostly) not dependent on the color of your skin (with exceptions such as actors).
To help out those with a lack of understanding - racism (2): racial prejudice or discrimination.
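A tiny sketch of the "score, then random" idea from above, with hypothetical applicant records; any tie in score is broken by chance rather than by race.

    import random

    applicants = [("Alice", 92), ("Bob", 92), ("Carol", 87)]   # hypothetical (name, score) pairs
    random.shuffle(applicants)                  # ties now fall to chance, not to race
    best = max(applicants, key=lambda a: a[1])  # highest score still wins outright
    print(best)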
Re: (Score:2)
But they aren't ordering by "score, race". They are ordering by "score" and the score is racist (and ableist and sexist).
The only way for it to be fair in the social justice sense is to order completely by "random".
Re: (Score:2)
We often can't come up with a score that works. If the score overestimates the likelihood for whites to pay their mortgage and underestimates the likelihood for blacks, then we'll get better overall results by favoring blacks. (Substitute protected group to taste; this is, as mathematicians say, without loss of generality.)
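One way to see the kind of miscalibration described above is simply to compare predicted and actual repayment rates within each group; the numbers below are invented to match the scenario in the parent post.

    import numpy as np

    predicted = np.array([0.90, 0.88, 0.92, 0.70, 0.72, 0.68])   # score's estimated repayment probability
    repaid    = np.array([1,    0,    1,    1,    1,    1])      # what actually happened
    group     = np.array(["white", "white", "white", "black", "black", "black"])

    for g in ("white", "black"):
        mask = group == g
        print(g, "predicted:", round(predicted[mask].mean(), 2),
                 "actual:", round(repaid[mask].mean(), 2))
    # white: predicted ~0.90 vs actual ~0.67 (overestimated)
    # black: predicted ~0.70 vs actual ~1.00 (underestimated)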
Re: (Score:2)
If the AI agrees with you that there are better statistical predictors, it will simply "ignore" the single-parent status. It's not prejudiced, it's just profit optimizing.
Why do you say a computer/software can't infer? (Score:2)
infer: "deduce or conclude (information) from evidence and reasoning rather than from explicit statements"
Can I infer that you haven't read much of the last 50 years' research literature in AI, formal logic, Bayesian inference, and machine learning?
This message about liberal SJWs (Score:2)
was brought to you by
the association of resource-extraction-company security goons and the national henchmen's association.
Cavemen... (Score:2)
Gonna protect the cave.