
Google Research Promotes Equality In Machine Learning, Doesn't Mention Age

An anonymous reader writes: New research from Google Brain examines the problem of 'prejudice by inference' in supervised learning -- the syndrome by which 'fairness through unawareness' can fail; for example, when the information that a loan applicant is female is not included in the data set, but gender can be inferred from other data factors which are included, such as whether the applicant is a single parent. Since 82% of single parents are female, there is a high probability that such an applicant is female. The proposed framework shifts the cost of poor predictions to the decision-maker, who is responsible for investing in the accuracy of their prediction systems. Though Google Brain's proposals aim to reduce or eliminate inadvertent prejudice on the basis of race, religion or gender, it is interesting to note that the research makes no mention of age prejudice -- currently a subject of some interest to Google.
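
To make "prejudice by inference" concrete, here is a minimal sketch of how "fairness through unawareness" breaks down via a proxy feature. This is not the paper's framework, just an illustration of the failure mode it targets: the 82% figure comes from the summary above, and every other number (base rates, sample size, the decision rule itself) is invented for illustration.

    # Sketch: drop the gender column, keep a proxy feature, and see whom a
    # rule keyed on that proxy actually lands on. Only the 82% figure is from
    # the summary; all other parameters are hypothetical.
    import random

    random.seed(0)

    P_FEMALE = 0.5            # assumed share of applicants who are female
    P_SINGLE_PARENT = 0.10    # assumed share of applicants who are single parents
    P_FEMALE_GIVEN_SP = 0.82  # from the summary: 82% of single parents are female

    def sample_applicant():
        """Draw one applicant; gender is never given to the decision rule."""
        single_parent = random.random() < P_SINGLE_PARENT
        if single_parent:
            female = random.random() < P_FEMALE_GIVEN_SP
        else:
            # Choose the non-single-parent rate so the overall female share stays ~50%.
            p = (P_FEMALE - P_FEMALE_GIVEN_SP * P_SINGLE_PARENT) / (1 - P_SINGLE_PARENT)
            female = random.random() < p
        return single_parent, female

    applicants = [sample_applicant() for _ in range(100_000)]

    # "Unaware" rule: gender was removed from the data set, but a rule that
    # penalizes the single-parent flag still acts on gender indirectly.
    flagged = [(sp, f) for sp, f in applicants if sp]
    share_female_flagged = sum(f for _, f in flagged) / len(flagged)
    share_female_overall = sum(f for _, f in applicants) / len(applicants)

    print(f"female share overall:         {share_female_overall:.2f}")
    print(f"female share among 'flagged': {share_female_flagged:.2f}")
    # Any penalty applied to the flagged group falls roughly 82% on women,
    # even though gender never appears in the data.

The same leakage occurs with any feature that correlates with a protected attribute, which is the summary's point: simply deleting the sensitive column is not enough.
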
  • by NotInHere ( 3654617 ) on Tuesday October 11, 2016 @12:11PM (#53056241)

    If even machines come up with measurable differences between the work performance of males and females, then I think giving them on average the same amount of money or the same promotions is discrimination. I'm all for giving a woman who performs just as well as a man the same money, but if there are additional risk factors like a pregnancy, or when a parent has to raise a child, that person usually prioritizes these things over work, so why should the employer not be allowed to give priority to others who do not raise children and do not drop out for weeks and months for reasons external to work?

    • by tlhIngan ( 30335 )

      If even machines come up with measurable differences between the work performance of males and females, then I think giving them on average the same amount of money or the same promotions is discrimination. I'm all for giving a woman who performs just as well as a man the same money, but if there are additional risk factors like a pregnancy, or when a parent has to raise a child, that person usually prioritizes these things over work, so why should the employer not be allowed to give priority to others who do not raise children and do not drop out for weeks and months for reasons external to work?

      • The problem was, this also meant a career-first woman who doesn't want to start having children is penalized because it's assumed she'll want to marry, have kids, etc.

        Yes, if this is happening (and I guess it does), it is genuinely a bad thing and needs to be fought. Most feminists don't make this distinction though, and claim the pay gap is due to evil men hating women and wanting them to "stay in the kitchen" or something.

    • Two things you have wrong here, but I think you are looking at this like an American. As a society we want to have women participate in the work force. As a society we want parents to take time off to raise children. So as a society we decide that overall we are better off if some companies are inconvenienced by having women take time off and prioritizing their children. We are taxing companies, in a way, because we think it is overall beneficial to all of us. Second, and I did stats at a credit bureau
    • I'm all for giving a woman who performs just as well as a man the same money, but if there are additional risk factors like a pregnancy, or when a parent has to raise a child, that person usually prioritizes these things over work, so why should the employer not be allowed to give priority to others who do not raise children and do not drop out for weeks and months for reasons external to work?

      A few things here. First off, while men obviously can't get pregnant, they can do child care. (Especially after the first few months, when the average woman tends to give up on breastfeeding, if she does it at all.) Men can, and may want to, take parental leave. These days, more and more men are "stay at home dads" or interested in "paternity leave" or whatever. It's still a minority, but it's growing.

      So, even if you have an anti-child policy at your company, are you going to query men you're hiring on whet

      • Personally, I think life's too short, and I have more stuff to do than work.

        It's great that you have this position for yourself, and I hold it as well, but that doesn't mean that people who work harder shouldn't be rewarded for it.

        But we currently have moved toward a cutthroat environment that often rewards those who work long hours, never take vacation, sick days, or other leave, etc. Is that really the working environment you prefer?

        If those people make these sacrifices, and their overall performance actually does get better, then it is only natural to reward them. Anything else would be unfair.

        Currently, well-educated "career women" tend to have some of the lowest birthrates, likely because of the feedback factors you identify. They prioritize work to get ahead, and then either wait until it's too late to have kids, or only have one or whatever.

        There are even many great men who didn't have children because they didn't have the time; Nikola Tesla is an example. But this is simply the deal you have to make, raising children

    • by AmiMoJo ( 196126 )

      Even if you don't think it's a problem to discriminate based on assumptions and "potential", why should people who don't have kids reap the benefits of having younger generations existing without contributing at all?

    • by jrumney ( 197329 )

      If even machines come up with measurable differences between the work performance of males and females, then I think giving them on average the same amount of money or the same promotions is discrimination.

      Not if the differences cancel out (women perform less well in some areas and better in others), or pale in significance compared to the variation between individuals of both sexes (e.g. men score 5.3 on my made-up performance scale, women score 5.1, and the standard deviation for both groups is around 1.8). A quick check with those numbers is sketched below.
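
      As a back-of-the-envelope illustration of that point, assuming both groups' scores are normally distributed and using only the made-up numbers from the comment above:

          # Back-of-the-envelope check, assuming normal score distributions.
          # mean_m, mean_w and sd are the made-up numbers from the comment above.
          import math

          mean_m, mean_w, sd = 5.3, 5.1, 1.8

          def normal_cdf(x):
              return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

          # The difference W - M of two independent normals is itself normal,
          # with mean 5.1 - 5.3 = -0.2 and standard deviation sd * sqrt(2).
          diff_mean = mean_w - mean_m
          diff_sd = sd * math.sqrt(2.0)
          p_woman_outscores_man = 1.0 - normal_cdf((0.0 - diff_mean) / diff_sd)

          print(f"P(random woman outscores random man) = {p_woman_outscores_man:.3f}")
          # Prints roughly 0.47: the 0.2-point group gap is tiny next to the
          # individual spread of 1.8, which is the parent's point.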

  • We can use the extra money to subsidise men's insurance premiums. Clearly, "prejudice by inference" is causing men to be charged too much.

    I'm sure supporters of gender equality will agree with me.

  • by Merk42 ( 1906718 ) on Tuesday October 11, 2016 @12:51PM (#53056563)
    The only "solution" will be if every living thing has the same result, so just ignore all values and hardcode the one output.
    • by Anonymous Coward

      As someone who's spent the past year studying/applying data science and machine learning, I think this is the most insightful comment posted so far.

      It's incredibly effective to discriminate by education level, income level, religion, race, gender, age, home address, credit score, criminal history, and # of children. With these limited dimensions, you have an almost perfectly normally distributed cluster regardless of the topic being studied. If you do unsupervised learning on the raw data, the features will

      • Does anyone really want to watch a game of Basketball where the height distribution of the players perfectly mirrors the height distribution of the general population?

        That's an interesting question. Having variability in height can add a lot to the dynamics of the play. Personally, I think watching high school or college sports is more interesting than watching professionals, which I find deadly dull.
      • "Does anyone really want to watch a game of Basketball where the height distribution of the players perfectly mirrors the height distribution of the general population?"

        No, but I want to watch a basketball game with a smart hoop that immediately adjusts its height to be proportional to the height of the player with possession of the ball.

      • It's illegal to discriminate on the basis of race, religion, or gender. We've decided that as a society, so that fewer people get bad treatment for things they can't change about themselves. (As a society, we usually treat religion as effectively unchangeable.) Further, it seems very unlikely that the differences attributed to race are actually a direct result of race, so you're using race as a proxy for other things, assuming, for example, that all blacks have certain (perhaps undefined) undesirable traits

    • by AHuxley ( 892839 )
      All your staff know is decades of AI coding, so the AI must be the only product the market needs or wants.
      The AI and its results are perfect because the smart private sector poured all its cash into that product line.
      It's a bit like the final decades of East Germany, with the state saying that larger units should take over all remaining smaller, dynamic areas of production.
      All ability to be dynamic, to change with demand or quality, any slack or ability to ramp up, was finally and fully lost.
      Capitalism is about cha
  • So g00gle found out that different groups really are different in a number of relevant factors, and their conclusion was that evil cisgendered bigots, on seeing inferior relevant attributes, will automatically figure out that an applicant is a protected minority, will somehow skip over the relevant reasons to discriminate and discriminate solely on the basis of the applicant being a minority, and the effect will somehow be distinguishable from and worse than if they had just stuck with discrimi
  • by WaffleMonster ( 969671 ) on Tuesday October 11, 2016 @01:43PM (#53056959)

    The problem Google is describing isn't limited to a subset of arbitrary tribal factors society deems to be off limits.

    The entire reason for the existence of these systems is to make prejudiced decisions about individuals based on statistical evidence.

    You can spend all day filtering out things that will get you sued or attract bad press, but this doesn't address the core fact that these systems are intended to make prejudiced judgments about individuals based on statistical experience and evidence.

    Being prejudiced can be practically helpful in some contexts, but don't pretend that isn't what you're doing, don't confuse it for fairness, and don't bother making up a bunch of mystical bullshit about how your dataset or programmers are biased. Prejudice is the raison d'être of these systems. It is what they are designed to do.

  • by melted ( 227442 ) on Tuesday October 11, 2016 @02:24PM (#53057263) Homepage

    How can there be "prejudice" if the system _does not have cognition_? It just approximates a function. If a woman is less (or more) likely to default on a loan, it'll just say so, SJWs be damned. That's why women see ads for shoes even if they never disclosed to Google that they are women. That's also why they see fewer ads for engineering positions (women are statistically much less likely to be interested in engineering fields).

    It's a function approximation problem, and this happens to be the function that the real world data seems to support. Now you want to wreck it for some kind of affirmative action, thus decreasing its accuracy and driving an agenda of what you think the world should look like, rather than what it actually is.

    • I don't agree with the full content of your post, but this does remind me of when I ordered the DVD collection of the show "Oz" from Amazon. For a few weeks I kept getting recommendations for homosexual-related material, since it was a prison show on HBO and naturally some homosexual things occurred there.
  • when the information that a loan applicant is female is not included in the data set, but gender can be inferred from other data factors which are included, such as whether the applicant is a single parent

    It can be, but the concept of "gender" or "race" is meaningless to a machine learning system for loan evaluations, and it has no biases or prejudices. If a properly trained machine learning system disproportionately rejects applications of some gender or race, then that reflects an actual statistical regularity in the world, not the result of discrimination or bias.

    • If a properly trained machine learning system disproportionately rejects applications of some gender or race, then that reflects an actual statistical regularity in the world, not the result of discrimination or bias.

      Except it's not that simple. The statistical reality that people in Philadelphia are less likely to have insurance means it's more likely your insurance company will have to pay out even when you're not at fault. Which means that it costs more to insure a car in Philadelphia. Which means fewer people there buy insurance, and the cycle feeds on itself.
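
      A toy version of that feedback loop, with every number invented, just to show how the premium and the insured share chase each other to a worse equilibrium:

          # Toy model of the uninsured-drivers feedback loop described above.
          # All parameters are made up; only the qualitative spiral matters.
          def premium_spiral(base_premium=800.0, uninsured_penalty=2000.0,
                             price_sensitivity=0.0002, steps=10):
              insured_share = 0.95      # start with almost everyone insured
              premium = base_premium
              for step in range(steps):
                  uninsured_rate = 1.0 - insured_share
                  # Insurers pass the cost of uninsured drivers on to customers.
                  premium = base_premium + uninsured_penalty * uninsured_rate
                  # Higher premiums push some drivers to drop coverage.
                  insured_share = max(0.0, 0.95 - price_sensitivity * premium)
                  print(f"step {step:2d}: premium ${premium:7.2f}, insured {insured_share:.1%}")
              return premium, insured_share

          premium_spiral()
          # Settles around a premium of ~$1500 with ~65% of drivers insured,
          # well above the $800 base rate.

      The numbers are invented, but the structure is the "cycle" the parent comment is describing.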

      • Hence, a bias is "baked in" to the data.

        A bias is something preconceived, i.e., something you believe before taking data into account. If it's "baked into the data", it's not a bias, it's a rational inference based on data.

        This cycle is why Philadelphia has far higher insurance rates than either other cities

        Just because you can pull an explanation like that out of your ass doesn't mean it's true. In fact, the state's no-fault law combined with the generally shitty state of Philadelphia is more likely responsible than that "cycle".

        • Just because you can pull an explanation like that out of your ass doesn't mean it's true.

          What makes you think I pulled it out of my ass? I got it from the American Economic Review. It's a prestigious publication, peer reviewed. My paraphrase is the accepted explanation, full stop.

          In fact, the state's no-fault law combined with the generally shitty state of Philadelphia is more likely responsible than that "cycle".

          Which is why Pittsburgh was also examined. It had twice as bad incidents of major automot

          • That's certainly one type of bias. Another could be selection bias

            We aren't talking about what "could be" a bias, we're talking about loan applicants and loan outcomes. There is no reason to believe that racial bias is "baked into that data".

            What makes you think I pulled it out of my ass? I got it from the American Economic Review. It's a prestigious publication, peer reviewed.

            How nice. Nevertheless, the car insurance example you gave is not an example of "bias baked into the data", it's an example of a rea

  • Let's say we put all available data in, sort out the crap data so the input is neutral.
    Then we get exactly the prejudices out. This confirms them. Period.

    This does not imply that we should support them. This only implies that they are there. People often jump to the conclusion that this implies causation, when it only implies correlation. If some places have a higher crime rate and some places have more black people (another case of ML prejudice) and the data is correct, it's the correct decision for an in

  • What they are really saying is that learning machines are confirming politically incorrect beliefs. A lot of stereotypes are based on a kernel of truth, and given enough processing power and data that truth is coming to the forefront. When people were crunching the numbers it was easy to blame prejudice or some kind of *ism. But learning algorithms don't have that; they just learn patterns. What these researchers are doing has nothing to do with fostering equality, it's about avoiding embarrassing truths.

    It
    • Actually, we already knew about these correlations. It's not like the magic AI found that there's a statistical difference between races that we were unaware of. The difference is not that it's easy to blame prejudice on people and not AIs; the difference is that the people-based criteria were at least supposed to be designed to ignore race, while the AIs and their fanbois just treat the correlations as holy writ.

      Are you trying to tell me that someone just like me but black would have a worse chance of

"Marriage is low down, but you spend the rest of your life paying for it." -- Baskins

Working...