AI Technology

Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women (reuters.com) 381

Jeffrey Dastin, reporting for Reuters: Amazon's machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants' resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters. Automation has been key to Amazon's e-commerce dominance, be it inside warehouses or driving pricing decisions. The company's experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars -- much like shoppers rate products on Amazon, some of the people said. "Everyone wanted this holy grail," one of the people said. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those." But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

[...] Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said. The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity.

This discussion has been archived. No new comments can be posted.

  • by Higaran ( 835598 ) on Wednesday October 10, 2018 @12:46PM (#57456342)
    As hard as you want to say, sometimes you still need an actual person doing the job. That person will be biased in some way or another too, so I guess it's not a perfect system any way you look at it.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Well, there is a huge imbalance between genders in IT, due to different reasons (which I don't want to speculate about; James Damore tried and we all know what happened). So the system must have a bit of positive bias towards women to correct for this - to help restore balance. However, the cold heartless AI can't factor this in ... what data you feed in defines what model you get. So, is the data biased?

      • by rogoshen1 ( 2922505 ) on Wednesday October 10, 2018 @01:20PM (#57456548)

        there's a huge gender imbalance in nursing and primary education as well; when will society get around to 'fixing' that?

        • there's a huge gender imbalance in nursing and primary education as well; when will society get around to 'fixing' that?

          There are efforts to recruit more men into these professions. This is especially important in early primary education, where there is evidence that boys, and especially black boys lacking male role models in their lives, do better with male teachers. Over-feminization of education is not a good thing for our society. We need more men teaching kindergarten.

          • You first!

          • by ewhenn ( 647989 )
            I disagree. Artificially pushing people towards a certain career to "correct imbalances" is political correctness run amok. Men and women have different genetic compositions and biologically driven behaviors, and while there are certainly outliers, men and women tend to have different interests. This has a real impact on career choices. You can try to recruit me all you want - but if I don't want to do the work because I have no interest in it then it's not going to happen, no amount of recruiting will c
        • Re: (Score:3, Insightful)

          by tlhIngan ( 30335 )

          there's a huge gender imbalance in nursing and primary education as well; when will society get around to 'fixing' that?

          They are being fixed. There are programs by nurses' unions to hire more male nurses, and there are programs for teachers as well to increase the number of male teachers teaching elementary school.

          Don't assume that because you don't know about it, it's not a big deal. There are programs for increasing the proportion of females in trades (construction, welding, etc.) run by various trades

        • by unimacs ( 597299 )
          There are scholarships for nursing programs targeted at men already and I personally know of schools that are deliberately seeking male teachers.
      • what data you feed in defines what model you get.

        No it doesn't. This is trivial to fix. You just slap a softmax layer onto the output of your NN to correct the bias.

        So if your candidate pool is 70% men and 30% women, and you want your output to reflect that, then instead of just accepting the gender-agnostic "best", your softmax layer shapes the output so that you pick the top 70% of men and the top 30% of women ... or any other statistical distribution that you want.

        So instead of "scrapping" their recruiting tool, Amazon could have easily fixed it with

        • If you have 70% men (e.g. 70 men) and 30% women (e.g. 30 women) and then pick the top 70% of the men and the top 30% of the women, then (1) you probably still have way too many people (58 hires), and (2) if you took them all you would end up with ~84% men to ~16% women.
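
          A quick sketch of the arithmetic being argued over here, with entirely hypothetical numbers (70 men, 30 women, 10 openings) and nothing taken from Amazon's actual tool: taking the top X% of each group is not the same thing as a pool-proportional quota.

            def per_group_percentage(pool, fraction):
                # Take the top `fraction` of each group, as the grandparent suggests.
                return {group: round(n * fraction[group]) for group, n in pool.items()}

            def proportional_quota(pool, openings):
                # Take hires in proportion to the candidate pool, as a quota would.
                total = sum(pool.values())
                return {group: round(openings * n / total) for group, n in pool.items()}

            pool = {"men": 70, "women": 30}

            # "Top 70% of the men and top 30% of the women": 49 + 9 = 58 hires, ~84% men.
            print(per_group_percentage(pool, {"men": 0.70, "women": 0.30}))

            # A pool-proportional quota for 10 openings: 7 men and 3 women.
            print(proportional_quota(pool, openings=10))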

    • by lgw ( 121541 )

      As hard as you want to say, sometimes you still need an actual person doing the job. That person will be biased in some way or another too, so I guess it's not a perfect system any way you look at it.

      Amazon goes to extremes though for the hiring process for any professional job: the interview loop must include someone not associated with the hiring team, and that person runs the interview process (and gets lots of extra training and auditing). They're going to great lengths to avoid "we're desperate to hire anyone, so we'll take someone almost good enough", but the side effect is strict objectivity.

      But the law is pretty clear that you can't use any protected category in your resume sorting even if it's

      • But the law is pretty clear that you can't use any protected category in your resume sorting even if it's statistically predictive,

        No, the law is NOT clear on this.

        Many companies have been fined by the EEOC, not because their hiring process was shown to be biased, but because the outcome of the hiring process did not match the race and gender profile of the candidate pool. These were, of course, big companies with workforces big enough to statistically analyze. Smaller companies can get away with almost anything short of overt discrimination.

        So while the law does not explicitly require quotas, and even seems to prohibit them, in prac

        • Many companies have been fined by the EEOC, not because their hiring process was shown to be biased, but because the outcome of the hiring process did not match the race and gender profile of the candidate pool.

          Pics or it didn't happen.

        • by lgw ( 121541 )

          You need to distinguish between "deciding who to interview" and "deciding who to hire". There's no real defense on the former: you had better be interviewing at least as many people in each protected class, proportionally, as you have in your candidate pool. You can defend yourself on the latter, but you'd better be prepared to show that each and every interview was decided on objective criteria that don't include protected class. But you have a much bigger risk of a lawsuit than an EEOC fine on the la

    • Yeah I can't understand how folks cannot see that the code has the bias of the designer/developer. Sheesh.
  • Is this news? (Score:5, Insightful)

    by Lab Rat Jason ( 2495638 ) on Wednesday October 10, 2018 @12:46PM (#57456348)

    Train algorithm with data in hand, algorithm's output mirrors data provided. They can't possibly be shocked by this, can they?

    • Because most people have been convinced that AI can turn garbage inputs into perfect outputs. They don't understand that the data given to Watson [wikipedia.org] is carefully curated. If you feed such an AI all of the garbage on the internet, you will get a garbage AI [wikipedia.org].
      • Because most people have been convinced that AI can turn garbage inputs into perfect outputs.

        On two occasions I have been asked, — "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

        /obligatory Babbage quote

    • Train algorithm with data in hand, algorithm's output mirrors data provided. They can't possibly be shocked by this, can they?

      I recently began working with AI, and was surprised at how simple the algorithms are. I think a lot of people expect them to be extremely complex.

      Training the AI is where it gets complex. This is where shaving a couple percent off computation times can really make a difference. Finding good training data. Selecting the right optimization algorithm. Waiting for the system to learn. That's what's difficult.

  • The law... (Score:5, Insightful)

    by fish_in_the_c ( 577259 ) on Wednesday October 10, 2018 @12:49PM (#57456372)

    When the government reviews your hiring, they expect you to show that your diversity level is consistent with the normal spread of minority groups (some consideration of the candidate pool MAY be given).

    In other words, if your only criterion is hiring whoever is best for the job, you will likely be operating illegally and subject to fines and lawsuits. This is the product of laws designed to create social-engineering restrictions based on someone's religious idea that any measurable discrepancy in minority placement must be corrected.

    • Re:The law... (Score:4, Insightful)

      by cryptizard ( 2629853 ) on Wednesday October 10, 2018 @01:00PM (#57456432)
      If you are only hiring whoever is best for the job then you are probably hiring a lot of foreign workers that will do it for way less money. Should the government allow that to happen or do you believe in hiring laws when they protect you?

      Also, if you are running your company with only the end goal in mind then you are probably doing something illegal. Laws exist to protect the interests of society, not the interests of individual companies or people.
      • Re: (Score:2, Funny)

        Comment removed based on user account deletion
      • by Bert64 ( 520050 )

        This already happens.

        Many companies do hire lots of foreign workers for way less money; a lot of jobs are being offshored regularly.
        Many foreign workers are also employed locally, especially for various low-paid unskilled work, as they are willing to work for lower wages.

        For higher-skilled positions, especially local rather than offshored ones, foreign workers are also frequently used, but they are less likely to accept lower wages in such conditions, and why would they? If they are equally skilled such that

    • In other words, if your only criterion is hiring whoever is best for the job, you will likely be operating illegally and subject to fines and lawsuits.

      Until you can demonstrate that women or non-whites are not physically capable of being "the best for the job", then this reasoning is flawed.

      And if your "proof" is the lack of not-white-men in these positions, you're falling for the same incomplete data as the AI in TFS.

      • by lgw ( 121541 )

        Until you can demonstrate that women or non-whites are not physically capable of being "the best for the job", then this reasoning is flawed.

        And if your "proof" is the lack of not-white-men in these positions, you're falling for the same incomplete data as the AI in TFS.

        It's not all-or-nothing, it's bell curves. If you allow race as a consideration for e.g. software development, you'll get a statistical model that tells you to mostly interview Asians. That's an accurate predictive factor (one of many) of how well a candidate is likely to succeed at the job. It's also illegal (well, not the discrimination against whites of course, but against all the non-pariah races).

        It's not about "incomplete data as the AI" unless the coders were complete idiots. Amazon has a huge po

        • It's not all-or-nothing, it's bell curves.

          Yes, you have to establish that such bell curves differ based on gender and not-whiteness. And again, if you're just looking at outcome as your proof, you're falling for the same incomplete data as the AI in TFS.

          It's not about "incomplete data as the AI" unless the coders were complete idiots. Amazon has a huge population of developers.

          That huge population was created with bias in mind, not only in initial hiring but in how women are treated in the workforce over the years. Thus you can not trust that this data is completely impartial.

          It's the nature of machine learning systems that you can't really prove what criteria they've "learned" from your training data

          Sure you can. The machine learning system will produce a result that closely matches the train

          • by lgw ( 121541 )

            Yes, you have to establish that such bell curves differ based on gender and not-whiteness. And again, if you're just looking at outcome as your proof, you're falling for the same incomplete data as the AI in TFS.

            The point is, it doesn't matter how you establish it. All the details you go into are irrelevant. The people you interview can't be less representative of protected classes than the candidate pool, regardless of why; even if you had ironclad proof, it doesn't matter.

            It's the nature of machine learning systems that you can't really prove what criteria they've "learned" from your training data

            Sure you can. The machine learning system will produce a result that closely matches the training data. When your training data is the result of bias, the machine learning system will strive to continue that bias.

            Man, you really love to say "bias". But you've missed the point here:
            * You cannot, from the output of a machine learning system, prove that it isn't using X as criteria in screening candidates.
            * You might need to be able to prove that legally.
            *

    • by lgw ( 121541 )

      In other words, if your only criterion is hiring whoever is best for the job, you will likely be operating illegally and subject to fines and lawsuits. This is the product of laws designed to create social-engineering restrictions based on someone's religious idea that any measurable discrepancy in minority placement must be corrected.

      That's not quite true. The rules for who you interview are different from the rules on who you hire.

      What you say is true for the former: if the people you interview don't at least match the candidate pool for any protected class, you're boned regardless of why. It's a bit different for hiring, where you can defend yourself by showing that your process is objective.

      In practice, the government tends to focus a lot on who gets interviewed, as that's strictly numbers and easy to audit and enforce. The fear fo

    • Bingo! At a former employer, they went out of their way to increase the diversity of the recruiting pool ... to no effect. Mostly because while the Fed regulator was screaming about the 10% of the local population that was a 'minority ethnic group', they were ignoring the detail that the group was less than 1% of the graduates from computing courses at the local colleges ... a number that was reflected in the makeup of the IT department.

      To be fair to the company, they broke 0.5% minority hiring, while the gra

    • When they review you for granting a government contract.

      Don't do business with government and they need more evidence than just statistics.

  • by Anonymous Coward on Wednesday October 10, 2018 @12:50PM (#57456378)

    Amazon trained their AI using a dataset that reflected their business practices as they currently are (flaws and all), but what they wanted was a dataset for the practices they wanted to adopt (i.e. the ideal).

    Finding a training dataset that reflects the ideal is going to be extremely difficult, particularly in an area where that ideal is so poorly defined.

    • Re: (Score:3, Insightful)

      by HornWumpus ( 783565 )

      They just have to train it with data that shows only success from women and failure from men.

      Easy: Just make it against the rules to review women employees with anything other than 'perfect'.

      Their dataset might be 'just fine'; it could be that their assumptions and goals are broken.

  • by Anonymous Coward

    It lets them make immoral business decisions but not be personally held accountable for them.

    Facebook shows real estate ads only to white professionals. Amazon only hires male Chinese engineers. Google endlessly manipulates its search for political reasons.

    But when questions get asked, it's always that pesky old AI that did it!

    Get used to it.

  • by davide marney ( 231845 ) on Wednesday October 10, 2018 @12:56PM (#57456402) Journal

    Bias is a non-factual prejudice against someone. That is why it is considered unfair. If the facts are that 80% of the population of people who do the work you want are named "Dave", then it is not a sign of a moral failing if your AI exhibits a strong preference for another Dave.

    • by unimacs ( 597299 )
      It's a bias if 80% of the available pool is named Dave but the name itself has nothing to do with suitability for the position. The AI would be ignoring 20% of potential candidates for no reason.

      I wouldn't call it a moral failing if unintended, just bad design.
      • True bias is the AI not letting anyone named Dave do things.
      • by MobyDisk ( 75490 )

        but the name itself has nothing to do with suitability for the position

        I think the OP meant that the AI was preferring people named Dave, even though it didn't know people's names. That is analogous to what was happening with the AI filter: It didn't know people's sex, yet it was still picking more men than women.

    • AI systems are all about biases: they find a pattern and give it a number. And we as humans have biases because our brains find a pattern and qualify it. Patterns are not facts.
      On average men are physically stronger than women; this can be quantified and measured. However, there are a LOT of women who are stronger than the average man, and there are also a lot of men who are weaker than the average woman.

      So if a job requires physical strength, you may get on average more men who can do the work. But this sho

      • However, there are a LOT of women who are stronger than the average man, and there are also a lot of men who are weaker than the average woman.

        I don't think you understand how averages work.

    • by Njovich ( 553857 )

      Bias is a non-factual prejudice against someone.

      You should either stop making up word definitions, or start using a better dictionary.

      • Bias is a non-factual prejudice against someone.

        You should either stop making up word definitions, or start using a better dictionary.

        Better dictionaries could help... Google dictionary comes up with the basic definition the GP quoted. Having worked with aviation sensors I tend to think of bias more as it is defined for sensors:

        "If the output signal differs from the correct value by a constant, the sensor has an offset error or bias. This is an error in the y-intercept of a linear transfer function." https://en.wikipedia.org/wiki/... [wikipedia.org]

        In this case, and in the case of a lot of computer programs, bias can be generated that has nothing to do

    • Bias is a non-factual prejudice against someone.

      Floating point numbers are prejudiced against someone? [wikipedia.org]

  • by Thelasko ( 1196535 ) on Wednesday October 10, 2018 @12:56PM (#57456408) Journal
    Garbage in, garbage out. [wikipedia.org]

    If the training data has bias, then the AI will learn to have that bias.

    The trick is developing training data that doesn't reflect the biases of the humans that performed the task in the past.
    • by Tom ( 822 )

      Well, you could simply use the performance of people, instead of the judgement of said performance by others. Of course, that would require you to introduce proper metrics. Which is something that a lot of managers resist in this field as soon as it applies to "thinking jobs". I wonder if the fact that the group includes their own jobs has any influence on that distaste.

      • I do not doubt that loss of power is a factor, but there is a very real question of whether the business is willing to make the investments and stay the course long enough that the end result is worth the trouble. Skepticism here can be very rational.

        "Proper metrics" that cannot be gamed are very expensive. It is not just the rubrics, but building a culture that works with the rubrics. If you think "proper metrics" will simply work by virtue of their awesomeness, you are definitely doomed.

      • Well, you could simply use the performance of people, instead of the judgement of said performance by others. Of course, that would require you to introduce proper metrics. Which is something that a lot of managers resist in this field as soon as it applies to "thinking jobs". I wonder if the fact that the group includes their own jobs has any influence on that distaste.

        Would you like to share some of your proper metrics? Lots of (all?) companies use bad metrics, but is there any other kind?

  • We have good expert systems that can do amazing things with ultra controlled inputs.

    Pretending computers have a bias vs anything is really dumb, articles and submissions like this are for controlling the narrative and keeping people poorly informed. The general populace is much smarter than they are given credit for, especially when they have the right information.

    People that push this kind of nonsense ought to be ashamed. Slashdot used to be about the cool tech, anyone can go to vice(or pick your polit

    • Pretending computers have a bias vs anything is really dumb

      No, what's dumb is pretending the computer is doing anything beyond its training data.

      You give the computer biased training data, you get biased results. Not because the computer has bias, but because your training data is the result of bias.

  • by Anonymous Coward

    If the results are biased, the data is biased and the process is biased, maybe the bias is normal?

  • If it didn't scan the names on each resume, then it wasn't gender-biased.
  • by Noishkel ( 3464121 ) on Wednesday October 10, 2018 @01:04PM (#57456448)

    When you read this article it doesn't say anything about this algorithm not 'liking women'. Based on the parameters it was given, it chose to rank candidates based upon the factors it was trained to look for. It's also somewhat telling how the writers of this tripe chose to specifically highlight how the algorithm downgraded candidates from two all-female colleges without saying why they were downgraded. As if the fact that it's an all-female school is more important than the quality of the candidates that came out of the school.

    At the end of the day this bullshit is more about how the media writes headlines to elicit emotional reactions instead of reporting the hows and the whys of a situation. And on that note I'd like to see someone actually start writing algorithms to replace tech reporters so we can get rid of garbage-tier activist journalism like this article.

    • by L_R_Shaw ( 5544684 ) on Wednesday October 10, 2018 @01:36PM (#57456646)

      Yes.

      Both this ridiculous garbage reporting and the apoplectic shitshow from ideologues in the press over James Damore's memo are not just the usual bland claims of sexism.

      There are long known and well researched gender differences in interest preference going all the way back to infants - long before there is any possible way for the results to be explained by 'societal sexism' or other such nonsense.

      Feminist dogma is 100 percent counter to this basic and well researched science.

      Hence the over-the-top attacks on anyone and anything that brings to light these fundamental differences in the abilities of men vs women in technological jobs.

      The reason there is such a huge disparity in male hires in tech companies is a direct result of those well established gender differences. The candidates being selected are at the very, very top end of the bell curve in both intelligence (where men have a significant advantage) and a lifetime of interest and drive compared to female applicants in general.

      Of course, the usual 'argument' in response to anyone pointing these basic facts out is screeching that the claim is that women aren't as capable as men.

      Any individual woman can be just as capable as a man in tech.
      However, that is not true at the population level where men will significantly outpace women in the number of highly qualified candidates.

  • by Karmashock ( 2415832 ) on Wednesday October 10, 2018 @01:05PM (#57456460)

    Purge any submission to the system of a gender identifier... women's or men's anything... remove names in case that is factored... literally provide nothing in the submission that would definitively define a gender.

    Then see what it does.

    My experience with these systems is that they don't actually factor gender, but that the end result is that there is a gender imbalance.

    However, if there is an imbalance and the system was given no indication as to gender then there is no gender bias.

    You can't cite persecution or preference if the system can't even know. And generally these fairly common and consistent imbalances are made without reference to gender itself.

    Generally it is factoring on other criteria that give the same result but which are not gender. Work experience is a big one... breadth of skill set is another.

    And if you took the total population and looked at which portion of the population had that work experience and breadth of knowledge, you'd find it more closely matched the hiring patterns of these systems. Which means it isn't factoring on gender.

    Now... this is assumption to some extent on my part. I've audited these systems in the past and what I am describing above is the pattern I've seen.

    As to what the Amazon system was doing... I'd have to audit it.

    What I'd probably try is a word replacement/purge of all terms that would signify gender or I'd just change a bunch of rejected female resumes to say they were male and see if they got accepted and vice versa.

    If the system actually changed its decision based on gender then that's a smoking gun that it is doing things on the basis of gender.

    But I'd find that very surprising.

    Machine learning is unpredictable so I'm hardly going to claim to know what the damned thing was doing. For that reason I wouldn't actually use machine learning in this application. I'd use a very clear rules based system where everything it was doing was known to the programmers.

    Those systems are completely fine for this sort of work and you can very easily audit the code for them.

    The best way to deal with this is to first be gender blind. You literally do not factor for gender at all.

    That will give you an imbalance probably... you can make as many diversity hires as you need to after that. But your core hiring pool should be merit based unless you want to go out of business.
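
    A minimal sketch of the swap test described above, assuming some black-box scoring function (`score_resume` below is a made-up stand-in, and the word list is illustrative, nothing from Amazon's system): flip the gendered terms and see whether the score moves.

        # Illustrative word pairs only; a real audit would need a much richer list.
        PAIRS = [("she", "he"), ("her", "him"), ("woman", "man"), ("women", "men")]
        SWAPS = {}
        for a, b in PAIRS:
            SWAPS[a], SWAPS[b] = b, a

        def swap_gender_terms(text):
            # Word-by-word replacement; crude, but enough to probe the model.
            return " ".join(SWAPS.get(w.lower(), w) for w in text.split())

        def audit(score_resume, resumes, threshold=0.05):
            # Flag resumes whose score moves by more than `threshold` when the
            # gendered terms are flipped -- the "smoking gun" described above.
            flagged = []
            for r in resumes:
                delta = score_resume(swap_gender_terms(r)) - score_resume(r)
                if abs(delta) > threshold:
                    flagged.append((r, round(delta, 3)))
            return flagged

        # Hypothetical stand-in model so the sketch runs end to end.
        def score_resume(text):
            return 0.9 if "men" in text.split() else 0.5

        print(audit(score_resume, ["captain of the women chess team", "10 years of Java"]))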

    • by jeff4747 ( 256583 ) on Wednesday October 10, 2018 @01:48PM (#57456712)

      Purge any submission to the system of a gender identifier... women's or men's anything... remove names in case that is factored... literally provide nothing in the submission that would definitively define a gender.

      Writing style can tend to be different between genders. And you can't really remove that without feeding it blank pages.

    • by Tom ( 822 )

      I'd just change a bunch of rejected female resumes to say they were male and see if they got accepted and vice versa.

      This exactly. Do it both ways. Make some male profiles female and some female profiles male and check what happens.

      Then identify other gender-typical features, one by one. Gaps in the CV (pointing to child raising times), remove any written text (gender styles of writing), etc.

      This could be a really cool toy to figure out some of the recruiting prejudices and understand what the gender gap actually is. Because we already know it is not just gender. It is also experience and other life choices that are indir

    • Purge any submission to the system of a gender identifier...

      It is not that easy. I assume they run some kind of pattern matching algorithm and it can just as easily focus on non-obvious gender differences as obvious ones. It could be that hobbies or volunteer work are just as good predictors of gender as a name (and therefore sources of bias). Maybe even the choice of words or textual layout could make a difference. It would take a lot of scrubbing and perhaps a complete rewrite of the job applications to remove all traces of personality.

      When they say the result is
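
      One rough way to measure the kind of leakage described above (a sketch, assuming you already have scrubbed resume texts and known genders to test against; none of this is from Amazon): train a simple classifier to predict gender from the scrubbed text. If it beats the majority-class baseline, proxies like hobbies, volunteer work, or word choice are still carrying the signal.

          from sklearn.feature_extraction.text import CountVectorizer
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import cross_val_score
          from sklearn.pipeline import make_pipeline

          def gender_leakage_score(scrubbed_texts, genders):
              # Cross-validated accuracy of predicting gender from scrubbed resumes.
              # Accuracy well above the majority-class rate means the scrubbing failed.
              model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                                    LogisticRegression(max_iter=1000))
              return cross_val_score(model, scrubbed_texts, genders, cv=5).mean()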

    • by Nidi62 ( 1525137 )

      Purge any submission to the system of a gender identifier... women's or men's anything... remove names in case that is factored... literally provide nothing in the submission that would definitively define a gender.

      Then see what it does.

      Except there are plenty of things that are very germane to a resume that show gender but cannot be filtered out without altering the resume. For example, if the education section lists Mount Holyoke, Spelman College, or even something like International University of Technology for Women, your algorithm will know it's a woman's resume. Or maybe they went to a coed school but were a president or founding member of their school's Women Coders club? Maybe they had a successful career but took 2 years off when they got preg

    • by eth1 ( 94901 )

      Purge any submission to the system of a gender identifier... women's or men's anything... remove names in case that is factored... literally provide nothing in the submission that would definitively define a gender.

      Then see what it does.

      Another interesting experiment would be to leave the gender identifiers in the training data, but make sure it's carefully balanced. Then test it by stacking it so that the only qualified candidates are all one gender, then see if the computer would hire unqualified candidates just to provide a politically correct result.

  • This comes up frequently in high-tech companies: If only we could automate decision-making without involving people! Imagine!

    This is literally the dumbest thing you could do, right up there with "B people hire C people." As an interviewer, I always looked at resumes to guide my interview approach, but in most cases it was impossible to make any decisions based on a resume. Even if you assume that the person didn't outright lie, you're looking at a 4-line summary of 3-year work periods written by a writer w

  • is that the job market for tech is so crappy that they're writing special software to sift through the hundreds of resumes they get. Back in my day the hiring manager just looked over a few and picked one.
  • Pre-process the training data so that exactly half of the input is from male applicants, and half from female. If there are more male entries in the complete dataset, then randomly remove them until it's exactly 50:50. Yes, this means tossing out potentially valuable information. However, if male applicants make it to the top 5, then they could be further compared against each other using the full male dataset. Surely if I thought of this in 5 sec they could too?

    If the problem is that not enough women are i
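
    A minimal sketch of the 50:50 pre-processing step proposed above (`get_gender` is a hypothetical labeling function you would have to supply; this is not how Amazon built its training set):

        import random

        def downsample_to_balance(resumes, get_gender, seed=0):
            # Randomly discard entries from the larger group until both groups
            # are the same size, as suggested above. Throws away data, but the
            # discarded entries could still be used for later within-group ranking.
            rng = random.Random(seed)
            men = [r for r in resumes if get_gender(r) == "M"]
            women = [r for r in resumes if get_gender(r) == "F"]
            smaller, larger = sorted([men, women], key=len)
            balanced = smaller + rng.sample(larger, len(smaller))
            rng.shuffle(balanced)
            return balanced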

  • Well, the last thing any of our new diversity-obsessed saviors want is to (openly) specify hiring criteria, and generally computers require you to specify things. So it's not surprising that we run into these little snafus.

    That said, I would have thought that "AI" would be a, er, goddess-send for these folks ... just train it for a while, and nobody will have any way to prove why it makes the decisions that it does. Sounds perfect for "diversity" hiring.

  • Simply reading the instructions the AI was apparently using, according to the article, tells me that whoever created this AI was either (1) a moron or (2) a bigot.

    • With all due respect, you suck at understanding how machine-learning AI works.
      People do not program in particular words/phrases to bias on.

      The system automatically compares all/many aspects of the document (e.g. all words and short word sequences) against the same aspects of resumes of "hired and successful" past applicants to the company.

      The system learns statistically which words/phrases/aspects of the documents correlate with "hired and successful".

      Programmers don't enter into this at all. It's general l
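
      A toy version of the general learning described above, assuming hypothetical training data of resume texts plus a hired/not-hired label (nothing here reflects Amazon's actual model): nothing is programmed in by hand, and the per-word scores fall out of whatever correlations exist in that data.

          from collections import Counter
          from math import log

          def word_scores(resumes, hired_flags, smoothing=1.0):
              # Log-odds of each word appearing in "hired and successful" resumes
              # versus the rest; strongly negative words are the ones the model
              # effectively penalizes.
              hired, rest = Counter(), Counter()
              for text, was_hired in zip(resumes, hired_flags):
                  (hired if was_hired else rest).update(text.lower().split())
              total_h = sum(hired.values()) + smoothing
              total_r = sum(rest.values()) + smoothing
              vocab = set(hired) | set(rest)
              return {w: log((hired[w] + smoothing) / total_h)
                         - log((rest[w] + smoothing) / total_r)
                      for w in vocab}

          # With mostly-male historical hires, a token like "women's" ends up with
          # a negative score purely because it was rarer among past hires.
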
  • by recrudescence ( 1383489 ) on Wednesday October 10, 2018 @01:31PM (#57456622)

    The claim that "the industry is dominated by men and therefore we couldn't train this in a gender-neutral way" is totally bogus from a machine-learning perspective. All that is needed to eliminate a bias arising from dataset imbalance is to balance the dataset.

    More likely they realised that when using dispassionate criteria for optimal hiring, it would become very likely they'd not get the desired "Women > Men" politically correct outcome for all sorts of statistically valid reasons, and figured such optimal hiring was not worth its salt against all the money lost from lawsuits and bad PR in a time of a politically tense climate favouring women.

    I completely agree with their choice, and would do the same. No need to add fuel to the fire

  • by DontBeAMoran ( 4843879 ) on Wednesday October 10, 2018 @01:44PM (#57456686)

    The A.I. doesn't care about being politically correct.
    Maybe the A.I. has computed something we're not aware of.

    Unfortunately, people will force political correctness into the A.I. and we'll never learn the truth. /sarcasm (or is it?)

  • Tech geniuses create AI.

    Make 500 models, teach it to recognize some 50,000 terms.

    AI does HR's job too well.

    Executives kill project.
  • by Junta ( 36770 ) on Wednesday October 10, 2018 @02:52PM (#57457134)
  • Job application software sucks, and it is easy to bias it against any group that you want.

    Also, they want way too much info up front.
