NYC Passes Bill Requiring 'Bias Audits' of AI Hiring Tech (protocol.com) 75

A year after it was introduced, the New York City Council passed a bill earlier this week requiring companies that sell AI technologies for hiring to obtain audits assessing the potential of those products to discriminate against job candidates. The bill requiring "bias audits" passed with overwhelming support in a 38-4 vote. Protocol reports: The bill is intended to weed out the use of tools that enable already unlawful employment discrimination in New York City. If signed into law, it will require providers of automated employment decision tools to have those systems evaluated each year by an audit service and provide the results to companies using those systems. AI for recruitment can include software that uses machine learning to sift through resumes and help make hiring decisions, systems that attempt to decipher the sentiments of a job candidate, or even tech involving games to pick up on subtle clues about someone's hiring worthiness. The NYC bill attempts to encompass the full gamut of AI by covering everything from old-school decision trees to more complex systems operating through neural networks.

The legislation calls on companies using automated decision tools for recruitment not only to tell job candidates when they're being used, but to tell them what information the technology used to evaluate their suitability for a job. If signed, the law goes into effect in January 2023. Violators could be subject to civil penalties.
Notably, the bill "fails to go into detail on what constitutes a bias audit other than to define one as 'an impartial evaluation' that involves testing," reports Protocol. It also doesn't address how well automatic hiring technologies work to remove phony applicants.


  • by Joe_Dragon ( 2206452 ) on Monday November 15, 2021 @06:30PM (#61991733)

    ban the personality tests & race questions as well!

    • by AmiMoJo ( 196126 )

      While sadly a lot of people don't have a choice, if you are a skilled worker then you should boycott personality tests and exams yourself.

      If the hiring company asks you to do one, just withdraw from the process. If they ask why, tell them that personality tests and exams are red flags that indicate a poor work environment and flawed hiring strategy that doesn't create good teams.

  • by gillbates ( 106458 ) on Monday November 15, 2021 @06:33PM (#61991741) Homepage Journal

    Given that 60% of college graduates are women, an algorithm which selects 50% male candidates is likely discriminating against women.

    Or, it could be discriminating against men, in STEM fields where women are far less than 50% of the candidate pool.

    Since this is politics, it won't be possible for the evaluators to consider that, as groups, men and women often choose very different career paths. Granted, I don't want algorithms making hiring decisions, but I doubt this law will curb the inhumane practice - instead, it will end up giving women and minorities a preferential advantage in fields where they're "underrepresented". And in fields like Nursing and Teaching, it may actually end up making it more difficult to hire them.

    End result - politics working as designed - nobody is happy.

    • ban categories like
      RACE
      AGE (other than minimum-age rules like 18/21 for jobs that have them)
      SEX
      DISABLED status
      personality types

      • Re: (Score:2, Interesting)

        ban categories like
        RACE

        That doesn't work. Race is excluded from many AI models. The results are still highly skewed by race because other data strongly correlate with race.

        For instance, an AI that made pre-trial release recommendations quickly learned that certain zip codes had much higher rates of failing to appear at trial. Those zip codes were predominantly black. It was also able to look at criminal history. Blacks commit different crimes. They use crack cocaine while whites use powder. Blacks sell drugs on street corners...
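
        A minimal sketch of this proxy effect, on entirely synthetic data (the group split, zip-code correlation, and outcome rates below are invented for illustration): even with race excluded from the model's inputs, its risk scores still split along racial lines.

        ```python
        # Illustration only: race is never shown to the model, yet its output
        # correlates with race through a proxy feature (zip-code cluster).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n = 10_000

        # Hidden protected attribute -- excluded from the design matrix.
        race = rng.integers(0, 2, size=n)

        # Proxy: group 1 lives in zip cluster 1 ~80% of the time, group 0 ~20%.
        zip_cluster = (rng.random(n) < np.where(race == 1, 0.8, 0.2)).astype(int)

        # Outcome (e.g., historical failure to appear) driven by the zip cluster.
        y = (rng.random(n) < np.where(zip_cluster == 1, 0.4, 0.1)).astype(int)

        # The model sees only the proxy, never race.
        model = LogisticRegression().fit(zip_cluster.reshape(-1, 1), y)
        risk = model.predict_proba(zip_cluster.reshape(-1, 1))[:, 1]

        print(f"mean predicted risk, group 0: {risk[race == 0].mean():.3f}")
        print(f"mean predicted risk, group 1: {risk[race == 1].mean():.3f}")
        ```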

        • by djinn6 ( 1868030 )

          If there is indeed a correlation between selling drugs on the street and not appearing at trial, why is that not fair game for the AI?

          I'm going to go out on a limb and say that failing to appear at trials in the past and not appearing at trial in the future are correlated. What if it just so happens that more black people fail to appear at trial in general? Should we ignore that data point too in the name of racial equality?

          • The cognitive dissonance is a mandatory attribute for being 'anti-racist' (or else, you're racist): You must simultaneously be outraged that police target black people more and thus arrest them more, and also be outraged that an algorithm would believe a black person is more likely to be arrested again. You must believe that black and white both have equal rearrest chances, and that black men are arrested more.
          • by HiThere ( 15173 )

            If you've failed to appear at trial before it should certainly count against you, but should it count against you if your neighbor down the block did? There probably *is* a statistical correlation, but I don't feel it would be fair to act on that correlation.

          • One is something a person can control - just show up in court - while the other is being born black. Just like charging higher insurance premiums to males because they crash more: it is unfair that I should pay more regardless of my own actions. But that is how people think, and it generally works, even though it can be unfair to individuals.

            It also can be self-perpetuating: you assume that a person is a criminal and treat them as such, and they are more likely to become a criminal. I think this may be a reason a higher proportion of black people...

      • by AmiMoJo ( 196126 )

        It depends why they are asking for those things. If it's to give HR some anonymous stats so they can see if their job ads are appealing to a wide audience, and to check that there are no systemic biases in their system, then it's fine.

        Of course that information should not be fed to hiring managers, it should be kept confidential and stripped from any material given to them.

      • This has been tried, many times. Here is an example [abc.net.au] from Australia. Canada and the UK have tried similar things.

        It usually results in more men and white/asian people being hired. Which the people running these programs see as a failure, so they end the program. Right now companies and governments are very sensitive about diversity and so they often have quotas or targets to meet. But if you remove things like race, age, sex, etc. from applications then HR departments have no way to ensure diversity in their...

    • You’re assuming it takes the gender of the name into account.

      • You're assuming it takes the gender of the name into account.

        No, the assumption is that the algorithm will be successful in removing all racial and gender bias.

        The goal is not to remove bias, it is to create it.

        Because the name will carry inherent clues about race and gender, it is certainly the first thing to be removed from the algorithm's inputs. Once the name is removed it will be very difficult for an algorithm to have any bias on race or gender. The city won't know when there is bias because they will know nothing about the job or the applicants. Companies hire ind...

    • I don't want algorithms making hiring decisions

      Why not? Algorithms are objective and their biases can be measured and corrected.

      I doubt this law will curb the inhumane practice

      How is algorithmic hiring "inhumane"?

      The same number of people will be hired, so an algorithm may reject someone a human would choose, but an equal number will have the opposite happen.
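
      For what "measured" can look like in practice: US employment law already uses a rough yardstick, the EEOC's four-fifths rule, which flags adverse impact when a group's selection rate falls below 80% of the highest group's rate. A minimal sketch of that check, with invented group names and counts:

      ```python
      # Disparate-impact check in the spirit of the EEOC "four-fifths rule".
      # Group names and counts are invented for illustration.
      def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
          """outcomes maps group -> (hired, applied); True = passes the 80% test."""
          rates = {g: hired / applied for g, (hired, applied) in outcomes.items()}
          best = max(rates.values())
          return {g: rate >= 0.8 * best for g, rate in rates.items()}

      print(four_fifths_check({"group_a": (50, 200), "group_b": (15, 120)}))
      # group_a: 25% selection rate; group_b: 12.5% < 0.8 * 25% -> flagged as False
      ```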

      • Re: (Score:1, Offtopic)

        by gurps_npc ( 621217 )

        You are clearly not an English major and need to take some classes. I assume English is your second language.

        1) These statements contradict each other:
        "Algorithms are objective"
        and
        "and their biases can be measured..."

        If they have measurable biases then by definition they are NOT objective.

        [Also you are wrong, we can NOT measure them. We can prove they exist but not by how much.]

        2) Anything that uses algorithms is by definition inhumane, because that word means lacking pity or compassion and no algorithm has pity or compassion.

        • If they have measurable biases then by definition they are NOT objective.

          That is not what "objective" means.

          Here is Google's definition:

          Objective: Not influenced by personal feelings or opinions in considering and representing facts.

          An algorithm does not have "feelings or opinions".

        • by HiThere ( 15173 )

          And you are clearly not a statistics major. In statistics bias is an objective measurement.

          This doesn't mean it should determine the decision, but it is an intrinsic component of every method of evaluating samples extracted from populations. All samples are biased, and if you know the entire population you can measure it. Otherwise you need to use various techniques to estimate it. And your decision should take the bias into account, to the extent possible.
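
          To make the statistical sense of the word concrete: bias is the gap between an estimator's expected value and the true population parameter, and it is directly computable when you hold the whole population. A small sketch on synthetic numbers, comparing the classically biased variance estimator (divide by n) with the unbiased one (divide by n-1):

          ```python
          # Bias as an objective measurement: average estimator value minus
          # the true population parameter. All numbers here are synthetic.
          import numpy as np

          rng = np.random.default_rng(0)
          population = rng.normal(loc=50, scale=10, size=100_000)
          true_var = population.var()  # exact: we hold the entire population

          biased, unbiased = [], []
          for _ in range(5_000):
              sample = rng.choice(population, size=20, replace=False)
              biased.append(sample.var(ddof=0))    # divides by n   -> biased low
              unbiased.append(sample.var(ddof=1))  # divides by n-1 -> unbiased

          print(f"true variance:    {true_var:.1f}")
          print(f"bias with ddof=0: {np.mean(biased) - true_var:+.1f}")   # about -5
          print(f"bias with ddof=1: {np.mean(unbiased) - true_var:+.1f}") # about 0
          ```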

  • They're going to push the you-know-what agenda when they look at it. They'll disregard how it works entirely and ask, "How does this ensure equality of outcome based on discriminating factors?" They won't care if it gives them good employees who make the company profitable. They won't care if it looks at their education, qualifications or experience. They're going to ask, "How does this ensure enough feminists, trans activists, and other special interest groups are hired equally regardless of skill, but wi...

    • How you gather all that from a CV I’ll never know.

      • by Tyr07 ( 8900565 )

        Yeah I mean, you can't pull data from the rest of the actions that have been going on, historical behavior or anything.

        I don't get the concept that you have to evaluate each new thing as if everything done before didn't happen, and that they aren't going to exhibit the exact same behavior as every other time. I recall a narcissist I was dealing with who did the same thing multiple times but demanded more chances, because /this/ time, /this/ time would be different, and their argument that I should...

    • No, how it will go is "how many SJW profs/graduates do I need on the board to convince the female judge the audit is legit", and then they'll rubber-stamp it ... except for the occasional sacrificial lamb.

      Don't be the small company using the same auditor as the big companies; it will massively increase your chances of being the sacrificial lamb.

    • Your pilot for this segment of your flight to Hawaii direct from NYC will be a person chosen for their 13 minority-equalization points rather than their qualifications to sit in a cockpit in any capacity, let alone pilot. Nonetheless you are in the safe hands of WOKE Airlines, falling out of the skies for the last 10 months....

      {O,o}

    • by AmiMoJo ( 196126 )

      It's an anti-snake-oil law. No hiring AI company can seriously claim that their product is objective unless they can back that up with evidence. If they don't have the evidence then they haven't looked for it, which means their product is snake oil.

      • by Tyr07 ( 8900565 )

        That doesn't negate what my OP says at all, though. They will redefine "objective" until it fits the narrative that the tool is racist, anti-feminist, anti-trans, etc. Even if the final pin is "Well, no one who identifies as (insert special interest group here) has even applied," they'll turn around and go "That's because they know you use this program to discriminate against them," with no proof. This is NYC we're talking about.

        Take the hypocrisy elsewhere; one agenda doesn't require proof of anything or any evidence.

        • by AmiMoJo ( 196126 )

          Seems unlikely. Most of the criticism is based on peer reviewed studies of the behaviour.

  • Who pays for this? (Score:2, Insightful)

    by MacMann ( 7518492 )

    Another sign New York City doesn't want your business. This costs money and it will have to be paid for by higher prices and/or higher taxes. Not only that but all businesses are assumed to be guilty of biased hiring processes until they prove innocence by providing audit results. It's unlikely anyone will take this to court as unconstitutional, since it would be a public relations nightmare to explain how protecting white heterosexual men in the workplace is a good thing.

    What will happen is NYC w...

    • by AmiMoJo ( 196126 )

      Not only that but all businesses are assumed to be guilty of biased hiring processes until they prove innocence by providing audit results.

      Complete nonsense. This law regulates AI, and AI is sold as a product to laypeople (HR staff) to feed CVs into. So the proof will be provided by the manufacturer who will have done some tests to prove that their product is not biased, and any users challenged about it will refer back to the manufacturer's certification.

      That's how it works for most stuff. Construction companies aren't required to prove that the materials they use are fire safe; they rely on the manufacturer testing and certifying them, for example.

  • by Aighearach ( 97333 ) on Monday November 15, 2021 @08:53PM (#61992027)

    Wow, look at all these neckbeards who don't understand what an "audit" does. Hint: It is neither a prescription nor a proscription.

    And who do the results of the audit get reported to? The companies doing the hiring! So they can understand WTF the tool they're using did.

    This is information for the companies that use the software, telling them how the software made decisions affecting them. If you're against that, I have to wonder what your true motivation is. Well, not really, we all know.

    • by djinn6 ( 1868030 )

      Is there anything to suggest these audits will be unbiased themselves? I can foresee a bunch of SJW types being hired to do that and pronouncing everything that doesn't discriminate against white men as "biased".

      Alternatively, if there's no mechanism to say who could do these audits, the companies will simply find someone to rubber stamp their software as "unbiased", which makes it just another pointless regulation.

    • by AmiMoJo ( 196126 )

      In Europe under GDPR rules the individual has a right to ask how automated decisions about them were made. So they would be able to query why the AI made the decision it did, and the answer "it's a black box, computer says no" isn't acceptable.

      • This sounds like what it would mean to the layperson, but you may actually just get a complicated technical "reason" that doesn't really help you.

        Europeans love to talk about how great their laws are, and GDPR is a good law that helps a lot of situations.

        But it won't help this situation at all; they can just tell you the answer. The AI scored you __ on ___ and ___ on ___ and ___ on ___. That's why the humans who made the decision made the decision. It's not a bottomless pit of "why" where they have to...
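
        For a simple linear scoring model, that "scored you __ on ___" answer is just the per-feature contribution to the total. A sketch with invented feature names and weights (not any real vendor's model):

        ```python
        # Hypothetical per-feature explanation for a linear resume-scoring
        # model. Feature names and weights are invented for illustration.
        weights = {"years_experience": 0.6, "degree_level": 0.3, "referral": 1.1}

        def explain(candidate: dict[str, float]) -> None:
            contributions = {f: weights[f] * v for f, v in candidate.items()}
            for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
                print(f"{feature:<18} contributed {c:+.2f}")
            print(f"total score: {sum(contributions.values()):.2f}")

        explain({"years_experience": 4, "degree_level": 2, "referral": 0})
        # years_experience contributed +2.40; degree_level +0.60; referral +0.00
        ```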

    • And who do the results of the audit get reported to? The companies doing the hiring! So they can understand WTF the tool they're using did.

      I imagine that whenever a discrimination lawsuit comes up, the companies will get subpoenaed for their audit data.

      • Indeed, that's the idea; get the companies to know what they're doing, so that they can choose what to do!

        Surely this protects them in a lawsuit... right? I mean, for companies trying to comply with the law it will, for sure. If they know what they did, they know how much to compensate to correct it, and where to compensate.

        And if not... that means the lawsuit might have a better chance of correcting the behavior!

        And for the software companies selling the AI, it rewards correctness on things they're already...

    • Wow, look at all these neckbeards who don't understand what an "audit" does. Hint: It is neither a prescription nor a proscription.

      And who do the results of the audit get reported to? The companies doing the hiring! So they can understand WTF the tool they're using did.

      This is information for the companies that use the software, telling them how the software made decisions affecting them. If you're against that, I have to wonder what your true motivation is. Well, not really, we all know.

      Oh give me a break. Yes, I'm sure it will just stay advisory. Because that's so historically what government does, in these matters.

      "Nice little business you got there ... shame if something happened to it ... "

      Pay no attention to the 250 lb bully leaning over your shoulder ... he's just looking ...

    • This is a good start. But the advertiser should also be forced to declare to applicants, BEFORE submission, that the application will undergo automated AI/filtering at one or more stages. I don't want to work for any employer or front-end screening contractor who uses AI, and waste MY time sending an application that will not be read. I suspect any company forced to declare this fact will miss the best applicants. All the best jobs I ever got were handled by hand by dedicated managers - not HR, who r...
      • Disclosure would be nice, but like you say, it would reduce the value of the tool by a lot. And most companies don't want that. So it would be harder to implement.

        Hopefully this moves the ball in that direction. Right now, the average person has no idea that most companies are already using these tools.

        Once there are audits, then companies who get sued will have to turn over their audit results, and public education will slowly increase.

    • Been running a corporate subsidiary for nearly 30 years.

      Every single audit I've had has
      - measured performance against AN EXPECTED BASELINE
      - at the end, provided recommendations about how we can meet the rules/performance/results expected.

      I think you have a stunningly naive view of what audits are, as if they're some sort of objective process that just pulls data together. They're third-party reviews from someone who ostensibly doesn't have a vested interest in the result - that's their value. But to suggest...

      • In finance, audits often don't measure performance at all, or use baselines, but instead compare what you wrote down to what your records say.

        In IT, audits are the same; what do the logs say happened?

        What sort of audits are you even talking about? The lack of key information in your claims really undercuts them. Either you don't run anything and made that up, or you don't have any idea what the audits are; you just know they show you a powerpoint about them once in a while. And it talks about benchmarks, be...

        • WTF "key information" would you be looking for?
          I'm going to identify neither myself nor my business on Slashdot, thanks.

          And yes, very MUCH our audits (about every 2 years) from our EU parent company
          - compare our books to reality
          - review our safety and IT security procedures
          - review our performance vs targets financially, productivity, etc.
          - ALWAYS include recommendations about processes, functions, accounting, IT, and safety procedures to "help" us conform to benchmarks and targets (in all those categories)

          • What a maroon. You don't even understand the words, but you claim to be some V I P.

            So I nailed it. You're saying that in the meeting where they show you the powerpoint about the audits, they also review your performance in relation to that data.

            And you can't figure out which part is the audit.

            And you don't imagine that in the case in the story, there are very specific pieces of information involved, that have already been specified. You don't even comprehend enough about legislation to understand that if th...

            • And you're a moron if you think the 'audit' is somehow magically distinct from the analysis and review.

              • You can't comprehend the meaning of words.

                You just make up random other bullshit, and say, "It could mean anything!"

                You didn't even understand the powerpoint they made for you.

    • Bias in medical AI can do a lot of harm.
  • by cascadingstylesheet ( 140919 ) on Monday November 15, 2021 @09:54PM (#61992165) Journal

    We all know what this really is. NYC is requiring companies to be biased, and there is a danger that an automated process won't know that - it didn't get the memo.

    This is meant as a corrective to the danger that some unbiased hiring might break out.

  • There is no easy way to define "bias" in hiring. You could imagine a test that shows that identical applicants of different ethnicities and genders are treated equally, but that is a pretty low bar. The real problem is that there are all sorts of other things that correlate with protected characteristics, and there is no obvious way to measure bias based on those correlations because those correlations may never appear explicitly.

    Of course humans have all the same sorts of failings when hiring.

    One o...
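
    One concrete form of that "identical applicants" test is a counterfactual probe: score pairs of resumes that are identical except for a name cueing a protected trait, and compare the outputs. A sketch, assuming a hypothetical vendor-supplied scoring function (the resume text and names are invented):

    ```python
    # Counterfactual name-swap probe: identical resumes that differ only in
    # the candidate's name should score (nearly) the same. `fake_scorer` is
    # a hypothetical stand-in for the vendor model under audit.
    RESUME = """{name}
    10 years backend development; BS Computer Science;
    led a team of five; shipped three production systems."""

    def name_swap_gap(score_resume, name_a: str, name_b: str) -> float:
        return abs(score_resume(RESUME.format(name=name_a))
                   - score_resume(RESUME.format(name=name_b)))

    def fake_scorer(text: str) -> float:
        """Placeholder; a real audit would call the vendor's system."""
        return 7.5

    print(name_swap_gap(fake_scorer, "Emily Walsh", "Lakisha Washington"))  # 0.0
    ```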
  • Anyone else notice the last line of the summary?

    It also doesn't address how well automatic hiring technologies work to remove phony applicants.

    Anyone who has a web form or email address or any way to receive input from the public inevitably gets bombarded with nonsense, spam, and scam messages.

    Removing phony applicants is not even a public concern, so it's a good thing that the bill doesn't address that. Software vendors already have an incentive to include such features as a selling point.

  • It would surprise me greatly if AI selection of candidates is any more predictive of their success than the multiple other biased ways in which hiring happens.

  • Regardless of the whole racism/sexism/SJW!!! minefield, isn't the point of this kind of software, or really of a human in HR, to be biased towards qualified candidates?
    Wouldn't the only way for something to be truly unbiased be for it to pick candidates completely at random?
  • Next thing you know, if one party gets an "unfair" number of votes, they get put in the stocks.

    This is like when the bond raters downgraded the US credit rating and Obama audited them until they put it back up.
