California Suggests Taking Aim At AI-Powered Hiring Software (theregister.com)

An anonymous reader quotes a report from The Register: A newly proposed amendment to California's hiring discrimination laws would make AI-powered employment decision-making software a source of legal liability. The proposal would make it illegal for businesses and employment agencies to use automated-decision systems to screen out applicants who belong to a class protected by the California Department of Fair Employment and Housing. Broad language, however, means the law could be easily applied to "applications or systems that may only be tangentially related to employment decisions," lawyers Brent Hamilton and Jeffrey Bosley of Davis Wright Tremaine wrote.

Automated-decision systems and algorithms, both fundamental to the law, are broadly defined in the draft, Hamilton and Bosley said. The lack of specificity means that technologies designed to aid human decision-making in small, subtle ways could end up being lumped together with hiring software, as could the third-party vendors who provide the code. The proposal also includes strict record-keeping requirements that double the record-retention period from two to four years and require anyone using automated-decision systems to retain all machine-learning data generated as part of their operation and training. Training datasets put vendors on the hook, too: "Any person who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, to an employer or other covered entity must maintain records of the assessment criteria used by the automated-decision system," the proposed text says. The draft specifically requires vendors to maintain those records for each customer they train models for.

Unintentional filtering isn't exempt under the newly proposed California law, which covers the ways in which software can discriminate against certain types of people, unintentionally or otherwise. [...] Hamilton and Bosley suggest that California employers review their applicant-tracking (ATS) and recruitment-management (RMS) software to ensure it conforms to the proposal, deepen their understanding of how the algorithms they use actually function, be prepared to demonstrate that the results of their process are fair, and speak with vendors to confirm they are doing what is needed to comply. The 45-day public commentary period for the proposed changes is not yet open, meaning there's no timetable for the changes to be reviewed, amended, and submitted for passage.

Comments:
  • If you're a protected class, you have the right to be treated like a human being, and if you're not, you're not? Is that it?

    Fuck AI hiring. Fuck lazy head hunters. What a sad world we live in.

    • AI hires carry the prejudices of developers...
      • by jellomizer ( 103300 ) on Saturday April 09, 2022 @11:16AM (#62431918)

        As well as the prejudices of whoever created the data being processed.

        If you take the last 50 years of HR data from a large company, and your AI software just categorizes data elements and correlates each one against who was hired and promoted versus whose resume was skipped, who was fired, and who was passed over for promotion, then the racism, prejudice, and institutional biases of that whole period will still be applied. A name that reads as belonging to a minority would have a statistically higher chance of being classified as "do not recommend."

        AI is like an impressionable child: it will take what everyone says and run with it. It needs discipline and moral restrictions placed on it, such as "do not factor in the applicant's name, estimated age, or the location of previous work experience or education in the recommendation, even if there is a correlation," because the collected data is faulty, built on decades of bias.
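
        A minimal sketch of that kind of restriction (the record layout and field names below are invented for illustration; nothing here comes from the article or any real hiring system):

        ```python
        # Illustrative only: strip fields known to encode bias before
        # the recommendation model ever sees an application.
        from dataclasses import dataclass

        @dataclass
        class Application:          # hypothetical record layout
            name: str
            estimated_age: int
            location: str
            years_experience: int
            skills: list[str]

        # Excluded even if they correlate with past hiring outcomes,
        # because the historical data itself is assumed to be biased.
        EXCLUDED_FIELDS = {"name", "estimated_age", "location"}

        def to_features(app: Application) -> dict:
            """Keep only the fields the model is allowed to see."""
            return {k: v for k, v in vars(app).items()
                    if k not in EXCLUDED_FIELDS}

        print(to_features(Application("J. Doe", 42, "Oakland", 7,
                                      ["python", "sql"])))
        # {'years_experience': 7, 'skills': ['python', 'sql']}
        ```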

        • Re: (Score:2, Insightful)

          by starworks5 ( 139327 )

          Imagine you're the NBA and you train this model, and it tells you to hire the tallest person, who is statistically more likely to be black. Do you propose that this is unlawful discrimination, or simply hiring the best person for the job?

          What you are witnessing is just another part of the war against standardized testing, in favor of race-based nepotism, under the premise of "equity." That is why in recent years there has been a push to eliminate the SAT for college, the Bar exam for lawyers, and claiming that m

        • The primary goal of artificial intelligence software systems is to detect what makes people improve their performance and productivity over time. Artificial intelligence tools include machine learning and deep learning, which report analysis to improve the clarity of planning, reasoning, thinking, problem solving, and learning. It's important to have a clear view of the end product and outcomes. Everything depends on the stage of readiness and the concentration of resources allocated to the project,
    • by reanjr ( 588767 ) on Saturday April 09, 2022 @10:09AM (#62431794) Homepage

      No. The takeaway is that these systems are too big a liability to allow them to continue.

    • by AmiMoJo ( 196126 )

      You need something like GDPR to give everyone the right to have automatic decisions reviewed.

    • Using computers to filter out new hires is a quick and easy way to show you can't find an American worker so that you can request more H-1Bs. You can find videos on YouTube of lawyers teaching businesses how to avoid hiring local talent so they can bring in foreign talent cheap. Of course, you're not going to see much of that on corporate media.
      • Deliberately configuring something like that into the AI is asking to get caught. The AI merely does what a human can be instructed to do, only faster. If you can tell a human to make the AI filter out American workers, why can't the headhunter do it manually?

    • Sorry to be a little off topic, but I'm currently looking for Android developers and would like some recommendations. So far, I have only found this company https://stfalcon.com/en/servic... [stfalcon.com], and judging by the reviews, they do their job well. But I would like to know about other companies that could help me, just to compare prices and reviews. Can you advise something?
  • ... by excluding some job offers that gave preference to black people (in line with local laws), because it was "against LinkedIn policy"...
  • by DrunkenTerror ( 561616 ) on Saturday April 09, 2022 @08:49AM (#62431662) Homepage Journal

    the Golgafrinchans had the right idea

  • Funny how many want to ban AI-based job application rules because they are supposedly discriminatory, and how many make them mandatory because they are supposedly less discriminatory.

    • by splutty ( 43475 )

      The "less discriminatory" claim has been proven very much untrue. Not that that has ever stopped a politician from stating otherwise.

      The AI gets trained by humans and human interactions after all, and those are always discriminatory.

      • People like discrimination. It's just a matter of how you discriminate. Ask AOC and Ted Cruz, and you'll get different answers, but both will be correct from their perspective.

  • by geekmux ( 1040042 ) on Saturday April 09, 2022 @09:11AM (#62431704)

    Dear HR,

    You're hiring a human, aren't you? Fucking act like one.

    And the actual "protected class" is every human who needs a job, so let's drop the discrimination too.

    • Admittedly, the huge uneasiness of thoughtful scientists who proclaimed that AI promised human disaster once it began edging into areas involving deep human concerns was seen as only something of a far-off future. No doubt there are immense surprises yet to be revealed by radical new algorithmic developments, but handing many human decisions over to unpredictable machines echoes that SF horror of invasion by interstellar invaders with incomprehensible motivations. To add that to the s
    • Yes. Discrimination to "solve" discrimination is dumb. You will only create resentment and conflict. The only way to solve discrimination is to aim at being as fair as possible. That means for a new job opening, you give as close to an equal chance as possible to every candidate.
      • That does, of course, mean that if you're training an AI with past data, you need to counter the existing biases in order to get fair selection criteria; one common way to do that is sketched below.
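
        A minimal sketch of that idea, using a standard preprocessing trick sometimes called reweighing (synthetic data; all names are invented for illustration):

        ```python
        # Reweigh samples so group membership and outcome look independent:
        # weight = P(group) * P(label) / P(group, label).
        import numpy as np

        def reweigh(group: np.ndarray, label: np.ndarray) -> np.ndarray:
            """Return one training weight per sample."""
            weights = np.ones(len(label), dtype=float)
            for g in np.unique(group):
                for y in np.unique(label):
                    mask = (group == g) & (label == y)
                    observed = mask.mean()
                    if observed > 0:
                        expected = (group == g).mean() * (label == y).mean()
                        weights[mask] = expected / observed
            return weights

        # Synthetic history: group 1 was hired far less often than group 0.
        group = np.array([0] * 50 + [1] * 50)
        label = np.array([1] * 40 + [0] * 10 + [1] * 10 + [0] * 40)
        w = reweigh(group, label)
        print(w[0], w[50])   # over-hired combo ~0.625, under-hired combo ~2.5
        ```
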
  • Problem #1:

    The draft regulations provide the following examples of Automated-Decision Systems:

    Algorithms that screen resumes for particular terms or patterns

    Google receives more than 3 million job applications a year and has tens of thousands of open positions at any given time. Simply doing a search for "JavaScript" to find potential front-end developers would be enough to trigger the law. Want to avoid that? Then every recruiter would need to sift through all 3 million applications by hand. The same goes for a smaller employer doing a HotJobs search. Completely impractical. So you will be covered by the law; the sketch below shows how low that bar is.
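
    A hypothetical keyword screen this trivial would already seem to fit the draft's example of "algorithms that screen resumes for particular terms" (nothing below is from any real ATS):

    ```python
    # Hypothetical minimal "resume screening algorithm": a keyword match.
    def matches(resume_text: str, required_terms: list[str]) -> bool:
        """True if every required term appears somewhere in the resume."""
        text = resume_text.lower()
        return all(term.lower() in text for term in required_terms)

    resumes = {
        "alice.txt": "Senior front-end developer: JavaScript, React, CSS.",
        "bob.txt": "Backend engineer: Go, Postgres, Kubernetes.",
    }
    shortlist = [name for name, text in resumes.items()
                 if matches(text, ["JavaScript"])]
    print(shortlist)   # ['alice.txt']
    ```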

    Problem #2: How do you train an AI system which

  • Eliminate the gender, age, race, hell, even the name from the application before it's sent through the AI hiring software. There's no need to know any of that until the candidate is selected for the position, really. The only things the AI needs to know are the qualifications and experience of the applicant and the serial number of the application, so it can inform the human in charge which application belongs to the best candidate for the position.

    • The problem is that doing that doesn't help, because other things correlate to that information. An AI/ML algorithm may select against protected classes even though people are not explicitly listed as members of that class. Someone may have attended a school that has a large number of students from a protected class, been hired by a business that hires more members of that class, be a member of a professional organization, etc.

      It won't be anywhere near 100% accurate at recognizing members of that class; the sketch below illustrates the proxy effect.
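
      A toy demonstration on synthetic data (the feature names are invented; "proxy" stands in for something like a zip code or school):

      ```python
      # Even with the protected attribute removed, a correlated "proxy"
      # feature lets biased historical labels leak into the model's scores.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      protected = rng.integers(0, 2, n)           # never shown to the model
      proxy = np.where(rng.random(n) < 0.8,       # 80% correlated stand-in,
                       protected, 1 - protected)  # e.g. zip code or school
      skill = rng.normal(0, 1, n)                 # genuinely job-relevant

      # Biased historical labels: past hiring favored protected == 0.
      hired = (skill + 1.5 * (1 - protected) + rng.normal(0, 1, n) > 1.0)

      # "Blind" logistic regression: sees only skill and the proxy.
      X = np.column_stack([np.ones(n), skill, proxy])
      w = np.zeros(3)
      for _ in range(500):                        # plain gradient descent
          p = 1 / (1 + np.exp(-X @ w))
          w -= 0.1 * X.T @ (p - hired) / n

      scores = 1 / (1 + np.exp(-X @ w))
      print(scores[protected == 0].mean())        # noticeably higher...
      print(scores[protected == 1].mean())        # ...than this: bias survives
      ```
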
  • If you follow the link, the term "protected class" means 18 classes like race, gender, sexual orientation, marital status, et al. It does *not* mean minority or under-represented.

    The article is rather misleading, because it implies that an applicant can *be* a protected class. They can't. They just can't be rejected because of one of the 18 attributes, and that applies to everyone, whether they're over- or under-represented.
