Google's AI Will Help Decide Whether Unemployed Workers Get Benefits

An anonymous reader quotes a report from Gizmodo: Within the next several months, Nevada plans to launch a generative AI system powered by Google that will analyze transcripts of unemployment appeals hearings and issue recommendations to human referees about whether or not claimants should receive benefits. The system will be the first of its kind in the country and represents a significant experiment by state officials and Google in allowing generative AI to influence a high-stakes government decision -- one that could put thousands of dollars in unemployed Nevadans' pockets or take it away. Nevada officials say the Google system will speed up the appeals process -- cutting the time it takes referees to write a determination from several hours to just five minutes, in some cases -- helping the state work through a stubborn backlog of cases that have been pending since the height of the COVID-19 pandemic.

The tool will generate recommendations based on hearing transcripts and evidentiary documents, supplying its own analysis of whether a person's unemployment claim should be approved, denied, or modified. At least one human referee will then review each recommendation, said Christopher Sewell, director of the Nevada Department of Employment, Training, and Rehabilitation (DETR). If the referee agrees with the recommendation, they will sign and issue the decision. If they don't agree, the referee will revise the document and DETR will investigate the discrepancy. "There's no AI [written decisions] that are going out without having human interaction and that human review," Sewell said. "We can get decisions out quicker so that it actually helps the claimant."

Judicial scholars, a former U.S. Department of Labor official, and lawyers who represent Nevadans in appeal hearings told Gizmodo they worry the emphasis on speed could undermine any human guardrails Nevada puts in place. "The time savings they're looking for only happens if the review is very cursory," said Morgan Shah, director of community engagement for Nevada Legal Services. "If someone is reviewing something thoroughly and properly, they're really not saving that much time. At what point are you creating an environment where people are sort of being encouraged to take a shortcut?" Michele Evermore, a former deputy director for unemployment modernization policy at the Department of Labor, shared similar concerns. "If a robot's just handed you a recommendation and you just have to check a box and there's pressure to clear out a backlog, that's a little bit concerning," she said. In response to those fears about automation bias, Google spokesperson Ashley Simms said, "we work with our customers to identify and address any potential bias, and help them comply with federal and state requirements."
"There's a level of risk we have to be willing to accept with humans and with AI," added Amy Perez, who oversaw unemployment modernization efforts in Colorado and at the U.S. Department of Labor. "We should only be putting these tools out into production if we've established it's as good as or better than a human."

Comments Filter:
  • Here in Florida (Score:4, Interesting)

    by Powercntrl ( 458442 ) on Tuesday September 10, 2024 @07:26PM (#64778415) Homepage

    We don't need no stinkin' AI to say "no".

    The unemployment site just runs on Windows XP or something equally archaic, and when it crashes and you get locked out while claiming your weeks, there's a helpful number for you to call. But there's this one weird trick red states love to use - nobody answers the phone!

    Saves the state a fortune, I can only assume. The unemployed can pull on their boot straps if they need money, or so I've heard on Fox News.

    • From what I've seen, Google's AI can't even answer math problems correctly. It gets about 50% of answers wrong in my experience. Hard to see how this isn't an attempt to arbitrarily deny unemployment benefits to people, because Nevada just doesn't want to give out unemployment benefits.

      • Actually, my favorite is when I ask ChatGPT to figure out the math on something and it just gives me a formula without actually solving it. Then when I ask it to solve it, it gets it wrong.

        • >cutting the time it takes referees to write a determination from several hours to just five minutes, in some cases

          I expect 95% of the workers involved in the said work to lose their jobs.

          Isn't this all about efficiency, or is it a way to spend more money to enrich government contractors?

          The simplistic solution would be, wait for it, to reduce the complexity of the regulations and reduce the number of government workers.

          And how many of the 180,000 Nevada government employees will lose their jobs? https://fr [stlouisfed.org]

          • by whitroth ( 9367 )

            100% WRONG answer. I mean, unless you enjoy sitting and waiting in line to be seen at, say, the DMV for a license plate, and think everyone should.

            And, of course, there's the utter stupidity of contractors instead of hiring people. No, you shut up: I worked for a contracting company for 10 years, and I was at the NIH. Instead of hiring me, and paying my salary and benefits, YOUR tax dollars went to pay me. And my company manager. And the fed who managed the contracts. And my corporate manager's manager. Oh,

      • That's ok, to determine benefits, all Google's AI needs to know is your skin color and if you are non-standard gender! The more special interest checkboxes you hit, the better your chance at getting bennys
      • AI will have the same problem as earlier computer programs: The official will say "The computer says you're wrong and there's nothing I can do about it. Next!"

        To relate a story about officials. I once got a bill for unpaid business taxes. I don't own or partially own a business, and never have in the past. So I called up the city and got Betty on the phone. I argued with Betty that I don't owe any business taxes, but she was dubious and not helpful, and she said I could discuss this at City Hall in perso

    • Re: (Score:2, Informative)

      by quonset ( 4839537 )

      You also send out the police to interrogate people [go.com] on why they signed a petition for a ballot initiative.

    • "We can't run the risk of an undeserving person getting benefits, therefore we must deny benefits to EVERYONE!"

    • As far as I know, British banks have a similar system in place for loans [vimeo.com]

      • by AmiMoJo ( 196126 )

        The UK still has GDPR rules, as they have not been trashed since we left the EU. Under those rules, you have a right to have automated decisions reviewed and explained.

        If AI says no, you can ask for an explanation of why it said no, and have the decision reviewed by a human. Typically AI cannot explain these decisions in a satisfactory way, so it would need to go through human review against a set of criteria that you can check and challenge.

    • I think the real scam here is being able to ingest employment and benefits data, broker that data, and target ads based on it.

    • by whitroth ( 9367 )

      You've got that right. Notice it says there's this huge backlog... why is that? Maybe because they refuse to spend the money to HIRE MORE PEOPLE to review the cases? AND TRAIN THEM, FIRST?

    • But there's this one weird trick red states love to use - nobody answers the phone!

      OMG, they do that in Red States too? That sucks!

      I've lived in Blue States my entire life and much like tax breaks for wealthy film studios (local Democrats sold us on this "trickle down economics" concept) coupled with cuts to local health programs, I thought it was something that only happened here.

      That's disappointing.

      CNN hasn't picked up on it yet, but it's coming .... I can feel it.

  • Predictable (Score:2, Funny)

    by The Cat ( 19816 )

    So the AI takes your job and then it takes your unemployment.

  • On the form just claim you have extra fingers, the bot will favor you.

  • by penguinoid ( 724646 ) on Tuesday September 10, 2024 @07:50PM (#64778469) Homepage Journal

    "cutting the time it takes referees to write a determination from several hours to just five minutes, in some cases"

    That's basically the AI doing everything with the referee performing little more than the most basic of sanity checking.

    The only way this isn't dystopia is if the claimant can appeal to full human review, by humans who have no knowledge of the AI recommendation. Which basically means, this only works if it is used to rubberstamp some claims (which is a worthwhile goal).

    I wonder what the AI will say for "I used to work at the unemployment office, but then my job got taken by an AI".

    • No, this can theoretically (I have no domain knowledge) easily work well, efficiently, and reliably:
      It might for a lot of cases only take a few aspects of the case to be able to say that benefits should be denied (which happens most of the time: https://oui.doleta.gov/unemplo... [doleta.gov] ). If for an example case the AI indicates that benefits should be denied because of aspects X and Y, the human reviewer only needs to check the details of those to be able to confirm the denial. None of the other aspects then need

    • by Jahta ( 1141213 )

      "cutting the time it takes referees to write a determination from several hours to just five minutes, in some cases"

      That's basically the AI doing everything with the referee performing little more than the most basic of sanity checking.

      The only way this isn't dystopia is if the claimant can appeal to full human review, by humans who have no knowledge of the AI recommendation. Which basically means, this only works if it is used to rubberstamp some claims (which is a worthwhile goal).

      I wonder what the AI will say for "I used to work at the unemployment office, but then my job got taken by an AI".

      This is the problem. These AI models are essentially black boxes; you can't determine why they produce particular outputs.

      That's fine when an AI is generating potential recipes for medication, which will then be rigorously lab-tested before they get anywhere near real people. But for things like welfare, or mortgage, or hiring decisions, where the process has to be open to scrutiny and even potentially legal challenge, then the lack of traceability is a major red flag.

    • I wonder what the AI will say for "I used to work at the unemployment office, but then my job got taken by an AI".

      I'm sorry Dave, I can't let you claim that.

  • The "reviewer" in this instance has the same incentives as the mortgage signers (such as from Wells Fargo) did. They will approve as many cases as they can, with as little due diligence as they can get away with, because that's where the OKRs will be set, where the employment reviews will happen, and where the consequences will settle.

    Dream on, if you think the individual robo-signer... er, reviewer will receive any more than minor blowback. And in the mean time, the claimant (who, universally, will have

  • And people who compete with them will get none. This is not a new game. It's extremely old.
  • I'm amazed at how many commentators imagine how well or badly AI might work in this environment without any objective evidence. Let's do some testing before jumping to conclusions. AI does not have to be perfect (whatever that means in practice), just good enough to be justified by speed and cost savings. Of course, we now have the problem of who evaluates the test results.
    • Machine learning for things like this is a way to ensure that future outcomes are the same as past outcomes, except eliminating outliers. If the past outcomes are unjust, machine learning ensures they cannot become more just. If the past outcomes are just, machine learning ensures that you can't adapt to new circumstances, which means the outcomes become less just over time.

    • by hey! ( 33014 ) on Tuesday September 10, 2024 @09:24PM (#64778657) Homepage Journal

      First you have to define what "working well" would be. Then you have to look at your training data and how it relates to your target population to even know if it's possible.

      The big problem with generative AI is how well it always *appears* to work. In many kinds of applications *appearing* to work well and *actually* working well are the same thing -- like generating a story from a prompt. But in other cases generative AI hallucinates credible looking justifications for wrong decisions, and will actually justify its bad decisions in a highly convincing way until you really rub its nose in its error. It's unconscionable to put this in charge of decisions that could possibly harm people unless those decisions are reviewed by people who have been trained to understand the technology's dangers.

      Fundamentally, as always, the problem is people, not the technology; specifically the faith people put in technology they don't understand [wikipedia.org]. Generative AI is particularly dangerous because it generates convincing looking errors.

      • It's unconscionable to put this in charge of decisions that could possibly harm people unless those decisions are reviewed by people who have been trained to understand the technology's dangers.

        You do realize that those people already do not matter. If they did matter, they wouldn't be applying for unemployment. The state of Nevada is just looking to give legitimacy to their system of denying free money to people who don't really deserve it. Try not being poor by doing something worthwhile and valuable to society so that your betters can profit off of it; otherwise, fuck off and die. Literally.

      I'm amazed at how many commentators imagine how well or badly AI might work in this environment without any objective evidence. Let's do some testing before jumping to conclusions. AI does not have to be perfect (whatever that means in practice), just good enough to be justified by speed and cost savings. Of course, we now have the problem of who evaluates the test results.

      It's been tested, it sucks: https://en.wikipedia.org/wiki/... [wikipedia.org]

  • The reviewer would only examine those cases where the benefits were denied. This would actually save a lot of time since no humans would be required when benefits were allowed. That would give the reviewer plenty of time to examine the denials in depth, something that is probably not happening now.
    • That's fine when there's no incentive to reduce the approved rating. But that's not why these systems come about. The government wants to reduce its bill by X% and it doesn't really care whether or not there are that many inappropriate claims and so the AI will be "tuned" to generate more failures. And it will be tuned again when the humans keep overrulling it, not to make it more accurate, but to generate an ever higher number of disapproved to try and get to that semi-mythical X%. And along the way the hu

      • I said ideally. We do not live in an ideal world. I have no idea if reviewers work on a quota. After all, the great majority of claims are routine and approved essentially automatically. I am sure that reviewers are instructed to reject non-routine claims on a regular basis. It is cheaper for the company to put the onus of proving a claim on the claimant and their physicians. AI might actually help with this, but I doubt it. It is an economic issue not a technical one.

  • const char *makeDecision( int empId )
    {
        (void)empId;  /* the answer never depends on the claimant */
        return "No";
    }
  • I know what you're all thinking BUT: what about when we learn how to game the AI into giving the desired decision? The right combination of the right words in the right order, that's all it's gonna take. You get benefits, and you get benefits, everybody gets benefits, it's like Oprah up in here!
  • They tried this with unemployment benefits in Australia, and it was an unmitigated disaster: https://en.wikipedia.org/wiki/... [wikipedia.org]

    There are still a few weaselly politicians trying to explain to the public why they should keep THEIR jobs after supporting the failed program.

    • If the house you built is rubbish, it doesn't mean your tools are bad. With the current state of AI, it can be a great tool to assist in making decisions, but not to make those decisions itself. Use the tool the right way. It kind of sounds like that's what they are doing here: providing an analysis that a human then uses to make a decision. This can be an enormous time-saver. There are still enough red flags to worry about, but as an experiment it'll be interesting to see what the results are.
      • There's still enough red flags to worry over, but as an experiment it'll be interesting to see what the results are.

        If you've ever been dependent on unemployment, you'll know that this is a terrible place to do "experiments." Even if you do eventually get through to a human and get your payments restored, this doesn't do shit if you've already been evicted.

  • ... claimants should receive benefits.

    There have been reports of these paper-pushers acting like insurance corporations, refusing all changes in circumstances: Most times, the reason for refusal is the claimant not having the required paperwork (or paperwork getting 'lost'). That's a problem, because the branch has to deliver claim-form Quality of Service (QoS) but doesn't have to deliver any appeal-form QoS: Hence the backlog.

    A paper-pushing AI to find the paperwork will be a boon to everyone.

  • How can training AI on bad decisions lead to anything but more bad decisions?
  • by kackle ( 910159 )
    Isn't/shouldn't this be just a plain computer program: if he applied for no jobs in 90 days and he isn't ill and he hasn't claimed a disability then stopChecks();
