
Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties (thestar.com) 73

New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."

The applicant's lawyer said he was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."

The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.


  • Not all AI. Some is good, like image-recognition, audio transcription, etc. But these LLMs and GPT bullshitters need to die.

    • by Firethorn ( 177587 ) on Wednesday March 25, 2026 @08:56PM (#66062252) Homepage Journal

      I was just reading a story about a woman who spent six months in jail after being extradited from Tennessee to North Dakota.
      The only evidence it was her was an AI facial-recognition match between her social media/driver's license photos and the video of the actual suspect.
      It wasn't until the first court date that the public defender got her financial records showing she was in Tennessee when the crime actually happened.
      Then they kicked a southerner out into a North Dakota winter without proper clothing, not even bothering to get her a ride back home.

      She lost her house and car to non-payment because she couldn't pay her bills while in jail.

      Looking at it, she'll probably end up with a $2-3M settlement.

      https://www.theguardian.com/us... [theguardian.com]

      • Not gonna lie, I'd take the 100 day hit for the $2-3mil settlement if it was me.

        Though I'd probably go metaphorically John Wick on them for losing my dog.

      • Looking at it, she'll probably end up with a $2-3M settlement.

        Good news everyone! The taxpayers don't have to pay that money. She will die penniless, cold, and hungry.

    • by gweihir ( 88907 )

      Indeed. The sooner, the better.

  • AI won't do it for you.

  • by Brain-Fu ( 1274756 ) on Wednesday March 25, 2026 @06:23PM (#66061936) Homepage Journal

    This business of having an AI do the legwork and then having a human review it and make a final decision keeps going badly. Humans are intrinsically lazy and the moment they get a few good results from the AI they are going to stop doing the validation and start rubber-stamping. It doesn't matter if policy disallows this, they will do it anyway. It doesn't matter if the human really cares; they won't be able to help themselves. Human laziness is too deep an instinct.

    It's the same with the self-driving cars where a human is required to stay at the wheel and alert so they can manually override the instant the AI starts doing something wrong. Humans CAN'T keep that up. It's not possible. The brain just doesn't work that way. The mind knows that it isn't doing the work, and it will get bored and lose focus or just nod off.

    Everyone is SO eager to have it both ways: "an AI does all the work but a human verifies it so we know it's good." We just can't have it both ways. Once the AI does the work, the human stops verifying. That is how and why things went wrong here, it is how and why things have gone wrong for several law firms that submitted hallucinated court rulings, and it is how and why things will continue to go badly across all industries that adopt AI in such a role.

    "Human in the loop" is really easy to say. Much harder to actually do reliably.

  • by Luckyo ( 1726890 ) on Wednesday March 25, 2026 @06:25PM (#66061942)

    The main problem with AI in these cases is that it is so good that people stop checking it.

    Even when they're explicitly employed to do so, as is the case here. "It's been right the last ten thousand cases I checked; it's right here too."

    • It's good. Until it isn't.

      I'm a computer scientist with three decades of expertise in computer networking. There's this one AI-based job board that keeps trying to match me with delivery driver jobs.

  • by rsilvergun ( 571051 ) on Wednesday March 25, 2026 @06:52PM (#66061990)
    Leading to arrest. So far the victims have been white people, so they are at least alive. One of them was poor and couldn't afford a lawyer, so they spent six months in jail before their public defender got enough time to prove they weren't the person reported.

    Dumb people think the computers are smarter than they are. And so they think the computer is infallible because dumb people already think they are pretty smart.

    And the entire country of America right now is being run by dumb people, top to bottom. Hell, with the police we intentionally screen out intelligent people, because the job is generally fairly boring and people who aren't dumb have a bad habit of quitting after a few months of expensive training. I've had a few buddies who thought being a cop might be an easy job with decent benefits, so they went in for it and couldn't pass the psych exam -- not because they were crazy, but because it said they were too smart to be cops.
    • by Archfeld ( 6757 )

      "You are too smart to be a cop...."

      LMAO. That is what they tell the ID10Ts who can't be trusted with a gun, a badge, or any responsibility. Almost any reputable force requires a degree, and advancement requires a higher degree. I am not saying a degree shows intelligence or, more importantly, common sense, but I'd say your "buddies" might be more suited for the Marines.

  • I feel like a good idea for this sort of thing, if it's going to be deployed, is to include the applicant in the loop.

    "Hi, your application will be rejected because:
    * You list your qualifications as an electrician, not a medical expert.

    If any of this is in error and you want to continue with your submission, please explain the error below and click 'Contest,' attesting that you believe this to be in error, and someone will be sure to review it more carefully."

    Even without AI it would be nice for job application for

    • > Hi, your application will be rejected because: * You list your qualifications as an electrician, not a medical expert.

      Dr. Frankenstein would like to disagree.
    • There's a bit of subversion here, though. Going by my uncle's experience, Social Security disability has made it even easier: they automatically reject every application the first time.

  • by hyades1 ( 1149581 ) on Wednesday March 25, 2026 @11:39PM (#66062378)

    There's no mystery here. The officer alleged to have verified the decision wasn't doing their job. You can frame this any way you like...the officer is overworked and couldn't keep up with the number of applications they're supposed to verify, or the officer is lazy, or the officer is incompetent, or perhaps the scientist's name identified their ancestry and the officer is a racist.

    In my opinion (backed by some experience) the most likely explanation is the department relies on the fact that many applicants who are rejected won't have the means to appeal a decision, and the spokesperson is simply lying when they claim AI isn't used to recommend or make a decision.

  • by necro81 ( 917438 ) on Thursday March 26, 2026 @09:44AM (#66062896) Journal
    These kinds of poor outcomes were described thoroughly in the book "Weapons of Math Destruction [penguinrandomhouse.com]" by Cathy O'Neil. She cites examples in bail/parole recommendation algorithms, HR screening tools, insurance, etc. In her view, a WMD [wikipedia.org] is a computer system with some or most of these characteristics:
    * it makes serious decisions affecting people other than the person using the tool,
    * it uses proxy measurements (zip code, socioeconomic status) for the thing it's actually trying to quantify (e.g., risk of recidivism),
    * its inner workings are opaque, and/or it is built on data of unknown provenance,
    * it is not, or cannot be, corrected in light of new data or mistakes,
    * it is difficult or impossible to contest,
    * it is subject to little to no regulation.

    That was published in 2016, well before LLMs and generative AI really hit the scene. But the dangers were already apparent even then, and f*&k-all has been done to mitigate them.
    • Systems like those are explicitly banned under the EU AI Act. Now, there are some serious problems with the AI Act (one of which is that it has been lobbied to death by copyright holders), but this ban is something we can likely all agree is a good idea.

  • by TwistedGreen ( 80055 ) on Thursday March 26, 2026 @01:26PM (#66063388)

    I hereby inform you under powers entrusted to me under Section 47, Paragraph 7 of Council Order Number 438476, that Mr. Buttle, Archibald, residing at 412 North Tower, Shangri La Towers, has been invited to assist the Ministry of Information with certain enquiries, the nature of which may be ascertained on completion of application form BZ/ST/486/C fourteen days within this date, and that he is liable to certain obligations as specified in Council Order 173497, including financial restitutions which may or may not be incurred if Information Retrieval procedures beyond those incorporated in Article 7 subsections 8, 10 & 32 are required to elicit information leading to permanent arrest, notification of which will be served with the time period of 5 working days as stipulated by law. In that instance the detainee will be debited without further notice through central banking procedures without prejudice until and unless at such a time when re-imbursement procedures may be instituted by you or third parties on completion of a re-imbursement form RB/CZ/907/X...
