Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties (thestar.com) 73
New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department's AI assistant had invented that work experience. She has been working in Canada as a health scientist -- she has a Ph.D. in the immunology of aging -- but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting." "It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."
The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."
The applicant's refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.
Re: (Score:2)
Yes, God is the Greatest.
And?
Isn't that what every religion that believes in a deity also believes?
First visible troll food (Score:2)
Congratulations. You have fed the troll and propagated its vacuous Subject. You can collect your prize... In Canada!
Re: (Score:1)
Thank god. I’m tired of bland white people seasoned food. There’s more to cooking than salt and pepper and mayonnaise. Maybe the neighborhood can finally get a tamale lady!
Re:Good for Canada! (Score:5, Informative)
I suggest you visit Toronto some time if you think Canada is homogeneous...
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
AI needs to die (Score:2)
Not all AI. Some is good, like image-recognition, audio transcription, etc. But these LLMs and GPT bullshitters need to die.
Re: (Score:1)
Image recognition also not great (Score:5, Informative)
I was just reading a story where a woman ended up in jail for six months, extradited to North Dakota from Tennessee.
The only evidence it was her was an AI facial recognition match between her social media/driver's license and the video of the actual suspect.
It wasn't until the first court date that the public defender got her financial records showing she was in Tennessee when the crime actually happened.
Then they kicked a southern state person out into ND winter without proper clothing, not even bothering to get her a ride back home.
She lost her house and car due to non-payment because she couldn't pay bills while in jail.
Looking at it, she'll probably end up with a $2-3M settlement.
https://www.theguardian.com/us... [theguardian.com]
Re: (Score:2)
Not gonna lie, I'd take the 100 day hit for the $2-3mil settlement if it was me.
Though I'd probably go metaphorically John Wick on them for losing my dog.
Re: (Score:2)
Looking at it, she'll probably end up with a $2-3M settlement.
Good news everyone! The taxpayers don't have to pay that money. She will die penniless, cold, and hungry.
Re: (Score:2)
Indeed. The sooner, the better.
Proof read! (Score:2)
AI won't do it for you.
Human Nature vs Policy (Score:5, Insightful)
This business of having an AI do the legwork and then having a human review it and make a final decision keeps going badly. Humans are intrinsically lazy and the moment they get a few good results from the AI they are going to stop doing the validation and start rubber-stamping. It doesn't matter if policy disallows this, they will do it anyway. It doesn't matter if the human really cares; they won't be able to help themselves. Human laziness is too deep an instinct.
It's the same with the self-driving cars where a human is required to stay at the wheel and alert so they can manually override the instant the AI starts doing something wrong. Humans CAN'T keep that up. It's not possible. The brain just doesn't work that way. The mind knows that it isn't doing the work, and it will get bored and lose focus or just nod off.
Everyone is SO eager to have it both ways: "an AI does all the work but a human verifies it so we know it's good." We just can't have it both ways. Once the AI does the work, the human stops verifying. That is how and why things went wrong here, it is how and why things have gone wrong for several law firms that submitted hallucinated historical court rulings, and it is how and why things will continue to go badly across all industries that adopt AI in such a role.
"Human in the loop" is really easy to say. Much harder to actually do reliably.
Re: (Score:2)
They need to put in some false positives to make sure the humans are paying attention.
Re: Human Nature vs Policy (Score:4, Insightful)
You literally want to train AIs to know when humans are not paying attention to what the AIs do?
Well, that's one way to go...
Re: (Score:2)
Re: Human Nature vs Policy (Score:5, Informative)
Re: Human Nature vs Policy (Score:2)
So? It works doesn't it?
Re: (Score:2)
Agreed - especially in the area of autos. My wife's car is about where I think the sweet spot is. It has lane assist, with auto steering to stay in lane, and keeps a selectable min distance from the car in front. That's it. I only use it on the highway. And I think that is a great compromise - you have to keep your hands on the wheel, you don't have to really do much - you are just kind of going along with the car. But you are still there to take over if needed. And I disengage ["manual override enga
Main problem with AI (Score:5, Insightful)
Main problem with AI in these cases is that it is so good that people stop checking it.
Even when they're explicitly employed to do so, as is the case here. "It's been right for the last ten thousand cases I checked; it's right here too."
Re: (Score:3)
It's literally in the OP. It's not the AI that is at fault; it's the person whose job it was to sanity-check the output. That person didn't do it.
"The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."
But the "humans are better at this, AI sucks" crowd can't even read the OP. Imagine you hire someone like this to make complex decisions like the one needed in the OP, by the tens of thousands.
The error rate would be hila
Re: (Score:2)
Re: (Score:2)
Fun part? US government is actually pretty good at governing compared to alternatives.
Consider something like its primary competitor, the PRC, where the bureaucracy is so hilariously bad that the leadership has no idea what's going on in the nation, and has to rely on things like electricity consumption numbers when it tries to determine how much economic activity has taken place.
Re: (Score:2)
They do. That's why that Mormon guy could just grab a list of those organisations from the relevant government website and go to each location to show, on camera, that they're fake and exactly how much money they are defrauding the government of.
Re: (Score:2)
You can just watch the videos. It's made very clear where they sourced the information on the fraudsters.
This is why quite a few people followed in his footsteps and did the same thing: went onto the relevant government website, pulled the data, and went to places. It's not like Shirley is the only one; he's just the one who started the trend.
Re: (Score:3, Interesting)
"The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision."
Get this: they are very obviously lying.
Re: (Score:2)
Yep, and if the system isn't built to shield those doing the "verifying" then there should be a clear audit log proving exactly that.
Want to bet that log is incomplete and/or missing? I can't read the paywalled article to find out.
Re: (Score:2)
Unlikely, as this is a budgetary item. Managers can go to prison for fraud and be held liable for damages if they failed to staff this role when it is indeed required for this task. It's a key part of how bureaucracy diffuses responsibility for mistakes, and one thing that bureaucrats tend to follow with religious fervor.
Far more likely scenario is one I list above.
Re: (Score:2)
> It's literally in the OP. It's not the AI that is at fault; it's the person whose job it was to sanity-check the output. That person didn't do it.
No, it's both of their faults.
The AI generated a wrong answer. That means it's at fault.
Nobody checked the answer. That means the person responsible for checking was at fault.
If the AI generates wrong answers, they don't suddenly become right by virtue of someone else not checking it.
Re: (Score:2)
This is the low IQ take. It requires not understanding the following:
1. How error rate works.
2. How error mitigation works.
3. How error correction works.
4. Who is at fault when #3 fails.
Re: (Score:2)
It's good. Until it isn't.
I'm a computer scientist with three decades of expertise in computer networking. There's this one AI-based job board that keeps trying to match me with delivery driver jobs.
Also several cases of face recognition software (Score:3, Insightful)
Dumb people think the computers are smarter than they are. And so they think the computer is infallible because dumb people already think they are pretty smart.
And the entire country of America right now is being run by dumb people top to bottom. Hell, with the police we intentionally screen out intelligent people, because the job is generally fairly boring and people who aren't dumb have a bad habit of quitting after a few months of expensive training. I've had a few buddies who thought being a cop might be an easy job with decent benefits, so they went in for it and couldn't pass the psych exam, not because they were crazy but because it said they were too smart to be a cop.
Re: (Score:1)
IMPORTANT ADVICE:
Post AC on your own posts to make sure it looks like there's more than one person who is obsessed with rsilvergun and wants to touch his pee pee. There's only two of us so we have to make it look like there are more or people will think we're not popular.
Re: (Score:3)
There is a basic concept at play: if a single innocent person gets abused by the government because of a broken procedure or process, that is far, far worse than if a few people who are guilty are released (as long as those incorrectly released are not dangerous criminals themselves). You would rather see one bad person cause 20 innocent people to go to jail, though, as long as it's not YOU or someone from your family who is incorrectly or unfairly thrown in jail.
This also applies to what is going on with ICE i
Re: (Score:2)
"You are too smart to be a cop...."
LMAO That is what they tell the ID10Ts who can't be trusted with a gun, badge, or any responsibility. Almost any reputable force requires a degree, and advancement requires a higher degree. I am not saying a degree shows intelligence or, more importantly, common sense, but I'd say your "buddies" might be more suited for the Marines.
Re: (Score:1)
ID10Ts who can't be trusted with a gun, badge, or any responsibility.
Cops, you mean.
Re: (Score:2)
Self-Review (Score:2)
I feel like a good idea for this sort of thing, if it's going to be deployed, is to include the applicant in the loop.
"Hi, your application will be rejected because:
* You list your qualifications as an electrician, not a medical expert.
If any of this is in error and you want to continue with your submission, please explain the error below and click "Contest," attesting that you believe this to be in error, and someone will be sure to review more carefully."
Even without AI it would be nice for job application for
Re: Self-Review (Score:2)
Reminds me of SSD (Score:2)
There's a bit of subversion here, though. Going by my uncle's experience, social security disability has made it even easier: they automatically reject every application the first time.
No mystery here (Score:3)
There's no mystery here. The officer alleged to have verified the decision wasn't doing their job. You can frame this any way you like: the officer is overworked and couldn't keep up with the number of applications they're supposed to verify, or the officer is lazy, or the officer is incompetent, or perhaps the scientist's name identified their ancestry and the officer is a racist.
In my opinion (backed by some experience) the most likely explanation is the department relies on the fact that many applicants who are rejected won't have the means to appeal a decision, and the spokesperson is simply lying when they claim AI isn't used to recommend or make a decision.
Re: No mystery here (Score:2)
Canada has been letting go of a lot of workers in the name of cost savings, so everyone is overworked now and you can't really expect much from them.
Weapons of Math Destruction (Score:5, Informative)
Cathy O'Neil's book describes "weapons of math destruction" as algorithms or models:
* that make serious decisions affecting people other than the person using the tool,
* that use proxy measurements (zip code, socioeconomic status) for the thing they're actually trying to quantify (e.g., risk of recidivism),
* whose inner workings are opaque, and/or built on data of unknown provenance,
* that are not or cannot be corrected in light of new data or mistakes,
* that are difficult or impossible to contest,
* and that have little to no regulation.
That was published in 2016, well before LLMs really hit the scene. But the dangers were already apparent even then, and f*&k-all has been done to mitigate them.
Re: Weapons of Math Destruction (Score:3)
Those are explicitly banned in the EU AI Act. Now, there are some serious problems with the AI Act, one of which is that it has been lobbied to death by copyright holders, but this ban is something we can likely all agree is a good idea.
Brazil is Ripe for a Remake (Score:3)
I hereby inform you under powers entrusted to me under Section 47, Paragraph 7 of Council Order Number 438476, that Mr. Buttle, Archibald, residing at 412 North Tower, Shangri La Towers, has been invited to assist the Ministry of Information with certain enquiries, the nature of which may be ascertained on completion of application form BZ/ST/486/C within fourteen days of this date, and that he is liable to certain obligations as specified in Council Order 173497, including financial restitutions which may or may not be incurred if Information Retrieval procedures beyond those incorporated in Article 7 subsections 8, 10 & 32 are required to elicit information leading to permanent arrest, notification of which will be served within the time period of 5 working days as stipulated by law. In that instance the detainee will be debited without further notice through central banking procedures without prejudice until and unless at such a time when re-imbursement procedures may be instituted by you or third parties on completion of a re-imbursement form RB/CZ/907/X...