
'AI Slop' in Court Filings: Lawyers Keep Citing Fake AI-Hallucinated Cases (indianexpress.com)

"According to court filings and interviews with lawyers and scholars, the legal profession in recent months has increasingly become a hotbed for AI blunders," reports the New York Times: Earlier this year, a lawyer filed a motion in a Texas bankruptcy court that cited a 1985 case called Brasher v. Stewart. Only the case doesn't exist. Artificial intelligence had concocted that citation, along with 31 others. A judge blasted the lawyer in an opinion, referring him to the state bar's disciplinary committee and mandating six hours of A.I. training.

That filing was spotted by Robert Freund, a Los Angeles-based lawyer, who fed it to an online database that tracks legal A.I. misuse globally. Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it... [C]ourts are starting to map out punishments of small fines and other discipline. The problem, though, keeps getting worse. That's why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day. Many lawyers... have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like "artificial intelligence," "fabricated cases" and "nonexistent cases." Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges' opinions scolding lawyers...

Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."


Comments:
  • Make it stop quickly (Score:5, Interesting)

    by RitchCraft ( 6454710 ) on Sunday November 09, 2025 @03:17PM (#65784292)

    If a lawyer introduces hallucinated slop into a case, that lawyer loses their license for at least one year. The lawyer must then retake the bar exam to regain their license. That'll make them think twice about using AI slop.

    • When I was convicted of marijuana possession before the state legalized it, how come the prosecutors got away with writing the wrong amount of marijuana I was found with, which the judge caught but then simply corrected in court and moved on with the sentencing? Why shouldn't those prosecutors have been punished for hallucinating slop?

      • by TheMiddleRoad ( 1153113 ) on Sunday November 09, 2025 @03:47PM (#65784340)

        A mistake is different from a glaring lack of professional conduct.

        • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday November 09, 2025 @05:25PM (#65784540) Homepage Journal

          A mistake is different from a glaring lack of professional conduct.

          Using non-local AI in any way on court filings, which are supposed to be confidential until filed, is a glaring lack of professional conduct right up front. Allowing AI hallucinations to get into your court paperwork even once is the same. They should lose their license for one year the first time, five years the second time, and permanently the third.

          • I agree.

          • by tlhIngan ( 30335 )

            There is no excuse for submitting AI slop. When you file a court brief, you sign it to indicate that you read it and that it is as accurate as you can verify. You may quibble over details, but you are attesting that everything you put in the filing is factual.

            Putting in fake case citations means you didn't read what you filed which means you violated your duty as a lawyer when you filed it.

            Also - checking citations isn't hard. There's this tool called "Google" that you can spend five minutes with to look up citations. It doesn't

            • I think this kind of misses the actual problem. "AI slop" is no worse than "human lawyer slop." The underlying issue is that the vast majority of lawyers take a position or come up with an argument and *then* look for cases to support it, quite willingly citing cases that may even be realistically neutral on the question. That, in turn, has perpetuated the idea that there is no value in mere argument without direct precedential support. While I'm well aware of the intended benefits of predictability and con
          • by AmiMoJo ( 196126 )

            To be fair, lawyers do use external services that in theory leak a lot of information about their cases all the time, and have done for decades. Databases of case law are the obvious example. The searches give an insight into what the lawyer is thinking, what their likely arguments will be, things they may have overlooked.

            Naturally those services offer confidentiality, the same as the phone company promises not to listen to the lawyer's call to their client, unless legally compelled to.

            The question is, are

      • Because it's not the same.

        Humans make honest mistakes all the time, we fix them and move on, and we give benefit of the doubt. If we can prove that the mistake isn't an honest one, then it's different, but you haven't mentioned anything (here or below) that suggests otherwise in the case you mention.

        Using generative AI is making a conscious decision to use a technology that makes shit up. There should be no tolerance for that.

    • by gurps_npc ( 621217 ) on Sunday November 09, 2025 @04:21PM (#65784392) Homepage

      Technically Judges cannot do that directly. Instead the procedure is:
      1) Report to Bar
      2) Have a Hearing by the Bar
      3) The bar can decide to take their license for X amount of time.

      But I do agree that is what the Judges should be doing.

      The judges can, however, hold anyone in contempt of court for any reason at any time. They do not even have to be in court. You can appeal it, but as long as the judge was somewhere near reasonable, you will not succeed.

      If however the Judge does something like hold the Umpire at his kid's baseball game in Contempt, then yes you will almost certainly win the appeal.

    • It all comes down to seeing AI for what it is: a tool. I used it in a case where I couldn't find representation. I didn't take its draft as factual. I looked at what it produced and found a case that had nothing to do with mine that it cited. I then made it correct itself, and then run through every case and statute again. I did several rounds of this. I then made it prove its case as correctly scribed and give multiple rounds of back and forth between how it expected the case would play o
    • by dmay34 ( 6770232 )

      Not lose their license. The case should be immediately dismissed in favor of the other side.

      Let the client sue the lawyer for damages. That will fix the issue far faster than even disbarment.

  • by oldgraybeard ( 2939809 ) on Sunday November 09, 2025 @03:20PM (#65784296)
    The law firms should be hit with huge (firm-destroying) penalties, and the lawyers disbarred for the lies under oath, since they submitted the briefs to the court.
    • No need for all that. Either "Judgement is for the other side" or "Case dismissed." Clears the docket, and slows down these kinds of submissions until they're at least double-checked.
      > No need for all that. Either "Judgement is for the other side" or "Case dismissed." Clears the docket, and slows down these kinds of submissions until they're at least double-checked.

        Interesting. I think you've changed my mind about this.

        Economic incentives are probably the way to go.

        • by madbrain ( 11432 ) on Monday November 10, 2025 @03:14AM (#65785152) Homepage Journal

          It doesn't seem fair the clients should suffer the burden of bad representation. Even in civil cases. Let alone criminal.

          Economic incentive needs to target the right party.

          • It doesn't seem fair the clients should suffer the burden of bad representation. Even in civil cases. Let alone criminal.

            Economic incentive needs to target the right party.

            You also don't want them to be selecting for poor representation because they receive the outcome they desire - pushing off punishments.

          • Alright so I don't know much about the US legal system. Imagine my lawyer uses genAI, gets caught for "hallucinations" and I lose the case. Can I then sue my lawyer for that? How likely is it that some other lawyer accepts to represent me in that case?

    • by AmiMoJo ( 196126 )

      Typically we don't destroy an entire firm for the misconduct of one employee, unless it's so extreme that it justifies screwing all their other clients. Imagine if your case was headed to court and your lawyer said their firm had been wiped out by another employee using AI, so you need to find another lawyer and hope the court is willing to accommodate the delay. Even if the court is, re-doing much of the process, document exchange, and so on will take a lot of time and create more expense, that you might e

  • by rsilvergun ( 571051 ) on Sunday November 09, 2025 @03:29PM (#65784310)
    But the courts are paying more attention now because it's AI and it's bad press.

    But I would bet money that if you did an exhaustive analysis of court filings you would find plenty of made-up citations that nobody ever looked into.

    Then again lawyers are famous for their honesty and decency so maybe I shouldn't make assumptions.
    • by anoncoward69 ( 6496862 ) on Sunday November 09, 2025 @03:39PM (#65784326)
      The lawyers have probably never read these, now or in the past. They had their paralegals create them; they've probably just replaced paralegals with AI.
      • by gweihir ( 88907 )

        Which now bites them in the ass. This is a classic case of greedy dumb assholes trying to do things "cheaper than possible".

      • by evanh ( 627108 )

        A distinct shift in who messed up. The AI can't be blamed.

    • "famous for their honesty and decency" And keep in mind, most judges were lawyers!
      And what does that say about the trust we should have in today's justice system and government in general? Lots of lawyers in government!
    • by cusco ( 717999 )

      Fake references have a long and disgraceful history. Cram 30 or 40 citations into a filing and the judge might look at the first few and assume the rest are more of the same. IIRC this has occurred even in a case filed before the Supreme Court, where the last citations were actually cases which pointed in the opposite direction from the one the lawyer wanted, but no one bothered checking for quite a long time.

  • This means you

  • by registrations_suck ( 1075251 ) on Sunday November 09, 2025 @04:22PM (#65784394)

    This gets into the issue of how to debug an LLM.

    Since this keeps happening, it seems like when a model is trained for legal cases, it should be done in such a way that the underlying source material is tied to the case.

    For example, if the model reads about Johnson v. Smith, 1972 in "Legal Shit Today, issue 175", then it should be possible to query the LLM and ask it what the source material is for that case, and get back "Legal Shit Today, issue 175". This feature would at least start to allow you to know how it is coming up with hallucinations.

    With that in place, the sequence of events then becomes:

    1. Write me a court filing for issue (whatever)
    2. Give me the list of cases cited in the doc you just wrote
    3. Show me the sources for those cases cited
    4. Manually research the sources, verify each case cited
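The existence check in the last step doesn't even need an LLM. A minimal sketch against a local case-law table (the citation regex, the table schema, and the function names here are all hypothetical, not any real legal API):

```python
import re
import sqlite3

# Hypothetical pattern for reporter-style citations, e.g. "410 U.S. 113".
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*\s+\d{1,4}\b")

def extract_citations(filing_text):
    """Collect candidate case citations from a draft filing."""
    return sorted({m.group(0) for m in CITATION_RE.finditer(filing_text)})

def verify_citations(citations, conn):
    """Split citations into (found, missing) by looking each one up
    in a local table of real cases; the schema is an assumption."""
    found, missing = [], []
    for cite in citations:
        row = conn.execute(
            "SELECT 1 FROM cases WHERE citation = ?", (cite,)
        ).fetchone()
        (found if row else missing).append(cite)
    return found, missing
```

Anything that lands in the `missing` list either doesn't exist or is mis-cited; in practice the table would be populated from a real case-law source, but either way a citation resolves to a row or it doesn't.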

    • Manually research the sources, verify each case cited

      Clearly this isn't even being done by an automated tool, let alone a human. An LLM which is given access to a database of actual cases could reasonably be successful at checking whether the cases cited even exist, which isn't being checked now!

        Well, that's kind of my point. Other than that, I was proposing a human do it, rather than having an AI do it.

        • I'm not against a human doing it, I just don't get why they're not having the software do it, when it's feasible for it to do it.

      • by gweihir ( 88907 )

        Ah, no. LLMs cannot do fact-checking. They hallucinate even with full access to verified facts.

        • You can however feed the output of one LLM to another LLM, and ask it to fact check. I have done so. It is worthwhile. I think in the case of completely made up cases, the second pass (or third) is very likely to flag it due to no correlation in its database.

          For the logic in legal arguments, though, I would not expect it to help.

    • by gweihir ( 88907 )

      You cannot "debug" an LLM. Not possible. That is one of the reasons why LLM use requires a high level of on-target expertise and a lot of care.

      • Do you remind yourself of Herbert Simon proclaiming in the 1960s that neural networks were a dead end because perceptrons couldn't do XOR?

        "Simon was right in one sense: early perceptrons couldn't scale to general intelligence.
        But he was wrong in thinking the flaw was essential to connectionism.
        Once computational power, training methods, and deeper architectures arrived, neural networks surpassed the very symbolic reasoning systems Simon pioneered."

        • by gweihir ( 88907 )

          What a stupid comment that entirely misses the point.

          • Is the point that you are making a lot of assertions why LLMs are unworkable just as Simon did about neural networks?

            • by gweihir ( 88907 )

              No? Why would you even think that except if you just wanted to throw crap?

              Here in the real world, a perceptron is a very simple thing and an LLM very much is not. Are you unaware of that?

              • ---

                When the perceptron was invented in the 1950s, it wasnâ(TM)t seen as a âoesimple thing" at all. Frank Rosenblatt and the U.S. Navy promoted it as a major step toward machine intelligence. A New York Times headline in July 1958 read âoeElectronic âBrainâ(TM) Teaches Itselfâ (NYT, July 13 1958: [https://www.nytimes.com/1958/07/13/archives/electronic-brain-teaches-itself.html](https://www.nytimes.com/1958/07/13/archives/electronic-brain-teaches-itself.html)). Rosenblatt predict

  • Comment removed based on user account deletion
    • Re: (Score:2, Insightful)

      It just means the penalties are insufficient.

      If the penalty for speeding were life imprisonment, you would see a huge, but not total, reduction in speeding.

      Assuming the law were enforced. If you don't enforce the law, the penalty is only theoretical.

      • by gweihir ( 88907 )

        It just means the penalties are insufficient.

        That is the no-clue cave-man answer. All research shows that increased penalties have no positive effect, but make the problem worse.

        • Well, speaking for myself, I can tell you that if I'm driving, I'm probably speeding. If the penalty were life imprisonment, that would certainly put an end to my speeding.

          I don't think I'm particularly unique in that regard, but sure, anything is possible.

          • Have you heard of the forbidden fruit effect? Do you think prohibition stopped drugs?

            Are you certain that the harm of a few mis-citations merits the kind of draconian response you're calling for, and will the unintended consequence be that the people elect AI Trumps to undo your ridiculously over-the-top punishments for using AI?

          • by gweihir ( 88907 )

            I am not speaking for myself. I am speaking for solid results from criminology. You think you are smarter and know more than tons of researchers that have studied this for a long, long time?

          • Why wouldn't you vote to end such a ridiculous penalty? Remember when we the people undid drug prohibition in many states?

          • The idea that harsh punishments for crimes will lead to people not committing crimes in fear of those punishments is not new. In Victorian times you had the 'death penalty for everything' approach to sentencing in many countries. It's been changed in large part because it didn't work. People ended up stealing because of poverty and once they were liable to be hanged, many figured, the law already can't throw any more at them, so they continued robbing and killing and became outlaws.

            Maybe you won't become a

        • All research shows that increased penalties have no positive effect, but make the problem worse.

          It also shows that if the penalty is insufficient then they have no positive effect. A fine that people with a lot of money can easily afford is just a prohibition which only applies to the poor, with a license fee. Look to speeding tickets which scale with income for a fair model.

  • I'd like to refer you to the case of Finders vs. Keepers.
  • It never ceases to amaze me that we have a seemingly-magical tool that can do many hours of research in just minutes and the person using the tool can't be bothered to take a couple of minutes to fact-check the info the seemingly-magical tool shat out. And the cherry on top is going to court and confidently presenting that unchecked info in front of a judge while being on public record. We're taking laziness to levels never fathomed before.
    • by gweihir ( 88907 ) on Sunday November 09, 2025 @06:31PM (#65784668)

      The thing is that LLMs cannot fact-check. Apparently the users of LLMs are, in this case, too lazy or too dumb (or both) to do it themselves.

      There are indications that for many things, LLMs actually decrease efficiency and using them is a costly mistake. This ("better search") is supposedly one of the few areas where they save time. But fact-checking LLM results actually requires more skill and competence than generating the results manually, and it MUST be done for results of reasonable quality. Apparently, that little problem is still not common knowledge.

  • [C]ourts are starting to map out punishments of small fines and other discipline [....] Court-ordered penalties "are not having a deterrent effect," said Freund, who has publicly flagged more than four dozen examples this year. "The proof is that it continues to happen."

    Disbar them then - they're not doing their fucking job properly, so why should they get called "lawyers/attorneys" then?

    And if they do want to be a lawyer/attorney again, then I guess they'll have to re-enroll, study, and take the bar exam again. Without using any A.I.

    Imagine if a plumber relied on AI to fix your massive water leak. Would you seriously pay for that?!?

    • by cusco ( 717999 )

      If it fixed the leak? Why not? Especially if it enabled them to fix it faster and/or cheaper.

      Just FYI, AI is in use in the construction trades already, most people aren't aware of that. For your example a draftsman can feed the plans of a building into an adequately trained system and map out the most efficient routing for plumbing and cabling. AI is operating excavators, scheduling contractors, driving inspection robots, recognizing bad concrete pours from drone images, and the list keeps growing. In

      • map out the most efficient routing for plumbing and cabling.

        That's a toy program for finding shortest distance. It won't understand specifications or building code. It won't understand working with multiple trades. It definitely won't understand any style other than post-modern brutalist.

        • by cusco ( 717999 )

          It won't understand specifications or building code.

          It most certainly does. I did say "adequately trained".

          • It

            So no program name, just "it"?

            trained

            As in LLM and not Expert System?

            Specifications are basically unique to each building.
            Training an LLM on specifications will only make it hallucinate more than usual.

    • Treat it as if they wrote it themselves, which means fraud, falsifying documents, lying under oath.. etc.

  • RFK Jr.'s administration has been using AI to generate justifications for policies that are all hitting exactly the same problems:

    * AI is inventing studies that never existed
    * AI is using quotes from real studies that aren't in the studies
    * AI is generating summaries of studies that are the opposite of what the study itself actually concluded

    and he's referencing these AI generated summaries in congressional hearings.

    • by gweihir ( 88907 )

      The funny thing is that all of that would be very easy to verify. But LLMs are completely unable to verify. Such a great tool...

      • LLMs are completely unable to verify.

        That's an exaggeration. You can give an LLM access to real things and they can use those real things to verify. I just flatly do not understand why they are not. It wouldn't make them infallible, but it would go a huge way towards improving the situation, and they are clearly not doing it. They could also use non-AI software tools to check up on the AI output. I'd bet that you could even use a plagiarism detection tool for this purpose with little to no modification, but I'd also bet this kind of tool alread

        • by gweihir ( 88907 )

          No, they cannot. LLMs cannot do logical reasoning and that is a requirement for any type of fact-checking or verification. All they can do is correlations and that is not enough.

          While using non-LLM AI or non-AI tools is a possibility for fact-checking (in simple cases as the one here), you overlook that basically the only advantage of LLMs is that they are comparatively easy to create. Creating these "other tools" that would be needed here would be a lot more expensive and a lot more work than creating gene

          • All they can do is correlations and that is not enough.

            It's enough to determine whether a citation even exists, which is what this story is about.

            • by gweihir ( 88907 )

              And that is a problem that is actually pretty hard and expensive to solve. For software. An experienced and smart human can do it easily. A machine cannot.

  • Prison time (Score:4, Insightful)

    by gweihir ( 88907 ) on Sunday November 09, 2025 @06:26PM (#65784656)

    At the rates these fuckers charge, this is completely unacceptable and should be regarded as fraud.

  • Here's the database (Score:5, Informative)

    by tobiah ( 308208 ) on Sunday November 09, 2025 @07:19PM (#65784766)

    The article above mentions a database used to track these AI hallucinations in legal cases, but fails to link or name it. Here it is:
    https://www.damiencharlotin.co... [damiencharlotin.com]

    The database is well-structured for further investigation. One interesting point is that the majority of transgressions were committed by pro se parties, with lawyers a close second.

  • by Tschaine ( 10502969 ) on Sunday November 09, 2025 @07:35PM (#65784806)

    High profile hallucinations are in the news every few weeks. The MAHA report cited medical research that never happened.

    It's been a constant problem since ChatGPT was released. There is no solution on the horizon.

    And yet, there are still people who are betting that this technology will be an economic game changer.

    • by cusco ( 717999 )

      It is a game changer, just not the way it's being used. The accelerating advances of robotics are almost all from the use of AI trained in specific tasks. ChatGPT couldn't operate a robot, but neither could the AI that runs Boston Dynamics' Spot converse with you.

    • by MobyDisk ( 75490 )

      There is no solution on the horizon.

      The solution is to do their own work! Or at least check it! This use of AI is like when people just Google for something and copy/paste the first hit.

      And yet, there are still people who are betting that this technology will be an economic game changer.

      What the news doesn't show is the millions of people using AI successfully every day. We know that every day people incorrectly use hammers, wrenches, screwdrivers, drills, cars, and guns. Yet they are not declared useless. AI is a tool, and in the hands of someone with a genuine interest in using the tool appropriately, it is a useful one. The best path

  • It seems certain that plenty of these AI-hallucinated precedents are being entirely missed in court cases. So what happens if a case in which a hallucinated precedent figures becomes a precedent in and of itself?

    Could a fake precedent become sane-washed in this manner to the extent that it becomes precedent? I just asked my wife, who is in a position to know, and she confirms that yes, it is entirely possible. So we're not just looking at AI fucking up one decision incorrectly. We're in danger of AI actuall

  • Lazy frakkers. They never heard of proofreading? I understand using AI to get ideas, but to turn over your professional reputation to one is pathetic. It's on the same level as plagiarism.

  • Citations (Score:5, Insightful)

    by ledow ( 319597 ) on Monday November 10, 2025 @07:02AM (#65785352) Homepage

    The real solution would be a citation system.

    Something like LexisNexis has every court case that happens in the country.

    So... why not have an official version of that, tied in with the official court transcripts and when you cite a case, you need to give that citation number from the official database. If you're citing only a few lines, you link to those few lines.

    You wouldn't be able to cite a non-existent case, at best the case you cite wouldn't match what you claim it does, and with individual statement citations (HTML literally does it already), you could prove in one click that that series of words actually appears in that cited case.

    You want to stop this? Then open-source the law instead of hiding it behind stupendously expensive private commercial services like LexisNexis.
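The per-quote check proposed above is mechanically trivial once the official text is available. A minimal sketch, assuming you already have the transcript of the cited case as a string (no such open service exists today; this only shows how small the check itself is):

```python
import re

def normalize(text):
    """Collapse whitespace and case so line breaks and minor
    formatting differences don't defeat the match."""
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_appears(quote, official_text):
    """True if the quoted passage actually occurs in the official
    text of the cited case."""
    return normalize(quote) in normalize(official_text)
```

A filing system could run this over every quoted passage and flag any filing that fails; the hard part is building the open database, not the check.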
