Senate Passes a Bill That Would Let Nonconsensual Deepfake Victims Sue (theverge.com)

The U.S. Senate unanimously passed the Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act), giving victims of sexually explicit AI deepfakes the right to sue the individuals who created them. The Verge reports: The bill passed with unanimous consent -- meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them. [...] Now the ball is again in the House leadership's court; if they decide to bring the bill to the floor, it will have to pass in order to reach the president's desk.
  • Expand it to include the tools that allow this. I realise there will be an issue with open source tools, but at the very least you shouldn't be allowed to profit off such an activity!
    • Re:It's a start (Score:4, Interesting)

      by Locke2005 ( 849178 ) on Tuesday January 13, 2026 @09:23PM (#65922718)
      Here's a thought: Mandate that all AI tools include watermarks in their output. You know, the same way copying machines won't copy dollar bills?
        • The problem is that watermarks are normally quite easy to remove, at least the non-destructive ones. A quick Google search suggests there are lots of websites claiming to remove the destructive ones too.
        • Would an AI be able to tell the watermark is removed? I agree that an unremovable watermark would have to cause some (hopefully not perceptible by humans) loss of data.
          • by allo ( 1728082 )

            If you can tell that a watermark was removed, the watermark wasn't (fully) removed. The point of removal is that you cannot tell afterward.

        • It's not a problem. If watermarks are legislated correctly, evidence of watermark removal should be immediate grounds for summary conviction.

          There are already many other ways to identify fake/AI generated images besides looking for a watermark. The watermarks are not strictly necessary for the task.

          The reason for putting in a watermark would be to show voluntary cooperation with law enforcement. When the watermark has been removed, someone will have broken the law. Then it's just a matter of time

      • >"Here's a thought: Mandate that all AI tools include watermarks in their output."

        Seems distasteful, but I am in full agreement at this point. Even if they can be removed, it will help and evidence of removal might be detectable as well.

      • That won't work, and it will cause worse problems. Nothing stops bad actors from using AIs that don't watermark their output, and all you've done is train people to look for watermarks to determine what to trust. You've trained people to turn off their brains, and there's every incentive for bad actors to produce AI-generated content without a watermark to dupe the population.

        Here's a hypothetical video of Locke2005 torturing a kitten. The lack of a watermark must mean it's real. Quick everyo
        • Ultimately it becomes an arms race between the good guys using AI to detect AI output and the bad guys training AI to produce undetectable output. Kind of like the current computer security arms race. I just thought mandating full disclosure might help a few people in the short term.
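[Ed. note: the back-and-forth above about removable watermarks can be made concrete with a toy sketch. This is purely illustrative and hypothetical, not any real watermarking scheme: it hides one watermark bit per pixel in the least significant bit of an 8-bit value, which makes both the point that removal is nearly imperceptible and the point that, once stripped, a detector cannot tell the mark was ever there.]

```python
# Toy LSB "watermark" (hypothetical scheme, for illustration only).
# Each pixel is an 8-bit value; the mark is one bit per pixel.

def embed(pixels, mark):
    """Set each pixel's least significant bit to the watermark bit."""
    return [(p & 0xFE) | (m & 1) for p, m in zip(pixels, mark)]

def strip(pixels):
    """Destroy the watermark by zeroing every LSB (changes each pixel by at most 1/255)."""
    return [p & 0xFE for p in pixels]

def detect(pixels, mark):
    """Return True iff the pixels' LSB pattern matches the expected mark."""
    return all((p & 1) == (m & 1) for p, m in zip(pixels, mark))

image = [200, 13, 97, 54, 180, 66, 91, 240]   # made-up pixel values
mark  = [1, 0, 1, 1, 0, 0, 1, 0]              # made-up watermark bits

marked = embed(image, mark)
assert detect(marked, mark)                    # watermark is present
stripped = strip(marked)
assert not detect(stripped, mark)              # watermark is gone
# Removal is visually negligible: no pixel moved by more than 1.
assert all(abs(a - b) <= 1 for a, b in zip(marked, stripped))
```

A stripped image is bit-for-bit indistinguishable from one whose LSBs were never marked, which is allo's point: if you could tell the watermark had been removed, it wasn't fully removed. Robust schemes spread the mark redundantly across many transformed coefficients, but the same cat-and-mouse logic applies.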
    • by allo ( 1728082 )

      If you allow people to sue, you do not have a problem with open source, because you tackle the right point: the creators (users) and distributors of the images (X), not the technology. There are a lot of tools to create fakes, and one wonders whether this act should really be restricted to AI (why not cover Photoshop-created fakes?), but those tools are not only "dual use" but "primarily legal use", and only a minority abuses them.

  • Donald tRump is the only consensual deepfake victim I know of...
    • He's also a non-consensual one. I wonder if this gives him ideas to sue SNL and any other impersonators he doesn't like (who am I kidding, of course it will).

      • SNL is blatant satire, which is protected speech. I know that actual legal standards have never stopped delusional don from suing before...
  • I know people always expect that laws forbidding something only apply to those they disagree with, but I wonder how, just for example, Sassy Justice [wikipedia.org] would not be "nonconsensual deepfake".
    • Actually, yes. Reading the https://www.congress.gov/bill/... [congress.gov] as a non-lawyer, it does look like it would cover that sort of thing, which is concerning. Parodies shouldn't be squashed.
      • The point of using a deepfake is to imitate the target person in a way that can convince others that it's real, or at least plausible. That is way beyond "parody."

        You do not need a convincing doppelganger of an individual to parody them. If anything, such an exact recreation is antithetical to what parody is and how it works.
        =Smidge=

        • The show's audience knows it's a parody, obviously... it seems pretty unlikely that anyone would be convinced that it's real. Maybe I'm giving people too much credit though.
          • There are people who read satire blogs/"news" sites and are convinced that the stories are real, because even though the site plainly labels itself as satire these people are often only presented with a link directly to the article and they do not take the time to check the source of the material.

            Yes, you are 100% giving people too much credit.
            =Smidge=

    • by dirk ( 87083 )

      I have not seen Sassy Justice, but the law does look to apply specifically to "intimate images", which I assume Sassy Justice would not fall under. Also, I assume parody could be used as a defense in the case as well.

    • by mudimba ( 254750 )

      I haven't seen Sassy Justice, but given what I know about the South Park universe, I imagine it would not meet the criteria of: is indistinguishable from an authentic visual depiction of the identifiable individual when viewed as a whole by a reasonable person.

  • by Teppy ( 105859 ) on Tuesday January 13, 2026 @11:10PM (#65922864) Homepage

    Should painting [mymodernmet.com] also be criminalized?

    • There's about one person in the world who can paint that well; there are about 7 billion who can tell an AI to make a deepfake. Do you think it's worth wasting legislative bandwidth on paintings?

      This is why libel and even criminal libel laws exist. It's really easy to write shit about people, and literally anyone can do it. That's why, despite muh freeze peaches and muh first amendment, libel laws still exist on the books and have not been struck down as unconstitutional, for example.

    • they aren't criminalizing anything. they are establishing a category of tort (a legal liability for which you can be sued by private entities) and establishing the initial boundary in law. this is a common practice.
  • This won't do anything about the websites that host them, eh? And how long will it take to go through the courts? Years? What's the solution? The pictures will still be out there, and the people hosting them face no repercussions. There should be a new law like the one for child porn: hosts are required to pull it, or they get gigged. The same should apply to fake AI pictures/videos, and not settled over months but immediately. If hosts want to host it, they should take the responsibility.
  • There are real people who can hand-paint a nude portrait of someone. Many of them accept commissions.

    Will they be sue-able too?

    If not, will you have to prove it was hand drawn?

    Art is subjective, but we all know what we like.

    • by mudimba ( 254750 )

      I'm not a lawyer, but my interpretation of the law is that they would be OK if they used traditional paints. If they hand drew the image using something like Photoshop or Painter, they might still be liable.

      is created through the use of software, machine learning, artificial intelligence, or any other computer-generated or technological means.

      Maybe an ambitious lawyer would pull up some renaissance manuscript describing oil paint as high technology, but I think most judges are going to interpret that to mea

    • Will they be sue-able too?

      You might well be able to sue them under some kind of defamation law if the picture ever got out (and how would you even know to sue them if it didn't?).

      Turns out makin' shit up about people is a problem that isn't new.

  • by hdyoung ( 5182939 ) on Wednesday January 14, 2026 @12:51AM (#65922968)
    It'll never make it into law, or if it does it'll be filled with loopholes so big you could drive a Qatar-purchased gold-plated 747 through them.

    Remember who is in the oval office. Remember who controls Grok. The law will be swiss cheese, or more likely just die in committee.
    • Passing unanimously means it's likely veto-proof, and anyway it's hard for me to imagine why you think Trump would want to veto this one.

      • by tlhIngan ( 30335 )

        Passing unanimously means it's likely veto-proof, and anyway it's hard for me to imagine why you think Trump would want to veto this one.

        Easy: Musk et al. pay Trump money to avoid "hindering AI growth opportunities".

        And Trump's vetoed veto-proof bills - see the two he vetoed this year. They had bipartisan support but Trump just rattled his saber and many (R)s chickened out.

        So unanimous approval now, but (R)s are of the belief that if they lose support of Trump, someone else will get in. So they chicken ou

        • This actually helps protect Musk because it puts the tort on the user who actually does the thing instead of the company that makes the bot. You are grossly misunderstanding the issue.
          • I think that you're the one who misunderstands. The AI companies make a lot of money selling ads/subscriptions for these services. The threat to Musk isn't legal. Between his 250 million dollar donation *ahem*payment*ahem* *cough*BRIBE*cough* to Trump and the army of lawyers he employs, legal accountability doesn't even make his top 10 list of concerns. The threat is to his pocketbook. Anything that chills the AI ecosystem will reduce revenues. Especially the AI ecosystem, which is so overhyped and overleve
  • If someone does something intentionally that causes another person harm, can't that person already sue for damages? Why are we creating a new law to allow this? Is there some legal obstacle preventing this? If there is, why not remove the obstacle instead of creating more laws. Or is this new law just virtue signaling?
    • Social platforms enjoy *some* immunity from being sued over user posts. This could be an attempt to erode that. It is also likely tailored to celebrities or politicians (Trump may be able to sue Youtube / Youtubers who impersonate him thereby chilling dissent) while being ineffectual for actual victims of deep-fakes (doesn't that stuff mostly come from Russia?).

      The fact that it would have been political suicide to vote against this bill (just imagine the attack ads) makes me legit nervous about where we're headed.

      • The fact that it would have been political suicide to vote against this bill (just imagine the attack ads) makes me legit nervous about where we're headed.

        There are masked agents kidnapping and sometimes murdering citizens. But what worries you is that most people would like laws that make it easier to deal with defamation generated from a new kind of automated defamation machine which didn't exist before.

        Publishing falsehoods, including faked pictures, about people has never been legal, but as of today wit

  • If people have a right to their own image, the US should consider making nonconsensual photography illegal, even in public. N.b. this is the law in Europe.

    I think most of the US requires consent for audio recording. Why should this be different for video?

    • >"If people have a right to their own image, the US should consider making nonconsensual photography illegal, even in public."

      There is a huge difference between being casually photographed or videoed in public and kept original, and taking those and modifying them with AI (or other) tools to change what was seen/heard.

    • by stripes ( 3681 )

      If people have a right to their own image

      In the US people have basically zero rights to use of their image if the image was of them in a public place and taken with a “normal” lens (“normal” was not given a legal definition). I believe it is legal if the person in the image was in a non-public place but the person was taking a picture from a public place. I expect “normal” was intended to mean “you can’t stand in a public street and take a picture with an extreme telephoto through a bedroom window

  • Register your own image rights; there's plenty of existing legal precedent for that. Then claim 100k per view.
  • Rather than blaming the tool maker for what people do with it, expose the offenders themselves (the person who makes and uses the image, not the person who makes the program that made it possible to make the image) to legal tort. Problem will eventually be solved.
  • the company who owns the AI which makes the work should also be responsible
    • How about the people who made the food that fed the people who created those tools, the chairs they sat in, the cars they drove, the beds they slept in, maybe even their mothers for giving birth to them? I know, people today will argue that the responsibility is always on the person or entity with the deepest pockets.

      I miss the days when people were held responsible for their own actions.

"Conversion, fastidious Goddess, loves blood better than brick, and feasts most subtly on the human will." -- Virginia Woolf, "Mrs. Dalloway"

Working...