
Sora 2 Watermark Removers Flood the Web

An anonymous reader quotes a report from 404 Media: Sora 2, OpenAI's new AI video generator, puts a visual watermark on every video it generates. But the little cartoon-eyed cloud logo meant to help people distinguish between reality and AI-generated bullshit is easy to remove, and there are half a dozen websites that will help anyone do it in a few minutes. A simple search for "sora watermark" on any social media site will return links to places where a user can upload a Sora 2 video and remove the watermark. 404 Media tested three of these websites, and they all seamlessly removed the watermark from the video in a matter of seconds.

Hany Farid, a UC Berkeley professor and an expert on digitally manipulated images, said he's not shocked at how fast people were able to remove watermarks from Sora 2 videos. "It was predictable," he said. "Sora isn't the first AI model to add visible watermarks, and this isn't the first time that within hours of these models being released, someone released code or a service to remove these watermarks." [...] According to Farid, OpenAI is decent at employing strategies like watermarks, content credentials, and semantic guardrails to manage malicious use. But it doesn't matter. "It is just a matter of time before someone else releases a model without these safeguards," he said.

Both [Rachel Tobac, CEO of SocialProof Security] and Farid said that the ease with which people can remove watermarks from AI-generated content wasn't a reason to stop using watermarks. "Using a watermark is the bare minimum for an organization attempting to minimize the harm that their AI video and audio tools create," Tobac said, but she thinks the companies need to go further. "We will need to see a broad partnership between AI and Social Media companies to build in detection for scams/harmful content and AI labeling not only on the AI generation side, but also on the upload side for social media platforms. Social Media companies will also need to build large teams to manage the likely influx of AI generated social media video and audio content to detect and limit the reach for scammy and harmful content."
"I'd like to know what OpenAI is doing to respond to how people are finding ways around their safeguards," Farid said. "Will they adapt and strengthen their guardrails? Will they ban users from their platforms? If they are not aggressive here, then this is going to end badly for us all."

Comments Filter:
  • AI (Score:4, Funny)

    by r1348 ( 2567295 ) on Tuesday October 07, 2025 @07:43PM (#65710926)

    This article was written by AI, posted by AI and read by AI. I'm an AI too.

  • by Powercntrl ( 458442 ) on Tuesday October 07, 2025 @07:51PM (#65710946) Homepage

    I've already seen plenty of videos go viral on X with the Sora watermarks still intact. We're truly living in a post-fact era, where people choose what they'd like to believe even when it's plainly obvious the video is AI.

    It's like the saying goes: You can't fix stupid.

    • by Brain-Fu ( 1274756 ) on Tuesday October 07, 2025 @08:01PM (#65710964) Homepage Journal

      The ability to use AI to create a video exists. OpenAI is not the only one who provides this, and competitors will make even better ones in the future, one way or another. Eventually people won't need to remove watermarks, because they will be able to make AI powered videos that are just as good or better without any watermarks in the first place. This whole watermarking thing is just a PR gesture with no real impact at all.

      We live in a world of deepfakes. There is no way to prevent their creation. All we can do is find new ways to adapt to this reality.

      • It probably just sounds like a humblebrag, but... don't watch video. Don't use YouTube. I've adapted. I don't use it*. You could adapt the same way. With few exceptions**, it's either merely trite or just all-out garbage. Outside of a few use cases, seriously, just don't use it. Learn guitar.

        I stopped using YT during the pandemic. Some friend sends me a video proving that vaccines are "deadly". We all know, independent of facts, I can easily find a video rebuttal that "proves" vaccines are better for you than
        • My beliefs are similar to yours. Watching videos is a waste of my time, except for a few use cases. I saw another person post that you should not post on /. at all; I disagree. That same person probably bitches and whines about being censored.
          • We're drowning in generated diarrhea. That's why the web is unsearchable now; it's growing by percentages every day.
            The whole internet, and the "economy" for that matter, is just a self-fulfilling prophecy.
            Infinite growth is, essentially, cancer. There will be an end, by natural or "market" forces.
            Unfortunately, it can't come soon enough, and we may still be left with the "long tail"...
            i.e. we still have radio after TV came along; we'll still have "the internet" after whatever comes next.
            I pity the hostages of this syst
    • by ranton ( 36917 )

      We're truly living in a post-fact era, where people choose what they'd like to believe even when it's plainly obvious the video is AI.

      I enjoyed watching Avengers even though I know Iron Man and Thor aren't real. Being entertained by videos of things that don't exist in real life existed before GenAI.

    • by allo ( 1728082 )

      Why shouldn't they go viral? Some things go viral BECAUSE they are photoshopped cleverly.

    • by Krneki ( 1192201 )

      Always have been

      Ideology goes brrrrrrrrrrrrrrrrrrrr

  • According to Farid, OpenAI is decent at employing strategies like watermarks, content credentials, and semantic guardrails to manage malicious use.

    Isn't it possible to use OpenAI's Sora to create videos that aren't malicious? If it is, then experts like this are severely mistaken.

    • They've also nerfed it rather hard lately. Even attempting to generate rather generic sounding things like dancing cartoon hamsters will run into the "This content may violate our guardrails concerning similarity to third-party content." error.

  • it seems to me that removing such watermarks would actually be a task well fitted to an AI . . .

  • That's three more than I expected to work. As opposed to "immediately infect you with 63 different malwares simultaneously".

  • Doesn't removing the artist's signature usually reduce the value of a work?

    That this isn't the case for Sora 2, tells me something about Sora 2's reputation.

  • Watermarks mark the videos of people who don't care, or who actively want to mark their video as AI-generated (the majority); for the rest it is virtue signalling. Anyone motivated enough can remove a watermark, and someone wanting to do something criminal with a "deep fake" even has an incentive to invest a lot more work into it than the casual user. That the watermark removers flood the web is just an effect of the open market. Sora 2 has a watermark, some people dislike it, so some people make money by providing tools tha

  • ... make my fakepron anonymously. Don't you?

  • We're talking about possibly the biggest company in the AI race. They didn't know that tech already exists for removing watermarks?
  • If a video is under 1 minute long then automatically presume it's fake.
  • Images, video, audio, even large bodies of text can be watermarked and it should be a legal requirement that any publicly accessible content generator is robustly watermarked. Watermarking should happen in a manner which makes removal difficult without noticeable degradation of the input.

    And require social media platforms to detect & tag AI-generated content, and increase the prominence of the AI-generated-content warning based upon the number of views. And detect and flag AI-generated advertising while

    • by Anonymous Coward

      > Watermarking should happen in a manner which makes removal difficult without noticeable degradation of the input.

      this is fundamentally not possible. At most an attacker needs to double the signal loss introduced by the original watermark to make it unreadable, and since watermarks strive to damage the signal as little as possible, it's always going to be easy to remove them.

      the most successful watermarks succeed only because of security-by-obscurity... the moment you know how they're written they can be compromised.
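The parent's claim can be made concrete with a toy sketch. This is not any real watermarking scheme — just a hypothetical 1-bit-per-sample mark hidden in the least significant bits of 8-bit sample values — but it shows why a mark designed to be imperceptible is also cheap to destroy: re-randomizing the same low-order bits wipes the payload while changing each sample by at most 1.

```python
import random

def embed(pixels, bits):
    # Overwrite each sample's least significant bit with one watermark bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    # Read the watermark back out of the least significant bits.
    return [p & 1 for p in pixels]

def scrub(pixels, rng):
    # "Remover": re-randomize the LSBs, destroying the payload.
    return [(p & ~1) | rng.randint(0, 1) for p in pixels]

rng = random.Random(0)
pixels = [rng.randint(0, 255) for _ in range(64)]   # toy "image"
mark = [rng.randint(0, 1) for _ in range(64)]       # toy watermark payload

marked = embed(pixels, mark)
assert extract(marked) == mark                                  # mark reads back cleanly
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))     # embedding is imperceptible

scrubbed = scrub(marked, random.Random(1))
assert all(abs(a - b) <= 1 for a, b in zip(marked, scrubbed))   # removal is just as imperceptible
# extract(scrubbed) now returns the scrubber's random bits; the payload is gone.
```

The asymmetry is the whole point: embedding and scrubbing cost the same tiny distortion, so any mark quiet enough to ship is quiet enough to erase.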

      • by DrXym ( 126579 )
        It's also fundamentally not possible to prevent people escaping from prison. That doesn't mean prisons don't apply defence in depth and various mitigations to reduce the chances of it happening.

        The same applies to any security measure, whether it is a barrier, a trip circuit, or a forensic technique used after the fact. Or watermarks, where there can be overt marks, metadata, and embedded data.

        There are papers about the effort required to effectively remove watermarks and even those efforts can

  • by Dantu ( 840928 ) on Wednesday October 08, 2025 @07:54AM (#65711612)
    Watermarks will always be relatively easy to remove unless they're highly intrusive. I think we need to go the other way and start signing real content.

    Imagine a certificate authority, similar to what we use for SSL, that issues certs to camera vendors. So your smartphone has a cert, verified by Samsung etc., that confirms images really came from that camera sensor. There are definitely some limitations, but it should be possible to at least have the originals, and a lower-resolution version, signed.
    Throw in better legal protections for digital certifications and you haven't eliminated fakes, but you've made them much more difficult to produce and riskier to distribute.
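A minimal sketch of that sign-at-capture idea, under stated assumptions: a real deployment would use an asymmetric key pair in the camera's secure element, with the public key certified by the vendor under a CA, TLS-style. Python's standard library has no asymmetric crypto, so an HMAC with a hypothetical device secret stands in for the signature here; the shape of the sign/verify flow is the same.

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real camera would hold a private key in
# hardware and publish a vendor-certified public key instead.
DEVICE_KEY = b"secret-burned-into-camera-hardware"

def sign_capture(image_bytes: bytes) -> str:
    # Hash the capture first, then MAC the digest, so large files are cheap
    # to sign and verifiers can work from the hash alone.
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_capture(image_bytes), signature)

photo = b"...raw sensor data..."
sig = sign_capture(photo)
assert verify_capture(photo, sig)             # untouched original verifies
assert not verify_capture(photo + b"x", sig)  # any edit breaks the signature
```

The notable property is that nothing needs to be hidden inside the pixels: authenticity travels alongside the file, so editing the image can't "remove" it — it can only fail verification.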

    • by spitzak ( 4019 )

      I do believe "authentic" watermarks would help a lot. They would have to be locked to a lot of details about the file: you would not be able to color-correct, resize, crop, or change the compression method, though cutting movies between frames could probably be allowed. Some one-way writing of the watermark is put into the camera, with the decoding key/result added to a database that does everything it can to ensure only actual cameras are registered.

      Watermarks people want to remove can be made much harder by making t

  • What I really need is a good, reliable PDF unlocker. Preferably free.
