Google DeepMind Launches Watermarking Tool For AI-Generated Images (technologyreview.com) 16

Google DeepMind has launched a new watermarking tool that labels whether images have been generated with AI. From a report: The tool, called SynthID, will initially be available only to users of Google's AI image generator Imagen, which is hosted on Google Cloud's machine learning platform Vertex. Users will be able to generate images using Imagen and then choose whether or not to add a watermark. The hope is that it could help people tell when AI-generated content is being passed off as real, or help protect copyright. [...] Traditionally, images have been watermarked by adding a visible overlay or by adding information to their metadata. But these methods are "brittle" and the watermark can be lost when images are cropped, resized, or edited, says Pushmeet Kohli, vice president of research at Google DeepMind.

SynthID is created using two neural networks. One takes the original image and produces another image that looks almost identical to it, but with some pixels subtly modified. This creates an embedded pattern that is invisible to the human eye. The second neural network can spot the pattern and will tell users whether it detects a watermark, suspects the image has a watermark, or finds that it doesn't have a watermark. Kohli said SynthID is designed in a way that means the watermark can still be detected even if the image is screenshotted or edited -- for example, by rotating or resizing it.
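DeepMind has not published SynthID's architecture, but the embed-then-detect idea can be illustrated with a toy NumPy sketch: "embedding" here is just adding a small key-derived ±1 pattern to the pixels, and "detection" correlates the image against the same pattern. The real system uses trained neural networks and is far more robust; everything below is an illustrative stand-in.

```python
import numpy as np

def key_pattern(shape, key):
    # Deterministic +/-1 pattern derived from the key.
    return np.random.default_rng(key).choice([-1, 1], size=shape)

def embed(image, key=7, strength=2):
    """Toy stand-in for SynthID's first network: nudge each pixel by a
    key-derived +/-strength so the change is invisible to the eye."""
    marked = image.astype(int) + strength * key_pattern(image.shape, key)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(image, key=7):
    """Toy stand-in for the detector network: correlate the image with
    the key pattern; watermarked images score near `strength`,
    unmarked images score near zero."""
    centered = image.astype(float) - image.mean()
    return float(np.mean(centered * key_pattern(image.shape, key)))

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
marked = embed(photo)
# detect(marked) scores high; detect(photo) scores near zero
```

A simple fixed pattern like this is exactly what survives rotation or rescaling poorly, which is presumably why the real detector is a learned network rather than a plain correlator.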

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by jetkust ( 596906 ) on Tuesday August 29, 2023 @03:20PM (#63806932)
    I understand the importance of watermarks. But how does voluntarily watermarking an AI-generated image help anyone tell if it's AI generated? If they want to pass it off as real, wouldn't they just not add the watermark?
    • Re: Sounds backwards (Score:3, Interesting)

      by wbcr ( 6342592 )
      This is to stop low-tech actors from using a commercial service (one that can decide to implement this feature) to flood the Internet with AI-generated fakes. Training your own model that does not watermark is expensive and requires some skill.
    • by Logger ( 9214 )

      Exactly this. Thank you.

      Until trusted device makers watermark everything, there will be no trust. And finally, maybe, a good application for a distributed crypto ledger.

      Devices can publish public keys to the ledger, and users of trusted devices can share signed image hashes to the ledger to prove a photo is not-altered. Use a distributed ledger since this isn't something we'd trust any one entity to own.
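A minimal sketch of that attestation flow, with a plain Python dict and list standing in for the distributed ledger, and stdlib HMAC standing in for a real signature scheme (an actual design would publish asymmetric keys, e.g. Ed25519, so verifiers never hold the signing key; all names here are illustrative):

```python
import hashlib
import hmac

# Toy "ledger": append-only structures a real design would distribute.
DEVICE_KEYS = {}    # device_id -> key published by the device maker
PHOTO_ENTRIES = []  # (device_id, sha256 digest, signature)

def register_device(device_id, key):
    """Device maker publishes the device's verification key."""
    DEVICE_KEYS[device_id] = key

def attest_photo(device_id, key, image_bytes):
    """Trusted device signs the hash of the photo it just captured."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    sig = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    PHOTO_ENTRIES.append((device_id, digest, sig))

def verify_photo(image_bytes):
    """Return the attesting device if this exact image is on the ledger."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    for device_id, d, sig in PHOTO_ENTRIES:
        if d == digest:
            expected = hmac.new(DEVICE_KEYS[device_id], d.encode(),
                                hashlib.sha256).hexdigest()
            if hmac.compare_digest(sig, expected):
                return device_id
    return None

register_device("cam-1", b"device-secret")
attest_photo("cam-1", b"device-secret", b"raw image bytes")
```

Note the scheme only proves a photo is unaltered since capture; any re-encoding or crop changes the hash and breaks verification, which is the usual practical weakness of hash-based provenance.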

      I understand the importance of watermarks. But how does voluntarily watermarking an AI-generated image help anyone tell if it's AI generated? If they want to pass it off as real, wouldn't they just not add the watermark?

      I think we tend to live in a world of laws written in reverse.

      It's not a matter of not watermarking. It's the threat of failing to watermark content that is later proven to be AI generated. Then existing law can likely come after you in some way that serves as a deterrent.

      Of course that idea of security usually falls flat at the first border crossing, since global jurisdiction isn't a thing and never will be.

  • You just use another img2img neural network to destroy the watermark? (The article only mentions cropping and rotation as transformations that preserve the watermark.)

    • You just use another img2img neural network to destroy the watermark? (The article only mentions cropping and rotation as transformations that preserve the watermark.)

      You'd probably want to train the second network to destroy the watermark (otherwise it might just preserve it in the transformation). At which point you might be able to train a 3rd network to tell that another network had erased the watermark (this is definitely easiest if everyone uses the same watermark removal tool).

      Either way, I'm sure you could somewhat reliably remove the watermark, but I could see this being effective if all the major generative tools agree to add watermarks.


    • If the watermark is in the pixel data and the image still looks nearly identical to the original, then obviously they've modified some of the least significant bits in some of the RGB, or perhaps even alpha, channels (in the case of transparent images).
      To erase this watermark you would just write a tool to randomize the least significant bit in every channel. Extra points if you analyze the gradient of each channel in each direction and adjust the value so it appears smooth.
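The basic LSB-scrubbing tool described above (without the gradient-smoothing refinement) is a few lines of NumPy. Note the hedge: this only wipes watermarks stored strictly in the low bits, and DeepMind's claim is precisely that SynthID survives this kind of edit.

```python
import numpy as np

def scrub_lsbs(image, bits=1, seed=None):
    """Randomize the `bits` least significant bits of every channel,
    destroying any watermark stored purely in those bits."""
    rng = np.random.default_rng(seed)
    keep = np.uint8(0xFF ^ ((1 << bits) - 1))  # mask keeping the high bits
    noise = rng.integers(0, 1 << bits, size=image.shape, dtype=np.uint8)
    return (image & keep) | noise

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
scrubbed = scrub_lsbs(img, bits=1, seed=2)
# The high 7 bits are untouched: (scrubbed >> 1) equals (img >> 1)
```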
    • I was about to say. I can't see how a watermark that "subtly modifies some pixels" could be robust to nonlinear transformations (warping, dithering, variable scaling, etc.) Either there's more to it or it'll be easily bypassed.
  • People will start uploading non-generated images just to label them as "AI"? This watermarking feature sounds shallow and poorly thought out.
