

Google Photos Will Soon Show You If an Image Was Edited With AI
Starting next week, Google Photos will label when an image was edited with AI. The Verge reports: "Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they've been edited using generative AI," John Fisher, engineering director of Google Photos, wrote in a blog post. "Now we're taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app."
The "AI info" section will be found in the image details view of Google Photos both on the web and in the app. These labels won't be limited strictly to generative AI, either. Google says it'll also specify when a "photo" contains elements from several different images -- such as when people use the Pixel's Best Take and Add Me features. [...] "This work is not done, and we'll continue gathering feedback and evaluating additional solutions to add more transparency around AI edits," Fisher wrote.
The "AI info" section will be found in the image details view of Google Photos both on the web and in the app. These labels won't be limited strictly to generative AI, either. Google says it'll also specify when a "photo" contains elements from several different images -- such as when people use the Pixel's Best Take and Add Me features. [...] "This work is not done, and we'll continue gathering feedback and evaluating additional solutions to add more transparency around AI edits," Fisher wrote.
Option to ignore those Pinterest thieves? (Score:1)
Now that would be great, as there is literally no original content on Pinterest.
Re:Option to ignore those Pinterest thieves? (Score:4, Informative)
Just put "-pinterest" at the end of your search. (And "-alamy" and a couple of other stock photo sites).
IIRC, there are addons that will do that automatically, even.
Re: (Score:2)
Doesn't work on DDG; maybe I'll just have to go back to using Google.
Not good enough. (Score:3)
AI metadata will be easily stripped from an image file.
It would be better if the metadata were also embedded steganographically in the image, or signed with a hash of the picture, or... SOMETHING, so the AI marking can't be removed without leaving an indicator of tampering.
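To see how thin the metadata layer is, here is a minimal sketch (assuming Pillow and a hypothetical file name) of how re-saving just the pixel data discards the EXIF/IPTC tags that would carry an AI label:

```python
# Minimal sketch: copying only the pixels into a fresh image drops
# EXIF/IPTC metadata, including any AI-edit tags. File names are placeholders.
from PIL import Image

img = Image.open("edited_with_ai.jpg")
pixels = list(img.getdata())             # pixel data only, no tags
clean = Image.new(img.mode, img.size)
clean.putdata(pixels)
clean.save("stripped.jpg")               # saved without the original metadata
```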
Re: (Score:2)
People will learn to use the clearly-labeled-as-AI works to figure out when others are doing it silently, because they'll recognize the same basic flaws. Right now that's too many fingers, abdomens that are way too long with too many muscles and multiple navels, fragments of people and objects sticking out of one side of a foreground subject but not the other side even though they should, people who look like they're falling over backward because the people and the scenery being referenced weren't shot from
Re: (Score:3)
An embedded steganographic "mark of AI" could still be removed -- for example, by opening the pic and saving it again, preferably in a lossy format, or by taking a screenshot or a photo of the picture and saving that in a different format. The absence of the mark would not prove that a picture was not AI generated or AI modified. You will always find ways to get rid of such extra baggage.
The idea of using signatures to prove that something was or was not AI generated is not bad, though. In fact, that is what
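For illustration, a minimal sketch of the sign-the-pixels idea, using Python's cryptography package. This is not the actual C2PA format, and the file name and key handling are placeholders; the point is only that any change to the bytes breaks verification:

```python
# Hedged sketch: hash the image bytes and sign the digest, so a verifier
# can detect any later modification. Not the real C2PA manifest format.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
digest = hashlib.sha256(open("photo.jpg", "rb").read()).digest()
signature = key.sign(digest)

# Verifier side: recompute the digest and check the signature.
# Raises cryptography.exceptions.InvalidSignature on any mismatch.
key.public_key().verify(signature, digest)
```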
Re: (Score:2)
C2PA is basically trusted computing for digital media. Let the big companies sign that your media is authentic. Use free and open software (that cannot be controlled if it includes AI) and they won't sign it. Do we really want to outsource trust in authenticity to large companies and pressure people to handle files in a way that large companies are still willing to sign them, or otherwise miss out on the "authentic" badge?
Re: (Score:2)
Don't let perfect be the enemy of good. Most people will just use Google's AI tools to edit a photo and then post it directly to social media. They either don't care or aren't able to do anything about the watermark. It will help prevent such pics going viral due to their misleading nature, and even if someone does remove the watermark the fact that the original exists and can be traced (e.g. with Google's image search tool) helps debunk it.
It's also helpful for things like body image issues, when it is mad
So there's a market for high-quality man-made fake (Score:3)
Traditional tools and good manual image doctoring leave no trace. Whoever is willing to pay the price will always be able to create convincing fake material.
The danger is, if people get used to thinking that no warning from Google means it's legit, the hand-made fake stuff will become that much more credible.
Re: (Score:2)
That's the point, though: there is a cost to traditional image manipulation. Money, time. With AI the cost has been greatly reduced, in fact to basically zero in a lot of cases, which is why there is now an explosion of AI-generated fake imagery.
Re: (Score:2)
But that's the danger, you see:
When fake is trivially easy to create, nobody believes anything anymore. That's super-bad.
But when fakery is rare and indiscernible from reality -- and worse, vetted by a big data monopoly that, for reasons unknown, people trust and think is an authoritative source of information -- then it has an impact that cheap fakery doesn't have and is harder to disprove, and that's super-bad too.
Re: (Score:2)
BTW, you may be too young to know this, but the traditional way to deal with the single drop is to follow the money. Even the ancient Romans knew this, and coined the phrase "cui bono."
Re: (Score:2)
you may be too young to know this
Dude... My Slashdot ID is half yours :)
Photo but not Metadata Edited (Score:2)
Of course, the real fun will start when someone edits the metadata of a non-AI-edited image so that Google will report it as an AI-enhanced image. If anything, this feature is just going to confuse
Re: (Score:2)
It wouldn't be hard at all to edit the metadata to do what you describe. All you'd have to do is run the image through what amounts to a null transform, so that it can be tagged without any meaningful changes. Using an Img2Img process with the weight of all inputs other than the original image set to zero would quite likely get this result.
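Even more directly, the tag could just be written outright. A hedged sketch using exiftool follows; the tag name and value come from the IPTC NewsCodes vocabulary, the file name is a placeholder, and whether Google Photos keys its "AI info" label off exactly this field is an assumption:

```python
# Hedged sketch: stamp an ordinary photo with the IPTC digital-source-type
# tag for AI-trained media via exiftool. Assumes exiftool is installed.
import subprocess

subprocess.run([
    "exiftool",
    "-XMP-iptcExt:DigitalSourceType="
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "ordinary_photo.jpg",  # placeholder file name
], check=True)
```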
How detailed is the info? (Score:2)
It needs to give details as to what was done exactly. A label that just says AI was used doesn't tell us whether the creator was just using the "easy button" for basic picture enhancement or whether significant changes were made to the content of the image. Even if the label were embedded in a way a malicious actor could not remove (unlikely), they could just lie and explain away the AI usage by claiming it was for basic picture tasks, if no detailed info is given.
So this is like my ComfyUI output then. (Score:2)
Anyone who wants to take the output of my ComfyUI sessions can easily replicate the workflow because it's embedded in the metadata. If I pull it into an external editor, this gets broken and disappears. But I actually want to distribute with that metadata, because if people want to replicate the look, I want to make that as simple as possible on behalf of the people making the tools. If this kind of metadata starts being used as an AI "tell", that's fine. My generations are subject to the same flaws and foi
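As a sketch, the embedded workflow can be read back from the PNG text chunks with Pillow, assuming ComfyUI's usual "workflow" key and a placeholder file name:

```python
# Hedged sketch: ComfyUI typically stores the workflow as JSON in PNG text
# chunks (commonly "workflow" and "prompt"); editors that re-save the file
# usually drop these chunks.
import json
from PIL import Image

img = Image.open("comfyui_output.png")     # placeholder file name
workflow = img.info.get("workflow")        # None if the chunk was stripped
if workflow:
    print(json.dumps(json.loads(workflow), indent=2))
else:
    print("no embedded workflow found")
```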
Buy my AI Cleaner off app/play store! (Score:1)
Free download, first 5 cleanings free and then only $2.99/month for the basic 10 images plan or only $6.99/month for the pro version with unlimited AI Cleaning capability! Cancel anytime!
We wash that metadata right out of your hair!
Re: (Score:2)
You'll be hearing from my lawyers on Monday as I have the right to not have anyone do something that destroys my business model.
I have both patented and copyrighted the concept of cleaning AI metadata.
It will fail (Score:2)
As long as you can take a screenshot, this will be easily circumvented.
I can't think of a way around this.
Google is wasting money on this.
Not a Panacea (Score:2)
This is an addition to the metadata panel in Google Photos (date, place, EXIF data), the product where you store your own photos and share them with friends and family. That's NOT Google Image Search, which is what most people think of as being full of fake images.
It's a reminder for the photo owner that the image does not represent the reality they experienced, not a warning notice for others. So no, this isn't about addressing fakes and bad actors, it's about adding useful information for photo storage users.
All AI should be watermarked as such (Score:2)
It should be a legal requirement that if a photo, video, or audio clip is made or manipulated with AI, it is digitally watermarked with information saying when, what engine, what user ID, and what the nature of the alteration was, plus any other pertinent information required to identify it as such. It might put an end to some of the bullshit deepfakes going around if they could be identified easily.
Re: (Score:2)
The reality is that AI is being abused all over the shop. Disinformation would be the big one, but everywhere is a potential target. I don't think requiring certain kinds of AI-generated content to carry a passive watermark is a big deal for people using it appropriately. But it would allow someone who suspects AI content to find out when and where it was created and call it out, report the abuse, or use it to sue the creator. It might also help AI engines, since they won't accidentally ingest their own garbage