
Adobe Is Using AI To Catch Photoshopped Images (engadget.com)

An anonymous reader shares a report: Adobe, certainly aware of how complicit its software is in the creation of fake news images, is working on artificial intelligence that can spot the markers of phony photos. In other words, the maker of Photoshop is tapping into machine learning to find out if someone has Photoshopped an image. Using AI to find fake images is a way for Adobe to help "increase trust and authenticity in digital media," the company says. That brings it in line with the likes of Facebook and Google, which have stepped up their efforts to fight fake news.

Whenever someone alters an image, unless they are pixel-perfect in their work, they always leave behind indicators that the photo has been modified. Metadata and watermarks can help determine a source image, and forensics can probe factors like lighting, noise distribution and edges at the pixel level to find inconsistencies. If a color is slightly off, for instance, forensic tools can flag it. But Adobe wagers that it could employ AI to find telltale signs of manipulation faster and more reliably.
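To make the pixel-level idea concrete, here is a rough sketch of one classic forensic check, Error Level Analysis, in Python with Pillow; the file names are placeholders, and this illustrates the general technique rather than anything Adobe has described:

import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    # Re-save at a known JPEG quality and diff against the original.
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    # Regions pasted in from another source often recompress differently
    # and therefore stand out in the amplified difference image.
    diff = ImageChops.difference(original, resaved)
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return diff.point(lambda p: p * 255.0 / max_diff)

error_level_analysis("suspect.jpg").save("ela.png")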

This discussion has been archived. No new comments can be posted.

    • Ok, I can see this if they are examining a DIGITAL copy of an image...where you still have pixels you can examine.

      But what if the altered image has been printed? Can the AI still check for signs that it was altered?

  • undetectable shops (Score:5, Insightful)

    by KiloByte ( 825081 ) on Friday June 22, 2018 @01:56PM (#56829904)

    Sounds like that very same AI would work wonders for hiding just those signs. No one will know comrade Yezhov was ever there!

    • by ranton ( 36917 )

      Sounds like that very same AI would work wonders for hiding just those signs. No one will know comrade Yezhov was ever there!

      Perhaps, but just like it is easier to destroy a car than it is to build a car, it will almost certainly be far easier to notice alterations than it is to create them.

  • This is amazing stuff. AI really is a game changer; before it, we just had algorithms and programs. Pretty soon we will wonder how we ever lived without AI!
  • When I read this, what immediately struck me is how easy it would be to make a simple program that does this. No special "A.I." is needed, unless you consider the software itself some sort of A.I.

    Here's how I'd do it:

    1) Scan the image at various tile sizes, and search for circular geometry using + and - color level differentiation; you could call it a mild form of edge detection.

    2) Scan the image for unusual color range variations, such as sampling a part of the image for regular noise (level average), and then flagging regions whose noise deviates from that average; a rough sketch is below.
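    A rough numpy sketch of what step 2 might look like (the file name and tile size are made up for illustration):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("suspect.png").convert("L"), dtype=np.float64)

    def tile_noise_levels(img, tile=32):
        # High-pass residual: each pixel minus the mean of its 4 neighbours.
        residual = img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                                 + np.roll(img, 1, 1) + np.roll(img, -1, 1))
        h, w = img.shape
        return [(y, x, residual[y:y+tile, x:x+tile].std())
                for y in range(0, h - tile + 1, tile)
                for x in range(0, w - tile + 1, tile)]

    levels = tile_noise_levels(img)
    noise = np.array([s for _, _, s in levels])
    # Flag tiles whose noise level is far from the image-wide average.
    flagged = [(y, x) for (y, x, s) in levels if abs(s - noise.mean()) > 3 * noise.std()]
    print(len(flagged), "tiles deviate from the global noise level")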

    • You don't understand: in 2018 that is AI.
    • You just described the old-fashioned, pre-machine-learning way of doing it. You guess at some calculations you think might be useful. "Let's look at color differences between adjacent pixels and look for circles that stand out. That should match how a lot of people use Photoshop. Oh, and let's look at noise levels, and check for regions that are different from the rest of the image." So you code them up by hand and try them out. If they work you say, "Great, I've got an algorithm!" And if they don't, you go back and guess again. Machine learning flips that around: you feed the system lots of labeled real and doctored images and let it discover which calculations actually matter.
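      A toy illustration of that difference, using scikit-learn on synthetic features (everything here is made up; a real detector would learn from labeled genuine/tampered images and far richer features):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      def fake_features(n, tampered):
          # Two hypothetical per-image features, e.g. noise spread and edge score.
          shift = 0.8 if tampered else 0.0
          return rng.normal(1.0 + shift, 0.2, (n, 2))

      X = np.vstack([fake_features(200, False), fake_features(200, True)])
      y = np.array([0] * 200 + [1] * 200)

      # The classifier learns the decision boundary instead of a human hand-tuning it.
      clf = LogisticRegression().fit(X, y)
      print("training accuracy:", clf.score(X, y))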

  • Generative adversarial network [wikipedia.org]

    Generative adversarial networks (GANs) are a class of artificial intelligence algorithms used in unsupervised machine learning, implemented by a system of two neural networks contesting with each other in a zero-sum game framework. They were introduced by Ian Goodfellow et al. in 2014.

    This technique can generate [i.e. more or less from scratch] photographs that look at least superficially authentic to human observers, having many realistic characteristics (though in tests people can tell real from generated in many cases).
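    For the curious, a minimal sketch of that two-network contest in PyTorch, fitting a 1-D Gaussian instead of photographs to stay self-contained (not anyone's production code):

    import torch
    import torch.nn as nn

    # Generator maps noise to samples; discriminator scores how "real" a sample looks.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0   # the "real" data: N(3, 0.5)
        fake = G(torch.randn(64, 8))            # the generator's forgeries
        # Train D to label real as 1 and fake as 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Train G to make D output 1 on its forgeries.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # drifts toward 3.0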

  • Almost all photos online are retouched: sharpening, unsharp masking, contrast or brightness changes, smoothing, hue changes, rotation, and the like.

    Adobe has too much hope and is smoking too much dope.

    • by Junta ( 36770 )

      I presume the ambition is not to spot retouching, but outright faked imagery.

    • by dfghjk ( 711126 )

      You fail to consider the source of these images. There is no such thing as "non-retouched" when the image starts out as raw data from a Bayer sensor. Detection of global manipulations is not the goal nor can it be.

  • for Police and Security agencies.

    The next logical step is to create an AI that finds the flaws and repairs them.

  • Maybe it would be easier to spot images where PS users kept EXIF or other metadata showing that it was Photoshopped (personally I use Gimp)
    • You mean the same tags those apps add when you resize or crop a photo with them? How useful.
    • by Raphael ( 18701 )

      Maybe it would be easier to spot images where PS users kept EXIF or other metadata showing that it was Photoshopped (personally I use Gimp)

      Obviously, if you want to create a fake you should either remove all metadata (EXIF, IPTC, XMP and proprietary tags) or copy it from the original image. If you claim that you took an image straight from your camera and it contains Photoshop tags or a comment "Created with GIMP", then you will be busted.

      Tampering with the metadata is an important step in creating good fakes. However, there is a lesser-known property that can often identify the true source of a JPEG image: its quantization tables. The JPEG standard lets every encoder choose its own tables, and cameras, Photoshop, and GIMP all tend to use different ones, so the tables often betray which program last saved the file (see the sketch below).
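      A quick way to inspect both giveaways with Pillow (the file name is a placeholder):

      from PIL import Image

      img = Image.open("suspect.jpg")
      exif = img.getexif()
      print("Software tag:", exif.get(0x0131))  # EXIF tag 0x0131 = Software
      print("Camera model:", exif.get(0x0110))  # EXIF tag 0x0110 = Model
      # Pillow exposes a JPEG's quantization tables as {table_id: 64 coefficients};
      # different encoders tend to ship recognizably different tables.
      for table_id, table in img.quantization.items():
          print("table", table_id, ":", list(table)[:8], "...")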

  • by Maxo-Texas ( 864189 ) on Friday June 22, 2018 @02:23PM (#56830072)

    If AI can find it, then AI can hide it too.

    We are reaching a point where we can't trust photographic or video evidence without a secure chain of custody.
    With enough power, anything can be corrupted/faked.

    • If AI can find it, then AI can hide it too.

      Look into Generative Adversarial Networks [wikipedia.org]. The principle you're talking about has actually become a (very, very promising) training technique.

      We are reaching a point where we can't trust photographic or video evidence without a secure chain of custody.

      Check out the newest video-based fakery presented at SIGGRAPH 2018 [youtube.com]:

      This shit is insane.

    • Generally it's not even the photo that is false, it's the subtext. You tell a story and show a photo that looks close enough to the story and seems to "prove" it, and suddenly people take it as undeniable truth, just because there is a photo. It has been done ever since you could print photos in newspapers and books; the Soviets were really proficient at that. They were also pretty good at analogue photoshopping.
  • I don't think any photographer takes a photo from their camera and just puts it on the Internet anymore. Every photo goes through some editing process, even if it's just to fix lighting levels, crop, or add crappy Instagram filters. Hopefully the AI can ignore all that stuff to actually find the photos with people added or removed from scenes.

    For video, artists have gotten so good with CGI in movies and TV that it's almost impossible to tell that a scene was manufactured. Will the AI be able to detect that as well?

    • by ScentCone ( 795499 ) on Friday June 22, 2018 @02:55PM (#56830294)

      I don't think any photographer takes a photo from their camera and just puts it on the Internet anymore. Every photo goes through some editing process, even if it's just to fix lighting levels, crop, or add crappy Instagram filters.

      Even when they DO take an image directly from a camera in a usable format (say, a JPG), the image has already gone through a tune-up, lens correction, compression, etc. It may not be Photoshop running on the camera, but it's still a highly processed image by the time it lands as a JPG on that camera's (or phone's) storage.

      • by dargaud ( 518470 )
        Yeah, but you have homogeneity of process. With the typical Photoshop manipulation, you clone part(s) of the image onto itself (which can be detected with 2D auto-correlation), or you clone parts of another image onto it (which can be detected by looking at the noise).
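        A rough sketch of that 2D autocorrelation check via the FFT (the file name is a placeholder; real clone detection needs more care than this):

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("suspect.png").convert("L"), dtype=np.float64)
        img -= img.mean()

        # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum.
        spectrum = np.fft.fft2(img)
        autocorr = np.fft.fftshift(np.fft.ifft2(spectrum * np.conj(spectrum)).real)

        # Mask the trivial zero-lag peak, then look for a secondary spike,
        # which a clone-stamped patch produces at its copy offset.
        cy, cx = autocorr.shape[0] // 2, autocorr.shape[1] // 2
        autocorr[cy-4:cy+5, cx-4:cx+5] = 0
        py, px = np.unravel_index(np.argmax(autocorr), autocorr.shape)
        print("strongest repeat offset:", py - cy, px - cx)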
        • I was referring to the comment about how almost all photographers manipulate images before sharing them, rather than taking them right out of the camera. My point was that the camera's software is already doing a substantial amount of the kind of manipulation that "most photographers" are (in the way that poster seemed to mean) going to do anyway. Just telling the camera to render JPGs that are more saturated or contrasty would head off 99% of what "most photographers" do in post-production.
  • It's a tool. The software isn't "complicit" in anything. Photoshop doesn't make fake photos, PEOPLE make fake photos.
  • I wonder if this would just detect cloning/pasting of new subject matter (the goal it would seem to me), or if it would also be confused by heavy use of other common editing techniques that do not really alter image content.

    For example, heavy sharpening of an image can often introduce aliasing, and heavy use of contrast can often make some areas of an image far more pixellated than others, or alter some colors in ways that appear different compared to the rest of the image.
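    A small demonstration of how heavy sharpening shifts local statistics, using Pillow's unsharp mask (the file name is a placeholder):

    import numpy as np
    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg").convert("L")
    sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=300, threshold=0))

    def edge_energy(im):
        # Mean horizontal gradient magnitude: a crude stand-in for the local
        # statistics a naive detector might test.
        a = np.asarray(im, dtype=np.float64)
        return np.abs(np.diff(a, axis=1)).mean()

    print("edge energy before:", edge_energy(img))
    print("edge energy after :", edge_energy(sharp))  # jumps sharply, with halo artifacts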

  • >"Adobe Is Using AI To Catch Photoshopped Images "

    Oh well, I guess that means they can't detect Gimped images, nor any other photoedited images.

  • Important development on the week that Time magazine posts a completely fake photo (perhaps obvious) of an event that did not happen (not at all obvious) as its cover. And then fails to issue a retraction.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...