Is Google's AI-Driven Image-Resizing Algorithm Dishonest? (thestack.com)

The Stack reports on Google's "new research into upscaling low-resolution images using machine learning to 'fill in' the missing details," arguing this is "a questionable stance...continuing to propagate the idea that images contain some kind of abstract 'DNA', and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow." An anonymous reader summarizes their report: Rapid and Accurate Image Super Resolution (RAISR) uses low- and high-resolution versions of photos in a standard image set to establish templated paths for upward scaling... This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.

It's notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.

The article points out that "faith in the fidelity of these 'enhanced' images routinely convicts defendants."
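
For readers who want to see the shape of the technique being debated, here is a minimal, hypothetical sketch of example-based upscaling (illustrative only; Google's RAISR learns many small filters hashed by local gradient statistics, not the single global filter fit here). A linear filter is learned from low-res/high-res training pairs and then applied to a new low-res image:

```python
import numpy as np

def downsample(img, factor=2):
    """Box-average downsample: each output pixel is the mean of a factor x factor block."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(img.shape[0] // factor, factor,
                       img.shape[1] // factor, factor).mean(axis=(1, 3))

def patches(img, size=3):
    """All size x size patches, flattened, one row per patch."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

rng = np.random.default_rng(0)

# "Training": learn a least-squares filter mapping low-res context to high-res pixels.
hi = rng.random((64, 64))                         # stand-in for a high-res training image
lo_up = np.kron(downsample(hi), np.ones((2, 2)))  # cheap 2x upscale by pixel replication
X = patches(lo_up)                                # low-res context around each pixel
y = patches(hi)[:, 4]                             # true high-res value at each patch centre
filt, *_ = np.linalg.lstsq(X, y, rcond=None)      # learned 3x3 filter

# "Inference": apply the learned filter to a brand-new low-res image.
new_lo_up = np.kron(downsample(rng.random((64, 64))), np.ones((2, 2)))
predicted = patches(new_lo_up) @ filt             # guessed high-res centre pixels
print(predicted.shape)                            # one prediction per interior pixel
```

The controversy is visible right in the code: `filt` encodes statistics of the training corpus, so the "detail" it adds comes from that corpus, not from the scene that was photographed.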
  • Wait, what? (Score:5, Interesting)

    by Rei ( 128717 ) on Saturday November 19, 2016 @03:41PM (#53322903) Homepage

    People are using this sort of thing in court?

    I think this is a very interesting field for consumer needs, but I have to agree: it's disturbing if they're allowing what... let's face it... is data made up by an AI that "looks right" to convict people.

    • Re: (Score:3, Insightful)

      But they were guilty.

      • Re:Wait, what? (Score:5, Interesting)

        by knightghost ( 861069 ) on Saturday November 19, 2016 @04:08PM (#53323013)

        Yes, they use it in court. I once watched a federal prosecutor use this and lie so blatantly to the court that his own (image) expert witness sued him for false representation. Yet the defendant was still convicted, based almost entirely on that upscaled image "evidence", and served several years in prison.

    • People are using this sort of thing in court?

      Why is that a bad thing? Human eyewitnesses are notoriously unreliable, so it is possible that this technology could result in fewer false convictions. Similar questions were raised about DNA evidence, but it has resulted in far more exonerations of the innocent and convictions of the guilty than the other way around.

      You need to get over the delusion that our current justice system is infallible. Far from it: when DNA evidence first became reliable enough to use as evidence, many old cases were reexamined

      • by Oligonicella ( 659917 ) on Saturday November 19, 2016 @04:49PM (#53323141)

        It's bad because it allows a piece of software to become a witness. One you cannot ask questions of, unless you want to haul the whole development team into each trial. Merely having "working knowledge" of how the software functions is insufficient.

        And because "I extrapolated from other cases what the defendant should look like" wouldn't go over well if given by a human expert.

        "so it is possible" - Same extrapolation from scant information.

      • by msauve ( 701917 )
        "Why is that a bad thing?"

        Because it is manufacturing evidence from whole cloth.
        • by Rei ( 128717 )

          Indeed. It's the digital equivalent of an artist looking at a vague picture and painting details onto it.

    • by msauve ( 701917 )
      But, CSI [youtube.com].
    • This should be easy for a defense attorney to invalidate. Hallucinated images (assembled largely from a corpus of previous images to "enhance" some evidence) are not the same as an image that is run through an abstract de-blurring algorithm.
      It's probably easy to demonstrate the problem with some examples, so that the judge and jury "get it".
    • by Anonymous Coward

      The article links to http://www.crime-scene-investigator.net/admissibilitydigitaleveidencecriminalprosecutions.html, which has summaries of dozens of cases where 'enhanced' images were admitted as evidence. Given that there seems to be a pretty high standard for evidence (and the fact that it wasn't developed as a forensic technique), I think the article is exaggerating the likelihood of Google's algorithm being admitted. This could be a good way to get sketches of suspects, starting with security footage or s

  • ...from the fact that Google is run by shape-shifting reptoids.

    WAKE UP, SHEEPLE!!!!! /Cue obligatory XKCD

  • by guruevi ( 827432 ) on Saturday November 19, 2016 @03:55PM (#53322953)

    You can't really upscale resolution, but you can "enhance" images (especially raw ones) to a point. A lot of shots may be over- or underexposed, with some detail left in one or more of the channels but visually blocked out. Having a human sift through thousands of minuscule adjustments and filters in the hope of seeing something would be nearly impossible, so having a filter to weed them out is helpful.

    JPEG and similar compression are like MP3: you can filter out what the algorithm defines as outside the range of human perception, but a lot of those assumptions are faulty, leading to noticeable artifacts. And it is very hard to recover the data lost in "lossy compression", although you can make some assumptions to reconstruct it.

    The other problem with using these filters is that they're called artificial intelligences. They are not intelligent, and calling them that leads to an assumption of infallibility. They're a form of Bayesian filtering, and we've been using that since at least the days of OS/2 to "enhance" images; I used a demo of a program back then that did just that: inference on JPEGs to produce a type of vector image. We just throw faster clock cycles and more storage at them to make them perform better, but they're not, and never will be, magic.
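
The lossy-compression point above can be demonstrated in a few lines (a toy model, not actual JPEG): once values are coarsely quantized, many distinct originals map to the same compressed data, so no decoder can recover the original exactly; it can only assume.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(8)                 # stand-in for pixel values in [0, 1]

quantized = np.round(signal * 4) / 4   # coarse quantization, the essence of lossy coding
lost = signal - quantized              # detail discarded forever

print(quantized)                       # every signal within +/-0.125 maps to this
print(np.abs(lost).max())              # a decoder can only guess at this residue
```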

    • by raymorris ( 2726007 ) on Saturday November 19, 2016 @04:17PM (#53323025) Journal

      > They are not intelligent and calling them that leads to an assumption of infallibility.

      That's an interesting comment. I'd think the opposite. I'm intelligent, and often wrong. Gears are dumb, and always perform multiplication correctly, never giving the wrong result. To me, intelligence implies the ability to come up with different answers, some of which may be wrong. If it can't come up with unexpected answers, it's just a dumb machine, I'd think.

      • This.

        I've been in the business 49 years starting when the slide rule was the calculator of choice.

        "Artificial Intelligence" (AI) started with a basic definition that always circled back to the human brain as a reference for "intelligence."

        In later years, a more realistic description of AI required us to drop the human brain part, but many people failed to catch the move.

        A machine will only be intelligent when it can commit suicide because Facebook is down.

        • by msauve ( 701917 )
          I had a friend who often pointed out that a common definition of "life" (from the first Google hit for "definition of life": growth, reproduction, functional activity, and continual change preceding death) only works if you exclude fire.
      • by guruevi ( 827432 )

        Perhaps infallible is the wrong word.

        The problem for the lay person is that the 'AI' in contemporary media is portrayed as a sort of super-intelligence that is purely logical and thus superior to humans (and subsequently morally 'better' as well). It's easy for an attorney to say that a non-human, self-aware entity enhanced a perfect digital replica of the scene, and that it is therefore free of any human bias and thus 'perfect' proof.

        To go with your gears example, when people use gears all the time and they're alwa

    • by rl117 ( 110595 )
      Agreed. You can enhance an image correctly if that processing only makes use of information in the original image. For example: deconvolution, despeckling, contrast enhancement. These change the image, but the process is either neutral (no information loss) or lossy (some information loss). You can't *add* missing information to an image, because that implies making assumptions about the image which are likely to be incorrect in most cases. Validating that such assumptions are correct is extremely difficult
      • You're assuming that image-enhancement algorithms are "neutral" only if they use information already in the photo and don't add missing information. But the very act of choosing which algorithms to use to "enhance" an image is not neutral: it's biased towards enhancements which disproportionately fit our expectations of how the real world works.

        For a real example, look at the upscaled photos of the boy's face in TFA. The upscaling algorithms other than bicubic look for edges, and strive
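
For contrast with learned upscaling, here is a sketch of the kind of processing the grandparent calls legitimate (an illustrative example using a contrast stretch and a box-blur unsharp mask, both assumed here for simplicity): every output pixel is computed only from pixels already in the image, with nothing borrowed from a training corpus.

```python
import numpy as np

def contrast_stretch(img):
    """Rescale intensities to span [0, 1]; adds no information to the image."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo)

def unsharp_mask(img, amount=1.0):
    """Sharpen by subtracting a 3x3 box blur; uses only the image's own pixels."""
    blurred = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            blurred += np.roll(np.roll(img, di, axis=0), dj, axis=1)
    return img + amount * (img - blurred / 9.0)

rng = np.random.default_rng(2)
frame = rng.random((16, 16))               # stand-in for a CCTV frame
print(unsharp_mask(contrast_stretch(frame)).shape)
```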
  • by Gravis Zero ( 934156 ) on Saturday November 19, 2016 @04:18PM (#53323027)

    Enhancing an image for increased resolution isn't dishonest... unless you present it as the absolute truth. The reality is that it's a probabilistic view of the unenhanced version, which is to say that the scene probably looks as presented in the image, but there are other possibilities that could match it. Honestly, I doubt it's worse than a human's memory of an image, because humans don't store information as PNGs and our recall is far from perfect.

  • by pavon ( 30274 ) on Saturday November 19, 2016 @04:18PM (#53323029)

    All upscaling algorithms are making up data based on assumptions about what "typical" hi-res images should look like given their low-res counterparts. That doesn't mean they are lying or misrepresenting. Furthermore, some assumptions are more statistically valid than others, and some produce more aesthetically pleasing results than others, actually resulting in images that are genuinely more likely to be closer to the true image than nearest neighbor.

    Nowhere in Google's paper do they suggest that these images be used for forensic purposes, nor claim to be finding "deeper truth" or additional information in the images beyond what actually exists. They developed an approach that produces better results for common classes of images than previous algorithms, which is useful for a large number of applications that don't require the level of rigor that forensics does.
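
The claim that every upscaler must guess can be made concrete with a toy example: two visibly different high-res images that downsample to exactly the same low-res image, so nothing in the low-res data can determine which one was real.

```python
import numpy as np

def block_mean(img, f=2):
    """2x box-average downsample."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

flat = np.kron(np.array([[10., 20.], [30., 40.]]), np.ones((2, 2)))  # smooth blocks
detailed = flat + np.tile([[1., -1.], [-1., 1.]], (2, 2))            # added texture

# Both 4x4 images collapse to the identical 2x2 low-res image:
print(np.allclose(block_mean(flat), block_mean(detailed)))  # True
```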

    • by Tablizer ( 95088 )

      All upscaling algorithms are making up data based on assumptions about what "typical" hi-res images should look like given their low-res counterparts. That doesn't mean they are lying or misrepresenting.

      It's essentially using AI and statistics to guess. While not "lying or misrepresenting", it should be considered just that: a guess.

      If anyone is convicted based on such AI guesses, they should be let out of jail.

  • You can't get something from nothing. That's a fact. Humans can fill in some gaps and AI could probably do the same, but there is no guarantee the results are correct.

    On the other hand, if it could actually discern more from a video (which humans can also do, but probably not quite as well), it might be able to "enhance" individual images to some extent and have accurate results.

    That people can be convicted by the results is a little scary, but at some level no different from a jury misinterpreting a low re
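
The intuition about video above is real and measurable. A toy sketch, assuming a static scene and aligned frames: averaging many noisy frames recovers detail that no single frame shows cleanly, because the noise cancels while the scene does not, and unlike learned upscaling it uses only light that actually hit the sensor.

```python
import numpy as np

rng = np.random.default_rng(3)
scene = rng.random((8, 8))                                  # the true static scene
frames = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(16)]

single = np.abs(frames[0] - scene).mean()                   # error of one frame
stacked = np.abs(np.mean(frames, axis=0) - scene).mean()    # error after averaging
print(single, stacked)  # stacking 16 frames cuts the noise roughly 4x (sqrt(16))
```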

    • by Viol8 ( 599362 )

      "That's a fact. Humans can fill in some gaps and AI could probably do the same, but there is no guarantee the results are correct."

      True, but for something like straight lines or curves that have missing sections, filling in the missing bits would probably give a reasonable facsimile of the original. But sure, at the end of the day, whatever they call it, it's just educated guesswork by a program. In most cases it won't matter, though, so long as it *looks* sharper and more detailed, whether the fine detai

    • by yes-but-no ( 4133651 ) on Saturday November 19, 2016 @05:24PM (#53323295)

      You can't get something from nothing.

      2, 4, 8, x, 32, 64. Can you guess x?

      It's not from nothing... an image captures nature; nature runs under physics; and physics under mathematical laws. So it is reasonable to guess what a missing pixel block will be, based on other sets of observations of similar situations.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        There are infinitely many different functions that follow the pattern and generate different results for x.

        The problem when using it for forensics is that you will put the person who matches the pattern you implemented in jail, not the one who is actually guilty.
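
That claim checks out numerically (illustrative code): the geometric rule 2^n gives x = 16, while the unique degree-4 polynomial through the same five known points also reproduces 2, 4, 8, 32, 64 exactly, yet yields a different x.

```python
import numpy as np

n = np.array([1, 2, 3, 5, 6])            # positions with known values
v = np.array([2, 4, 8, 32, 64])          # the sequence 2, 4, 8, x, 32, 64

print(2.0 ** 4)                          # geometric rule: x = 16.0

poly = np.polyfit(n, v, 4)               # degree-4 polynomial through all 5 points
print(np.polyval(poly, n))               # [2, 4, 8, 32, 64] up to rounding
print(np.polyval(poly, 4))               # x = 15.8 -- same data, different answer
```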

      • by Imrik ( 148191 )

        How about:
        2, x, 8, y, 32, z. Can you guess x, y and z?

      • Have you seen this nice little problem? http://mathworld.wolfram.com/C... [wolfram.com] 1, 2, 4, 8, 16, x. Can you guess x?
        • I can guess x to within +/-1 of the number you're after.

          That is quite significant in terms of filling in the blanks.
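
Assuming the truncated link is Wolfram's circle-division problem (join n points on a circle and count the regions the chords create), the count is 1 + C(n,2) + C(n,4), which is why the "obvious" 32 is off by exactly one:

```python
from math import comb

# Regions formed by chords joining n points in general position on a circle:
print([1 + comb(n, 2) + comb(n, 4) for n in range(1, 7)])
# [1, 2, 4, 8, 16, 31] -- the doubling pattern breaks at n = 6
```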

      • I'm myopic, but my brain constructs sharp edges where edges are probably sharp... telegraph poles, skylines... It fails miserably at reading text on distant signs. It fails at recognizing distant faces. Other clues help... gait, people making recognizable gestures...
  • I had thought of the possibility of this years ago. The basic idea is that, if you downsample an image, and then upsample it again, information is lost in the low resolution version that must be reconstructed somehow. Essentially what you need is a means to make educated guesses as to the missing information. Traditional codecs are based on the maths that results when the codec is intended to reconstruct an arbitrary image. If we constrain the space of possible images, such as photos of the same person, the

    • Best post explaining the actual value. Also, it should be possible to measure the performance of the guessing algorithms by comparing their reconstructions of downsampled test images against the original high-res versions.

  • I thought this was just something TV / moviemakers had been doing since the 90s to purposefully annoy geeks.

    "Zoom in on D2."

    "Enhance!"
  • The AI is not dishonest; it has been designed to make up stuff.

    It's a bit like doing fractal compression of an image, then restoring it to a higher resolution than the original. You will get a more detailed image, but its content will have been made up.
    Fractal compression existed well before Google, and no idiot used this feature as proof AFAIK.

    I cannot believe anybody in his right mind would take any "make up" algorithm as reliable evidence. One has to be pretty ignorant, or criminally insane, to use what

  • As NCIS episodes have demonstrated, the video analysts have to issue the command "Enhance!" for this thing not to lie.

  • Holy shit!

    Remember when they found that bank-loan "artificial intelligence" programs were discriminating based on the racial profile of your ZIP code? The programs learned from the human examples they were given.

    So it isn't impossible that algorithms which insert "likely" pixels into images would add minority-colored pixels in an urban-looking scene and white-colored pixels in a suburban scene. You can't use image data that didn't come from the actual scene in court!!!!
