Is Google's AI-Driven Image-Resizing Algorithm Dishonest? (thestack.com) 79
The Stack reports on Google's "new research into upscaling low-resolution images using machine learning to 'fill in' the missing details," arguing this is "a questionable stance...continuing to propagate the idea that images contain some kind of abstract 'DNA', and that there might be some reliable photographic equivalent of polymerase chain reaction which could find deeper truth in low-res images than either the money spent on the equipment or the age of the equipment will allow."
An anonymous reader summarizes their report:
Rapid and Accurate Image Super Resolution (RAISR) uses low and high resolution versions of photos in a standard image set to establish templated paths for upward scaling... This effectively uses historical logic, instead of pixel interpolation, to infer what the image would look like if it had been taken at a higher resolution.
It's notable that neither their initial paper nor the supplementary examples feature human faces. It could be argued that using AI-driven techniques to reconstruct images raises some questions about whether upscaled, machine-driven digital enhancements are a legal risk, compared to the far greater expense of upgrading low-res CCTV networks with the necessary resolution, bandwidth and storage to obtain good quality video evidence.
The article points out that "faith in the fidelity of these 'enhanced' images routinely convicts defendants."
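The approach the summary describes is example-based: learn from matched low/high-res pairs how to sharpen a cheap upscale, then apply that learned mapping to new images. Below is a minimal sketch of that general idea, not Google's actual RAISR pipeline (the 2x factor, the 5x5 patch, and the single global least-squares filter are simplifying assumptions; RAISR learns many filters keyed on local edge structure).

```python
# Minimal sketch of example-based super-resolution -- NOT Google's RAISR.
# Inputs are assumed to be 2D grayscale float arrays.
import numpy as np

def downscale(img, factor=2):
    # Average-pool to simulate a low-res capture of a known high-res image.
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cheap_upscale(img, factor=2):
    # Nearest-neighbour blow-up; the learned filter then refines this guess.
    return np.kron(img, np.ones((factor, factor)))

def patches_and_centres(img, size=5):
    # All size x size patches (flattened) plus the centre coordinate of each.
    r = size // 2
    pts = [(y, x) for y in range(r, img.shape[0] - r) for x in range(r, img.shape[1] - r)]
    return np.array([img[y - r:y + r + 1, x - r:x + r + 1].ravel() for y, x in pts]), pts

def train_filter(hi_res_images, size=5):
    # Least squares from "cheaply upscaled patch" to "true high-res pixel":
    # the learned coefficients are where the "extra detail" really comes from.
    A, b = [], []
    for hi in hi_res_images:
        up = cheap_upscale(downscale(hi))
        patches, pts = patches_and_centres(up, size)
        A.append(patches)
        b.append(np.array([hi[y, x] for y, x in pts]))
    coef, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return coef

def super_resolve(lo_res, coef, size=5):
    up = cheap_upscale(lo_res)
    out = up.copy()
    patches, pts = patches_and_centres(up, size)
    for (y, x), v in zip(pts, patches @ coef):
        out[y, x] = v  # plausible, learned detail -- not recovered truth
    return out
```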
Wait, what? (Score:5, Interesting)
People are using this sort of thing in court?
I think this is a very interesting field for consumer needs, but I have to agree: it's disturbing if they're allowing what... let's face it... is data made up by an AI that "looks right" to convict people.
Re: (Score:3, Insightful)
But they were guilty.
Re:Wait, what? (Score:5, Interesting)
Yes, they use it in court. I once watched a federal prosecutor use this and lie so blatantly to the court that his own (image) expert witness sued him for false representation. Yet the defendant was still convicted based almost entirely on that upscaled image "evidence" and served several years in prison.
Re: (Score:1)
People are using this sort of thing in court?
Why is that a bad thing? Human eyewitnesses are notoriously unreliable, so it is possible that this technology could result in fewer false convictions. Similar questions were raised about DNA evidence, but it has resulted in far more exonerations of the innocent and convictions of the guilty than the other way around.
You need to get over the delusion that our current justice system is infallible. Far from it: When DNA evidence first became reliable enough to use as evidence, many old cases were reexamin
Re:Wait, what? (Score:4)
It's bad because it's allowing a piece of software to become a witness. One you cannot ask questions of, unless you want to force the whole development team to appear for each trial. Just having "working knowledge" of how the software functions is insufficient.
And because "I extrapolated from other cases what the defendant should look like" wouldn't go over well if given by a human expert.
"so it is possible" - Same extrapolation from scant information.
Re: (Score:2)
So should a person be prosecuted for one hair follicle, considering http://www.webmd.com/skin-prob... [webmd.com]? Keep in mind that means 365,000 hairs per year you scatter around for which you are now legally liable. So exactly how long can DNA be recovered from a hair follicle after you lose it?
Re: (Score:1)
So should a person be prosecuted for one hair follicle, considering http://www.webmd.com/skin-prob... [webmd.com]? Keep in mind that means 365,000 hairs per year you scatter around for which you are now legally liable. So exactly how long can DNA be recovered from a hair follicle after you lose it?
I blame this on tv shows like CSI.
DNA is great circumstantial evidence for falsifying an alibi (e.g., I never saw that person, so how did your DNA get in the house?). As for proving something specific happened, it of course doesn't do shit, but then again people are convicted by dubious circumstantial evidence all the time (e.g., eyewitness testimony), so in the bigger scheme of things it isn't that different.
How did your DNA get in the house? Really? (Score:1)
How did your DNA get in the house? Really?
1) False match.
2) Carried in by animals, insects, etc.
3) On the sole of someone's shoe.
4) From dumpster-diving.
5) Planted, by cops or others.
I could go on all day.
Re: (Score:2)
How did your DNA get in the house? Really?
1) False match.
2) Carried in by animals, insects, etc.
3) On the sole of someone's shoe.
4) From dumpster-diving.
5) Planted, by cops or others.
I could go on all day.
How is that different from near-sighted and/or racist eyewitnesses, and jailhouse snitches? Not really different. The only difference is TV shows like CSI that "glorify" DNA evidence and vilify other forms of circumstantial evidence.
Re: (Score:2)
By far the easiest way to transfer DNA evidence is by public transport. From your head to your coat, brush up against someone, now it's on their coat; they go home, take off the coat, and the hair drops on the bedroom floor. Now something goes bad and you are done. Yeah, I got the numbers wrong, one zero too many, but even at 36,500, take public transport regularly and your hair will end up scattered throughout the city.
Re: (Score:2)
Because it is manufacturing evidence from whole cloth.
Re: (Score:2)
Indeed. It's the digital equivalent to an artist looking at a vague picture and painting in details onto it.
Hallucinations as evidence (Score:1)
It's probably easy to demonstrate the problem with some examples, so that judge and jury "gets it".
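One such demonstration, as an assumed sketch (not from TFA): two "originals" that differ only in fine detail can collapse to numerically identical low-res frames, so nothing in the low-res pixels can say which one the camera saw.

```python
# Assumed toy demonstration: fine detail that vanishes under downscaling.
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(50, 200, (64, 64)).astype(float)   # stand-in for original #1
b = a.copy()
b[0, 0] += 5                                         # original #2: fine detail moved...
b[0, 1] -= 5                                         # ...within the same 8x8 block

def downscale(img, factor=8):
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

print(np.array_equal(downscale(a), downscale(b)))    # True: the detail is simply gone
```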
Re: (Score:1)
The article links to http://www.crime-scene-investigator.net/admissibilitydigitaleveidencecriminalprosecutions.html, which has summaries of dozens of cases where 'enhanced' images were admitted as evidence. Given that there seems to be a pretty high standard for evidence (and the fact that it wasn't developed as a forensic technique), I think the article is exaggerating the likelihood of Google's algorithm being admitted. This could be a good way to get sketches of suspects, starting with security footage or s
Enhance! (Score:2)
Obligatory Futurama [youtube.com]
Re: (Score:1)
red dwarf http://www.dailymotion.com/video/x2qlmuy
This is only meant to distract you... (Score:3)
...from the fact that Google is run by shape-shifting reptoids.
WAKE UP, SHEEPLE!!!!! /Cue obligatory XKCD
Depends on enhancement (Score:5, Informative)
You can't really upscale resolution, but you can "enhance" images (especially raw ones) to a point. A lot of shots may be over- or underexposed, with some detail left in one or more of the channels but visually blocked out; having a human wade through thousands of minuscule adjustments and filters in the hope of seeing something would be nearly impossible, so having a filter to weed them out is helpful.
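That kind of non-inventive fix is easy to illustrate. A minimal sketch, assuming a simple gamma lift with Pillow and numpy (the function name and gamma value are illustrative): it rescales shadow detail the sensor already captured, rather than inventing pixels that were never recorded.

```python
# Hedged sketch of a legitimate "enhancement": brighten crushed shadows.
import numpy as np
from PIL import Image

def lift_shadows(path, gamma=0.45):
    # Normalise to [0, 1], apply gamma < 1 to lift dark values, convert back.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    return Image.fromarray((255.0 * img ** gamma).astype(np.uint8))
```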
JPEG and similar compression are like MP3: you can filter out what the algorithm defines as outside the range of human perception, but a lot of those assumptions are faulty, leading to noticeable artifacts. And it is very hard to recover data lost to lossy compression, although you can make some assumptions to reconstruct it.
The other problem with using these filters is that they're called artificial intelligences. They are not intelligent, and calling them that leads to an assumption of infallibility. They're a form of Bayesian filtering, and we've been using that since at least the days of OS/2 to "enhance" images; I used a demo of a program back then that did just that, running inference on JPEGs to produce a kind of vector image. We just throw more clock cycles and more storage at them to make them perform better, but they're not, and never will be, magic.
I'm intelligent, gears are dumb. Intelligent==fail (Score:5, Insightful)
> They are not intelligent and calling them that leads to an assumption of infallibility.
That's an interesting comment. I'd think the opposite. I'm intelligent, and often wrong. Gears are dumb, and always perform multiplication correctly, never giving the wrong result. To me, intelligence implies the ability to come up with different answers, some of which may be wrong. If it can't come up with unexpected answers, it's just a dumb machine, I'd think.
Re: (Score:3)
This.
I've been in the business 49 years starting when the slide rule was the calculator of choice.
"Artificial Intelligence" (AI) started with a basic definition that always circled back to the human brain as a reference for "intelligence."
In later years, a more realistic description of AI required us to drop the human brain part, but many people failed to catch the move.
A machine will only be intelligent when it can commit suicide because Facebook is down.
Re: (Score:2)
Perhaps infallible is the wrong word.
The problem for the lay person is that 'AI' in contemporary media is portrayed as a sort of super-intelligence that is purely logical and thus superior to humans (and subsequently morally 'better' as well). It's easy for an attorney to claim that a non-human, self-aware entity turned the image into a perfect digital replica of the scene, that it is therefore free of any human bias, and thus 'perfect' proof.
To go with your gears example, when people use gears all the time and they're alwa
Re: (Score:2)
For a real example, look at the upscaled photos of the boy's face in TFA. The upscaling algorithms other than bicubic look for edges, and strive
Not dishonest, probabilistic! (Score:3)
Enhancing an image for increased resolution isn't dishonest... unless you present it as the absolute truth. The reality is that it's a probabilistic view of the unenhanced version, which is to say that it probably looks as presented in the image, but there are other possibilities that could match that image. Honestly, I doubt it's worse than a human's memory of an image, because humans don't store information as PNGs and our recall is far from perfect.
Who put the stick up his ass? (Score:5, Informative)
All upscaling algorithms are making up data based on assumptions about what "typical" hi-res images should look like given their low-res counterparts. That doesn't mean they are lying or misrepresenting. Furthermore, some assumptions are more statistically valid than others, and some produce more aesthetically pleasing results than others, actually resulting in images that are genuinely more likely to be closer to the true image than nearest neighbor.
Nowhere in Google's paper do they suggest that these images be used for forensic purposes, nor do they claim to be finding "deeper truth" or more information in the images than actually exists. They developed an approach that produces better results for common classes of images than previous algorithms, which is useful for a large number of applications that don't require the same level of rigor that forensics does.
Re: (Score:2)
It's essentially using AI and statistics to guess. While not "lying or misrepresenting", it should be considered just that: a guess.
If anyone is convicted based on such AI guesses, they should be let out of jail.
Re: (Score:1)
"Oh shit, surgery marks, they are FAKE, there goes my woody."
Pretty much garbage for static images (Score:2)
You can't get something from nothing. That's a fact. Humans can fill in some gaps and AI could probably do the same, but there is no guarantee the results are correct.
On the other hand, if it could actually discern more from a video (which humans can also do, but probably not quite as well), it might be able to "enhance" individual images to some extent and have accurate results.
That people can be convicted by the results is a little scary, but at some level no different from a jury misinterpreting a low re
Re: (Score:2)
"That's a fact. Humans can fill in some gaps and AI could probably do the same, but there is no guarantee the results are correct."
True, but for something like straight lines or curves that may have missing sections, filling in the missing bits would probably give a reasonable facsimile of the original. But sure, at the end of the day, whatever they call it, it's just educated guesswork by a program. In most cases though it won't matter so long as it *looks* sharper and more detailed, whether the fine detai
Re:Pretty much garbage for static images (Score:5, Insightful)
You can't get something from nothing.
2, 4, 8, x, 32, 64. Can you guess x?
It's not from nothing... an image captures nature; nature runs under physics; and physics under mathematical laws. So it is reasonable to guess what a missing pixel block will be based on other sets of observations of similar situations.
Re: (Score:3, Insightful)
There are infinitely many different functions that follow the pattern yet give different results for x.
The problem with using it for forensics is that you will put the person matching the pattern you implemented in jail, not the one who is actually guilty.
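A toy pair of functions (assumed purely for illustration) makes the point: they agree on every value we were given, yet disagree about the missing one.

```python
# Two rules that both produce 2, 4, 8, ?, 32, 64 at n = 1..6 but differ at n = 4.
def f(n):
    return 2 ** n                     # the "obvious" rule: x = 16

def g(n, c=100):
    # The extra term is zero at every n we actually observed,
    # so g also matches 2, 4, 8, 32, 64 exactly.
    return 2 ** n + c * (n - 1) * (n - 2) * (n - 3) * (n - 5) * (n - 6)

for n in (1, 2, 3, 5, 6):
    assert f(n) == g(n)               # identical on all the known data
print(f(4), g(4))                     # 16 vs 1216 -- the data alone can't decide
```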
Re: (Score:2)
How about:
2, x, 8, y, 32, z. Can you guess x, y and z?
Re: (Score:2)
I can guess x to within +/-1 of the number you're after.
That is quite significant in terms of filling in the blanks.
Re: (Score:2)
You know, in a budding police state it is far more important to get convictions than to convict the person who actually did it. "Tools" like this (and as an engineer and scientist, I am offended by the very idea that has been implemented here) are a welcome way to make it appear that everything is in order.
Re: (Score:1)
What... what did I just read?
I wonder if this was a botpost or a human typing.
Subspaces and stuff (Score:2)
I had thought of the possibility of this years ago. The basic idea is that, if you downsample an image, and then upsample it again, information is lost in the low resolution version that must be reconstructed somehow. Essentially what you need is a means to make educated guesses as to the missing information. Traditional codecs are based on the maths that results when the codec is intended to reconstruct an arbitrary image. If we constrain the space of possible images, such as photos of the same person, the
Re: Subspaces and stuff (Score:2)
Best post explaining the actual value. Also, it should be possible to measure the performance of the guessing algorithms by comparing what they reconstruct from downscaled test images against the original high-res versions.
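A hedged sketch of that evaluation, assuming Pillow and PSNR as the score (the helper names, the 2x factor, and the two baseline upscalers are illustrative, not the paper's protocol).

```python
# Downscale a known high-res test image, upscale it again, and score each
# upscaler by how close it lands to the original.
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def score_upscalers(path, factor=2):
    hi = Image.open(path).convert("L")
    lo = hi.resize((hi.width // factor, hi.height // factor), Image.BICUBIC)
    return {
        name: psnr(np.asarray(hi), np.asarray(lo.resize(hi.size, method)))
        for name, method in [("nearest", Image.NEAREST), ("bicubic", Image.BICUBIC)]
    }  # higher PSNR = closer to the original, which is still not "the truth"
```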
Hmm. (Score:2)
"Zoom in on D2."
"Enhance!"
Wrong title. (Score:2)
The AI is not dishonest; it has been designed to make stuff up.
It's a bit like doing fractal compression of an image, then restoring it to a higher resolution than the original. You will get a more detailed image, but its content will have been made up.
Fractal compression existed well before Google and no idiot used this feature as proof AFAIK.
I cannot believe anybody in his right mind would take any "make-up" algorithm as reliable evidence. One has to be pretty ignorant, or criminally insane, to use what
As long as they use the proper command (Score:2)
As NCIS episodes have demonstrated, the video analysts have to issue the command "Enhance!" for this thing not to lie.
Bzzzt My algorithms say the black man did it (Score:2)
Holy shit!
Remember when they found that bank loan "artificial intelligence" programs were discriminating based on the racial profile of your zip code? The programs learned from the human examples they were given.
So it isn't impossible that algorithms that insert "likely" pixels into images would perhaps add minority colored pixels in an urban looking scene and white colored pixels in a suburban scene. You can't use image data that didn't come from the actual scene in court!!!!