Google's Releases New AI Photo Upscaling Tech (petapixel.com) 97
Michael Zhang writes via PetaPixel: In a post titled "High Fidelity Image Generation Using Diffusion Models" published on the Google AI Blog (and spotted by DPR), Google researchers in the company's Brain Team share about new breakthroughs they've made in image super-resolution. [...] The first approach is called SR3, or Super-Resolution via Repeated Refinement. Here's the technical explanation: "SR3 is a super-resolution diffusion model that takes as input a low-resolution image, and builds a corresponding high resolution image from pure noise," Google writes. "The model is trained on an image corruption process in which noise is progressively added to a high-resolution image until only pure noise remains." "It then learns to reverse this process, beginning from pure noise and progressively removing noise to reach a target distribution through the guidance of the input low-resolution image." SR3 has been found to work well on upscaling portraits and natural images. When used to do 8x upscaling on faces, it has a "confusion rate" of nearly 50% while existing methods only go up to 34%, suggesting that the results are indeed photo-realistic.
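For readers who want a concrete picture of that training recipe, here is a minimal sketch of a diffusion-style super-resolution training step in PyTorch. This is not Google's SR3 code: the tiny DenoiserNet, the noise-schedule values, and the tensor shapes are assumptions chosen purely for illustration. The idea matches the quote above: jump the high-resolution image to a random noise level, then teach a network to predict (and therefore remove) that noise while looking at the low-resolution input.

    # Illustrative sketch only (not Google's SR3 implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    T = 1000                                  # number of diffusion steps (assumed)
    betas = torch.linspace(1e-4, 0.02, T)     # noise schedule (assumed values)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    class DenoiserNet(nn.Module):
        """Toy stand-in for the real U-Net: predicts the noise added to x_t."""
        def __init__(self):
            super().__init__()
            # input = noisy hi-res (3 ch) + upsampled low-res condition (3 ch)
            self.net = nn.Sequential(
                nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )
        def forward(self, x_t, lowres_up, t):  # step embedding omitted in this toy
            return self.net(torch.cat([x_t, lowres_up], dim=1))

    model = DenoiserNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(hires, lowres):
        # "noise is progressively added to a high-resolution image":
        # pick a random step t and jump straight to that noise level.
        b = hires.shape[0]
        t = torch.randint(0, T, (b,))
        a = alphas_cumprod[t].view(b, 1, 1, 1)
        noise = torch.randn_like(hires)
        x_t = a.sqrt() * hires + (1 - a).sqrt() * noise
        # condition on the low-res image, upsampled to the target size
        lowres_up = F.interpolate(lowres, size=hires.shape[-2:],
                                  mode="bilinear", align_corners=False)
        # the model "learns to reverse this process" by predicting the noise
        loss = F.mse_loss(model(x_t, lowres_up, t), noise)
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

At sampling time the model starts from pure noise and repeatedly subtracts its predicted noise over many steps, still guided by the low-resolution input, which is the "progressively removing noise" half of the quote.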
Once Google saw how effective SR3 was in upscaling photos, the company went a step further with a second approach called CDM, a class-conditional diffusion model. "CDM is a class-conditional diffusion model trained on ImageNet data to generate high-resolution natural images," Google writes. "Since ImageNet is a difficult, high-entropy dataset, we built CDM as a cascade of multiple diffusion models. This cascade approach involves chaining together multiple generative models over several spatial resolutions: one diffusion model that generates data at a low resolution, followed by a sequence of SR3 super-resolution diffusion models that gradually increase the resolution of the generated image to the highest resolution." "With SR3 and CDM, we have pushed the performance of diffusion models to state-of-the-art on super-resolution and class-conditional ImageNet generation benchmarks," Google researchers write. "We are excited to further test the limits of diffusion models for a wide variety of generative modeling problems."
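The cascading idea in the CDM quote can be sketched in a few lines. The objects and their sample/upscale methods below are hypothetical placeholders, not a real API; the point is only the structure: one base generator at low resolution, followed by a chain of super-resolution stages.

    # Hypothetical sketch of a cascaded pipeline; base_model and sr_models are
    # placeholders, not real library objects.
    def cascaded_generate(base_model, sr_models, class_label):
        # Stage 0: a class-conditional diffusion model generates a small image
        # (e.g. 32x32 for ImageNet).
        image = base_model.sample(class_label, resolution=32)
        # Stages 1..N: each SR3-style model conditions on the previous output
        # and raises the resolution, e.g. 32 -> 64 -> 256.
        for sr in sr_models:
            image = sr.upscale(image, class_label)
        return image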
old porn (Score:4, Informative)
Can't wait to upscale my old porn collection
Re: (Score:1)
Re: (Score:1)
Me too. Hopefully it will improve the pictures of your mom.
Yo Momma so big, any picture of her already upscaled.
Re: (Score:2)
Combined with deepfake tech this could produce some quite convincing images.
Re: (Score:1)
You're going for funny, but this is something people are already doing.
Impressive! (Score:3)
Re: (Score:3)
Here's the technical explanation: "SR3 is a super-resolution diffusion model that takes as input a low-resolution image, and builds a corresponding high resolution image from pure noise," Google writes.
Interesting in that noise is usually something one tries to avoid in communications. Seems nervous systems may use noise to improve.
Re:Impressive! (Score:5, Interesting)
They do. There's a method for improving balance in seniors that involves electronic shoe liners that add random stimulation to their soles.
It works... presumably because the random misfires of their own nerves are mixed in with the random noise introduced by the devices, and the result is their brain filters them out together, leaving higher quality nerve input for balancing. I say 'presumably' because I either have long forgotten that part, or never read it in the first place.
Re:Impressive! (Score:5, Informative)
Adding noise can actually improve measurements in some cases.
For example, say you are measuring a voltage and your analogue-to-digital converter (ADC) has a resolution of only 1 volt. The actual voltage is 7.4V, but your ADC always reads 7V because that's the finest it can resolve.
Now add in 0.3V random noise so that the voltage is randomly between 7.1V and 7.7V. Read the value 10 times and your ADC will randomly measure 7V or 8V, depending on which way it rounded due to the noise. Average your results out and you will get a value closer to 7.4V than you would have got by taking one reading with no noise.
Google seems to have done something similar but presumably not relying on this mathematical relationship. They take the original image, upscale it using nearest neighbour, add a massive amount of noise and then use AI to do noise reduction.
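A quick numeric simulation of the dithering example above, assuming a 1 V ADC step and uniform +/-0.3 V noise: each individual reading is still just 7 or 8, but the average of many noisy readings lands around 7.33, noticeably closer to the true 7.4 V than the noiseless 7.0.

    # Simulate the ADC example from the comment above.
    # Uniform +/-0.3 V dither noise is assumed for illustration.
    import random

    TRUE_VOLTAGE = 7.4
    N_READINGS = 10_000

    def adc_read(v):
        return round(v)   # 1 V resolution: rounds to the nearest whole volt

    # Without noise every reading is identical, so averaging gains nothing.
    no_noise = [adc_read(TRUE_VOLTAGE) for _ in range(N_READINGS)]

    # With dither noise the readings split between 7 and 8, and the average
    # moves toward the true value.
    with_noise = [adc_read(TRUE_VOLTAGE + random.uniform(-0.3, 0.3))
                  for _ in range(N_READINGS)]

    print("average without noise:", sum(no_noise) / N_READINGS)    # 7.0
    print("average with noise:   ", sum(with_noise) / N_READINGS)  # ~7.33

(With +/-0.3 V the dither isn't wide enough to remove the bias completely; +/-0.5 V or more would let the average converge on 7.4 V.)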
Re: (Score:2)
They take the original image, upscale it using nearest neighbour, add a massive amount of noise and then use AI to do noise reduction.
That's not what TFS says. It says that noise is added in incremental steps to the hi-res image, and the network learns to reverse that process conditional on the low-res image.
Re: (Score:2)
It starts with noise because the process is inherently statistical.
The algorithm needs something to work on and noise is a good statistical starting point. Then the algorithm filters the picture out of that noise.
Re: (Score:2)
Interesting in that noise is usually something one tries to avoid in communications.
Only audible noise to your ears. Many modern communication and measurement chains rely on artificially adding noise before quantization to improve the result, a process known as "dithering".
Re: (Score:3)
Yes. This is a very common problem with generative models like these. You'll notice their benchmark is the "confusion rate." Basically they test how well people are able to figure out which is the real high-res picture and which is the simulated one.
So they're evaluating systems that can create good-looking images. They're not evaluating whether the system produces *accurate* images. David Caruso might say "enhance" and get a very nice-looking picture of the wrong person in that taillight reflection.
In the blog they mention that it's good for medical imaging. Yeah.
Re: (Score:2)
In the blog they mention that it's good for medical imaging. Yeah.
Good point. One might hope they don't leave it to the initial noise to decide whether to upscale the image to look like a tumor or not!
Re: (Score:3)
I would like to compare an initial hi-res picture with it downscaled and then upscaled with this algorithm. Didn't see that in TFA.
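For what it's worth, that round-trip comparison is easy to script once you have the model; a sketch follows, with `upscale` as a hypothetical stand-in for the super-resolution step. It downscales a known hi-res photo, upscales it again, and reports PSNR against the original, which is one simple way to quantify how far the invented details drift from ground truth.

    # Round-trip fidelity check. `upscale` is a hypothetical placeholder for the
    # super-resolution model; only the measurement logic is shown.
    import numpy as np
    from PIL import Image

    def psnr(a, b):
        """Peak signal-to-noise ratio between two uint8 images of equal size."""
        mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    def round_trip_test(hires_path, factor, upscale):
        original = Image.open(hires_path).convert("RGB")
        w, h = original.size
        small = original.resize((w // factor, h // factor), Image.BICUBIC)
        restored = upscale(small, factor)                 # hypothetical model call
        return psnr(np.array(original), np.array(restored.resize((w, h))))

    # Baseline for comparison: plain bicubic upscaling instead of a model.
    # round_trip_test("portrait.png", 8, lambda im, f: im.resize(
    #     (im.width * f, im.height * f), Image.BICUBIC))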
Re:Impressive! (Score:5, Informative)
Ah, there are some in the paper: https://arxiv.org/pdf/2104.076... [arxiv.org]
It contains zoomed-in pictures that show that details are made up in a context-appropriate way (for the selected images) but not identical to the reference.
Re:Impressive! (Score:5, Interesting)
I really have no issues with law enforcement using technology, and I had the pleasure of sitting down for some great conversations with a computer forensics analyst from Orlando, Florida. Rather than fitting the assumption that all law enforcement are idiots, this fellow had a pretty good grasp on tech. He was in a senior position, and I felt from our conversations that, first of all, he was a genuine and honest person. He was also somewhat jaded, but I've always felt it's impossible for good people to be constantly surrounded by the worst human conditions without becoming dark and jaded over time. And finally, while he may see himself as being in the same league as a fictional computer forensics expert like Abby from NCIS, he's more on the level of a very knowledgeable tech... so, a bit smarter than Linus from Linus Tech Tips, but not nearly as bright as someone like Mark Russinovich.
The diatribe about police forensics above is important and applicable because he is regularly called into courtrooms to testify as a forensics expert. He makes use of a lot of very cool tools, like those from Cellebrite (he is in fact associated with them for his side hustle), and he believes that information gathered from such tools is indisputable as evidence. So, for example, if he were to extract illicit information from a phone using Cellebrite's tools, he would present that as irrefutable evidence that the current person in possession of the phone was the owner of the information, with total disregard for whether the phone had been "wiped" and given to a new owner. I happen to know from my own experiences with Cellebrite's technology that it does in fact provide generational data without metadata describing which generation it is from.
So, if this same person were to be placed on the stand to present information recreated from a photo using this algorithm, he would, with a clear conscience, present it as irrefutable evidence, as it would be impossible that the tool itself could be at fault for misrepresenting artefacts when super-scaling.
What's very important to understand about this technology is that it's a highly impressive form of deep-fakery. But in this case, I don't use that as an evil term. Where there is no data to be extrapolated, it presents data which seems "logical" in terms of what would make sense to be in the scene, according to a machine learning algorithm.
So this is where your point is very very important.
Once it makes inferences as to what is missing from a scene by extrapolating data, rather than accurately representing the original scene, it is simply making use of (I haven't read in detail and I could be off in my technical guesses) a convolutional network that, through multiple iterations (as is expected from convnets), will continuously attempt to improve the image until it has reached a threshold that keeps the algorithm itself from finding "fake" artefacts. The result being an image that should be impossible for the human eye to discern as altered from its source.
The end result is, I believe, that people will end up serving life sentences in prison because the computer's imagination (which is basically what this is) filled in the blanks in an image, fooled the human eye (and possibly deep-fake detection algorithms), and manufactured evidence.
To take it one step further, it is entirely possible that this tool can be used to manufacture evidence by first intentionally degrading an image, then inserting a blurred artefact that is then "massaged" into something incriminating when super-scaled.
Re: (Score:3)
No need to call BS, but it is useless for image enhancement for criminal investigation like in all those shows.
They make the images look nicer by inventing plausible details that are consistent with the source materials. They don't magically discern the correct name on a blurred name tag; to the extent detailed wrinkles and fine features pop into existence, they are just made up and may not pertain to the source material.
If you've played with context-aware deletion/resynthesize, it's the same deal, i
Re: (Score:2)
No need to call BS, but it is useless for image enhancement for criminal investigation like in all those shows.
They make the images look nicer by inventing plausible details that are consistent with the source materials. They don't magically discern the correct name on a blurred name tag; to the extent detailed wrinkles and fine features pop into existence, they are just made up and may not pertain to the source material.
It's like you have part of someone's search query, so you use autocomplete to get the rest. That doesn't mean that you've found their full search query!
Re: (Score:2)
While the "before" images are low res, they all look like they've been digitally scaled down from high quality originals, some of them professional studio portraits. I'd like to see what happens when they try with real-world, low-res originals rather than these synthetic ones. I suspect just a little noise in the input, as most low-res originals would naturally have, will substantially degrade the output quality.
Re: (Score:3)
If the model was retrained with those sorts of artefacts, then it's probably pretty good at basically deleting them from the source as it invents details. Generally these things both make up missing detail and drop undesired artefacts at the same time. If such artefacts appeared in the materials it was trained on, it would learn to delete some apparent details in the source based on telltale signs (e.g. a suspiciously JPEG-block-sized sharp change in color isn't in the
Re: (Score:2)
Re: (Score:2)
I tend to agree, though one thing it might help with is being less distracting. Sometimes we aren't good at ignoring the blockiness or blurriness, or at picturing every possible font and letter to see if it could match a sample of unreadable text.
But there is the danger of it inventing details that would complicate identification since it won't match up and we don't *know* what details are real and what are invented.
Re: (Score:2)
The enhanced pictures in the article are pretty impressive.
Yes. Adverts rarely mention flaws.
Re: (Score:2)
Impressive is not the right word. We all know that you can enhance photos with these kinds of tools.
What is missing from their article is ground truth. They don't show the actual original hi-res photos. So we don't know whether the reconstructed pictures look anything like the real people. How well did the algorithm manage to guess the right pixels?
The fact that they show no hi-res originals tells me that this algorithm just makes shit up and the result does not really match the original faces very well.
Re: (Score:2)
The paper did include them. The answer is that while the upscales are plausible pictures, they of course can't match details to the original. The details are gone. So at the top of the paper, you have a kid's face, and the upscale... well, it has one pretty clear tell because it kind of lost it around the collar and basically rendered static instead of a guess. But putting that aside, you may think it's a kid with somewhat weird teeth rather than a totally made-up picture. Comparing the two faces side by side the tee
Re: (Score:2)
This is to make low quality material more pleasant, not some declaration that details can be recovered that are irreversibly lost.
LOL, I'm pretty sure law enforcement is already forming lines at Google headquarters carrying signs with the text: "Shut up and take the money!".
That headline is totally illiterate (Score:3)
Does anybody edit this website?
Re: (Score:1)
Does anybody edit this website?
Yeah, the gloss on that phrase had several branching red herrings.
My theory would be that it originally was entered as something like "Google's New AI Photo Upscaling Tech Released", then re-ordered in active voice for more click appeal, but the editor neglected to removed the possessive case.
Re: (Score:1)
"removed", LOL, Muphry's Law inaction.
Tomorrow: Google summersets AI Photo Tech (Score:2)
Just wait for it.
Free progress. (Score:2)
Ah, good, something better than this [benvista.com], and free as well.
ENHANCE! (Score:3)
Law enforcement is drooling already. They can finally enhance all those grainy, blurry, unrecognizable videos but still manage to arrest the wrong person.
Re:ENHANCE! (Score:5, Insightful)
That's a great point, because it would be VERY easy for an image to be "enhanced" in a way that the wrong person gets identified. If I were a defense attorney, I'd be sure from now on (if they don't already) to ask to see the raw, unedited images they use as evidence -- not the modified 'improved' versions they'd want to show a jury or judge.
Re: (Score:2)
I've seen that done for parking tickets, of all things. In the UK parking tickets are a civil matter and some of the parking companies have been known to submit photoshopped images into evidence. Sometimes they get caught out because they accidentally submit the original as well, other times the victim has their own photos that don't match up.
The usual modification is to make any signage more visible, or occasionally even add in signage that wasn't there.
Re: (Score:2)
Re:ENHANCE! (Score:4, Interesting)
In TFA, there's an example of an up-scaled tram. There's a blue
*We* know this information isn't correct, but it could be difficult for a lay-person to defend themselves if presented with an apparent photograph that shows what appears to be absolute, incontrovertible truth!
Obviously, right now, an "expert" would be required to enhance a photo, and thus they should know what may or may not work. But imagine a future where this tech is embedded into cheap consumer cameras. At that point, I worry that people would believe what they see in a photo that they believe they took.
Imagine a phone-camera where you have (effective) infinite pinch/zoom!
Still cool though!
Obligatory Blade Runner scene (Score:4, Interesting)
https://scifiinterfaces.com/20... [scifiinterfaces.com]
Versus ground truth? (Score:3)
Re: (Score:1)
Re: (Score:2)
The blog article mentions medical imaging. I think you're probably going to want your own fish for that one.
Re: (Score:1)
Re: (Score:3)
The actual paper does show quite a few very good examples.
https://arxiv.org/pdf/2104.076... [arxiv.org]
Re: (Score:2)
And quite a few bad ones that have significant differences compared to the reference image.
None of these flaws are mentioned by the fucking article.
Re: (Score:3)
Maybe you just can't get the fish back with today's technology. Who knows what tomorrow will bring? After all, we now know how to unboil an egg [washingtonpost.com]...
Re: (Score:1)
but whether or not they are true to the original is a completely different story.
Not really. The overwhelming majority of images people use are not for preserving an official record. Whether some tiny detail is incorrect in a scene because it's made up is completely irrelevant.
You won't be using an image with this technology in court to identify a suspect (I hope), but for the 99.99% of other people on the planet they are happy with any fish.
Totally agree (Score:1)
Sorry for all the naysayers (Score:1)
I don't believe it (Score:3)
There are details in those 'restored' images that are far, far finer than single pixels from the original.
Look at the forehead on that black woman with the tight braids and tell me honestly that you believe those acne scars were restored accurately by the AI without some form of 'cheating'.
There is no way to extract that save to have trained your AI off the original image.
Re: (Score:3)
Yes. It's generating realistic-looking images. The picture of Geoff Hinton is the one from the ACM award page; it's easy to find the original. The reconstruction has given him some freckles he doesn't have, and done some weird stuff to his eyebrow and earlobe.
It's really unfortunate "super resolution" caught on for this process, which is really interpolation. Actual super resolution imaging uses multiple images with sub-pixel shifted fields of view to actually reconstruct sub-pixel details.
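To make the contrast concrete, here is a minimal sketch of the classical multi-frame approach (shift-and-add), assuming the sub-pixel shifts between frames are already known: the low-res samples from several frames are placed onto a finer grid and averaged, so real sub-pixel detail is recovered rather than invented. Estimating those shifts (registration) is the hard part in practice and is not shown.

    # Minimal shift-and-add sketch: combine low-res frames with known sub-pixel
    # shifts onto a 2x finer grid. Registration is assumed to be done already.
    import numpy as np

    def shift_and_add(frames, shifts, factor=2):
        """frames: list of HxW arrays; shifts: list of (dy, dx) in low-res pixels."""
        h, w = frames[0].shape
        acc = np.zeros((h * factor, w * factor))
        weight = np.zeros_like(acc)
        gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        for frame, (dy, dx) in zip(frames, shifts):
            # place each low-res sample at its sub-pixel position on the fine grid
            yi = np.clip(np.round((gy + dy) * factor).astype(int), 0, h * factor - 1)
            xi = np.clip(np.round((gx + dx) * factor).astype(int), 0, w * factor - 1)
            np.add.at(acc, (yi, xi), frame)
            np.add.at(weight, (yi, xi), 1)
        weight[weight == 0] = 1   # leave never-sampled cells at zero
        return acc / weight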
Re: (Score:2)
Incentive for the Pixel cell phones to get better cameras.
Re:I don't believe it (Score:4, Interesting)
Hence the description "Image generation", not image "enhancement."
Re: (Score:2)
The model is trained to infer what might be there. That doesn't mean it's right. The result, though, is sure to be quite good.
Re: (Score:2)
Good but not accurate.
I just can't wait for this to be used by US law enforcement to 'identify' blurry blobs in the background. It'll be the Darwin awards on a continental scale.
Re: (Score:2)
Good but not accurate.
Correct.
I just can't wait for this to be used by US law enforcement to 'identify' blurry blobs in the background. It'll be the Darwin awards on a continental scale.
I don't understand what you're afraid of.
Something like this could never be submitted as evidence in a court.
However, like a sketch made by a police sketch artist, it could be used to target suspicion. There's nothing new about that.
Re: (Score:2)
I don't understand what you're afraid of.
I'm afraid of this being abused in the future. The algorithm invents information that wasn't there, and it does so in a way that makes the picture indistinguishable from a real photo. This tech is just begging to be abused to 'prove' things that are not real. The US police already has a history of abusing tech to prosecute innocent people because the computer knows all.
I'm also afraid that this kind of thinking/behavior is spreading across the world.
Something like this could never be submitted as evidence in a court.
Far worse things have been submitted as evidence, like plain and simple facial recognition outcomes (computer says it so it must be true kind of reasoning) despite laws and regulations supposedly forbidding this.
Re: (Score:2)
From the actual paper [arxiv.org]:
SR3 should not be used for any real world super-resolution tasks
Take some comfort in that.
Re: (Score:2)
Somehow that doesn't put my mind at ease.
Re: (Score:2)
Far worse things have been submitted as evidence, like plain and simple facial recognition outcomes (computer says it so it must be true kind of reasoning) despite laws and regulations supposedly forbidding this.
Successfully submitted? I'll google that. If so, that's horrible.
Re: (Score:2)
Fake photos (Score:1)
Re: (Score:2)
Google does. They want to offer unlimited (or nearly, effectively unlimited) photo storage for Android users with minimal actual storage used. So they'll compress and shrink it to death but "rebuild" it on demand. Just a theory.
Re: (Score:2)
"Hey honey, did that freckle on your face move position?"
ENHANCE! (Score:2)
Can't wait for idiot cops to start thinking this is a flawless silver bullet and use it with facial recognition.
Re: (Score:2)
It's been a staple of cop shows on TV for years.
I've always wanted to see a realistic depiction of that:
Detective: "Can you enhance that so we can run it through facial rec?"
Tech: "Sure can, boss. Who do you want it to look like?"
Re: (Score:2)
This isn't a violation of Information Theory any more than an artist constructing a sketch of a perp is.
The model isn't constructing an identical copy of the original. It's filling in missing information with what it infers might be there.
Re: (Score:2)
It's filling in missing information with what it infers might be there.
Yes, it is indeed just making shit up.
Re: (Score:2)
Re: (Score:2)
Not at all like a sketch artist.
A sketch artist doesn't make completely believable and photorealistic bullshit that could be mistaken for reality and get you convicted.
Re: (Score:2)
Just because the work can be misconstrued as reality does not mean that it's somehow not just a sketch artist, mixing description with imagination.
So your beef is with how it can be abused. And I get that. But that doesn't change what it is.
Re: (Score:2)
Are you saying that a sketch artist ceases to be a sketch artist once it passes some skill level?
No, I'm saying that this is, quite obviously, not a sketch artist. It is a computer algorithm that invents detail based on noise input.
You can tell because the output of a sketch artist and this algorithm are very different. Well, maybe you're blind or something, but the differences are pretty clear to me.
But that doesn't change what it is.
Indeed. And what it is is a computer algorithm. Very definitely not a sketch artist.
Think of it this way. If the algorithm is a sketch artist, then why is the output so different in quality?
Re: (Score:2)
No, I'm saying that this is, quite obviously, not a sketch artist. It is a computer algorithm that invents detail based on noise input.
A sketch artist invents detail based on the noise input into their own neural network (their experience).
You can tell because the output of a sketch artist and this algorithm are very different. Well, maybe you're blind or something, but the differences are pretty clear to me.
Sure. Just like the musical output of me compared to that of Beethoven are quite different.
Indeed. And what it is is a computer algorithm. Very definitely not a sketch artist.
Inappropriate simplification.
The algorithm really is what runs the net. The magic isn't the algorithm, it's the net itself, and like that, it's much like an actual brain, only the algorithm that "runs" our net (neurobiology) is both much more complicated, and also far less specialized to this task.
Think of it this way. If the algorithm is a sketch artist, then why is the output so different in quality?
Because the neura
Re: (Score:2)
Sigh.
Sketch artist:
https://images.fastcompany.net... [fastcompany.net]
Google neural net:
https://1.bp.blogspot.com/-sev... [blogspot.com]
Is there really nothing here that would prompt a fucking category change in your brain?
You're accusing me of inappropriate simplification, but you're the one insisting on calling an algorithm a sketch artist.
And it is an algorithm. It does not matter whether some parameters were acquired through training or whatnot. The algorithm in this case acts like a sieve that prompts the network to output a number of
Re: (Score:2)
Is there really nothing here that would prompt a fucking category change in your brain?
Of course not.
When we're discussing the core attributes of a thing, whether you're me, or Mozart, at the end of the day, it is in fact sound vibrations generated by the mechanical manipulation of an instrument. Mozart and I are both playing music. To any sane observer- they're nowhere near the same thing. But that's because they're not thinking about it at a technical level.
You're accusing me of inappropriate simplification, but you're the one insisting on calling an algorithm a sketch artist.
It's not an algorithm. It's a neural network. If it could be turned into an algorithm, they would. Then it wouldn't require hyper-fast
Re: (Score:2)
When we're discussing the core attributes of a thing, whether you're me, or Mozart, at the end of the day, it is in fact sound vibrations generated by the mechanical manipulation of an instrument. Mozart and I are both playing music. To any sane observer- they're nowhere near the same thing. But that's because they're not thinking about it at a technical level.
This is a lot of nonsense.
You're now elevating the medium (waves in air) to the most important thing about music. We value Mozart differently from your output because of the informational differences. We don't value the medium when we talk about music. Moreover, there are a lot of types of air waves (frankly, most of them) that we do not classify as music at all. They are simply sound.
We distinguish music from sound by its informational properties. The way you put it, if sound waves are indeed the core property of music (what makes music music), then I could give you a file of random numbers and that would also be music. Of course this is not music, so there are other qualifiers that we need to apply to define music.
Re: (Score:2)
This is a lot of nonsense. You're now elevating the medium (waves in air) to the most important thing about music. We value Mozart differently from your output because of the informational differences. We don't value the medium when we talk about music. Moreover, there are a lot of types of air waves (frankly, most of them) that we do not classify as music at all. They are simply sound. We distinguish music from sound by its informational properties. The way you put it, if sound waves are indeed the core property of music (what makes music music), then I could give you a file of random numbers and that would also be music. Of course this is not music, so there are other qualifiers that we need to apply to define music.
And yet, the statement:
When we're discussing the core attributes of a thing, whether you're me, or Mozart, at the end of the day, it is in fact sound vibrations generated by the mechanical manipulation of an instrument.
Remains unimpeachably true, making your prior paragraph pretty fucking useless.
To translate back to our discussion, you're saying that because the processes in both a sketch artist and the NN are (very, very distantly) based on similar notions, they are exactly the same process. Never mind that the structure of the connections between the building blocks, which is the basis of the function a neural net can perform, is severely different in both cases. Never mind that the actual functionality of these building blocks themselves (brain neurons vs. artificial neurons) is severely different. To your mind they are the same because there exists an abstraction that makes them seem a little bit the same. In reality a brain is a completely different construction than the NNs we're talking about. In fact, I would say that there are far more functional differences than similarities, in both the building blocks and the way they are connected. Artificial neural networks are not brains. The only similarity is that both use interconnected elements that have information flow through them. Even the weighting process is different. That abstract idea of what a neuron is encompasses countless processes with countless outcomes. So you're willingly discarding the very obvious and important differences to make a weak point about sameness.
And yet, at the end of the day, both the brain and an artificial NN are simply convolutional networks with large differences in complexity and specialization, again making your entire last paragraph pretty fucking pointless.
If we went by your example then all computer programs are exactly the same because they are all executed in binary. It doesn't matter whether you use Word or GIMP or AutoCAD or Cubase, they all give exactly the same result because they all run on bits, right? Right? Of course this is nonsense, and for the same reason your argument about brains vs. neural nets is nonsense. You're basically setting and resetting the goalposts so that you only see what fits your narrative.
Of course not.
By my example, it would be "All programs that factor Y are the same, regardless of how they do it."
But the mechanism is different. A sketch artist listens to a verbal description of a person and makes a low-detail representation of those features. The person verbally mediating the information directs the sketch artist to make the result look like what they remember from the actual event they witnessed. They both use their brains, which have, save for an abstract notion of how neurons work, absolutely nothing in common with the neural nets from the article. The NN from the article, on the other hand, blindly makes shit up based on an input image. It filters out the strongest statistical match for missing pixels. And the result is a high-quality image that is 90% not factual but very much appears to be. Please tell me, how is this the same mechanism?
You're trying to draw a distinction that simply isn't there.
The directi
Re: (Score:2)
And yet, the statement:
When we're discussing the core attributes of a thing, whether you're me, or Mozart, at the end of the day, it is in fact sound vibrations generated by the mechanical manipulation of an instrument.
Remains unimpeachably true, making your prior paragraph pretty fucking useless.
But that's just not true, is it?
It is the nature of those waves that is the important bit. The informational content. Not the fact that it's air waves.
The fact that you can listen to audio over the internet makes this very clear. You can digitize the air movement and send it over the internet, in a letter, on a piece of tape, whatever. The air waves are only a medium, and it's the information that distinguishes you from Mozart.
And yet, at the end of the day, both the brain and an artificial NN are simply convolutional networks with large differences in complexity and specialization, again making your entire last paragraph pretty fucking pointless.
Bullshit. Our brains are not 'simply convolutional neural networks'. There is a
Re: (Score:2)
Ooh, and here are some people who have actually quantified some of the differences I was talking about:
https://science.slashdot.org/s... [slashdot.org]
Re: (Score:2)
Cortical neurons are well approximated by a deep neural network (DNN) with 5–8 layers
Compared with what I said:
The magic isn't the algorithm, it's the net itself, and like that, it's much like an actual brain, only the algorithm that "runs" our net (neurobiology) is both much more complicated, and also far less specialized to this task.
Re: (Score:2)
The magic isn't the algorithm, it's the net itself, and like that, it's much like an actual brain
But that is full of mistakes.
The algorithm is decided by both the net and the functions of the neurons. A different function set in a neuron dictates a different network to solve the same problem (which doesn't take away from the fact that the problem solved by a sketch artist is different from the one solved by this Google ANN).
And in human brains all kind of chemical communication takes place as well. Brain cells can for instance communicate directly with their neighbors, so not just through the normal neurotransmitter pathways th
Re: (Score:2)
The algorithm is decided by both the net and the functions of the neurons.
If we are to use a sufficiently abstract notion of "algorithm", then: a single cortical neuron is functionally equivalent to a 5-8 layer deep convolutional network.
A different function set in a neuron dictates a different network to solve the same problem (which doesn't take away from the fact that the problem solved by a sketch artist is different from the one solved by this Google ANN).
I don't know why you keep getting hung up on the functionality of individual neurons. They're not relevant, only the behavior of the network is.
Regardless of whether you're using a deep convolutional network to achieve a thing, or some neurons in a petri dish, if they do the same thing, they do the same thing, even if it takes more work for the ar
Re: (Score:2)
Yes, but it's making plausible stuff up without a human having to do the 'artist's rendition of what this might have looked like', which is the point.
When my family wanted to 'restore' an old blurry picture with damage, they took it to a restorer (really an artist) to have something to look at that was believable. The human uses their experience to know what that might have looked like, make up something for that big splotch, etc. It was a tedious and thus expensive endeavor for a single photo. Now much of t
Re: (Score:2)
When my family wanted to 'restore' an old blurry picture with damage, they took it to a restorer
How about the police or a person seeking a conviction wants to 'restore' a picture and it turns out the invented details happen to match an innocent person?
You want this tech to help your shitty old photos, but you forget all the other things that this can and inevitably will be used for. Hey, it's only society, right?
Re: (Score:2)
You can already have police action driven by an inaccurate sketch.
If you mean in court, then it's important to highlight precisely what this is and have it well known and establish the legal standard for this not being allowed as evidence, like any doctoring of such a photo should be.
Re: (Score:2)
Yes, but you can easily reason that the sketch is inaccurate.
But what about the ultra-realistic pictures this algorithm produces?
If you mean in court, then it's important to highlight precisely what this is and have it well known and establish the legal standard for this not being allowed as evidence, like any doctoring of such a photo should be.
I'm sure it is.
But the point is that it is not self-evidently an interpretation, as is the case with the output of a sketch artist.
Try explaining to a jury that the high-quality picture they see is 90% made up.
Darktable (Score:2)
Diffusion methods? Already in the Git master of Darktable 3.7 [github.io]. Doesn't do upscaling (as yet), but can be used for a variety of other things.
Generally available ETA! (Score:1)
Witness Blurring... (Score:2)
All of those times they blurred the witness face on TV... Informers, Insiders, Criminals, Police, Government officials.
And now, they can all be identified.
This can't be a good thing, but at least we'll know who really knew about the aliens at Area 51.
Re: (Score:2)
Yes... and we'll find they all look suspiciously like the stock photo models from the training data...
Controlled hallucination (Score:2)
While this might improve the visual quality of images it has not seen before, it is unlikely that the added detail will be what was actually there. For some applications, that doesn't matter, for others, it does. Police are not known for their discerning technical acumen.