
Google Research AI Image Noise Reduction Is Out of This World (techcrunch.com) 48

Google Research has released an open source project it calls MultiNeRF that does an "out of this world" job at removing digital noise from pictures, according to TechCrunch. "The algorithms run on raw image data and add AI magic to figure out what footage 'should have' looked like without the distinct video noise generated by imaging sensors."

"I can write a million words about how awesome this is," writes TechCrunch's Haje Jan Kamps. You can see how Nerf performs in the dark in this YouTube video.
  • by Anonymous Coward on Tuesday August 23, 2022 @07:26PM (#62815919)

    Can it see through clothes? Asking for a friend who doesn't have a Slashdot account...

  • This software certainly achieves impressive results in guessing what an image could look like if it weren't as noisy, but I would not call this "noise reduction" in the traditional sense. This software is basically matching noisy patterns against a database of images it was trained with and then blending its findings into a plausible image. Somewhat like Nvidia's DLSS guesses details when up-scaling.

    Using this stuff for entertainment is fine, but for documentaries / evidence / news there is a risk that people…
    • all those aliens and UFOs... will look like characters from fictional media

      I, for one, welcome our new harem anime overlords!

    • by timeOday ( 582209 ) on Tuesday August 23, 2022 @07:50PM (#62815967)
      No, that's not it at all. What it's doing is combining multiple exposures, even though the exposures are from slightly different perspectives. In fact, it leverages those differing perspectives, because it can glean enough information to set the focal point and dynamic range wherever you want, within a wide range.

      But there is no neural network generalizing across a database of images here. It's "just" a different way to process a sequence of raw images of the same scene.
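
      For intuition, the statistics behind multi-exposure denoising are simple: averaging N aligned exposures of the same scene point cuts the noise standard deviation by roughly sqrt(N). A toy Python sketch (synthetic numbers, nothing to do with the actual MultiNeRF code):

      import numpy as np

      rng = np.random.default_rng(0)
      true_value = 0.5   # radiance of one scene point
      sigma = 0.2        # per-exposure sensor noise (std dev)

      for n in (1, 4, 16, 64):
          # n noisy readings of the same (perfectly aligned) scene point
          exposures = true_value + rng.normal(0.0, sigma, size=n)
          print(f"{n:3d} exposures: estimate {exposures.mean():.3f}, "
                f"expected residual noise ~{sigma / np.sqrt(n):.3f}")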

      • by ffkom ( 3519199 )
        So "TechCrunch" outright lied when writing "The algorithms run on raw image data and add AI magic"? Looks like one has to read the source code to find out how much "AI magic" there actually is...
        • Well, "AI", let alone "magic", aren't exactly well-defined terms, but I'd guess whoever at TechCrunch wrote this doesn't really know the difference anyway.

          To me it still seems kind of magical. However, I did not notice the video stipulate that the subject must be stationary, which, while I'm not sure, seems like it must be the case. If so, it would generally not be useful for the one thing people love to photograph the most: other people.

          • by lsllll ( 830002 )
            The issue I had with the 5-minute video was that all the crappy, low-light images were in one frame, but when the video showed the camera movements, it showed things that weren't in the original image. Unless the original crappy image they were showing us wasn't the full frame, that tells me either they were lying about the whole thing, or the AI truly "created" objects outside of the frame. The other issue I had was that, if the AI truly was aware of all the objects in the image (which it wo…
          • by nagora ( 177841 )

            Well, "AI", let alone "magic", aren't exactly well-defined terms, but I'd guess whoever at TechCrunch wrote this doesn't really know the difference anyway.

            To me it still seems kind of magical. However, I did not notice the video stipulate that the subject must be stationary, which, while I'm not sure, seems like it must be the case. If so, it would generally not be useful for the one thing people love to photograph the most: themselves.

            Fixed that for you.

          • Seems like a pretty good and widely accepted definition to me :-) https://www.iso.org/obp/ui/#is... [iso.org]
        • timeOday's replies are not exactly the full story: NeRF does use machine learning, and in fact it trains a small neural network on the images you feed it. With 128 of Google's fancy proprietary TPUv2 cores, the JAXNeRF implementation asks for 2.5 hours of training [github.com] and is then run with a camera position as the neural network's input to generate an actual output image in about 350 ms for an 800x800 result. So it has many of the most obnoxious characteristics of leading machine learning algorithms bu…
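
          For a rough idea of what "run with a camera position as the input" means, here is a minimal Python sketch of NeRF-style volume rendering. The random weights below merely stand in for the network the real (JAX) implementation spends those 2.5 hours fitting, so this renders noise, but the query path is the right shape:

          import numpy as np

          rng = np.random.default_rng(0)

          def posenc(x, n_freqs=4):
              # NeRF-style positional encoding: sin/cos at octave frequencies
              feats = [x]
              for i in range(n_freqs):
                  feats += [np.sin((2.0 ** i) * x), np.cos((2.0 ** i) * x)]
              return np.concatenate(feats, axis=-1)

          # Stand-in for the trained MLP (random weights, purely illustrative)
          W1 = rng.normal(0, 0.5, (27, 64))   # 3 + 3*2*4 = 27 encoded input dims
          W2 = rng.normal(0, 0.5, (64, 4))    # outputs: (r, g, b, density)

          def field(points):
              h = np.maximum(posenc(points) @ W1, 0.0)    # ReLU layer
              out = h @ W2
              rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))   # colors in [0, 1]
              density = np.maximum(out[..., 3], 0.0)      # non-negative density
              return rgb, density

          def render_ray(origin, direction, n_samples=64, near=0.0, far=4.0):
              # Sample points along the ray, alpha-composite front to back
              t = np.linspace(near, far, n_samples)
              pts = origin + t[:, None] * direction
              rgb, density = field(pts)
              delta = (far - near) / n_samples
              alpha = 1.0 - np.exp(-density * delta)
              trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
              weights = alpha * trans
              return (weights[:, None] * rgb).sum(axis=0)

          # "Camera position as the input": shoot one ray from a chosen pose
          print(render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0])))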
        • Yeah, it's a combination of temporal noise reduction and displacement-based effects. Because there's movement it can determine the distance to any part of the scene by the parallax effect. Not really "AI" as far as I can see... just some application of known physics and algorithms in a novel way.
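
          The parallax point is just the classic stereo relation depth = focal_length * baseline / disparity. A toy Python sketch with made-up numbers (assuming a hand-held camera moved ~5 cm between frames):

          import numpy as np

          def depth_from_parallax(disparity_px, baseline_m, focal_px):
              # Classic stereo relation: depth = f * B / d, where d is how far
              # (in pixels) a feature shifts between the two viewpoints
              return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

          for d in (50.0, 10.0, 2.0):
              z = depth_from_parallax(d, baseline_m=0.05, focal_px=1000.0)
              print(f"{d:5.1f} px of parallax -> ~{z:.1f} m away")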

        • No. What part of what he described don't you understand? Both the sentence you put in quotes and the post you replied to say the same thing.
      • The confusion is understandable, since the linked video mentions the blending of multiple images, but doesn't actually show it. This presentation, as usual, just shows the before and after and expects people to fill in the blanks with their imagination.

        The tech is impressive, but these presentations always leave a lot to be desired.

      • I guess this is similar to NASA's processing of deep-space Webb Telescope images, where it takes thousands of short exposures.
        The more images, the more data you have, and the more noise you can detect and remove.

        (NOTE Slashdot: why can't a successful login auto-redirect to the page I was on when I clicked login?)

      • I think the name you're looking for is "superresolution"?
    • by ffkom ( 3519199 )
      I should also add that in the linked video they mention that they combine multiple images of the scene as input, which is essentially what many digital cameras do when instructed to shoot a long-exposure image. But where they present the "before and after" images in the video, they show a single noisy input picture instead of the simple combination of multiple exposures that digital cameras could readily supply. This somewhat exaggerates the difference their specific algorithm makes.
    • No it isn't, lol wtf. It's a 3D denoiser, pretty clever actually. Take multiple exposures to reconstruct 3D patches using your chosen 3D data structure, then toss out outliers of the patches as noise using a neural net; boom, denoised NeRF. The NeRF part is the "3D scene reconstruction", btw, not that you watched the video or did anything besides trying to prove how clever you are based on a headline.

      That's also not at all how DLSS works, btw. It's a neural net trained on noiseless images meant to r…
      • by ffkom ( 3519199 )
        I gladly admit that I made assumptions based on the press coverage of this software that are not backed by its published source code.

        Regarding DLSS, Nvidia says the opposite of what you write: https://forums.developer.nvidi... [nvidia.com] (and for DLSS we have no source code to verify their claims).
    • Re: (Score:3, Funny)

      by Tablizer ( 95088 )

      > we will now get perfectly sharp and detailed images of all those aliens and UFOs we so far only had a few noisy pixels from

      Result. [web2carz.com]

  • Nvidia's DLSS? If so, is it better? Faster?
  • Can it turn my insanely huge catalog of 9-shot HDR photos into snazzy little video snippets with selective, changing focus and moving the camera around? If so, then oh my god, I can't wait for this to be part of Visions of Chaos so I can get my local beast production box heating up the room with this magic.

    • by ceoyoyo ( 59147 )

      Sure, if you're really crappy at shooting multiple exposures and the camera moved a lot between frames.

  • ENHANCE... (Score:3, Funny)

    by Dj Stingray ( 178766 ) on Tuesday August 23, 2022 @08:04PM (#62815999)

    ENHANCE...ENHANCE!

    • by eonwing ( 934274 )
      Exactly what I was thinking. I seriously cannot believe that there is an ACTUAL ENHANCE function now. After all those years of making fun of sci-fi movies.
    • by AmiMoJo ( 196126 )

      Honestly, we aren't far off that now. If you look at the video, they take a few photos in a dark area, and it spits out not just a great low-noise image, but a 3D model that lets you move around it a bit to get a better perspective.

      The range of movement is limited by the source photos: the more you take from different angles, the more freedom you have. Photogrammetry of that kind has been around for a while, and you can get very good results with a phone camera, but Google has elevated it to working in the dark.

  • by Dan East ( 318230 ) on Tuesday August 23, 2022 @08:10PM (#62816009) Journal

    It's important to note that this technology requires many different images (or video) of a scene from different views as input, and uses those to construct the output. So it is not accurate to say this technology is de-noising an image, because it doesn't work on a single image. It's also not clear how well this works on multiple images taken from exactly the same viewpoint, i.e. from a security camera that doesn't move.

    I went to the trouble of reading some of the paper, and indeed their examples took between 25 and 200 input images. I would imagine the highest quality results in the dark come from the higher end of that range. Even 25 images is a lot of pictures of one single scene.

    • So, many cellphone camera tricks these days already use numerous exposures, e.g. by having the camera shoot video instead of stills.

      "Night Sight" for example.

      As far as gathering the photographs goes, moving a cellphone camera around is all that's needed. As for the compute resources, it might be a while before those go into the phone itself.

      It's also not clear how well this works on multiple images taken from exactly the same viewpoint, i.e. from a security camera that doesn't move.

      Presumably you lose the ability to perspective-shift, but combining many fixed identical images for noise reduction has been a standard technique for decades. The simplest and most braindead of them all is a plain per-pixel average or max, and it produces amazing results provided subjects don't move (see the sketch below).

      The magic part here is handling the movement.
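
      A toy Python sketch of that baseline, assuming a fixed camera and a static scene (any subject movement shows up as ghosting):

      import numpy as np

      def stack_frames(frames, mode="mean"):
          # frames: (n, height, width) array from a fixed, non-moving camera
          frames = np.asarray(frames, dtype=float)
          if mode == "mean":
              return frames.mean(axis=0)   # averages out zero-mean sensor noise
          if mode == "max":
              return frames.max(axis=0)    # brightest-pixel stack (star trails etc.)
          raise ValueError(f"unknown mode: {mode}")

      # Toy data: 32 noisy frames of the same static scene
      rng = np.random.default_rng(1)
      scene = rng.uniform(0, 1, (240, 320))
      frames = scene + rng.normal(0, 0.15, (32, 240, 320))
      print("single-frame noise:", np.std(frames[0] - scene).round(3))
      print("mean-stack noise:  ", np.std(stack_frames(frames) - scene).round(3))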

    • Indeed, I watched the YouTube video, and it's not always clear which other denoising engine they compare it to. If it's versus a single-image denoiser, that's not a fair game, since the latter has to guess about adjacent pixels. Just by having two pictures it can already render a 3D space of the environment: information it doesn't have in one picture, it can get from another. With a single image that's not possible. Very impressive nevertheless, but it's not the voodoo magic they describe it as…
    • Many Android cameras that have the zero-shutter-lag feature never stop recording while the camera app is open. Many of the photos the algorithm needs could be taken from this buffer.
  • Call me when it can track 45 right and enhance.

  • by Gravis Zero ( 934156 ) on Tuesday August 23, 2022 @09:18PM (#62816153)

    NeRF is a neat algorithm, but it requires a crazy amount of computing time. Plenoxels (plenoptic volume elements) seems to do just as good a job as NeRF (if not better) without a computationally heavy neural network. Nvidia wants to use NeRF to push their video cards. I assume Google is pushing it because they are all-in on using neural networks for everything.

    Info about plenoxels: https://alexyu.net/plenoxels/ [alexyu.net]
    implementation: https://github.com/sxyu/svox2 [github.com]
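
    A toy Python sketch of the core Plenoxels idea: the scene is a plain voxel grid queried by trilinear interpolation. The real thing stores density plus spherical-harmonic color coefficients and optimizes them directly by gradient descent; the point is that a lookup table sits where NeRF has a neural network. (Random grid values here, purely illustrative.)

    import numpy as np

    rng = np.random.default_rng(0)
    grid = rng.uniform(0, 1, size=(32, 32, 32, 4))   # per voxel: (density, r, g, b)

    def query(grid, point):
        # Trilinear interpolation over the 8 voxels around a continuous 3D point
        p = np.clip(point, 0, np.array(grid.shape[:3]) - 1.001)
        i0 = np.floor(p).astype(int)
        f = p - i0                       # fractional position inside the cell
        out = np.zeros(grid.shape[-1])
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((f[0] if dx else 1 - f[0]) *
                         (f[1] if dy else 1 - f[1]) *
                         (f[2] if dz else 1 - f[2]))
                    out += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
        return out                       # interpolated (density, r, g, b)

    print(query(grid, np.array([10.3, 4.7, 20.9])))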

    • by egr ( 932620 )
      InstantNGP https://nvlabs.github.io/insta... [github.io] crushed Plenoxels in benchmarks; it is also in active development, while there has not been a single commit on Plenoxels in 8 months.
      • This is excellent. However, each iteration (NeRF -> Plenoxels -> InstantNGP) is roughly two orders of magnitude faster than the previous one, so claiming it crushed it is relative. I wouldn't be surprised if the lessons learned from InstantNGP were reintegrated into Plenoxels for an even faster outcome. However, that may take longer, as Nvidia has paid staff with the goal of promoting Nvidia hardware, which is why they go for the neural networks.

  • by mbkennel ( 97636 ) on Tuesday August 23, 2022 @11:27PM (#62816421)

    This technology may prove valuable to analysts looking at satellite images for intelligence services. By its very nature there will be a strip of related images of similar but not identical scenes, with significant noise.

    • It'll certainly be great for imaging women sunbathing on their roof at midnight.
      • by Tablizer ( 95088 )

        Unless they look like your mom.

        • It'll certainly be great for imaging women sunbathing on their roof at midnight.

          Unless they look like your mom.

          Dunno, maybe the OP's name is "Stacey".

    • by Tablizer ( 95088 )

      It may introduce artifacts that could lead to false conclusions. It's more useful for aesthetics than for decision making.

  • by 4im ( 181450 ) on Wednesday August 24, 2022 @02:13AM (#62816649)

    I know some people use DeNoise AI [topazlabs.com] to reduce noise in their astro images. Would this compare to that? I'd love to try an open source variant of this anyway.

    Going rapidly over the comments, it rather sounds like it's doing a sort of "dithering": the technique astrophotographers use of taking a series of pictures with slight offsets, so noise / hot pixels aren't always in the same place and can be "averaged out" during stacking.
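
    A toy Python sketch of that dither-and-stack idea (made-up data, not any particular stacking tool): align the offset frames, then average with sigma clipping, so hot pixels, which land on different sky positions after alignment, get rejected instead of polluting the average.

    import numpy as np

    def sigma_clipped_stack(frames, offsets, k=2.5):
        # frames:  list of (H, W) exposures, each shot at a slight pointing offset
        # offsets: list of (dy, dx) integer pixel shifts for each frame
        aligned = np.stack([np.roll(f, (-dy, -dx), axis=(0, 1))
                            for f, (dy, dx) in zip(frames, offsets)])
        med = np.median(aligned, axis=0)
        std = aligned.std(axis=0) + 1e-9
        # Keep only samples within k sigma of the per-pixel median
        mask = np.abs(aligned - med) <= k * std
        return (aligned * mask).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)

    # Toy run: 8 dithered frames with noise plus a stuck hot pixel at (5, 5)
    rng = np.random.default_rng(2)
    sky = rng.uniform(0, 1, (64, 64))
    frames, offsets = [], []
    for _ in range(8):
        dy, dx = rng.integers(-3, 4, size=2)
        f = np.roll(sky, (dy, dx), axis=(0, 1)) + rng.normal(0, 0.1, sky.shape)
        f[5, 5] = 10.0                   # hot pixel, fixed on the sensor
        frames.append(f)
        offsets.append((dy, dx))

    print("stack error:", np.abs(sigma_clipped_stack(frames, offsets) - sky).mean().round(3))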

  • Here's the "Two Minute Papers" description of this awesome technique:
    https://www.youtube.com/watch?... [youtube.com]

"The medium is the massage." -- Crazy Nigel

Working...