Researchers Detail AI that De-hazes and Colorizes Underwater Photos (venturebeat.com)
Kyle Wiggers, writing for VentureBeat: Ever notice that underwater images tend to be blurry and somewhat distorted? That's because phenomena like light attenuation and back-scattering adversely affect visibility. To remedy this, researchers at Harbin Engineering University in China devised a machine learning algorithm that generates realistic water images, along with a second algorithm that trains on those images to both restore natural color and reduce haze. They say that their approach qualitatively and quantitatively matches the state of the art, and that it's able to process upwards of 125 frames per second running on a single graphics card. The team notes that most underwater image enhancement algorithms (such as those that adjust white balance) aren't based on physical imaging models, making them poorly suited to the task. By contrast, this approach taps a generative adversarial network (GAN) -- an AI model consisting of a generator that attempts to fool a discriminator into classifying synthetic samples as real-world samples -- to produce a set of images of specific survey sites that are fed into a second algorithm, called U-Net.
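For readers wondering roughly what such a pipeline looks like in practice, here is a minimal sketch of the idea (not the authors' code): a toy U-Net-style enhancer trained on (synthetic hazy, clean) image pairs, which in the paper would come from the GAN stage. PyTorch is assumed and every name here is illustrative.

```python
# Minimal sketch, not the paper's implementation: a tiny encoder-decoder with one
# skip connection standing in for the U-Net, trained on pairs where the "hazy"
# input is a synthetic underwater image produced by a separate GAN stage.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.out = nn.Sequential(nn.Conv2d(16 + 3, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        h = self.up(self.down(x))
        # Skip connection: concatenate the raw input so fine detail survives.
        return self.out(torch.cat([h, x], dim=1))

def train_step(model, optimiser, hazy, clean):
    """One supervised step on a (synthetic hazy, clean) pair."""
    optimiser.zero_grad()
    loss = nn.functional.l1_loss(model(hazy), clean)
    loss.backward()
    optimiser.step()
    return loss.item()
```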
Underwater photos (Score:1)
Re: (Score:2)
Enhance! Enhance!!
Not probably. Definitely. (Score:2)
Fun fact: Every vegan eats animals with every meal. Tardigrades, rotifers, nematodes, and maybe more.
I just *need* to plug Journey To The Microcosmos [youtube.com] here, because it is the best, if you want to know more about micro-organisms.
Re:Underwater photos (Score:5, Informative)
I'm a fairly experienced underwater photographer with some published images, so I can add a bit to this discussion. The premise of what you quote is a little bit weird:
"Ever notice that underwater images tend to be be blurry and somewhat distorted?"
No, not properly taken ones I don't. In this respect there's no difference between underwater photography and land photography; it'll only be blurry and distorted if you do a shit job, i.e. fail to shoot at a fast enough shutter speed to avoid motion blur, or fail to hit your subject with sufficient lighting to freeze it in place, or simply use an inappropriate aperture setting, or simply fail to achieve correct focus.
What you do find with most amateur underwater photographs is that they're lacking realistic colour. Before I go into that, though, it's important to talk about what "realistic colour" means. The deeper you dive, the more colour you lose underwater, but human eyes are pretty good at adapting to that: in the first 15 metres or so things look astoundingly vibrant and colourful to the eye, yet take a photo and there'll be a significant lack of colour. That's partly because cameras can't capture light as well as our eyes, and partly because cameras aren't backed by processors as powerful as our brain, which automatically adjusts for the loss of certain colours in the spectrum. So realistic colour can mean one of a few things in underwater photography:
1) How a subject looks to a diver's own eyes underwater (fairly colourful)
2) How a subject would appear if it were not in the water (true colour)
3) How a subject appears to a camera without any artificial lighting (significant loss of natural colour)
Now here's the thing: to achieve 1) or 2) with a camera you have to do one of a few things. To achieve 1) you have to edit the photo, whether that's letting the camera do it with in-camera white balance, or performing an identical procedure out of the water on raw files using white balance in a tool like Lightroom. To achieve 2) with a camera you simply use artificial light underwater; this has its limitations in that it too will only illuminate so far, so you have to get close to your subject.
Given the requirement to do something to achieve 1) or 2), I've heard people say "Well that's cheating, it's not really what it looks like if you have to edit it". I disagree: by achieving 1) or 2) you are making it look like what it looks like to the human eye, either in shallower water or with a torch being shone upon it in the water. If you don't do this you're simply ending up with photos that don't match any reality other than one generated by the technical limitations of modern cameras, all because of some obscure and meaningless notion of photographic purity.
This is the number one thing people need to learn about when trying to make their underwater photos look like they did to their own eyes in the water, or under torchlight in the water.
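To make that concrete, the in-camera or Lightroom white balance edit described above boils down to applying per-channel gains. Below is a rough grey-world sketch of that kind of correction; real raw converters are far more sophisticated, so treat the function as illustrative only.

```python
# Rough sketch of the per-channel gain a "white balance" edit applies, using a
# grey-world assumption. Not what any camera or Lightroom does internally,
# just enough to show the idea of pulling a blue/green cast back to neutral.
import numpy as np

def grey_world_balance(img):
    """img: float array in [0, 1], shape (H, W, 3).
    Scale each channel so its mean matches the overall mean."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0.0, 1.0)
```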
Subsequent issues are indeed to do with simply how much particulate shit there is in the water, whether it's algae or microorganisms; this sort of stuff can truly ruin photos if you're using underwater lighting, because if you position your lights wrong the light will reflect off these particles and show up as hundreds of speckles on your photo. You can mitigate this by angling your strobes appropriately so that the backscatter reflects outwards away from the lens, but the best way to deal with it is simply to get closer to your subject to minimise the amount of shit in the water between you and it; this becomes easier the wider the angle of your lens.
So here's the problem with this AI, it doesn't really seem to be doing much other than manually white balancing. It's not altering the photo to adjust colours sufficiently to mimic lighting underwater, and it's not removing backscatter from particulate in the water. I've no doubt it's possible to train an AI to remove backscatter and so forth, but right now this
Re: (Score:2)
Water scatters light more than air, and also generally supports more and larger particles, which also scatter light. Both effects cause the mean path length of a ray of light in water to be much shorter than in air. The scattering and absorption are also frequency dependent, so white balance changes with propagation distance.
The effect they're after is really de-hazing, apparently so they can do object recognition more reliably.
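For reference, de-hazing work usually starts from an image-formation model of the form I = J*t + A*(1 - t), where the transmission t = exp(-beta*d) falls off with path length d and the attenuation beta differs per colour channel (red dies off first). A small sketch of that model, with made-up beta values; the coefficients and ambient colour here are illustrative, not measured.

```python
# Sketch of the usual underwater image-formation model behind de-hazing work.
# BETA values are hypothetical per-metre attenuation coefficients for R, G, B.
import numpy as np

BETA = np.array([0.60, 0.10, 0.05])

def simulate_underwater(clean, depth_m, ambient=np.array([0.1, 0.5, 0.6])):
    """clean: (H, W, 3) in [0, 1]; depth_m: (H, W) path length in metres."""
    t = np.exp(-BETA * depth_m[..., None])    # transmission; red attenuates fastest
    return clean * t + ambient * (1.0 - t)    # direct signal plus backscattered veil

def invert_known_model(observed, depth_m, ambient=np.array([0.1, 0.5, 0.6])):
    """De-haze when beta, depth and ambient light are known (in practice they aren't)."""
    t = np.exp(-BETA * depth_m[..., None])
    return np.clip((observed - ambient * (1.0 - t)) / np.maximum(t, 1e-3), 0.0, 1.0)
```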
Re: (Score:2)
Makes sense; one of the biggest problems shooting wide angle with strobes is that whilst you can light up a near subject to give perfect colours, anything past a few metres is still discoloured. This is the downside of shooting with artificial light, of course: if you try to colour balance the distant parts by increasing the amount of light in the red spectrum, everything close up will be too red.
I don't see this sort of AI ever really improving macro photography because you're typically so close
Re: (Score:2)
The colour alteration will depend on the total path length to the object (in water) from the light source, as well as the particular scattering properties of the stuff in the water, so technically the AI could improve on simple white balance by using depth and colour information it can estimate from the image. That might make a noticeable difference for artificial lighting, but as you've noticed, simple algorithms are usually enough for most naturally lit underwater photos.
The haze is a bit different. It de
Jacques-Yves Cousteau would have liked it (Score:2)
for sure.
Does that mean... (Score:1)
Simple LMS colorspace white-balancing just as good (Score:3)
These "Oh look we did X with some artificial neural network!" news become increasingly annoying in that they too often fail to mention that equivalent, less randomized results have been achieved with simple, well understood algorithms before.
Re: (Score:2)
"Human eye vision" has absolutely nothing to do with image degradation that occurs underwater, so an "accurate model" of it is irrelevant to the problem. LMS can be freely converted to and from other color spaces so you approach amounts to diddling bits without any fundamental understanding of the cause of the problem. You may do "a lot" of "underwater-videos" but you're a knob-turner, nothing more.
You cannot correct for degradation after the data is lost. You can synthesize missing data after the fact,
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Yeah, the problem is much less severe at low depth, where you still have most of the color spectrum reaching. At 30m (or even less), reds are just gone. They even make you look at a color chart during training and the first two swatches (red and orange) look gray. No amount of fiddling with the pixels will bring these colors back, because they could've been anything originally. Even gray. The only solution is to bring your own light with you.
Interestingly though I read about another solution to this problem j
That last sentence (Score:2)
Re: That last sentence (Score:1)
Congratulations scientists... (Score:1)
...you just invented white balance!!!
You don't know if it's based on physical models! (Score:2)
It's a neural network. Aka a generic function you throw in if you have no clue how to write an algorithm for that. You merely train it. What it actually does, and how it does it, is by definition unknown, often not what you thought it does, and sometimes surprising. Otherwise you wouldn't need a neural net, and could code a faster precise algorithm yourself.
Re: (Score:2)
They trained the neural net using simulated underwater images created using a model of light scattering in water.
You could, however, use a neural net to estimate the parameters of such a model to apply the inverse effect. Neural networks are quite good at estimating parameters of physical models from complicated or incomplete data (such as a photograph).
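As a sketch of that idea (not anything from the paper): a tiny network predicts per-channel attenuation and ambient light from the photo, and those parameters are then plugged into the inverse of the scattering model. PyTorch is assumed; the architecture, the fixed depth, and the parameterisation are made up for illustration.

```python
# Sketch of using a neural net to estimate physical-model parameters, then
# inverting the model I = J*t + A*(1 - t) with t = exp(-beta*depth).
import torch
import torch.nn as nn

class ParamEstimator(nn.Module):
    """Predicts 3 attenuation coefficients and 3 ambient-light values per image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)

    def forward(self, x):
        p = self.head(self.features(x))
        beta = nn.functional.softplus(p[:, :3])   # attenuation must be positive
        ambient = torch.sigmoid(p[:, 3:])         # ambient light constrained to [0, 1]
        return beta, ambient

def invert(observed, beta, ambient, depth_m=5.0):
    """Apply the inverse of the scattering model at an assumed path length."""
    t = torch.exp(-beta * depth_m).view(-1, 3, 1, 1)
    a = ambient.view(-1, 3, 1, 1)
    return ((observed - a * (1 - t)) / t.clamp(min=1e-3)).clamp(0.0, 1.0)
```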
Huh (Score:2)
Thanks, I guess.
Also done in Israel (Score:3)
Hazy, perhaps? (Score:1)