Technology

Algorithm Reveals Objects Hidden Behind Other Things In Camera Phone Images 85

Posted by samzenpus
from the pay-no-attention-to-the-objects-behind-the-curtain dept.
KentuckyFC writes "Imaging is undergoing a quiet revolution at the moment thanks to various new techniques for extracting data from images. Now physicists have worked out how to create an image of an object hidden behind a translucent material using little more than an ordinary smartphone and some clever data processing. The team placed objects behind materials that scatter light such as onion skin, frosted glass and chicken breast tissue. They photographed them using a Nokia Lumina 1020 smartphone, with a 41 megapixel sensor. To the naked eye, the resulting images look like random speckle. But by treating the data from each pixel separately and looking for correlations between pixels, the team was able to produce images of the hidden objects. They even photographed light scattered off a white wall and recovered an image of the reflected scene--a technique that effectively looks round corners. The new technique has applications in areas such as surveillance and medical imaging."
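The correlation step described in the summary can be illustrated with a toy simulation. This is not the researchers' actual code: it assumes numpy, a made-up 64x64 scene, and crudely models the scattering medium as a random convolution kernel. The property it sketches is that the autocorrelation of the speckle image approximates the autocorrelation of the hidden object, which is the starting point for recovering it.

```python
import numpy as np

def autocorrelation(img):
    # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    return np.fft.fftshift(ac)  # put zero lag at the center

# hypothetical toy "object": a small bright square on a dark field
obj = np.zeros((64, 64))
obj[28:36, 28:36] = 1.0

# stand-in for the scattering medium (onion skin, frosted glass...):
# a random kernel convolved with the object yields a speckle-like image
rng = np.random.default_rng(0)
psf = rng.random((64, 64))
speckle = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real

# because the random kernel's autocorrelation is sharply peaked, the
# speckle image's autocorrelation approximates the object's
ac_obj = autocorrelation(obj)
ac_speckle = autocorrelation(speckle)
```

Recovering the object itself from its autocorrelation then takes a phase-retrieval step, which this sketch omits.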
  • It is often not obvious which is which. So save your "enhance!" jokes.
  • We are rapidly approaching the glorious day when we can announce the elusive 'Victoria's Secret Clothing Filter' app.
  • I remember seeing a conference presentation a few years ago that promised to see through clothing using image processing (like the scanners at the airport, except cheap and accessible to everyone).

    Luckily (I would think), this didn't work out in practice.

    I expect this method won't either.

  • by cpt kangarooski (3773) on Monday March 17, 2014 @05:44PM (#46510967) Homepage

    You mean like the frosted glass commonly used for bathroom windows and shower doors? I see this as being a form of image processing that will rapidly be perfected.

  • by Anonymous Coward

    It is a Nokia LUMIA not Lumina.
"Lumina" is what stupid customers call the phones they own without even bothering to read the name (no wonder they can't use whatever runs on them).

    Sincerely,
A guy who works in customer service and has to deal with Lumias all the time

    • by Jmc23 (2353706)

      A guy who works in the customer service and has to deal with Luminas all the time

      FTFY.

      After all, the customer is always right, no?

  • You have a heck of a lot of information to work with.

    Didn't RTFA, but going by the phrase "using little more than an ordinary smartphone", that would be 5 megapixels here, and damn impressive.

    • The Lumia (not "Lumina", lol...) 1020 isn't *that* expensive. Its sensor is top-notch for a smartphone today, but it was only a few years ago that 5MPx was considered excellent in a phone and only professional gear (typically tens of thousands of dollars) could hit 40+MPx. Technology marches on. Today, a low-end smartphone typically has a 5MPx sensor. Assuming CCDs and/or CMOS chips follow Moore's Law (they might not, but I suspect they do, or something close to it), in about six years even cheap phones will have 40+MPx sensors.

      • by gl4ss (559668)

        The N95, with a 5 Mpix camera, debuted in 2006...

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Sensor size (in terms of MP) can certainly help, but without a decent lens most of the data is just noise and garbage pixels. An 8MP DSLR with a good lens will give you a vastly better image than a smartphone with a 20+ MP sensor on it.
        A big part of getting a good image is how much light is being captured, but you also need a high-quality sensor to capture it accurately. The actual resolution of the sensor is largely a secondary factor in the final image quality. And when you're dealing with a tiny lens on a phone, you just aren't capturing much light to begin with.

        • by gl4ss (559668)

          and this technique is pretty much about how to fight a crappy lens (or chicken fat) between the object and the sensor, no?

  • by 140Mandak262Jamuna (970587) on Monday March 17, 2014 @05:47PM (#46511001) Journal
    It is an object behind a translucent screen. It does some AI-based image sharpening. It is not the classic tank behind a tree, nor reconstructing a face partially hidden by other faces or hoodies, etc.
    • I take a lot of pictures with my imager safely hidden behind a solid object called a "lens"

  • by Anonymous Coward on Monday March 17, 2014 @05:48PM (#46511007)

    Tits or it never happened.

  • Sounds like an offshoot of these guys [youtube.com].
  • by Applehu Akbar (2968043) on Monday March 17, 2014 @06:28PM (#46511277)

    An algorithm for perceiving objects hidden behind other objects could...enable even men to find things in the refrigerator!

  • by argStyopa (232550) on Monday March 17, 2014 @06:43PM (#46511385) Journal

    How could you possibly do this experiment without trying it through the frosted glass of a shower door with a naked person on the other side?

  • by Tablizer (95088)

    The team placed objects behind materials that scatter light such as onion skin, frosted glass and chicken breast tissue...

    I don't even want to know the intended practical application of the chicken skin. Just don't tell T-S-A about it :-)

  • by Anonymous Coward
    From TFA:

    It should also be useful in situations where lenses are hard to use. The most expensive and problematic part of the Hubble Space Telescope, for example, was its lens.

    Um, no. The most expensive and problematic part of the Hubble Space Telescope was the primary mirror of its Ritchey–Chrétien reflecting telescope, though I suppose a layman could mistake that for a lens.

  • What have I got in my pocket?

  • I can see how this would be useful for everyday photography: reducing atmospheric interference like haze and mirage, etc. Pretty cool technology.
    • by Arkh89 (2870391)

      Actually, no. This technique doesn't work for a broad wavelength spectrum, and it wastes an awful lot of light in all the scenarios where you don't control the source (and can't impose that it be a laser).

  • See Blade Runner Esper machine [youtube.com], which was parodied on Red Dwarf [youtube.com].
  • by BillX (307153) on Monday March 17, 2014 @10:37PM (#46512957) Homepage

    I don't claim to be an imaging expert, but a few odd details about the experimental method jumped out at me. It's been known for some time now that diffusive and other scene-perturbing objects (e.g. grossly distorting 'lenses' such as a Coke bottle) can be nullified using a structured light technique to characterize and effectively 'undo' the perturber. A simple structured light example is to replace the light source with a DLP projector and take multiple images with only one pixel illuminated at a time. More clever implementations can replace the single pixels with speckle patterns, zebra stripes, etc., and replace the 2D imager with a single-pixel photocell. Other neat tricks can then be performed such as reconstructing the image from the POV of the light source [stanford.edu] rather than the imaging device.

    The experiments shown in this paper all seem to have two things in common: 1) the "object" in each case is a backlit, 2D binary pattern on a transparency film or similar, with a relatively small illuminated area, and 2) an extremely narrowband (laser, actually) light source is used. The paper does mention several times that the light source is non-coherent, but it is a laser under the hood. This explains the numerous references to "speckle" in the images, which may leave most readers scratching their heads, since things don't normally speckle when looked at through a slice of onion under ordinary light. Speckle is a laser coherence phenomenon [repairfaq.org] in which scattered rays, shifted slightly in phase relative to one another, interfere constructively and destructively.

    These things suggest to me that while the paper is definitely interesting, there is no need to worry about the neighbors snapping passable nudes through your shower door or Feds cataloging your grow farm via pictures of a blank wall through your window. This sounds more like a modest extension to what's already been done stirring coherent and structured light in a pot with convolution and autocorrelation methods.

    Since the coherence length of cheap semiconductor lasers (e.g. laser pointers) can be on the order of 1mm or less, it's possible to call even a straight-up laser beam "non-coherent narrowband light" with a somewhat straight face. Likewise, the quasi-point-sources created using a sparse geometric 2D aperture in transparency film, backlit by the aforementioned source, are pretty close to structured light for practical purposes. The takeaway message is that these are very special lighting and "scene" conditions that are not representative of everyday photographic circumstances. So not to worry just yet :-)
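    The one-pixel-at-a-time structured-light characterization described above amounts to measuring a transmission matrix. A minimal sketch, not from the paper: it assumes numpy and models the scatterer as a random matrix T. Lighting up each source pixel in turn records one column of T; once T is known, an unknown scene can be inverted from a single detector reading.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16  # hypothetical number of source/detector pixels

# the unknown scattering medium, modeled as a random transmission matrix
T = rng.random((n, n))

# calibration: illuminate one source pixel at a time; each detector
# response is one column of T (T @ e_i picks out column i)
T_measured = np.column_stack([T @ e for e in np.eye(n)])

# with T characterized, invert a detector reading of an unknown scene
scene = rng.random(n)
reading = T @ scene
recovered = np.linalg.solve(T_measured, reading)
```

    Real implementations use speckle bases or stripe patterns instead of single pixels to capture far more light per measurement, but the inversion idea is the same.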

    • The paper does explain these limitations, and it would be wrong to assume that this means we can see/photograph through fog, skin, clouds... But a semiconductor laser can have a spatial coherence long enough to do holography, so if they said the source was a laser and non-coherent narrowband, I'll give them the benefit of the doubt and assume they did something to destroy the laser's coherence. Spinning frosted-glass beam interrupters and other techniques are often used to despeckle laser light where it interferes with imaging.

  • I photo bombed you from behind that wall.
