New Camera Sensor Filter Allows Twice As Much Light

bugnuts writes "Nearly all modern DSLRs use a Bayer filter to determine color: it passes red, two greens, and a blue, one per pixel, in each block of 4 pixels. Because of the filtering, the pixels don't receive all of the incoming light, and the pixel values must be multiplied by predetermined factors (which also multiplies the noise) to normalize the differences. Panasonic has developed a novel method of 'filtering' which splits the light so that photons are not absorbed but redirected to the appropriate pixel. As a result, about twice as much light reaches the sensor and almost none is lost. Instead of RGGB, each block of 4 pixels receives Cyan, White + Red, White + Blue, and Yellow, and the RGB values can be interpolated."
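As a rough sketch of the interpolation step the summary mentions (the mixing ratios below are assumptions for illustration, not figures from Panasonic): if each sample in a 2x2 block is treated as a linear combination of the scene's R, G and B, the block can be solved back to RGB by least squares.

    import numpy as np

    # Assumed mixing matrix for one 2x2 block (rows: cyan, white+red,
    # white+blue, yellow; columns: R, G, B). Ratios are illustrative only.
    M = np.array([
        [0.0, 1.0, 1.0],   # cyan       ~ G + B
        [2.0, 1.0, 1.0],   # white+red  ~ (R + G + B) + R
        [1.0, 1.0, 2.0],   # white+blue ~ (R + G + B) + B
        [1.0, 1.0, 0.0],   # yellow     ~ R + G
    ])

    def block_to_rgb(samples):
        """Least-squares estimate of (R, G, B) from the four block samples."""
        rgb, *_ = np.linalg.lstsq(M, np.asarray(samples, dtype=float), rcond=None)
        return rgb

    print(block_to_rgb([1.0, 1.0, 1.0, 1.0]))   # pure green patch -> ~[0, 1, 0]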
  • by 140Mandak262Jamuna ( 970587 ) on Saturday March 30, 2013 @10:42PM (#43322283) Journal

    "We've developed a completely new analysis method, called Babinet-BPM. Compared with the usual FDTD method, the computation speed is 325 times higher, but it only consumes 1/16 of the memory. This is the result of a three-hour calculation by the FDTD method. We achieved the same result in just 36.9 seconds."

    What I don't get is calling FDTD (finite difference time domain) analysis the "usual" method. It is the usual method in fluid mechanics, but in computational electromagnetics finite element methods have been in use for a long time, and they beat FDTD methods hollow. The basic problem with the FDTD method is that to get more accurate results you need finer grids, but finer grids also force you to use finer time steps. Thus if you halve the grid spacing, the computational load goes up by a factor of 16. It is known as the tyranny of the CFL condition. The finite element method in the frequency domain does not have this limitation and scales as O(N^1.5) or so (FDTD scales as O(N^4)). It is still a beast to solve: rank-deficient matrices, poor conditioning, a full LU decomposition needed. But still, FEM wins over FDTD because of the better scaling.

    The technique mentioned here seems to be a variant of the boundary integral method, usually used for open domains and for solution domains that are many wavelengths long. I wonder if FEM can crack this problem.
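    To put numbers on the CFL argument above (assuming a cubic 3-D grid with n points per axis, spacing \Delta x, and an explicit update):

      \Delta t \le \frac{\Delta x}{c\sqrt{3}}, \qquad \text{cost} \propto \underbrace{n^3}_{\text{cells}} \times \underbrace{n}_{\text{time steps}} = n^4

    Halving \Delta x doubles n, so the work grows by 2^4 = 16, whereas a frequency-domain FEM solve has no time-step constraint tied to the mesh.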

    • by Anonymous Coward on Saturday March 30, 2013 @11:28PM (#43322439)

      I'm not sure the comparison of FDTD and FEM-FD in this post is right. FDTD suffers from the CFL limitation only in its explicit form; implicit methods allow time steps much greater than the CFL limit, though the implicit version requires a matrix solve at each time step, whereas the explicit version does not. Comparing FEM-FD and FDTD is also a bit silly: one is time domain, the other is frequency domain, and they are solving different problems. There is no problem doing FEM-TD (time domain), in which case the scaling is worse for FEM compared to explicit FDTD, since explicit FDTD pushes a vector, not a matrix, and requires only nearest-neighbor communication, whereas FEM requires a sparse-matrix solve, which is the bane of computer scientists because the strong-scaling curve rolls over as N increases. FDTD does not have this problem, requires less memory, and is friendlier toward the GPU-based compute hardware that is starting to dominate today's supercomputers.
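      To make the "pushes a vector, nearest-neighbor communication" point concrete, here is a minimal explicit 1-D FDTD (Yee) loop in normalized units - purely illustrative, not the solver discussed in the article:

        import numpy as np

        # Minimal explicit 1-D FDTD (Yee scheme), normalized units (c = dx = 1).
        n_cells, n_steps = 200, 400
        ez = np.zeros(n_cells)       # electric field samples
        hy = np.zeros(n_cells - 1)   # magnetic field samples, staggered half a cell
        dt = 1.0                     # time step at the 1-D CFL limit

        for t in range(n_steps):
            hy += dt * (ez[1:] - ez[:-1])         # update H from the curl of E
            ez[1:-1] += dt * (hy[1:] - hy[:-1])   # update E from the curl of H
            ez[n_cells // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source

        # Each step touches only nearest neighbors and never forms a matrix,
        # which is why explicit FDTD parallelizes and maps to GPUs so easily.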

      • by XiaoMing ( 1574363 ) on Sunday March 31, 2013 @12:16AM (#43322581)

        Interesting comments from both, but I believe you both missed the point. The real question is, which one of these methods, FDTD or FEM-FD, will allow optimal reprocessing in the frequency domain that makes my dinner look prettier with an Instagram vintage filter?

      • A GPU is actually pretty good at sparse matrix computations, unlike CPUs.

        • Say wha? If you do it right, CPUs are very good at sparse matrix methods. It's basic algorithms: lots of zeros and structure give plenty of scope for optimization regardless of the target hardware. GPUs may be good at them too, if you vectorize properly and avoid branching, and they can give better performance per dollar. Yes, this has been some of my day job.
          • Sparse reads and writes are not vectorizable and are not cache-friendly. A GPU has fast memory without cache and is not limited by traditional vectorization.

  • by rusty0101 ( 565565 ) on Saturday March 30, 2013 @11:03PM (#43322343) Homepage Journal

    ...we've switched from calculating rggb values based on the attenuated rggb values sensed, to calculating rgb values from sensing cyan (usually reflected light with the red subtracted), white+blue (?), white+red (?), and yellow (again, reflected white light minus the blue part of the spectrum).

    I can see the resulting files having better print characteristics, if the detectors sense to the levels close to the characteristics of ink used for prints, but I don't think that's going to help at the display the photographer will be using to manipulate the images.

    And of course neither variety of photo image capture is comparable to the qualities of light that our rods and cones respond to in our eyes.

    • by Lehk228 ( 705449 )
      it means that for any given sensor size, a higher percentage of the incoming photons are captured for analysis

      how this advantage is used is up to the engineers.

      it could be used to make sensors that are smaller and just as good as current sensors, or to get better quality out of the same size of sensor. because this improvement is in the signal/noise domain, it will also allow for better high-speed image capture.
      • by EdZ ( 755139 )

        it could be used to make sensors that are smaller and just as good as current sensors

        I'm not sure if it could. Pixel sizes for really tiny cameraphone sensors (1.1 microns, or 1100 nm) are getting close to the wavelength of visible red photons (750 nm). If you shrink them any more, Quantum Stuff starts to happen which you may not really want to happen.

        • Re:So essentially... (Score:4, Interesting)

          by Rockoon ( 1252108 ) on Sunday March 31, 2013 @07:21AM (#43323521)

          If you shrink them any more, Quantum Stuff starts to happen which you may not really want to happen.

          ...unless you embrace the Quantum Stuff and deal with the consequences. One of the nice things about Quantum Interference is that it's well defined, unlike other forms of noise.

      • by dfghjk ( 711126 )

        Or it may not be an advantage at all.

        It is possible that the extra photons that traditional filters would have blocked will actually degrade performance. In the past there have been complementary-color filter arrays (instead of the usual Bayer primaries) built for the same purpose, improved light sensitivity. Those cameras delivered inferior color performance.

        It is important to have good light sensitivity AND good dynamic range. Dynamic range is not just what your sensor can provide but what you can consistently use. Sometimes filtering light imp

      • Makes me wonder why cyan, magenta, and yellow sensors were not used from the beginning. They should be as easy to build as RGB, but they should also collect more light, since a smaller part of the spectrum needs to be blocked at each pixel.
    • The only difference here is that rather than using lenses to focus the light onto individual photosites, they're splitting the light to hit those same photosites. So, at least in theory, you're getting more of the photons as the ones that were being blocked by the filters aren't being wasted.

    • by ceoyoyo ( 59147 )

      "And of course neither variety of photo image capture is comparable to the qualities of light that our rods and cones respond to in our eyes."

      You're right. The colour filters used in cameras generally need extra filtering to block out portions of the IR and UV that our eyes are not sensitive to.

    • I can see the resulting files having better print characteristics, if the detectors sense to the levels close to the characteristics of ink used for prints, but I don't think that's going to help at the display the photographer will be using to manipulate the images.

      You can losslessly, mathematically translate between this and RGB (certainly not sRGB) and CMYK. But that's just math. Printing is difficult due to the physical variables of the subtractive color model. The more money you throw at it -- that is to say, the more and better inks and the higher-quality paper you use -- the better it gets. No new physical or mathematical colorspace will improve color reproduction.

      • No new physical or mathematical colorspace will improve color reproduction.

        'cept we aren't dealing with 'reproduction' - we are dealing with 'capture' - while the RGB color space can indeed encode "yellow", it cannot encode how it got to be yellow (is it a single light wave with a wavelength of 570 nm, is it a combination of 510 nm and 650 nm waves, or is it something else?)

        (hint: Your monitor reproduces yellow by combining 510 nm and 650 nm waves, but most things in nature that appear yellow do so because the waves are 570 nm)

    • Re:So essentially... (Score:5, Informative)

      by Solandri ( 704621 ) on Sunday March 31, 2013 @01:52AM (#43322763)

      ...we've switched from calculating rggb values based on the attenuated rggb values sensed, to calculating rgb values from sensing cyan (usually reflected light with the red subtracted), white+blue (?), white+red (?), and yellow (again, reflected white light minus the blue part of the spectrum).

      Your eyes actually aren't sensitive to red, green, and blue. Here are the spectral sensitivities [starizona.com] of the red, green, and blue cones in your eye. The red cones are actually most sensitive to orange, green most sensitive to yellow-green, and blue most sensitive to green-blue. There's also a wide range of colors that each type of cone is sensitive to, not a single frequency. When your brain decodes this into color, it uses the combined signal it's getting from all three types of cones to figure out which color you're seeing. e.g. Green isn't just the stimulation of your green-yellow cones. It's that plus the low stimulation of your orange cones and blue-green cones in the correct ratio.

      RGB being the holy trinity of color is a display phenomenon, not a sensing one. In order to stimulate the entire range of colors you can perceive, it's easiest to pick three colors: one that stimulates the orange cones most and the other two least (red), one that stimulates the green-blue cones most and the others least (blue), and one that stimulates the green-yellow cones most and the other two least (green). (I won't get into purple/violet - that's a long story which you can probably guess if you look at the left end of the orange cones' response curve.) You could actually pick 3 different colors as your primaries, e.g. orange, yellow, and blue. They'd just be more limited in the range of colors you can reproduce, because they can't stimulate the three types of cones as independently. Even if you pick non-optimal colors, it's possible to replicate the full range if you add a 4th or 5th display primary. It's just more complex and usually not economical (Panasonic, I think, made a TV with an extra yellow primary to help bolster that portion of the spectrum).

      But like your eyes, for the purposes of recording colors, you don't have to actually record red, green, and blue. You can replicate the same frequency response spectrum using photoreceptors sensitive to any 3 different colors. All that matters is that their range of sensitivity covers the full visible spectrum, and their combined response curves allow you to uniquely distinguish any single frequency of light within that range. It may involve a lot of math, but hey computational power is cheap nowadays.

      It's also worth noting that real-world objects don't give off a single frequency of light. They give off a wide spectrum, which your eyes combine into the 3 signal strengths from the 3 types of cones. This is part of the reason why some objects can appear to shift relative colors as you put them under different lighting. A blue quilt with orange patches can appear to be a blue quilt with red patches under lighting with a stronger red component. The "orange" patches are actually reflecting both orange and red light. So the actual color you see is the frequency spectrum of the light source, times the frequency emission response (color reflection spectrum) of the object, convolved with the frequency response of the cones in your eyes. And when you display a picture of that object, your monitor is simply doing its best using three narrow-band frequencies to stimulate your cones in the same ratio as they were with the wide-band color of the object. So a photo can never truly replicate the appearance of an object; it can only replicate its appearance under a specific lighting condition.
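      In symbols (a standard formulation, spelled out here for reference): for a source spectrum I(\lambda), an object reflectance \rho(\lambda), and cone response curves \bar{c}_i(\lambda), the signal from cone type i is

        S_i = \int_{\text{visible}} I(\lambda)\, \rho(\lambda)\, \bar{c}_i(\lambda)\, d\lambda, \qquad i \in \{L, M, S\}

      and a display only has to reproduce the same three numbers S_L, S_M, S_S, not the whole spectrum, which is why three narrow-band primaries can work at all.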

      • What's happening when I shine my violet laser at a tennis-ball-green dog toy and it seems to get brighter and reflect white, or at a marble coffee table and it gets blue-white? Really liked your breakdown.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          Then the material is fluorescent (or phosphorescent).
          The photons from the light source kick electrons in the material up to a higher energy level (skipping at least one level); when an electron drops back down, it doesn't fall all the way to its original level in one step. Since the energy gained going up is not the same as the energy released coming down, the photon produced is of a different frequency (color) than the photon from the light source.

          The second drop of the electron to the original level will also produce another photon
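          In terms of photon energy (a simplified sketch of the same idea): since E = hc / \lambda, the re-emitted photon carries less energy than the absorbed one, so its wavelength is longer:

            E_{\text{emitted}} < E_{\text{absorbed}} \;\Rightarrow\; \lambda_{\text{emitted}} > \lambda_{\text{absorbed}}

          which is why a violet beam (around 400 nm) can make a fluorescent dye glow at longer wavelengths, where the eye is far more sensitive, so the toy looks brighter.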

      • by dkf ( 304284 )

        So the actual color you see is the frequency spectrum of the light source, times the frequency emission response (color reflection spectrum) of the object, convolved with the frequency response of the cones in your eyes.

        What's more, many surfaces reflect different colors at different amounts depending on the exact angle you view them at. Butterfly wings are an extreme example of this, or soap bubbles, but the phenomenon is common. (If you ever want to write a physically-accurate ray tracer, you get to deal with a lot of this complexity.) This can make a surface made of a single substance look very different across it. Now, these effects are functions of the wavelength of the incoming light (and the reflection angle, with t
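        For reference, the ray-tracer version of that statement (a standard form, not specific to this thread): the reflected light depends on wavelength and on both the incoming and outgoing directions through a BRDF f_r,

          L_o(\lambda, \omega_o) = \int_{\Omega} f_r(\lambda, \omega_i, \omega_o)\, L_i(\lambda, \omega_i)\, \cos\theta_i \, d\omega_i

        so a single surface can send quite different spectra toward different viewing angles.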

      • But like your eyes, for the purposes of recording colors, you don't have to actually record red, green, and blue. You can replicate the same frequency response spectrum using photoreceptors sensitive to any 3 different colors

        Mathematically, spectra live in an infinite-dimensional vector space, and any three color sensors pick out a three-dimensional subspace. In general, you cannot reproduce the response in one three-dimensional subspace (human response) from another three-dimensional subspace (camera response).
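        One standard way to state that (spelled out here, not in the original comment): treat the eye and the camera as linear maps E and C from spectra to R^3. Exact color reproduction for every possible spectrum requires a fixed 3x3 matrix M with

          E = M\,C,

        i.e. the camera's spectral response curves must be linear combinations of the cone response curves (the Luther condition). Real filter sets only approximate this, so some spectra that look identical to the camera look different to the eye, and vice versa.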

  • Remember how the Foveon X3 sensor [wikipedia.org] was supposed to revolutionize digital photography and make the standard sensors obsolete? Tell me how many cameras you've used with those sensors in them.

    In other words, technological superiority doesn't always win in digital photography.
    • by SuperKendall ( 25149 ) on Saturday March 30, 2013 @11:20PM (#43322403)

      In other words, technological superiority doesn't always win in digital photography.

      This is very true, although the Foveon was superior in resolution and lack of color moire only - in terms of higher ISO support it has not been as good as the top performers of the day.

      But the Foveon chip does persist in cameras: Sigma (which bought Foveon) is still selling a DSLR with the Foveon sensor, and now a range of really high quality compact cameras with a DSLR-sized Foveon chip in them (the Sigma DP-1M, DP-2M and DP-3M, each with a fixed prime lens of a different focal length).

      I think, though, that we are entering a period where resolution has plateaued - that is, most people do not need more resolution than cameras are already delivering - so there is more room for alternative sensors to capture some of the market by offering other benefits that people enjoy. Now that Sigma has carried Foveon forward into a newer generation of sensors, they are having better luck selling a high-resolution, very sharp small compact that has as much detail as a Nikon D800 and no color moire...

      Another interesting alternative is Fuji's X-Trans sensor - a pseudo-randomized RGB filter pattern designed to eliminate color moire. The Panasonic approach seems like it might have some real gains in high-ISO support, though.

      • This is very true, although the Foveon was superior in resolution and lack of color moire only

        Foveon is only superior in resolution if the number of output pixels is the same. But if you count photosites, i.e. 3 per pixel in a Foveon, then Bayer wins. A Foveon has about the same resolution as a Bayer with twice the pixel count, but the Foveon has three times the number of photosites.

        But the problem is colors.

        Foveon has a theoretical minimum color error of 6%. Color filter sensors (e.g. Bayer) have a theoretical minimum error of 0%. Color filter sensors can use organic filters that are close to the fi

        • Foveon is only superior in resolution if the number of output pixels is the same.

          That is a pretty bad way to measure things, because it ignores things like color moire and other artifacts you get with Bayer sensors. As I stated, resolution is not everything. And a Foveon chip delivers a constant level of detail, whereas a Bayer chip inherently delivers levels of detail that vary with scene color.

          In a scene with only red (say the hood of a red car) you are shooting with just 1/3 of the camera sensors cap

      • Quite right.

        Some of the more impressive shots I've seen were on an A series (A85) 4MP camera which can be had for thirty bucks, and some majestic HDRs from a 6mp Konica Minolta. If you have a decent camera and time and tenacity you can make pretty pictures. And conversely I am sure it wouldn't take long to find someone who should just go sell their 5D.

      • by dfghjk ( 711126 )

        "This is very true, although the Foveon was superior in resolution and lack of color moire only - it terms of higher ISO support it has not been as good as the top performers of the day."

        The Foveon has always been inferior in resolution overall photosite-for-photosite, superior only in a small subset of color combinations, and it has been, in fact, a dismal technology in terms of high ISO. It is not simply "not been as good as the top performers"; it is notably worse than Bayer sensors categorically. Fove

        • The Foveon has always been inferior in resolution overall photosite-for-photosite, superior only in a small subset of color combinations

          The "small subset" is any photographic subject with blue or red. Like fall leaves, anything with detail against a sky, red or blue fabrics with fine detail, etc.

          That sure is a "small subset".

          it has been, in fact, a dismal technology in terms of high ISO

          In the past, possibly. The current cameras handle up to ISO 1600 well in color, and up to ISO 6400 in B&W.

          An ISO 6400

    • Foveon was never superior. If they had been able to make it work properly it would have taken over, but it has always had issues with noise and resolution that conventional Bayer CMOS and CCD sensors don't. It's a shame, because I wanted it to win, but realistically it's been like a decade and they still haven't managed to get it right; at this rate they probably won't.

    • by ceoyoyo ( 59147 )

      Whether or not the Foveon is technologically superior is pretty debatable. It was a neat idea that had some pretty serious shortcomings and, even forgiving those, the difficulty of producing the things left them in the dust as conventional sensors improved.

    • In other words, technological superiority doesn't always win in digital photography.

      In Panasonic's case it's not about achieving superiority but about dealing with inferiority: their consumer-grade camera sensors have always had terrible problems with chroma noise in low-light conditions, so this may just be a way of improving their low-light performance.

    • Depends on what technological superiority means. In photography, light sensitivity is absolutely key to selling a sensor. Most people are interested in figures for noise and the range of ISO settings (provided the camera has more than about 12 megapixels; otherwise they are interested in more resolution too). Foveon failed in all these regards. Their superior colour rendition and absolute lack of moire did not help them at a time when people were scratching their heads at the low resolution and poor sensitivity.

      • by dfghjk ( 711126 )

        "Their superior colour rendition ..."

        Foveon NEVER had superior color rendition. All it offers is lack of color moire at the expense of many other flaws that are, in the balance, vastly more important. Color moire is not the most problematic issue in digital photography.

    • Two, actually. Sigma SD9, Sigma SD14.

    • Don't believe all the marketing hype. The Foveon sensor failed because it was not technically superior; it gave you lower resolution, less sensitivity, and worse color reproduction than comparable sensors based on Bayer patterns. The one problem it addressed, namely occasional bad color reproduction around edges with Bayer sensors, simply didn't matter enough to make up for its disadvantages.

  • by FlyingGuy ( 989135 ) <flyingguy@EEEgmail.com minus threevowels> on Saturday March 30, 2013 @11:23PM (#43322421)

    Simply use three sensors and a prism. The color separation camera has been around for a long time, and the color prints from it are just breathtaking. Just use three really great sensors and we can have digital color that rivals film.

    Check out the work of Harry Warnecke and you will see what I mean.

  • Why is RGB used for filtering at all? Wouldn't it be better to use the inverse (i.e., CMY, or no-red, no-green, no-blue) instead? Wouldn't that allow twice as much light to pass through? I must be missing something obvious; care to explain what I am missing here?
    • The answer is color accuracy, which this chip severely sacrifices for better luminance info.
      Mixing R+G, B+G, etc. together means that figuring out the correct R,G,B color corresponding to an observed signal requires taking sums and differences between pixels, which sums a bunch more noise into the color channels.

      Example: Consider sensor (A) with R,G,B-sensing pixels, and (B) with Y=R+G, C=G+B, M=B+R sensing pixels.
      Suppose light consisting of R',G',B' hits each sensor: sensor (A) directly tells you R',G',B'
      Sensor (B) gives you Y' = R'+G', C' = G'+B', M' = B'+R', and you have to take sums and differences of those to recover R', G', B', as worked out below.
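      Spelling out that algebra (just the arithmetic implied above, with Y = R + G, C = G + B, M = B + R):

        R' = \tfrac{1}{2}(Y' + M' - C'), \qquad G' = \tfrac{1}{2}(Y' + C' - M'), \qquad B' = \tfrac{1}{2}(C' + M' - Y')

      Every recovered channel depends on all three raw samples, so the noise of each measurement ends up mixed into every color estimate.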

    • Well, RGB vs CMY is really more of an additive/subtractive problem. CMY doesn't work on its own in additive space where sensors operate.
