New Camera Sensor Filter Allows Twice As Much Light
bugnuts writes "Nearly all modern DSLRs use a Bayer filter to determine color: each block of four pixels is filtered into one red, two green, and one blue sample. Because the filters absorb light, each pixel receives only part of the incoming light, and the pixel values must be multiplied by predetermined gains (which also amplifies the noise) to normalize the differences. Panasonic has developed a novel method of 'filtering' which splits the light so the photons are not absorbed but redirected to the appropriate pixel. As a result, roughly twice the light reaches the sensor and almost none is lost. Instead of RGGB, each block of four pixels receives Cyan, White + Red, White + Blue, and Yellow, and the RGB values can be interpolated from those."
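If you assume the splitter produces simple linear sums (Cyan = G+B, Yellow = R+G, White = R+G+B), recovering RGB from one block is just a small linear inversion. This is a hypothetical model for illustration, not Panasonic's published reconstruction:

```python
def block_to_rgb(cyan, w_plus_r, w_plus_b, yellow):
    """Recover R, G, B from one 2x2 block of the splitter-based sensor.

    Assumes an idealized linear mixing model (my assumption, not
    Panasonic's actual algorithm):
        cyan     = G + B
        yellow   = R + G
        w_plus_r = (R + G + B) + R = 2R + G + B
        w_plus_b = (R + G + B) + B = R + G + 2B
    """
    r = (w_plus_r - cyan) / 2.0    # (2R+G+B) - (G+B) = 2R
    b = (w_plus_b - yellow) / 2.0  # (R+G+2B) - (R+G) = 2B
    g = yellow - r                 # (R+G) - R = G
    return r, g, b

# Sanity check with a known color R=0.8, G=0.5, B=0.2:
r, g, b = block_to_rgb(cyan=0.7, w_plus_r=2.3, w_plus_b=1.7, yellow=1.3)
```

Note that every photon is counted twice across the four sums, which is where the "twice the light" figure comes from.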
Wow! Computational Electromagnetics rock! (Score:5, Interesting)
"We've developed a completely new analysis method, called Babinet-BPM. Compared with the usual FDTD method, the computation speed is 325 times higher, but it only consumes 1/16 of the memory. This is the result of a three-hour calculation by the FDTD method. We achieved the same result in just 36.9 seconds."
What I don't get is calling FDTD (finite difference time domain) analysis the "usual" method. It is the usual method in fluid mechanics, but in computational electromagnetics finite element methods have been in use for a long time, and they beat FDTD hollow. The basic problem with FDTD is that to get more accurate results you need a finer grid, but a finer grid also forces you to use finer time steps. Thus if you halve the grid spacing, the computational load goes up by a factor of 16. This is known as the tyranny of the CFL condition. The finite element method in the frequency domain does not have this limitation and scales as O(N^1.5) or so, while FDTD scales as O(N^4). The FEM system is still a beast to solve (a rank-deficient, ill-conditioned matrix that needs a full LU decomposition), but FEM still wins over FDTD because of the better scaling.
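The factor of 16 follows directly from the CFL condition in 3D: halving the grid spacing gives 8x as many cells, and because the stable time step shrinks with the spacing, you also need 2x as many steps. A quick sketch of the bookkeeping (illustrative cost model only, not a real solver):

```python
def fdtd_cost(n_per_axis, dims=3):
    """Relative cost of an explicit FDTD run under the CFL condition.

    Cells scale as n^dims; the CFL condition ties the time step to the
    grid spacing, so the step count scales as n. Total work ~ n^(dims+1).
    """
    cells = n_per_axis ** dims
    steps = n_per_axis  # dt proportional to dx, so halving dx doubles steps
    return cells * steps

# Halving the grid spacing (100 -> 200 points per axis):
ratio = fdtd_cost(200) / fdtd_cost(100)  # 8x cells * 2x steps = 16x work
```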
The technique mentioned here seems to be a variant of the boundary integral method, typically used for open domains and for solution domains many wavelengths across. I wonder if FEM can crack this problem.
I just wish they would... (Score:5, Interesting)
Simply use three sensors and a prism. The color separation camera has been around for a long time, and the color prints from it are just breathtaking. Use three really great sensors and we can have digital color that rivals film.
Check out the work of Harry Warnecke and you will see what I mean.
Re:Wow! Computational Electromagnetics rock! (Score:5, Interesting)
I'm not sure any of the comparisons of FDTD and FEM-FD in this post are right. FDTD suffers from the CFL limitation only in its explicit form; implicit methods allow time steps much greater than the CFL limit, at the price of a matrix inversion at each step that the explicit version avoids. And comparing FEM-FD with FDTD is silly: one is frequency domain, the other time domain, so they are solving different problems. There is no problem doing FEM-TD (time domain), in which case the scaling is worse for FEM than for explicit FDTD: FDTD pushes a vector, not a matrix, and requires only nearest-neighbor communication, whereas FEM requires a sparse-matrix solve, the bane of computer scientists because the strong-scaling curve rolls over as N increases. FDTD does not have this problem, requires less memory, and is friendlier to the GPU-based compute hardware that is starting to dominate today's supercomputers.
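The "pushes a vector, not a matrix" point is easiest to see in a minimal 1D explicit Yee update. This is a textbook-style sketch in normalized units (my own toy example, not production code): each step only reads nearest neighbors and updates the field arrays in place, with no matrix ever formed or inverted.

```python
import numpy as np

def yee_1d(nz=200, nsteps=500, courant=0.5):
    """Minimal 1D explicit FDTD (Yee) update, normalized units.

    courant < 1 satisfies the 1D CFL stability limit. Each time step
    is a pair of nearest-neighbor stencil sweeps over plain vectors.
    """
    ez = np.zeros(nz)       # E field on integer grid points
    hy = np.zeros(nz - 1)   # H field on the staggered half-grid
    for n in range(nsteps):
        hy += courant * (ez[1:] - ez[:-1])        # update H from curl E
        ez[1:-1] += courant * (hy[1:] - hy[:-1])  # update E from curl H
        ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

fields = yee_1d()
```

Because each update touches only adjacent entries, the method parallelizes with a single halo exchange per step, which is exactly why it maps so well onto GPUs.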
Re:Just say no to Gizmodo (Score:4, Interesting)
Ironically, the last paragraph at Gizmodo somewhat answers your question:
What's particularly neat about this new approach is that it can be used with any kind of sensor without modification; CMOS, CCD, or BSI. And the filters can be produced using the same materials and manufacturing processes in place today. Which means we'll probably be seeing this technology implemented on cameras sooner rather than later.
Re:I just wish they would... (Score:2, Interesting)
Pro video 3CCD cameras do this. Interestingly, those cameras can use a trick that makes the lens cheaper.
Normally a lens needs to focus all three colours on the same plane. This is difficult because a lens disperses light like a prism, so normal lenses combine glass of two different materials with different refractive indices to compensate.
Since the colour in a 3CCD video camera is already split, you can simply place each sensor at the focal plane of its own colour and use a non-compensating lens.
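The colour-dependent focal plane follows from the thin-lens maker's equation, 1/f = (n-1)(1/R1 - 1/R2): the refractive index n varies with wavelength, so each colour focuses at a slightly different distance. A rough sketch with approximate crown-glass (BK7-like) indices; the lens radii and index values here are illustrative, not from any real camera:

```python
def focal_length(n, r1=100.0, r2=-100.0):
    """Thin-lens maker's equation for a symmetric biconvex lens (radii in mm)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

# Approximate refractive indices of a crown glass at blue/green/red wavelengths
n_blue, n_green, n_red = 1.522, 1.517, 1.514

f_blue = focal_length(n_blue)   # blue bends most, focuses closest to the lens
f_red = focal_length(n_red)     # red bends least, focuses farthest away
shift = f_red - f_blue          # the spread a 3CCD design can absorb by
                                # positioning each sensor individually
```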
Re:So essentially... (Score:4, Interesting)
If you shrink them any more, Quantum Stuff starts to happen which you may not really want to happen.