New Camera Sensor Filter Allows Twice As Much Light
bugnuts writes "Nearly all modern DSLRs use a Bayer filter to determine colors: each block of 4 pixels is filtered into one red, two greens, and one blue. Because each filter absorbs part of the light, the pixel values must be multiplied by predetermined gains (which also multiply the noise) to normalize the differences. Panasonic has developed a novel method of 'filtering' which splits the light so that photons are not absorbed but redirected to the appropriate pixel. As a result, almost no light is lost and about twice as much reaches the sensor. Instead of RGGB, each block of 4 pixels receives Cyan, White + Red, White + Blue, and Yellow, from which the RGB values can be interpolated."
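The noise point in the summary can be sketched numerically. This is a toy model, not any camera's actual pipeline: the gain, signal level and noise figure below are invented purely to show that scaling a pixel's value scales its noise by the same factor.

```python
import random

# Toy model: a filtered pixel catches less light, so its reading is
# scaled up by a fixed normalization gain -- which scales the noise
# standard deviation by exactly the same factor. All numbers invented.
random.seed(0)

def sample_pixel(signal, noise_sigma):
    """One noisy sensor reading (Gaussian noise around the true signal)."""
    return signal + random.gauss(0, noise_sigma)

gain = 2.0              # hypothetical normalization gain for a blue pixel
signal, sigma = 100.0, 5.0

readings = [gain * sample_pixel(signal, sigma) for _ in range(10000)]
mean = sum(readings) / len(readings)
var = sum((r - mean) ** 2 for r in readings) / len(readings)

# The signal is normalized to ~200, but the noise std is now ~gain * sigma = 10.
print(round(mean), round(var ** 0.5, 1))
```

The same multiplication that normalizes the colour channels doubles the noise in them, which is what the splitter approach avoids by not throwing the light away in the first place.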
Re:I call bullpucky (Score:5, Informative)
Foveon has 3 photodiodes per pixel, and theoretically should have the most accurate colors and sharpness, avoiding the moire and interpolation issues of Bayer filters. In practice, though, a lot of light is lost by the time it reaches the 3rd photodiode.
There is indeed white light, because not every pixel has a filter over it. Many pixels receive the light unfiltered, while a neighboring pixel's splitter funnels red light (e.g.) onto them. Thus you get white + 1/2 of one neighbor's red. You also get half the other neighbor's red from the other side, so an unfiltered pixel between two red-splitting neighbors in a line sees white + red.
Cyan is part of the color spectrum as a "subtractive color": what remains under a neighboring pixel when you strip away the red is cyan.
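The colour arithmetic described above can be sketched as a small round trip. This is an idealized model, NOT Panasonic's actual reconstruction: I'm assuming each 2x2 block yields four readings that are simple sums of the block's R, G, B (cyan = G+B, white+red = 2R+G+B, white+blue = R+G+2B, yellow = R+G), which is enough to solve back for RGB.

```python
# Idealized splitter arithmetic (my assumption, not Panasonic's published
# method): each 2x2 block produces four sums of the scene's R, G, B.
def block_to_rgb(cyan, w_r, w_b, yellow):
    """Invert the assumed readings back to R, G, B."""
    r = (w_r - cyan) / 2      # (2R+G+B) - (G+B) = 2R
    b = (w_b - yellow) / 2    # (R+G+2B) - (R+G) = 2B
    g = cyan - b              # (G+B) - B = G
    return r, g, b

# Round-trip check with a made-up scene colour:
R, G, B = 0.8, 0.5, 0.2
readings = (G + B, 2 * R + G + B, R + G + 2 * B, R + G)
print(tuple(round(v, 6) for v in block_to_rgb(*readings)))  # -> (0.8, 0.5, 0.2)
```

The point is that the four channels are linearly independent combinations of R, G and B, so no light needs to be discarded to make the colours recoverable.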
From what I can tell, this will not get rid of the need for an anti-aliasing filter.
I agree with point, but the Foveon works... (Score:5, Informative)
In other words, technological superiority doesn't always win in digital photography.
This is very true, although the Foveon was superior only in resolution and lack of color moire; in terms of high-ISO support it has not matched the top performers of the day.
But the Foveon chip does persist in cameras: Sigma (which bought Foveon) still sells a DSLR with the Foveon sensor, and now a range of really high quality compact cameras with a DSLR-sized Foveon chip in them (the Sigma DP-1M, DP-2M and DP-3M, each with a fixed prime lens of a different focal length).
I think, though, that we are entering a period where resolution has plateaued: most people do not need more resolution than cameras are delivering, so there is more room for alternative sensors to capture some of the market by delivering other benefits that people enjoy. Now that Sigma has carried Foveon forward into a newer age of sensors, they are having better luck selling a high-resolution, very sharp small compact that has as much detail as a Nikon D800 and no color moire...
Another interesting alternative is Fuji's X-Trans sensor, which uses a semi-randomized RGB filter layout to eliminate color moire. The Panasonic approach, though, seems like it might offer some real gains in high-ISO support.
Re:yeay four sensors (Score:4, Informative)
So when you print to your eight colour inkjet, what file format is your image stored in that has eight colour channels? What software are you using that supports it?
Note that CMYK, by far the most popular "four colour" system (and the one all those "four colour" printers use), includes black as one of the colours. That makes up for a shortcoming of colour inks (one not shared by camera sensors or displays): you can't make a decent black by mixing the colours. I suspect the eight colour printer is doing something very similar: mixing colours to give you a better (they say, anyway) reproduction of the three colour additive system that your computer, camera and monitor use.
Besides, the vast, vast majority of people don't colour calibrate their monitors OR printers. Unless you do that regularly all the extra colour channels in the world aren't going to help you.
Re:I just wish they would... (Score:5, Informative)
Re:So essentially... (Score:5, Informative)
Your eyes actually aren't sensitive to red, green, and blue. Here are the spectral sensitivities [starizona.com] of the red, green, and blue cones in your eye. The red cones are actually most sensitive to orange, green most sensitive to yellow-green, and blue most sensitive to green-blue. There's also a wide range of colors that each type of cone is sensitive to, not a single frequency. When your brain decodes this into color, it uses the combined signal it's getting from all three types of cones to figure out which color you're seeing. e.g. Green isn't just the stimulation of your green-yellow cones. It's that plus the low stimulation of your orange cones and blue-green cones in the correct ratio.
RGB being the holy trinity of color is a display phenomenon, not a sensing one. In order to be able to stimulate the entire range of colors you can perceive, it's easiest if you pick three colors which stimulate the orange cones most and the other two least (red), the green-blue cones most and the others least (blue), and the green-yellow cones most but the other two least (green). (I won't get into purple/violet - that's a long story which you can probably guess if you look at the left end of the orange cones' response curve.) You could actually pick 3 different colors as your primaries, e.g. orange, yellow, and blue. They'd just be more limited in the range of colors you can reproduce because of their inability to stimulate the three types of cones semi-independently. Even if you pick non-optimal colors, it's possible to replicate the full range if you add a 4th or 5th display primary. It's just more complex and usually not economical (Panasonic, I think, made a TV with an extra yellow primary to help bolster that portion of the spectrum).
But like your eyes, for the purposes of recording colors, you don't have to actually record red, green, and blue. You can replicate the same frequency response spectrum using photoreceptors sensitive to any 3 different colors. All that matters is that their range of sensitivity covers the full visible spectrum, and their combined response curves allow you to uniquely distinguish any single frequency of light within that range. It may involve a lot of math, but hey computational power is cheap nowadays.
It's also worth noting that real-world objects don't give off a single frequency of light. They give off a wide spectrum, which your eyes combine into the 3 signal strengths from the 3 types of cones. This is part of the reason why some objects can appear to shift relative colors as you put them under different lighting. A blue quilt with orange patches can appear to be a blue quilt with red patches under lighting with a stronger red component. The "orange" patches are actually reflecting both orange and red light. So the actual color you see is the frequency spectrum of the light source, times the reflection spectrum of the object, integrated against the frequency response of the cones in your eyes. And when you display a picture of that object, your monitor is simply doing its best using three narrow-band frequencies to stimulate your cones in the same ratio as they were with the wide-band color of the object. So a photo can never truly replicate the appearance of an object; it can only replicate its appearance under a specific lighting condition.
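The quilt example can be run as a toy calculation: cone signal = illuminant x reflectance x cone sensitivity, summed over wavelength. The Gaussian curves and every number below are invented for illustration; real cone fundamentals are measured tables, not Gaussians.

```python
import math

# Toy colorimetry: each cone's signal is the spectrum arriving at the eye
# weighted by that cone's sensitivity curve and summed over wavelength.
def gauss(w, mu, sigma):
    return math.exp(-((w - mu) ** 2) / (2.0 * sigma ** 2))

WAVELENGTHS = range(400, 701)                        # nm, visible range

def cone_signals(illuminant, reflectance):
    cones = {"L": 565, "M": 545, "S": 440}           # rough peak wavelengths
    return {name: sum(illuminant(w) * reflectance(w) * gauss(w, peak, 40)
                      for w in WAVELENGTHS)
            for name, peak in cones.items()}

# An "orange" patch that also reflects red, as in the quilt example:
patch = lambda w: gauss(w, 600, 15) + gauss(w, 650, 15)

flat_light = lambda w: 1.0                           # even illuminant
red_light = lambda w: 1.0 + 2.0 * gauss(w, 680, 60)  # red-heavy illuminant

a = cone_signals(flat_light, patch)
b = cone_signals(red_light, patch)

# Under the red-heavy light the L:M balance shifts toward L, i.e. the
# same patch drifts from orange toward red.
print(a["L"] / a["M"], b["L"] / b["M"])
```

Same object, same cones, different illuminant: the L:M ratio changes, which is exactly the orange-looks-red effect described above.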
Re:I call bullpucky (Score:4, Informative)
Magenta is a combination of colours; like white, it isn't "in the colour spectrum".
Indigo/violet, however, is in the spectrum, but as it's outside the range of values that can be created with red, green and blue, we approximate it using magenta, a mixture of blue and red.
Re:So essentially... (Score:2, Informative)
Then the material is a phosphor.
The photons from the light source excite electrons in the material to a higher energy level (skipping at least one level); when an electron drops back down, it doesn't return all the way to its original level in one step. Since the energy absorbed on the way up is not the same as the energy released on the way down, the photon produced has a different frequency (color) than the photon from the light source.
The second drop of the electron, back to the original level, also releases a photon, which would be a third colour.
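The arithmetic behind the two-step drop is just E = hc / lambda, i.e. wavelength in nm is roughly 1239.84 / energy in eV. The energy values below are hypothetical; only the conversion constant is real physics.

```python
# Back-of-envelope photon energies for a hypothetical two-step relaxation.
# Planck's constant times the speed of light, expressed in eV*nm:
HC_EV_NM = 1239.84

def wavelength_nm(energy_ev):
    """Photon wavelength in nm from its energy in eV (lambda = hc/E)."""
    return HC_EV_NM / energy_ev

absorbed = 3.4                      # e.g. a ~365 nm blacklight UV photon
first_drop, second_drop = 2.3, 1.1  # made-up step energies; they sum to 3.4

print(round(wavelength_nm(absorbed)))      # ~365 nm (ultraviolet, invisible)
print(round(wavelength_nm(first_drop)))    # ~539 nm (green)
print(round(wavelength_nm(second_drop)))   # ~1127 nm (infrared, invisible)
```

Energy is conserved across the two drops, but each emitted photon has a lower energy, hence longer wavelength, than the one absorbed; the "third colour" may even land outside the visible range, as here.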
Re:yeay four sensors (Score:4, Informative)
So when you print to your eight colour inkjet, what file format is your image stored in that has eight colour channels?
You don't seem to understand the purpose of the colours or how colour is managed in a workflow. A file stored on your computer has a certain gamut; if not specified, that gamut is assumed to be sRGB. Your printer also has a certain gamut, a function of the inks, the colours it can print, and the paper printed on. Colour management takes care of ensuring that what you see on your screen will be reproduced on the printer, provided the printer is physically capable of printing the colours in the gamut.
This is quite a common problem, for instance, with a CMYK printer that is unable to print any of the primary colours shown as red, green and blue on the monitor. The result is a printer that prints a subset of the colours a screen can display, but at the same time can print outside the gamut of your monitor too.
You don't need a file with 8 primary colours to take advantage of the really wide gamuts 8 colour printers can print; you just need maths on your side. The ProPhotoRGB colour space works around this by defining imaginary green and blue primaries: chromaticities that don't correspond to any physically realizable light. Using red, green and blue channel values you can then encode, for instance, a colour that *almost* represents a pure cyan.
This is something that many photographers who print images already do. I think even the latest Photoshop comes set up out of the box to import raw camera files using ProPhotoRGB as the working colour space.
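The "maths on your side" point can be made numerical. The matrices below are the commonly published linear ProPhotoRGB-to-XYZ and XYZ-to-linear-sRGB conversions; I'm deliberately ignoring chromatic adaptation (ProPhoto uses a D50 white, sRGB D65) and gamma, so treat the numbers as illustrative only.

```python
# Commonly published conversion matrices (approximate; chromatic
# adaptation between D50 and D65 is ignored here for simplicity).
PROPHOTO_TO_XYZ = [
    [0.7977, 0.1352, 0.0313],
    [0.2880, 0.7119, 0.0001],
    [0.0000, 0.0000, 0.8249],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def matvec(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# A fully saturated cyan encoded in ProPhotoRGB...
prophoto_cyan = [0.0, 1.0, 1.0]
srgb = matvec(XYZ_TO_SRGB, matvec(PROPHOTO_TO_XYZ, prophoto_cyan))

# ...comes out with a *negative* red component in sRGB: it lies outside
# the sRGB gamut, which is what the imaginary primaries buy you.
print([round(c, 3) for c in srgb])
```

A negative (or greater-than-one) channel value is the tell-tale sign of an out-of-gamut colour; ProPhotoRGB can store it in-range precisely because its own primaries are imaginary.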
Besides, the vast, vast majority of people don't colour calibrate their monitors OR printers. Unless you do that regularly all the extra colour channels in the world aren't going to help you.
You don't know photographers very well, do you? The vast majority of amateur, and all professional, photographers I've ever met calibrate their screens. Printer calibration is often not needed, as the vast majority of photographers I know outsource their printing to someone else, and that someone else will typically provide them with the colour profile from their printer's last calibration to ensure accurate results can be obtained. Pretty much every printing company will do this for you, even cheap mass production ones like Snapfish.
Re:yeay four sensors (Score:4, Informative)
You don't seem to know what we're talking about. Let me quote the OP:
"I've been hoping for 4-sensor cameras for ages. People only have three color sensors, but what those colors are vary a bit from person to person, and capturing 4 colors stands a better chance of getting images that look good for everyone."
Yes, more inks in your printer help it reproduce the RGB values that you capture with your camera, save in your files, display on your screen, and send to the printer. Just like in the example I gave, the K channel in CMYK helps make up for deficiencies in the mixing properties of the C, M and Y that don't let you make a proper black by mixing. Extra ink won't do squat to match extra colour information from a theoretical extra colour sensor in the camera though, because everything in between is RGB.
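The K-channel point above can be made concrete with the textbook naive RGB-to-CMYK split. Real printer drivers use profiled, ink-specific conversions; this is only the idealized arithmetic showing how black ink takes over the work the coloured inks can't do.

```python
# Naive textbook RGB -> CMYK conversion with black (K) extraction.
# Not what a real driver does, but it shows the role of the K channel.
def rgb_to_cmyk(r, g, b):
    """r, g, b in [0, 1]; returns (c, m, y, k) in [0, 1]."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black: no coloured ink at all
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(0.0, 0.0, 0.0))   # -> (0.0, 0.0, 0.0, 1.0): K does all the work
print(rgb_to_cmyk(1.0, 0.0, 0.0))   # -> (0.0, 1.0, 1.0, 0.0): red = magenta + yellow
print(rgb_to_cmyk(0.5, 0.5, 0.5))   # -> (0.0, 0.0, 0.0, 0.5): grey = K only
```

Note the input is still three-channel RGB; the fourth ink channel is derived from it, not from any extra sensor data, which is exactly the point being argued.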
Yes, actually, I know lots of photographers. I calibrate my screen, and I use a printer I chose specifically because they do a good job of frequent calibration. Most professional photographers do. But if you haven't noticed, with the availability of digital cameras a LOT of people took up photography. Hardware screen calibrators are still a niche item, nowhere near as popular as cameras. In particular, Panasonic doesn't make any still cameras that are likely to be used extensively by professionals, so it's likely that even fewer people who shoot Panasonic would calibrate their equipment.
Re:So essentially... (Score:5, Informative)
Yup - this is fluorescence.
It is worth noting that a related term is phosphorescence, which is what most people think of when they think of phosphors. For the benefit of those reading, the two are basically the same phenomenon on different timescales.
When light hits a fluorescent object, the object absorbs the light and re-emits it. The re-emitted light has a different spectrum than the absorbed light. The re-emitted light is also emitted AFTER the light is absorbed. In most cases it is emitted almost instantaneously, and this is called fluorescence. However, some materials take much longer to emit the absorbed energy as light, and this is called phosphorescence.
So, the T-shirt that lights up under a blacklight is exhibiting fluorescence. The watch hands that continue to glow 30 seconds after going from daylight to darkness are exhibiting phosphorescence. They're the exact same thing, but with different dynamics. Both involve electrons absorbing energy and releasing it, but with phosphorescence the electrons get stuck in metastable states (read Wikipedia for a decent explanation; a full one requires a bit more quantum physics than I've mastered).
Re:yeay four sensors (Score:5, Informative)
I'm not complaining about anything. I'm replying to your erroneous assertion (you DID read the whole thread before replying, right?) that the existence of printers with eight inks somehow means they'll be able to reproduce data from a hypothetical four colour channel camera sensor.
I do like your fake quotes though. Please indicate where I said "there's no printer with 4 colours." What I DID say was "Too bad you're displaying them on a screen or printing them with a process that only uses three colours." If you bothered to understand what you're talking about, or even read my comments, you'd realize that the process is indeed three colour. Even if you imagine a four colour camera sensor, the file you store the data in is three colour channel, the software you use to edit it is three colour channel, the screen you show it on is three channel and the data you send to the printer driver is three channel. IF you could somehow send the four channel data to the printer you might be able to reproduce some extra colours (which the vast majority of humanity probably wouldn't be able to see anyway), but probably not very well since all those extra inks are formulated specifically to help reproduce RGB.