Quantum Film Might Replace CMOS Sensors

An anonymous reader writes "Quantum film could replace conventional CMOS image sensors in digital cameras and is four times more sensitive than photographic film. The film, which uses embedded quantum dots instead of the silver grains of photographic film, can image scenes at higher pixel resolutions. While the technology has potential for use in mobile phones, conventional digital cameras would also gain much higher resolution sensors by using quantum film material." The original article at EE Times (note: obnoxious interstitial ad) adds slightly more detail.
  • by FlyingBishop ( 1293238 ) on Monday March 22, 2010 @07:31PM (#31577296)

    Near as I can tell we've exceeded the useful range of pixel density increases for all but the most high-powered applications, so there's no reason to look for better resolution.

  • by MobileTatsu-NJG ( 946591 ) on Monday March 22, 2010 @07:35PM (#31577334)

    Couldn't one lead to the other? Would averaging 4 noisy pixels give you a better light sensitivity than just having the one?

  • by schon ( 31600 ) on Monday March 22, 2010 @07:38PM (#31577362)

    You don't see any market for smaller cameras?

    It's not about smaller cameras - when your pixels are smaller than individual photons (as is the case now), making them smaller only increases the "noise" part of the S/N ratio.

  • by forkazoo ( 138186 ) <wrosecrans@@@gmail...com> on Monday March 22, 2010 @07:57PM (#31577520) Homepage

    Couldn't one lead to the other? Would averaging 4 noisy pixels give you a better light sensitivity than just having the one?

    To a certain extent, yes. But, there is a certain minimum overhead for every pixel. The more pixels you cram onto a sensor, the more space on the sensor is dedicated to overhead instead of picking up light. Consequently, there are real limits to how much resolution you would want to have on a sensor.
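    A minimal sketch of that averaging idea (hypothetical numbers: a flat scene with Gaussian read noise of sigma = 10; binning 2x2 blocks halves the noise, the sqrt(4) from averaging four samples, at the cost of 4x resolution):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Flat 'scene' plus independent per-pixel noise (illustrative values).
    true_signal = 100.0
    noisy = true_signal + rng.normal(0.0, 10.0, size=(512, 512))

    # Average 2x2 blocks: four samples per output pixel, so the noise
    # standard deviation drops by sqrt(4) = 2, while resolution drops 4x.
    binned = noisy.reshape(256, 2, 256, 2).mean(axis=(1, 3))

    print(f"per-pixel noise: {noisy.std():.2f}")   # ~10
    print(f"binned noise:    {binned.std():.2f}")  # ~5
    ```

    As the parent notes, this ignores per-pixel overhead, which eats into the gain on a real sensor.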

  • by Zocalo ( 252965 ) on Monday March 22, 2010 @07:58PM (#31577530) Homepage
    The two are closely related: the smaller the pixel's physical dimensions, the fewer photons it can capture for a given exposure time, resulting in a lower S/N ratio. For any given sensor size and technology you need to trade off resolution against ISO performance, so a technology providing a fourfold increase in sensitivity would, for instance, let you (see the sketch after the list):
    1. Quadruple resolution
    2. Quadruple ISO performance (reduction in noise)
    3. Double resolution and double ISO performance
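    A back-of-the-envelope reading of those options (treating "resolution" as pixel count and assuming photons per pixel scale with pixel area; the numbers are illustrative, not from the article):

    ```python
    # Split a hypothetical 4x sensitivity gain between pixel count and
    # signal level; photons per pixel fall as pixels shrink.
    gain = 4.0  # claimed sensitivity improvement (relative)

    options = {
        "quadruple resolution":      4.0,  # relative pixel count
        "quadruple ISO performance": 1.0,
        "double both":               2.0,
    }
    for name, pixels in options.items():
        signal = gain / pixels  # relative signal per pixel vs. today
        print(f"{name:27s} pixels x{pixels:.0f}, signal per pixel x{signal:.0f}")
    ```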
  • by PCM2 ( 4486 ) on Monday March 22, 2010 @07:59PM (#31577538) Homepage

    I read a story about this [economist.com] in a recent issue of The Economist. The article focuses more on the other direction -- how quantum dots can be used to enhance LEDs to create more pleasing/efficient/versatile lighting. But it also mentions how they can be used to read light, too; for example, to make better solar panels.

  • by johnlcallaway ( 165670 ) on Monday March 22, 2010 @08:20PM (#31577776)
    Having more pixels is a good thing for anyone who takes photographs. It lets you crop an image to a smaller size and still have enough resolution to print or display something.

    A lot of people just vomit their photos onto Facebook, but many still take the time to do a simple crop/levels/contrast edit. The only people who don't need more megapixels are those that never edit their pictures. And they probably don't care about quality anyway.

    Most cameras can take pictures in all but the lowest light levels; I have taken hand-held pictures around a campfire with the proper lens. In fact, this just moves the problem from dark pictures to blown-out pictures. Increasing sensitivity without being able to stop down the lens or decrease the exposure time is worthless: daytime pictures come out too bright, even if cheap cell phone cameras no longer need a flash for indoor shots.

    One issue not mentioned is electronic noise. The closer together you bring elements on a CCD, and the longer the exposure, the more noise is generated. Poor lenses, very small CCDs, and poor camera software are the major causes of poor quality in small cameras. My wife and I have a 14MP and an older 7MP dSLR camera. The 14MP not only provides the ability to crop, but its noise levels are significantly lower, probably due to improved software and electronics. Given the choice, I will grab the 14MP. The images take up more disk space, but it is worth it when it comes time to edit.

    It is not an improvement to make a photo-detector smaller and increase the resolution if it can't work in bright sunlight or has a lot of noise at low light levels.

    So for now .. I'll mark the article interesting until someone actually produces a working camera that can be tested against current cameras in the same price range.....
  • by Bigjeff5 ( 1143585 ) on Monday March 22, 2010 @08:26PM (#31577836)

    There is a physics problem when your image sensor is too small - photons have size and mass, and there is a point at which you cannot collect enough light to take a good picture.

    That's why expensive cameras have larger image sensors - they aren't packing more pixels per square inch, they are actually packing fewer pixels per square inch. A high-end 10-megapixel camera will have an image sensor that is 10x bigger than a pocket-sized 10-megapixel camera's, and it will take phenomenally better pictures.

    This is the source of the GP's confusion about what the summary means - is "quantum film" more sensitive to light? Or are they simply able to pack more sensors in a smaller area? If they are actually able to collect accurate color information from fewer photons (i.e. more sensitive to light), then you can shrink the size of high end image sensors and still maintain quality. If it simply allows them to pack more pixels onto a sensor without being able to collect accurate color data from fewer photons, then quantum film is absolutely worthless. It offers no benefit to image quality in that case: even if they crank a camera up to 30 megapixels, it will still look like shit.

  • by ceoyoyo ( 59147 ) on Monday March 22, 2010 @08:27PM (#31577842)

    No, it doesn't. The lens system of the camera only has a certain resolving ability. Once you pass that point, you can make the sensor as high resolution as you want and you're just wasting your time because the lens isn't passing information at that level of detail anyway. Basically, you're measuring blur more and more finely.

    Take a picture from anything less than a high end SLR or medium format camera and zoom in until you're actually looking at one image pixel to one screen pixel. Now tell me how good the image looks. Pretty crappy, hey? That's because the lens isn't capable of producing a decent image at even the resolution of the current sensor, never mind a better one.
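    A rough way to put numbers on the parent's point: the diffraction-limited spot (Airy disk) diameter is about 2.44 * wavelength * f-number, which is a standard optics result. Compare it to typical pixel pitches:

    ```python
    # Airy disk diameter d ~= 2.44 * lambda * N, for green light.
    wavelength_m = 550e-9
    for f_number in (2.8, 8, 16):
        spot_um = 2.44 * wavelength_m * f_number * 1e6
        print(f"f/{f_number}: diffraction spot ~ {spot_um:.1f} um")
    # Compact-camera pixels are ~1.5 um, well below even the f/2.8 spot,
    # so extra pixels mostly sample blur more finely.
    ```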

  • by dgatwood ( 11270 ) on Monday March 22, 2010 @08:44PM (#31577996) Homepage Journal

    This is about the laws of physics. I'm sure somebody will correct me if I'm not explaining this very well, but...

    There's a limit to how precisely a lens can focus light. Now, in theory, as the aperture gets smaller, the diffusion decreases, so you might think that the small lenses would result in a more precise image than larger ones. However, with those smaller lenses come smaller image sensors, which means that even if the lens can focus light to a smaller point, the pixels are also smaller, thus canceling out much of this improvement.

    The bigger problem is that the smaller the lens, the greater the impact of even tiny lens aberrations on the resolving power of the lens. A speck of dust on a 1.5mm lens makes a huge difference, whereas it can be largely ignored on a lens with a 72mm diameter.

    Also, as resolution increases, light gathering decreases. That's pretty fundamental to the laws of physics. Think about the bucket analogy. You have four square buckets measuring 1 foot by 1 foot. You place them side by side during a thunderstorm. You get another bucket that is two feet on each side. You place it beside the others. The same amount of rain (approximately) falls onto the four small buckets as the single large bucket, thus the large bucket has four times the amount of water in it that any one of the smaller buckets does.

    The same principle applies to pixels. All else being equal, resolution and light gathering are inversely proportional. Small cameras are already hampered pretty badly by light gathering because of their small lenses. Increasing the resolution just makes this worse. I can tell the difference in noise between my old 6MP DSLR and my 10MP DSLR. I can't imagine what 20MP in a camera phone would look like. :-D
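    The bucket analogy in (made-up) numbers: for a fixed sensor area, exposure, and lens, photons per pixel scale with pixel area, i.e. inversely with pixel count:

    ```python
    # Illustrative only: full-frame sensor area, invented photon flux.
    sensor_area_mm2 = 24 * 36
    photon_flux = 1e7  # photons per mm^2 during the exposure (made up)

    for megapixels in (6, 10, 20):
        pixel_area = sensor_area_mm2 / (megapixels * 1e6)  # mm^2 per pixel
        photons = photon_flux * pixel_area
        print(f"{megapixels:2d} MP: ~{photons:.0f} photons per pixel")
    ```

    Since shot noise goes as the square root of the photon count, the 20 MP pixel is noticeably noisier than the 6 MP pixel before any electronics enter the picture.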

    I think the real question should not be whether we can make smaller cameras, but rather whether we can make existing small cameras better by improving the light gathering. This technology might do that---whether it will work better than some of the newer CMOS sensor designs that already move the light-gathering material to the front remains to be seen---but at some point, making things smaller just means that they're easier to lose. I think we're at that point, if not past it....

  • by Anonymous Coward on Monday March 22, 2010 @09:09PM (#31578182)
    To paraphrase Nyeerrmm for laymen: stopping down the aperture (a higher f-stop) gives you more depth of focus, or depth of field, meaning the plane of focus is deeper. Put most simply, the higher f-stop gives you MORE things in focus, so you think it's sharper. But it doesn't mean the things that are in focus are any sharper. More things are sharp, but any one spot is less sharp.
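    For those who want numbers: the usual near-field approximation (valid when the subject is much closer than the hyperfocal distance) is DoF ~= 2*N*c*u^2/f^2, with N the f-number, c the circle of confusion, u the subject distance, and f the focal length. Illustrative values:

    ```python
    # Depth of field vs. f-number for a hypothetical 50mm lens on full frame.
    f = 0.050   # focal length, m
    c = 30e-6   # circle of confusion, m (common full-frame value)
    u = 3.0     # subject distance, m

    for N in (2.8, 8, 16):
        dof = 2 * N * c * u**2 / f**2
        print(f"f/{N}: DoF ~ {dof:.1f} m")
    ```

    Stopping down from f/2.8 to f/16 grows the in-focus zone from roughly 0.6 m to 3.5 m here, while diffraction (see the Airy disk sketch above) softens every point within it.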
  • by Anonymous Coward on Monday March 22, 2010 @09:17PM (#31578268)

    Actually, photons have neither size nor mass

    http://en.wikipedia.org/wiki/Photon

    Good point about sensor size. A post below touches on diffraction problems, and there are other problems with small sensors such as S/N, being more demanding on lenses' resolution, etc.

  • by santax ( 1541065 ) on Monday March 22, 2010 @10:01PM (#31578588)
    Must have done something wrong. I replied to you but don't see my post coming up in my posted messages. Ah well, again. I'm Dutch. A lens is a piece of glass nicely cut, and when you combine a couple of them you get an objective. At least, we make that distinction here in the Netherlands; I wasn't aware that "lens" has a broader meaning in English.

    Let me tell you why the glass matters. As I said, I don't need more pixels. I work in low-light environments and I actually do know a thing or two about this subject. I even have to use manual focus because the autofocus is way too slow in bad light, even on lenses with USM (another improvement more important than more pixels). I say 700mm because I need 700mm; 200mm doesn't cut it on most stages, at least not the venues where I work. Now of course I know these things are huge. Have you seen the 1200mm from Canon? I did once at an exhibition. 150,000 dollars, needs a trailer to move... I'm pretty sure there is a way to make those things smaller, less sensitive to dust, and cheaper.

    But really, don't call people bluffers when you then have to tell them that you need a full frame for L-glass. L-glass works perfectly on any body, even the cheap D400/D350's. It is the other way around: try to fit a cheap lens on a full frame and you start seeing black edges. Talking about bluff...
  • by dgatwood ( 11270 ) on Monday March 22, 2010 @10:12PM (#31578664) Homepage Journal

    Err... diffraction, not diffusion.

    Also, my second paragraph was backwards in that the diffraction increases as the aperture gets smaller. The smaller sensor thus compounds the problem further.

  • by Anonymous Coward on Monday March 22, 2010 @10:19PM (#31578712)

    Photons don't have mass. They do, however, have momentum (p = h/lambda; note: deriving a mass from this momentum using p = mv is a common physics mistake and makes no physical sense). They also don't strictly have size. If you're referring to fitting them through things and collecting them with objects, treating light as waves generally works, with a photon just representing a quantization of the energy contained in the wave. If you were to try to characterize photon size, it would vary with the wavelength, though for imaging purposes this makes no sense, as the aperture size doesn't merely determine whether light can get through: it determines the angular resolution of the imaging system.
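    A quick worked number for that momentum relation (physical constants only, nothing from the thread):

    ```python
    # Photon momentum p = h / lambda; no rest mass anywhere in this.
    h = 6.626e-34  # Planck constant, J*s
    for wavelength_nm in (450, 550, 650):
        p = h / (wavelength_nm * 1e-9)
        print(f"{wavelength_nm} nm: p ~ {p:.2e} kg*m/s")
    ```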

  • by AliasMarlowe ( 1042386 ) on Tuesday March 23, 2010 @03:24AM (#31580284) Journal
    Image quality is limited by several factors. The sensitivity of the detector is only one, and is the only one that quantum dots can address. In this instance, the sensitivity increases only by a moderate amount, so the improvement in signal level (or reduction in pixel size preserving signal level) is also moderate.

    Increasing the signal level will improve the S/N ratio for readout noise, assuming the readout is comparable to that available in today's cameras. Readout noise has been aggressively tackled by camera manufacturers, and is already very low. The principal source of noise in conventional images is shot noise (photon noise), and this is unrelated to the detector sensitivity. Shot noise depends ONLY on the number of photons arriving at each pixel, and is the reason that darker areas of digital images tend to be noisier, or require information-destroying denoising operations in postprocessing. Other forms of noise, such as dark current and dark noise, are relevant only in special applications, such as astrophotography.

    Shot noise is intrinsic in the statistics of photon fluxes. The number of photons arriving at a pixel from a radiance which is "uniform" in time and space is Poissonian: the standard deviation is the square root of the mean. The signal to noise ratio is the mean divided by its square root, which is the square root of the number of photons which arrived in that sampling interval (exposure). If 10,000 photons are expected to arrive at a pixel in a given exposure time, then the shot noise will be about 1% when comparing multiple "identical" exposures of that pixel. Changing the detector sensitivity raises or lowers the readout signal level, but does not change the signal to noise ratio in the signal from shot noise.
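    The Poisson argument is easy to check numerically (simulation with invented photon counts):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Photon arrivals are Poisson, so SNR = mean/std = sqrt(N); a gain
    # applied after arrival (detector sensitivity) scales mean and std
    # together and leaves this ratio unchanged.
    for mean_photons in (100, 10_000):
        samples = rng.poisson(mean_photons, size=100_000)
        snr = samples.mean() / samples.std()
        print(f"N={mean_photons}: SNR ~ {snr:.1f} (sqrt(N) = {mean_photons**0.5:.1f})")
    ```

    The N = 10,000 case reproduces the ~1% noise figure quoted above.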

    Reducing the shot noise requires more photons arriving at each pixel. Getting more photons per pixel requires either (i) bigger pixels on the detector, (ii) better illumination of the subject, or (iii) better optics. This is why professional cameras have larger pixels than prosumer cameras, which tend to have larger pixels than pocket cameras, phone cameras, etc. Better lenses also help (but large apertures also affect depth of field). For given lighting conditions and optics, bigger pixels result in lower image noise, unless the readout circuitry really sucks.

    So, quantum dots will result in a higher signal level than conventional CCD/CMOS/CID detectors under similar imaging circumstances. The improvement is probably limited to improving the ratio of signal to readout noise, which is already pretty good. Quantum dots will not magically increase the number of photons arriving at the detector, and if used to reduce pixel size, will result in worse signal to noise ratio for the shot noise (biggest noise problem in most photography). Result: not a dramatic improvement, although detectors giving horribly noisy images (needing heavy destructive denoising) may get even smaller.

    Just send the bums some money, so they'll shut up. The potential of quantum dots in imaging sensors has been known for years.
  • by hvdh ( 1447205 ) on Tuesday March 23, 2010 @06:42AM (#31581154)

    Reducing the shot noise requires more photons arriving at each pixel. Getting more photons per pixel requires either (i) bigger pixels on the detector, (ii) better illumination of the subject, or (iii) better optics.

    (iv) Increase photon capture efficiency.

    The article says that in conventional CMOS sensors, three quarters of the incident photons are either absorbed by a metal layer or hit a spot between photo diodes, not contributing to photo diode charge and read-out signal. The new coating can convert those photons into charge, increasing the signal by a factor of four without changing pixel size, optics or illumination. Noise will be lower.

    If it works as advertised, this is a good thing.
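    If the factor-of-four capture claim holds, the shot-noise arithmetic from the earlier post actually works in its favor: the detected photon count N quadruples, so SNR = sqrt(N) doubles. A one-liner check with illustrative counts:

    ```python
    # Detected photons before/after a hypothetical 4x capture-efficiency gain.
    for n in (400, 1600):
        print(f"N={n}: shot-noise SNR = {n ** 0.5:.0f}")  # 20 -> 40
    ```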

  • by Khyber ( 864651 ) <techkitsune@gmail.com> on Tuesday March 23, 2010 @06:45AM (#31581168) Homepage Journal

    Actually, the smaller and more sensitive quantum dots by themselves would be better for dark imaging, mainly because with so many more usable levels of sensitivity you could assign a noise level below a certain threshold to render as pure black and then work up from there. What would matter would be the degree of sensitivity these quantum dots have, and then the subsequent software that processes the sensor data.

  • by silentcoder ( 1241496 ) on Tuesday March 23, 2010 @07:31AM (#31581444)

    There's another side to it as well.
    With the 10MP you can crop a third of the picture away and still have more pixels left than the 6MP captured in total. This is something art photographers do all the time. Cropping is often crucial for getting the composition you want and getting rid of details that don't add anything to the focus of the picture. The more data you HAVE, the more you can do with it.
    This is also why art photographers take pictures in RAW mode rather than JPEG. What's lost with JPEG's lossy compression is data we could USE to make the picture better later. Photoshopping pix to change what they look like is not all that impressive to me, but adjusting the light levels so somebody's gorgeous blue eyes come out a bit better... that's art.

  • by AliasMarlowe ( 1042386 ) on Tuesday March 23, 2010 @07:54AM (#31581584) Journal

    The article says that in conventional CMOS sensors, three quarters of the incident photons are either absorbed by a metal layer or hit a spot between photo diodes, not contributing to photo diode charge and read-out signal.

    You are referring to areal efficiency or "fill factor" of detectors. CMOS had low areal efficiency some years back, but no longer. Both CMOS and CCD detectors are almost always equipped with integrated microlenses nowadays, which direct almost all of the incident light on the whole detector onto the active photosites. Some light is still lost at boundaries between the lenses, and due to the efficiency of the lenses. The ineffective regions between photosites receive hardly any light at all. Here's a quote from the Wikipedia article on CCDs:

    Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design.

    Integration of microlenses onto the chip is a major reason why CMOS detectors have caught up with CCD detectors in image quality. Compared to CCDs, a smaller fraction of a CMOS detector consists of photosites. Both benefit from provision of microlenses, but CMOS benefits rather more, and reaches almost the same areal efficiency as a comparable CCD. With less than 10% of incident photons lost, there is only a limited scope for improvement, by quantum dots or other methods. Those claims in TFA were reminiscent of fresh bullshit.

    Good CCDs can exceed 85% quantum efficiency at some wavelengths, such as the Sony ICX285, which is typically used in industrial devices. However, efficiencies are lower at other wavelengths, and the CCD and CMOS detectors used in consumer devices often peak below 60% quantum efficiency. So there is room for improvement here, but not nearly as spectacular as the claims of TFA.

    Keep in mind, as I mentioned in the earlier post, that increases in detector sensitivity (through areal efficiency or quantum efficiency) will elevate the signal level, but will not affect the shot noise ratio in the signal. For that, you need more incident photons, through bigger pixels and/or better subject illumination and/or bigger lens apertures and/or longer exposure times. TFA smells a bit like marketing hype. Quantum dots may lead to improvements in detector fabrication & price, but not so much in image quality...
