
Quantum Film Might Replace CMOS Sensors

An anonymous reader writes "Quantum film could replace conventional CMOS image sensors in digital cameras and is four times more sensitive than photographic film. The film, which uses embedded quantum dots instead of silver grains like photographic film, can image scenes at higher pixel resolutions. While the technology has potential for use in mobile phones, conventional digital cameras would also gain much higher resolution sensors by using quantum film material." The original (note: obnoxious interstitial ad) article at EE Times adds slightly more detail.
  • by lastomega7 ( 1060398 ) on Monday March 22, 2010 @07:27PM (#31577242)
    There seems to be a sensationalist mix-up between the two terms... is this technology going to bring about more sensitive pixels (i.e. higher ISO capabilities), just more pixels on the sensor, or both?
    • by MozeeToby ( 1163751 ) on Monday March 22, 2010 @07:28PM (#31577256)

      Also, resolution doesn't equal picture quality. I'd rather have a good lens system than a 20 Megapixel sensor.

      • by peragrin ( 659227 ) on Monday March 22, 2010 @09:43PM (#31578460)

        Personally, I would rather have a good lens system and a 20 megapixel sensor.

        • by AliasMarlowe ( 1042386 ) on Tuesday March 23, 2010 @03:24AM (#31580284) Journal
          Image quality is limited by several factors. The sensitivity of the detector is only one, and is the only one that quantum dots can address. In this instance, the sensitivity increases only by a moderate amount, so the improvement in signal level (or reduction in pixel size preserving signal level) is also moderate.

          Increasing the signal level will improve the S/N ratio for readout noise, assuming the readout is comparable to that available in today's cameras. Readout noise has been aggressively tackled by camera manufacturers, and is already very low. The principal source of noise in conventional images is shot noise (photon noise), and this is unrelated to the detector sensitivity. Shot noise depends ONLY on the number of photons arriving at each pixel, and is the reason that darker areas of digital images tend to be noisier, or require information-destroying denoising operations in postprocessing. Other forms of noise, such as dark current and dark noise, are relevant only in special applications, such as astrophotography.

          Shot noise is intrinsic in the statistics of photon fluxes. The number of photons arriving at a pixel from a radiance which is "uniform" in time and space is Poissonian: the standard deviation is the square root of the mean. The signal to noise ratio is the mean divided by its square root, which is the square root of the number of photons which arrived in that sampling interval (exposure). If 10,000 photons are expected to arrive at a pixel in a given exposure time, then the shot noise will be about 1% when comparing multiple "identical" exposures of that pixel. Changing the detector sensitivity raises or lowers the readout signal level, but does not change the signal to noise ratio in the signal from shot noise.
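          To put rough numbers on that (a toy sketch in Python; the photon count is illustrative, not anything from TFA): simulate Poisson arrivals and note that multiplying the readout by a sensitivity/gain factor leaves the relative noise untouched.

            import numpy as np

            rng = np.random.default_rng(0)
            mean_photons = 10_000                         # expected photons per pixel per exposure
            samples = rng.poisson(mean_photons, 100_000)  # many "identical" exposures of one pixel

            snr = samples.mean() / samples.std()
            print(f"shot-noise SNR ~ {snr:.0f} (theory: sqrt(N) = {mean_photons ** 0.5:.0f})")

            # A "more sensitive" detector just scales the readout; relative noise is unchanged.
            gain = 4.0
            scaled = samples * gain
            print(f"after 4x gain: SNR ~ {scaled.mean() / scaled.std():.0f}")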

          Reducing the shot noise requires more photons arriving at each pixel. Getting more photons per pixel requires either (i) bigger pixels on the detector, (ii) better illumination of the subject, or (iii) better optics. This is why professional cameras have larger pixels than prosumer cameras, which tend to have larger pixels than pocket cameras, phone cameras, etc. Better lenses also help (but large apertures also affect depth of field). For given lighting conditions and optics, bigger pixels result in lower image noise, unless the readout circuitry really sucks.

          So, quantum dots will result in a higher signal level than conventional CCD/CMOS/CID detectors under similar imaging circumstances. The improvement is probably limited to improving the ratio of signal to readout noise, which is already pretty good. Quantum dots will not magically increase the number of photons arriving at the detector, and if used to reduce pixel size, will result in worse signal to noise ratio for the shot noise (biggest noise problem in most photography). Result: not a dramatic improvement, although detectors giving horribly noisy images (needing heavy destructive denoising) may get even smaller.

          Just send the bums some money, so they'll shut up. The potential of quantum dots in imaging sensors has been known for years.
          • Re: (Score:3, Informative)

            by hvdh ( 1447205 )

            Reducing the shot noise requires more photons arriving at each pixel. Getting more photons per pixel requires either (i) bigger pixels on the detector, (ii) better illumination of the subject, or (iii) better optics.

            (iv) Increase photon capture efficiency.

            The article says that in conventional CMOS sensors, three quarters of the incident photons are either absorbed by a metal layer or hit a spot between photo diodes, not contributing to photo diode charge and read-out signal. The new coating can convert those photons into charge, increasing the signal by a factor of four without changing pixel size, optics or illumination. Noise will be lower.

            If it works as advertised, this is a good thing.
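            A rough sketch of why that matters (illustrative numbers; the 4x capture figure is the article's claim, the efficiencies are my assumption): catching four times as many of the photons that actually arrive improves the shot-noise SNR by sqrt(4) = 2.

              import math

              incident = 10_000          # photons hitting one pixel during the exposure (illustrative)
              qe_old, qe_new = 0.2, 0.8  # assumed effective capture efficiencies, old vs. new

              for label, qe in (("conventional", qe_old), ("quantum film", qe_new)):
                  detected = incident * qe
                  snr = math.sqrt(detected)  # Poisson statistics: SNR = sqrt(detected photons)
                  print(f"{label:>12}: {detected:.0f} photons detected, shot-noise SNR ~ {snr:.0f}")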

            • by AliasMarlowe ( 1042386 ) on Tuesday March 23, 2010 @07:54AM (#31581584) Journal

              The article says that in conventional CMOS sensors, three quarters of the incident photons are either absorbed by a metal layer or hit a spot between photo diodes, not contributing to photo diode charge and read-out signal.

              You are referring to areal efficiency or "fill factor" of detectors. CMOS had low areal efficiency some years back, but no longer. Both CMOS and CCD detectors are almost always equipped with integrated microlenses nowadays, which direct almost all of the incident light on the whole detector onto the active photosites. Some light is still lost at boundaries between the lenses, and due to the efficiency of the lenses. The ineffective regions between photosites receive hardly any light at all. Here's a quote from the Wikipedia article on CCDs:

              Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design.

              Integration of microlenses onto the chip is a major reason why CMOS detectors have caught up with CCD detectors in image quality. Compared to CCDs, a smaller fraction of a CMOS detector consists of photosites. Both benefit from provision of microlenses, but CMOS benefits rather more, and reaches almost the same areal efficiency as a comparable CCD. With less than 10% of incident photons lost, there is only a limited scope for improvement, by quantum dots or other methods. Those claims in TFA were reminiscent of fresh bullshit.

              Good CCDs can exceed 85% in quantum efficiency at some wavelengths, such as in the icx285 which is typically used in industrial devices. However, efficiencies are lower at other wavelengths, and CCD and CMOS detectors used in consumer devices often peak at below 60% quantum efficiency. So there is room for improvement here, but not nearly as spectacular as the claims of TFA.

              Keep in mind, as I mentioned in the earlier post, that increases in detector sensitivity (through areal efficiency or quantum efficiency) will elevate the signal level, but will not affect the ratio of shot noise in the signal. For that, you need more incident photons through bigger pixels and/or better subject illumination and/or bigger lens apertures and/or longer exposure times. TFA smells a bit like marketing hype. Quantum dots may lead to improvements in detector fabrication & price, but not so much in image quality...

          • Re: (Score:3, Informative)

            by Khyber ( 864651 )

            Actually, the smaller and more sensitive quantum dots by themselves would be better for dark imaging, mainly because with so many more usable levels of sensitivity you could assign a noise level below a certain threshold to render as pure black and then work up from there. What would matter would be the degree of sensitivity these quantum dots have, and then the subsequent software that processes the sensor data.

    • Re: (Score:2, Informative)

      Near as I can tell we've exceeded the useful range of pixel density increases for all but the most high-powered applications, so there's no reason to look for better resolution.

      • You don't see any market for smaller cameras?

        • Re: (Score:3, Informative)

          by schon ( 31600 )

          You don't see any market for smaller cameras?

          It's not about smaller cameras - when your pixels are smaller than individual photons (as is the case now), making them smaller only increases the "noise" part of the s/n ratio.

          • > It's not about smaller cameras - when your pixels are smaller than
            > individual photons (as is the case now),

            I don't believe that is true. The smallest pixel pitch I can find is 2300 nanometers.

            > making them smaller only increases the "noise" part of the s/n ratio.

            Some people may choose to use this technology to make smaller cameras with performance equal to the smallest useful ones currently available: I'm sure there is a market. Others will use it to make "normal" cameras with improved performance.

        • by Bigjeff5 ( 1143585 ) on Monday March 22, 2010 @08:26PM (#31577836)

          There is a physics problem when your image sensor is too small - photons have size and mass, and there is a point at which you cannot collect enough light to take a good picture.

          That's why expensive cameras have larger image sensors - they aren't packing more pixels per square inch, they are actually packing fewer pixels per square inch. A high-end 10 megapixel camera will have an image sensor that is 10x bigger than a pocket-sized 10 megapixel camera, and it will take phenomenally better pictures.

          This is the source of the GP's confusion about what the summary means - is "quantum film" more sensitive to light? Or are they simply able to pack more sensors in a smaller area? If they are actually able to collect accurate color information from fewer photons (i.e. more sensitive to light), then you can shrink the size of high end image sensors and still maintain quality. If it simply allows them to pack more pixels onto a sensor without being able to collect accurate color data with fewer photons, then quantum film is absolutely worthless. It offers no benefit to the quality of images in that case, even if they can crank a camera up to 30 megapixels it will still look like shit.
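          For a rough sense of scale (my own illustrative sensor dimensions, not figures from the summary): same pixel count, very different pixel size.

            # Approximate pixel pitch for two hypothetical 10 MP sensors.
            sensors = {
                "APS-C DSLR (23.6 x 15.7 mm)": (23.6, 15.7),
                "compact 1/2.3-inch (6.17 x 4.55 mm)": (6.17, 4.55),
            }
            megapixels = 10e6

            for name, (w_mm, h_mm) in sensors.items():
                area_um2 = (w_mm * 1000.0) * (h_mm * 1000.0)  # sensor area in square microns
                pitch_um = (area_um2 / megapixels) ** 0.5     # rough pixel pitch
                print(f"{name}: ~{pitch_um:.1f} um pixels, ~{area_um2 / 1e6:.0f} mm^2 total")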

          • by Jay L ( 74152 ) <jay+slash @ j ay.fm> on Monday March 22, 2010 @09:16PM (#31578256) Homepage

            Is "quantum film" more sensitive to light? Or are they simply able to pack more sensors in a smaller area?

            That's the trouble with it - you can know its sensitivity or its resolution, but not both, and the act of measuring one changes the other.

          • Re: (Score:2, Informative)

            by Anonymous Coward

            Actually, photons have neither size nor mass

            http://en.wikipedia.org/wiki/Photon

            Good point about sensor size. A post below touches on diffraction problems, and there are other problems with small sensors such as S/N, being more demanding on lenses' resolution, etc.

          • photons have size and mass

            Photons do not have mass, though this is somewhat a matter of semantics.

          • Re: (Score:2, Informative)

            by Anonymous Coward

            Photons don't have mass. They do, however, have momentum. (p=h/lambda; note: deriving mass from this momentum using p=mv is a common physics mistake and makes no physical sense.) They also don't strictly have size. If you're referring to fitting them through things and collecting them with objects, treating light as waves generally works, with a photon just representing a quantization of the energy contained in the wave. If you were to try to characterize photon size, it would vary with the wavelength of the light.

          • Not only that, high quality cameras will have multiple CCDs.

            A video camera with one 1/4" Sharp CCD is not the same as a camera with three Sony 1/3" CCDs, even if they both deliver 5 megapixel images.

          • There is a physics problem when your image sensor is too small - photons have size and mass

            Photons have mass??

          • If it simply allows them to pack more pixels onto a sensor without being able to collect accurate color data with fewer photons, then quantum film is absolutely worthless.

            Not true. Existing digital cameras have noise, particularly at the higher ISOs. The more readings you take from a "pixel" in the frame, the more you can negate this noise by averaging it out. One way to increase the number of samples is to stack several readings--increasing your ISO level, more or less.

            Another way to increase the number of samples is to scale your resulting pixel array down, so that a pixel and its immediate neighbors get averaged into the same pixel, drowning out more of the noise. So if

        • by dgatwood ( 11270 ) on Monday March 22, 2010 @08:44PM (#31577996) Homepage Journal

          This is about the laws of physics. I'm sure somebody will correct me if I'm not explaining this very well, but...

          There's a limit to how precisely a lens can focus light. Now, in theory, as the aperture gets smaller, the diffusion decreases, so you might think that the small lenses would result in a more precise image than larger ones. However, with those smaller lenses come smaller image sensors, which means that even if the lens can focus light to a smaller point, the pixels are also smaller, thus canceling out much of this improvement.

          The bigger problem is that the smaller the lens, the greater the impact of even tiny lens aberrations on the resolving power of the lens. A speck of dust on a 1.5mm lens makes a huge difference, whereas it can be largely ignored on a lens with a 72mm diameter.

          Also, as resolution increases, light gathering decreases. That's pretty fundamental to the laws of physics. Think about the bucket analogy. You have four square buckets measuring 1 foot by 1 foot. You place them side by side during a thunderstorm. You get another bucket that is two feet on each side. You place it beside the others. The same amount of rain (approximately) falls onto the four small buckets as the single large bucket, thus the large bucket has four times the amount of water in it that any one of the smaller buckets does.

          The same principle applies to pixels. All else being equal, resolution and light gathering are inversely proportional. Small cameras are already hampered pretty badly by light gathering because of their small lenses. Increasing the resolution just makes this worse. I can tell the difference in noise between my old 6MP DSLR and my 10MP DSLR. I can't imagine what 20MP in a camera phone would look like. :-D

          I think the real question should not be whether we can make smaller cameras, but rather whether we can make existing small cameras better by improving the light gathering. This technology might do that---whether it will work better than some of the newer CMOS sensor designs that already move the light-gathering material to the front remains to be seen---but at some point, making things smaller just means that they're easier to lose. I think we're at that point, if not past it....

          • Re: (Score:3, Informative)

            by dgatwood ( 11270 )

            Err... diffraction, not diffusion.

            Also, my second paragraph was backwards in that the diffraction increases as the aperture gets smaller. The smaller sensor thus compounds the problem further.
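            To put rough numbers on the diffraction side (standard Airy-disk estimate; the pixel pitch is my assumption): the blur spot diameter is about 2.44 * lambda * N, which quickly dwarfs the tiny pixels of a small sensor as the lens is stopped down.

              wavelength_um = 0.55   # green light, ~550 nm
              pixel_pitch_um = 1.7   # assumed small-sensor pixel pitch

              for f_number in (2.8, 5.6, 11):
                  airy_um = 2.44 * wavelength_um * f_number  # Airy disk diameter at the sensor
                  print(f"f/{f_number}: blur spot ~ {airy_um:.1f} um "
                        f"(~{airy_um / pixel_pitch_um:.1f} pixels wide)")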

      • Tell that to anyone using a transmission electron microscope. I have friends who dislike digital microscopy because the detail is much lower than with film. While it is quicker and less susceptible to movement problems, you lose most of the detail due to the electrons being far smaller than the CMOS sensor's pixels.

        I really think this jaded "we don't need any more technology" bullshit is just a modern day luddite attitude. It seems to be a fear of being superseded with the technology you currently use. May

        • by ceoyoyo ( 59147 )

          I think a $50,000+ electron microscope would qualify as "the most high-powered applications," particularly in the context of the article, which is talking about cell phone cameras.

      • Re: (Score:3, Informative)

        Having more pixels is a good thing for anyone who takes photographs. It gives you the ability to crop an image to a smaller size and still have enough resolution to print or display something.

        A lot of people just vomit their photos onto Facebook, but many still take the time to do a simple crop/levels/contrast edit. The only people who don't need more megapixels are those that never edit their pictures. And they probably don't care about quality anyway.

        Most cameras can take pictures in all
        • by ceoyoyo ( 59147 ) on Monday March 22, 2010 @08:27PM (#31577842)

          No, it doesn't. The lens system of the camera only has a certain resolving ability. Once you pass that point, you can make the sensor as high resolution as you want and you're just wasting your time because the lens isn't passing information at that level of detail anyway. Basically, you're measuring blur more and more finely.

          Take a picture from anything less than a high end SLR or medium format camera and zoom in until you're actually looking at one image pixel to one screen pixel. Now tell me how good the image looks. Pretty crappy, hey? That's because the lens isn't capable of producing a decent image at even the resolution of the current sensor, never mind a better one.

          • It's not strictly pointless. Oversampling has some uses, namely as a way to filter out random noise and other imperfections caused by the sensor itself. If you can model the random noise, and have many samples for the same atomic datum, you can find ways to still improve the SNR of the data. It still won't let you zoom in to see more detail, but it's not totally useless for all applications.
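            A toy illustration of that idea (purely synthetic numbers, my own example): average 2x2 blocks of noisy samples and the random part drops by roughly sqrt(4) = 2.

              import numpy as np

              rng = np.random.default_rng(1)
              true_signal = 100.0
              noisy = true_signal + rng.normal(0.0, 10.0, size=(512, 512))  # sigma = 10 per sample

              # Downscale by averaging each 2x2 neighborhood into one output pixel.
              binned = noisy.reshape(256, 2, 256, 2).mean(axis=(1, 3))

              print(f"per-pixel noise before: {noisy.std():.1f}")
              print(f"after 2x2 averaging:    {binned.std():.1f}  (~2x lower)")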
        • by farnsworth ( 558449 ) on Monday March 22, 2010 @09:20PM (#31578292)
          What you say is certainly true. But let's say that you have an entry-level slr with a junky $50 lens, and then you suddenly have $500 to spend on your setup. Do you buy a fancier camera or a fancier lens?

          Of course, if money is no object, more of everything will certainly improve things. But practically speaking, the vast majority of folks in the real world would be better off paying more attention to their glass rather than to their silicon.

          A nice lens on a relatively limited camera will take amazing photos. A crappy lens on the best camera will not.
          • Of course, if money is no object, more of everything will certainly improve things.

            Nope. Increasing resolution without first increasing light gathering ability will make the image worse. In fact, most digital cameras would produce better pictures if they decreased the resolution. Manufacturers use a higher pixel density than is useful because megapixels sell: the salesman, your mom, and even you see two cameras, one with 6 MP and one with 12 MP, and assume the 12 MP camera is better. If they're similar in other respects, the one with the smaller pixel density (the 6 MP one) is guaranteed to produce cleaner images.

        • > Having more pixels is a good thing for anyone who sells flash memory.

          Here, I fixed that for you.

          It's true that many photos would be improved by more detail. But it's not always a benefit: just as text is well-represented with a modest number of bits to describe a letter in ASCII, storing sophisticated graphical images of each character is usually quite pointless and actually interferes with getting work done.

    • Re: (Score:3, Informative)

      Couldn't one lead to the other? Would averaging 4 noisy pixels give you a better light sensitivity than just having the one?

    • According to the articles, both.

      • According to the articles, both.

        In particular:

        - It replaces the in-chip photodetector with an on-top-of-chip detector, allowing all the real estate on the chip to be used for the REST of the system rather than reserving most of it for light sensors. That means you can use bigger features (and cheaper processes) - and/or get more pixels by shrinking the features back down a bit.

        - It gives about a 4x sensitivity improvement. (2x because the quantum dots are more sensitive, another 2x because th

        • Nice, sounds pretty sweet.

          The big problem with digital cameras is light sensitivity: we can pack 15 megapixels into a camera phone, but the loss in light sensitivity means you'd have been better off sticking with 1 megapixel; the picture quality will be abysmal. That's why high end cameras use image sensors that are many times larger for the same amount of pixels than cheap consumer models.

    • I've been waiting for technology that would make my computer's bootup sequence more sensitive [wikipedia.org] to my needs.
    • Re: (Score:3, Informative)

      by Zocalo ( 252965 )
      The two are closely related, as the smaller the pixel's physical dimensions, the fewer photons it can capture for a given exposure time, resulting in a lower S/N ratio. For any given sensor size and technology you need to trade off resolution against ISO performance, so a technology providing a fourfold increase in sensitivity would, for instance, let you (rough numerical sketch after the list):
      1. Quadruple resolution
      2. Quadruple ISO performance (reduction in noise)
      3. Double resolution and double ISO performance
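      A rough numerical sketch of those options (the 10 MP / SNR 40 starting point is purely illustrative, not from the article):

        import math

        sensitivity_gain = 4.0              # fourfold gain in detected photons (the article's claim)
        base_pixels, base_snr = 10e6, 40.0  # illustrative starting point

        # Spend the 4x budget on pixel count, on photons per pixel, or split it.
        for name, res_factor in (("all into resolution", 4.0),
                                 ("split evenly", 2.0),
                                 ("all into ISO/noise", 1.0)):
            photons_factor = sensitivity_gain / res_factor  # detected photons per pixel
            pixels = base_pixels * res_factor
            snr = base_snr * math.sqrt(photons_factor)      # shot noise: SNR ~ sqrt(photons)
            print(f"{name:>20}: {pixels / 1e6:.0f} MP, relative SNR ~ {snr:.0f}")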
    • by Khyber ( 864651 )

      Key word: quantum.

      If that doesn't immediately make you think of ULTRA-TINY SCALES, and thus of quantum dots standing in for silver grains (i.e. higher MP in the same sensor size), I guess you should be handing in your geek card.

  • by ZERO1ZERO ( 948669 ) on Monday March 22, 2010 @07:29PM (#31577266)
    Will this lead to large format film cameras being made smaller but with the same quality?

    Can the speed be adjusted like ISO 100-400 etc?

    • by dmiller ( 581 )
      Larger sensors will always have a noise and sensitivity advantage over smaller sensors: larger surface area == more photon gathering ability. Also, I'm surprised they cite a four-stop improvement; I thought we were within that range of the quantum limit with current sensors already.
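      For what it's worth (just arithmetic on the summary's figure, nothing from the linked article): a factor-of-four gain in sensitivity works out to log2(4) = 2 photographic stops, whereas four stops would be a factor of 16.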
      • Where do I get my Quantum film developed at ?
        I thought photography was getting away from film . . .
        • Re: (Score:3, Funny)

          by John Hasler ( 414242 )

          > Where do I get my Quantum film developed at ?

          You put it in a box with a certain cat.

          > I thought photography was getting away from film . . .

          Well, it is and it isn't.

          • Wrong joke.

            You get Quantum film developed at Black Mesa Laboratories. Don't expect cake while you wait.
  • by Meshach ( 578918 ) on Monday March 22, 2010 @07:29PM (#31577270)
    FTFA:

    For the future, the company also plans to target other specialized applications, such as pitch-black night vision goggles, cheaper solar cells and even spray-on displays.

    Right now night vision goggles give a very grainy, tinged image. Clarifying that could have millions of applications.

    • Pitch black night vision goggles?
      Wow ... is that like the Photoshop filter that can take photos
      taken with the lens cap on and convert them to full colour pictures?

      PS: Unless it is the goggles that are painted pitch black ... in that case,
      does that make them any blacker?
      • Pitch black night vision goggles?
        Wow ... is that like the Photoshop filter that can take photos
        taken with the lens cap on and convert them to full colour pictures?

        I think "pitch black night vision goggles" is a term-of-art for night vision goggles that can produce usable images at light levels that would APPEAR pitch black to an unaided eye - though there are enough photons available that with sufficient amplification you don't need added illumination.

      • by Kjella ( 173770 )

        Wow ... is that like the Photoshop filter that can take photos taken with the lens cap on and convert them to full colour pictures?

        Take a look at the pictures on the right:

        http://en.wikipedia.org/wiki/Black_body#Radiation_emitted_by_a_human_body

        With enough sensitivity everything gives off infrared radiation, even things we would normally think of as pitch black. Certainly enough for soldiers to operate at night without any artificial lighting at all already, and I'm guessing this could make them much better. The lens cap is different: no light really is no light. But even in the absence of sun, moon, stars, fire and artificial light, it is never totally dark, just pitch dark.

        • by John Hasler ( 414242 ) on Monday March 22, 2010 @08:50PM (#31578040) Homepage

          > With enough sensitivity everything gives off infrared radiation...

          Actually it does so with no sensitivity at all, just by being hotter than absolute zero. However, to detect infrared your sensor must not only be sensitive to it, it must also be significantly colder than the object you are trying to image; otherwise it will just detect its own emissions.

      • by nmos ( 25822 )

        PS: Unless it is the goggles that are painted pitch black ... in that case,
        does that make them any blacker?

        I don't know but they're sure going to be hard to find in the dark :)

    • Clarifying? How about entirely rewriting to make sense? "Pitch-black night vision goggles"? If I wanted pitch black, I wouldn't be wearing the goggles. :\
  • by santax ( 1541065 ) on Monday March 22, 2010 @07:34PM (#31577328)
    I want the pixels that I have at ISO 50 with f/1 on a 700mm objective, please. Make it smaller and less 'noticeable' than the L-glass I have to carry with me these days and I might buy myself a new body and some glass... Oh, this one is really important: make it cheaper, please. I know you know that we (photographers) will just give you all that we have for a decent setup, but it would be so cool if a really good objective cost less than a really good car.
  • finally... (Score:3, Funny)

    by spectro ( 80839 ) on Monday March 22, 2010 @07:37PM (#31577348) Homepage

    A camera to take pics of Schrödinger's LOLcat

  • With silicon, having to pass through narrow gaps should reduce the amount of light coming at the sensor from an unexpected angle as would occur due to lens flare, imperfections in the lens, etc. Without that, I'd expect the clarity of the image to be impacted. Am I missing something, or is this just trading one problem for another?

    Also, how does this improve over already commercially available newer CMOS designs [displayblog.com] that push the photo-sensitive material to the front surface?

    • The gap in a semiconductor is a gap in quantum energy levels. If you have a gap of 0.9 eV and a photon with 0.8 eV strikes the sensor, it is basically undetectable. If a photon has enough energy, it is detectable. It isn't quite that simple: photons have spin, so you have to have electron states with the right spins available in order to detect the photon.
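      A quick way to see where that cutoff lands (standard E = hc/lambda arithmetic; I'm using silicon's roughly 1.12 eV gap rather than the 0.9 eV example above):

        H_C_EV_NM = 1239.84   # h*c in eV*nm, so E[eV] = 1239.84 / wavelength[nm]
        bandgap_ev = 1.12     # roughly silicon's band gap

        for name, wavelength_nm in (("blue", 450), ("red", 650), ("near-IR", 1000), ("IR", 1500)):
            energy_ev = H_C_EV_NM / wavelength_nm
            verdict = "detectable" if energy_ev > bandgap_ev else "below the gap"
            print(f"{name:>7} ({wavelength_nm} nm): {energy_ev:.2f} eV -> {verdict}")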

      A CMOS sensor is smooth and fairly reflective, so it reflects a considerable fraction of the light. This reflected light does indeed cause flare. The second

      • by dgatwood ( 11270 )

        I'm not talking about the bandgap. I'm talking about the fact that the surfaces of most CMOS chips have a series of narrow slots [cnet.com.au] through which the light must pass. Call it gaps, call it slots, call it circuit traces, call it whatever. These parts don't have such structures, and that significantly changes the angles of light that these sorts of parts can detect.

        And I reiterate the question: do the benefits of absorbing all light (including light from near-parallel angles) outweigh the problems that this c

  • by Dr. Spork ( 142693 ) on Monday March 22, 2010 @07:48PM (#31577446)
    I don't know too much about the physics of photography, but it seems to me that the real problem in the picture quality of tiny cameras is that the lenses are terrible. Improving the sensors just means that we'll get very accurate digital representations of blurry images, produced by tiny, dirty lenses with minuscule, fixed focal lengths. Even as things stand now, an older camera with good optics and a 5MP sensor produces much better images than a new camera with cheap optics and a 12MP sensor. It seems to me that the sensor isn't the bottleneck anymore.
    • Having fixed focal lengths is not bad! Prime lenses always give a quality advantage over zoom lenses, which is why in film production, prime lenses are used almost exclusively when image quality matters. Zoom lenses are only used on budget productions, or when there's actually a zoom in the take.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      If the sensor gets small enough, the lens can be something other that a refractive solid. Perhaps a drop of liquid in some sort of electrostatic suspension, where problems with the material are far less, and the lens can be focused by reshaping rather than moving.

    • I don't know too much about the physics of photography, but it seems to me that the real problem in the picture quality of tiny cameras is that the lenses are terrible.

      It seems to anybody who knows anything about the problem with digital cameras that you don't have a clue. And this statement proves it:

      Even as things stand now, a older camera with good optics and a 5MP sensor produces much better images than a new camera with cheap optics and a 12MP sensor. It seems to me that sensor isn't the bottleneck anymore.

      The reason the old 5mp camera produces a better picture than the 12mp camera is not because of the optics, it's because of the size of the individual pixels on the chip. The 5mp camera has pixels that are 2-3 times larger than the 12mp camera's, which means they can collect that much more light, and therefore can have shorter exposure times and/or more accurate color.

      That's

  • Quantum! (Score:4, Funny)

    by MightyMartian ( 840721 ) on Monday March 22, 2010 @07:52PM (#31577490) Journal

    "This is either a picture of your Aunt Mavis... or not."

    • "This is either a picture of your Aunt Mavis... or not."

      You'll only really know once you observe it.

  • by PCM2 ( 4486 ) on Monday March 22, 2010 @07:59PM (#31577538) Homepage

    I read a story about this [economist.com] in a recent issue of The Economist. The article focuses more on the other direction -- how quantum dots can be used to enhance LEDs to create more pleasing/efficient/versatile lighting. But it also mentions how they can be used to read light, too; for example, to make better solar panels.

  • by Flere Imsaho ( 786612 ) on Monday March 22, 2010 @08:13PM (#31577696)

    I dunno about quantum photography, it's neither here nor there.

  • or it didn't happen. Right? Amiright? I slay me. Seriously, though, the article was just a bunch of words. Pretty pictures, that's what I want.
  • "Our quantum film even looks like photographic film—an opaque black material that we deposit right on the top layer of our image chip."

    This is important. Current digital sensors are reflective, and that results in a specular reflection. This greatly increases flare, since much of the light that strikes the sensor reflects back into the lens, where it can bounce off a lens element back to the sensor. This is one area where digital has been noticeably worse than film. See PhotoTechEDU Day 4: Contrast, MTF, Flare, and Noise @ http://www.youtube.com/watch?v=tNvFsOvVkOg&feature=channel [youtube.com]. This is the major source of lost contrast at low spatial frequencies.
