Quantum Film Might Replace CMOS Sensors
An anonymous reader writes "Quantum film could replace conventional CMOS image sensors in digital cameras and is four times more sensitive than photographic film. The film, which uses embedded quantum dots in place of the silver grains found in photographic film, can image scenes at higher pixel resolutions. While the technology has potential for use in mobile phones, conventional digital cameras would also gain much higher resolution sensors by using quantum film material." The original article at EE Times (note: obnoxious interstitial ad) adds slightly more detail.
Sensitivity is not Resolution (Score:5, Insightful)
Re:Sensitivity is not Resolution (Score:5, Insightful)
Also, resolution doesn't equal picture quality. I'd rather have a good lens system than a 20 Megapixel sensor.
Re:Sensitivity is not Resolution (Score:5, Insightful)
Personally, I would rather have a good lens system and a 20-megapixel sensor.
Sensationalist, almost rubbish (Score:4, Informative)
Increasing the signal level will improve the S/N ratio for readout noise, assuming the readout is comparable to that available in today's cameras. Readout noise has been aggressively tackled by camera manufacturers, and is already very low. The principal source of noise in conventional images is shot noise (photon noise), and this is unrelated to the detector sensitivity. Shot noise depends ONLY on the number of photons arriving at each pixel, and is the reason that darker areas of digital images tend to be noisier, or require information-destroying denoising operations in postprocessing. Other forms of noise, such as dark current and dark noise, are relevant only in special applications, such as astrophotography.
Shot noise is intrinsic in the statistics of photon fluxes. The number of photons arriving at a pixel from a radiance which is "uniform" in time and space is Poissonian: the standard deviation is the square root of the mean. The signal to noise ratio is the mean divided by its square root, which is the square root of the number of photons which arrived in that sampling interval (exposure). If 10,000 photons are expected to arrive at a pixel in a given exposure time, then the shot noise will be about 1% when comparing multiple "identical" exposures of that pixel. Changing the detector sensitivity raises or lowers the readout signal level, but does not change the signal to noise ratio in the signal from shot noise.
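A minimal numerical sketch of this point (made-up photon count, using numpy's Poisson generator): applying a gain after the photons are counted scales signal and noise together, so the shot-noise SNR stays put.

import numpy as np

rng = np.random.default_rng(0)
mean_photons = 10_000                        # expected photons per pixel per exposure (made up)
samples = rng.poisson(mean_photons, 100_000) # many "identical" exposures of one pixel

snr = samples.mean() / samples.std()
print(f"shot-noise SNR ~ {snr:.1f} (sqrt(10000) = {np.sqrt(mean_photons):.1f})")

# A more "sensitive" detector modeled as a gain applied after the photons are counted:
gain = 4.0
amplified = gain * samples
print(f"SNR after 4x gain ~ {amplified.mean() / amplified.std():.1f}")  # unchanged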
Reducing the shot noise requires more photons arriving at each pixel. Getting more photons per pixel requires either (i) bigger pixels on the detector, (ii) better illumination of the subject, or (iii) better optics. This is why professional cameras have larger pixels than prosumer cameras, which tend to have larger pixels than pocket cameras, phone cameras, etc. Better lenses also help (but large apertures also affect depth of field). For given lighting conditions and optics, bigger pixels result in lower image noise, unless the readout circuitry really sucks.
So, quantum dots will result in a higher signal level than conventional CCD/CMOS/CID detectors under similar imaging circumstances. The improvement is probably limited to improving the ratio of signal to readout noise, which is already pretty good. Quantum dots will not magically increase the number of photons arriving at the detector, and if used to reduce pixel size, will result in worse signal to noise ratio for the shot noise (biggest noise problem in most photography). Result: not a dramatic improvement, although detectors giving horribly noisy images (needing heavy destructive denoising) may get even smaller.
Just send the bums some money, so they'll shut up. The potential of quantum dots in imaging sensors has been known for years.
Re: (Score:3, Informative)
Reducing the shot noise requires more photons arriving at each pixel. Getting more photons per pixel requires either (i) bigger pixels on the detector, (ii) better illumination of the subject, or (iii) better optics.
(iv) Increase photon capture efficiency.
The article says that in conventional CMOS sensors, three quarters of the incident photons are either absorbed by a metal layer or hit a spot between photodiodes, contributing nothing to photodiode charge or read-out signal. The new coating can convert those photons into charge, increasing the detected signal by a factor of four without changing pixel size, optics or illumination. With four times as many photons actually counted, the shot-noise-limited signal to noise ratio improves as well.
If it works as advertised, this is a good thing.
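A rough back-of-the-envelope check of what the factor of four would buy, if it holds (the photon count and efficiencies here are hypothetical, chosen to match the "three quarters lost" claim): since the extra photons are actually detected rather than merely amplified, the shot-noise SNR improves by the square root of the gain.

import math

incident = 10_000            # photons hitting the pixel area during one exposure (hypothetical)
old_qe, new_qe = 0.25, 1.00  # "three quarters lost" vs. "all converted", per the claim

snr_old = math.sqrt(incident * old_qe)   # ~50
snr_new = math.sqrt(incident * new_qe)   # ~100
print(snr_new / snr_old)                 # 2.0: a 4x signal gain is only a 2x shot-noise gain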
Re:Sensationalist, almost rubbish (Score:4, Informative)
The article says that in conventional CMOS sensors, three quarters of the incident photons are either absorbed by a metal layer or hit a spot between photo diodes, not contributing to photo diode charge and read-out signal.
You are referring to areal efficiency or "fill factor" of detectors. CMOS had low areal efficiency some years back, but no longer. Both CMOS and CCD detectors are almost always equipped with integrated microlenses nowadays, which direct almost all of the incident light on the whole detector onto the active photosites. Some light is still lost at boundaries between the lenses, and due to the efficiency of the lenses. The ineffective regions between photosites receive hardly any light at all. Here's a quote from the Wikipedia article on CCDs:
Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design.
Integration of microlenses onto the chip is a major reason why CMOS detectors have caught up with CCD detectors in image quality. Compared to CCDs, a smaller fraction of a CMOS detector consists of photosites. Both benefit from provision of microlenses, but CMOS benefits rather more, and reaches almost the same areal efficiency as a comparable CCD. With less than 10% of incident photons lost, there is only a limited scope for improvement, by quantum dots or other methods. Those claims in TFA were reminiscent of fresh bullshit.
Good CCDs can exceed 85% in quantum efficiency at some wavelengths, such as in the icx285 which is typically used in industrial devices. However, efficiencies are lower at other wavelengths, and CCD and CMOS detectors used in consumer devices often peak at below 60% quantum efficiency. So there is room for improvement here, but not nearly as spectacular as the claims of TFA.
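Put differently, the best-case gain from better quantum efficiency is bounded by 1/QE. A quick sketch with the illustrative efficiencies above:

# Upper bound on the benefit of pushing quantum efficiency to 100%
for name, qe in [("good CCD (icx285-class)", 0.85), ("typical consumer sensor", 0.60)]:
    print(f"{name}: at most {1/qe:.2f}x more detected photons, "
          f"i.e. {(1/qe) ** 0.5:.2f}x better shot-noise SNR")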
Keep in mind, as I mentioned in the earlier post, that increases in detector sensitivity (through areal efficiency or quantum efficiency) mostly elevate the signal level; the shot noise ratio only improves with the square root of the extra photons actually captured, and with modern sensors already past 50% efficiency that extra is modest. For a big improvement you need more incident photons, through bigger pixels and/or better subject illumination and/or bigger lens apertures and/or longer exposure times. TFA smells a bit like marketing hype. Quantum dots may lead to improvements in detector fabrication & price, but not so much in image quality...
Re: (Score:3, Informative)
Actually, smaller and more sensitive quantum dots by themselves would be better for dark imaging, mainly because with more usable levels of sensitivity you could treat everything below a certain noise threshold as pure black and then work up from there. What would matter is the actual sensitivity of the quantum dots, and then the software that processes the sensor data.
Re: (Score:3, Informative)
There's another side to it as well.
With the 10MP camera you can crop a third of the picture away and still have more pixels left over than the 6MP camera captured in the first place. This is something art photographers do all the time. Cropping is often crucial for getting the composition you want and getting rid of details that don't add anything to the focus of the picture. The more data you HAVE, the more you can do with it.
This is also why art photographers take pictures in RAW mode rather than JPEG. What's lost with JPEG's lossy compression is ...
Re: (Score:2, Informative)
Near as I can tell we've exceeded the useful range of pixel density increases for all but the most high-powered applications, so there's no reason to look for better resolution.
Re: (Score:2)
You don't see any market for smaller cameras?
Re: (Score:3, Informative)
You don't see any market for smaller cameras?
It's not about smaller cameras - when your pixels are smaller than individual photons (as is the case now), making them smaller only increases the "noise" part of the s/n ratio.
Re: (Score:2)
> It's not about smaller cameras - when your pixels are smaller than
> individual photons (as is the case now),
I don't believe that is true. The smallest pixel pitch I can find is 2300 nanometers.
> making them smaller only increases the "noise" part of the s/n ratio.
Some people may choose to use this technology to make smaller cameras with performance equal to the smallest useful ones currently available: I'm sure there is a market. Others will use it to make "normal" cameras with improved performance ...
Re:Sensitivity is not Resolution (Score:5, Informative)
There is a physics problem when your image sensor is too small - photons have size and mass, and there is a point at which you cannot collect enough light to take a good picture.
That's why expensive cameras have larger image sensors - they aren't packing more pixels per square inch, they are actually packing fewer pixels per square inch. A high-end 10-megapixel camera will have an image sensor that is 10x bigger than a pocket-sized 10-megapixel camera's, and it will take phenomenally better pictures.
This is the source of the GP's confusion about what the summary means - is "quantum film" more sensitive to light? Or are they simply able to pack more sensors in a smaller area? If they are actually able to collect accurate color information from fewer photons (i.e. more sensitive to light), then you can shrink the size of high end image sensors and still maintain quality. If it simply allows them to pack more pixels onto a sensor without being able to collect accurate color data with fewer photons, then quantum film is absolutely worthless. It offers no benefit to the quality of images in that case, even if they can crank a camera up to 30 megapixels it will still look like shit.
Quantum film (Score:5, Funny)
That's the trouble with it - you can know its sensitivity or its resolution, but not both, and the act of measuring one changes the other.
Re: (Score:2, Informative)
Actually, photons have neither size nor mass
http://en.wikipedia.org/wiki/Photon
Good point about sensor size. A post below touches on diffraction problems, and there are other problems with small sensors such as S/N, being more demanding on lenses' resolution, etc.
Re: (Score:2)
photons have size and mass
Photons do not have mass, though this is somewhat a matter of semantics.
Re: (Score:2, Informative)
Photons don't have mass. They do however have momentum. (p=h/lambda; note: deriving mass from this momentum using p=mv is a common physics mistake and makes no physical sense.) They also don't strictly have size. If you're referring to fitting them through things and collecting them with objects, treating light as waves generally works, with a photon just representing a quantization of the energy contained in the wave. If you were to try to characterize photon size, it would vary with the wavelength of the light ...
Re: (Score:2)
Not only that, high quality cameras will have multiple CCDs.
A video camera with one 1/4" Sharp CCD is not the same as a camera with three 1/3" Sony CCDs, even if they both deliver 5-megapixel images.
Re: (Score:2)
There is a physics problem when your image sensor is too small - photons have size and mass
Photon has mass ??
Re: (Score:3, Funny)
Re: (Score:2)
If it simply allows them to pack more pixels onto a sensor without being able to collect accurate color data with fewer photons, then quantum film is absolutely worthless.
Not true. Existing digital cameras have noise, particularly at the higher ISOs. The more readings you take from a "pixel" in the frame, the more you can negate this noise by averaging it out. One way to increase the number of samples is to stack several readings--increasing your ISO level, more or less.
Another way to increase the number of samples is to scale your resulting pixel array down, so that a pixel and its immediate neighbors get averaged into the same pixel, drowning out more of the noise. So if ...
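A toy simulation of that averaging (all numbers made up): binning 2x2 blocks of noisy pixels cuts the noise by roughly the square root of the number of pixels averaged.

import numpy as np

rng = np.random.default_rng(1)
true_level = 100.0     # the "correct" value for every pixel (arbitrary)
noise_sigma = 10.0     # per-pixel noise, made up
frame = true_level + rng.normal(0, noise_sigma, (1000, 1000))

# Downscale 2x2 -> 1: each output pixel is the mean of four noisy input pixels
binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(frame.std())   # ~10
print(binned.std())  # ~5, i.e. noise reduced by about sqrt(4)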
Re:Sensitivity is not Resolution (Score:5, Informative)
This is about the laws of physics. I'm sure somebody will correct me if I'm not explaining this very well, but...
There's a limit to how precisely a lens can focus light. Now, in theory, as the aperture gets smaller, the diffusion decreases, so you might think that the small lenses would result in a more precise image than larger ones. However, with those smaller lenses come smaller image sensors, which means that even if the lens can focus light to a smaller point, the pixels are also smaller, thus canceling out much of this improvement.
The bigger problem is that the smaller the lens, the greater the impact of even tiny lens aberrations on the resolving power of the lens. A speck of dust on a 1.5mm lens makes a huge difference, whereas it can be largely ignored on a lens with a 72mm diameter.
Also, as resolution increases, light gathering decreases. That's pretty fundamental to the laws of physics. Think about the bucket analogy. You have four square buckets measuring 1 foot by 1 foot. You place them side by side during a thunderstorm. You get another bucket that is two feet on each side. You place it beside the others. The same amount of rain (approximately) falls onto the four small buckets as the single large bucket, thus the large bucket has four times the amount of water in it that any one of the smaller buckets does.
The same principle applies to pixels. All else being equal, resolution and light gathering are inversely proportional. Small cameras are already hampered pretty badly by light gathering because of their small lenses. Increasing the resolution just makes this worse. I can tell the difference in noise between my old 6MP DSLR and my 10MP DSLR. I can't imagine what 20MP in a camera phone would look like. :-D
I think the real question should not be whether we can make smaller cameras, but rather whether we can make existing small cameras better by improving the light gathering. This technology might do that---whether it will work better than some of the newer CMOS sensor designs that already move the light-gathering material to the front remains to be seen---but at some point, making things smaller just means that they're easier to lose. I think we're at that point, if not past it....
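A rough sketch of the per-pixel light budget (sensor size and photon flux are hypothetical, just to show the scaling): for a fixed sensor area and exposure, more megapixels means fewer photons per pixel and a lower per-pixel shot-noise SNR.

import math

sensor_area_mm2 = 23.6 * 15.7   # a typical APS-C sensor, for illustration
photons_per_mm2 = 1e9           # made-up flux reaching the sensor in one exposure

for mp in (6, 10, 20):
    photons_per_pixel = photons_per_mm2 * sensor_area_mm2 / (mp * 1e6)
    print(f"{mp} MP: ~{photons_per_pixel:,.0f} photons/pixel, "
          f"shot-noise SNR ~ {math.sqrt(photons_per_pixel):.0f}")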
Re: (Score:3, Informative)
Err... diffraction, not diffusion.
Also, my second paragraph was backwards in that the diffraction increases as the aperture gets smaller. The smaller sensor thus compounds the problem further.
Re: (Score:2)
Tell that to anyone using a transmission electron microscope. I have friends who dislike digital microscopy because the detail is much lower than film. While it is quicker and less susceptible to movement problems, you lose most of the detail due to the electrons being far smaller than the CMOS sensor's pixels.
I really think this jaded "we don't need any more technology" bullshit is just a modern day luddite attitude. It seems to be a fear of being superseded along with the technology you currently use. May ...
Re: (Score:2)
I think a $50,000+ electron microscope would qualify as "the most high-powered applications," particularly in the context of the article, which is talking about cell phone cameras.
Re: (Score:3, Informative)
A lot of people just vomit their photos onto Facebook, but many still take the time to do a simple crop/levels/contrast edit. The only people who don't need more megapixels are those that never edit their pictures. And they probably don't care about quality anyway.
Most cameras can take pictures in all ...
Re:Sensitivity is not Resolution (Score:5, Informative)
No, it doesn't. The lens system of the camera only has a certain resolving ability. Once you pass that point, you can make the sensor as high resolution as you want and you're just wasting your time because the lens isn't passing information at that level of detail anyway. Basically, you're measuring blur more and more finely.
Take a picture from anything less than a high end SLR or medium format camera and zoom in until you're actually looking at one image pixel to one screen pixel. Now tell me how good the image looks. Pretty crappy, hey? That's because the lens isn't capable of producing a decent image at even the resolution of the current sensor, never mind a better one.
Re: (Score:2)
Re:Sensitivity is not Resolution (Score:5, Insightful)
Of course, if money is no object, more of everything will certainly improve things. But practically speaking, the vast majority of folks in the real world would be better off paying more attention to their glass rather than to their silicon.
A nice lens on a relatively limited camera will take amazing photos. A crappy lens on the best camera will not.
Re: (Score:2)
Of course, if money is no object, more of everything will certainly improve things.
Nope. Increasing resolution without first increasing light gathering ability will make the image worse. In fact, most digital cameras would produce better pictures if they decreased the resolution. Manufacturers use a higher pixel density than is useful because megapixels sell: the salesman, your mom, and even you see two cameras, one with 6MP and one with 12MP, and assume the 12MP camera is better. If they're similar in other respects, the one with the lower pixel density (the 6MP one) is guaranteed ...
Re: (Score:2)
There aren't many poor sensors around any more, though. There is plenty of crappy glass.
I have probably the worst sensor available on the SLR market. It's one of the first Panasonic sensors used in the Four Thirds cameras from Olympus. Newer ones are better, but this one exhibits pattern noise pretty badly in shadows at ISO 800, and even in bright areas at ISO 1600. And the images are still *stunningly* good (if taken of a suitable subject with a good lens). It only gets better from there, on the newer Olympus ...
Re: (Score:2)
> Having more pixels is a good thing for anyone who sells flash memory.
Here, I fixed that for you.
It's true that many photos would be improved by more detail. But it's not always a benefit: just as text is well-represented with a modest number of bits to describe a letter in ASCII, storing sophisticated graphical images of each character is usually quite pointless and actually interferes with getting work done.
Re: (Score:2)
Not at base ISO it's not. Go read the dpreview.
Re: (Score:3, Informative)
Couldn't one lead to the other? Would averaging 4 noisy pixels give you a better light sensitivity than just having the one?
Re:Sensitivity is not Resolution (Score:4, Informative)
To a certain extent, yes. But, there is a certain minimum overhead for every pixel. The more pixels you cram onto a sensor, the more space on the sensor is dedicated to overhead instead of picking up light. Consequently, there are real limits to how much resolution you would want to have on a sensor.
Re: (Score:2)
That overhead is mitigated pretty well by microlenses, though. They put little lenses over the pixel array to funnel light into the sensitive bits and away from the dead-weight circuitry.
Re: (Score:2)
> the more space on the sensor is dedicated to overhead instead of picking up light.
Not a big problem if you build stuff in 3D.
Some modern sensors have microlenses in front of the actual detectors.
http://imaging.nikon.com/products/imaging/technology/d-technology/imagingsensor/iso/img/cp_02.gif [nikon.com]
http://www.usa.canon.com/dlc/controller?act=GetArticleAct&articleID=246 [canon.com]
Re: (Score:2)
Re: (Score:2)
> ...who knows how much area the individual pixels will have to take up.
Assuming the stuff is more or less as the articles say it is, that will be up to the designer of the imaging chip. You build your transistor array and then coat it with this stuff.
Re: (Score:2)
According to the articles, both.
In particular: (Score:3, Interesting)
According to the articles, both.
In particular:
- It replaces the in-chip photodetector with an on-top-of-chip detector, allowing all the real estate on the chip to be used for the REST of the system rather than reserving most of it for light sensors. That means you can use bigger features (and cheaper processes) - and/or get more pixels by shrinking the features back down a bit.
- It gives about a 4x sensitivity improvement. (2x because the quantum dots are more sensitive, another 2x because the ...
Re: (Score:2)
Nice, sounds pretty sweet.
The big problem with digital cameras is light sensitivity: we can pack 15 megapixels into a camera phone, but the loss in light sensitivity means you'd have been better off sticking with 1 megapixel; the picture quality will be abysmal. That's why high end cameras use image sensors that are many times larger for the same number of pixels than cheap consumer models.
Finally! (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
key word - quantum.
If that doesn't immediately make you think of ULTRA-TINY SCALES, and thus lead you from quantum dots replacing silver grains to quantum dots = higher MP in the same sensor size, I guess you should be handing in your geek card.
Finally a film replacement? (Score:3, Interesting)
Can the speed be adjusted like ISO 100-400 etc?
Re: (Score:2)
Quantum film (Score:2)
I thought photography was getting away from film . . .
Re: (Score:3, Funny)
> Where do I get my Quantum film developed at ?
You put it in a box with a certain cat.
> I thought photography was getting away from film . . .
Well, it is and it isn't.
Re: (Score:2)
You get Quantum film developed at Black Mesa Laboratories. Don't expect cake while you wait.
Re: (Score:2)
They were talking about a 2-stop advantage, i.e. 4x. Current tech is already about 60-80% peak efficient (80% peak QE CCD sensors are pretty cheap and widely used in astronomy), so this new tech would have to be up to 320% efficient. Most current cameras are getting 30-60% peak QE right now, from pocket cameras to DSLRs.
I call bullshit on the technology. Snake oil, or they're being very creative with the truth.
They're claiming that they can replace in-chip features with features on the top of the chip. E.g. instead of having your pixels surrounded with supporting electronics that can't collect any light, you can move the pixel to the top of the chip and put the supporting electronics underneath the pixels. Also, your pixels aren't covered by a few microns of material that photons have to go through before hitting the pixel.
Since QE only measures the efficiency of converting photons incident on the photodetector, and ...
Re: (Score:2)
They make them. Leica has one, but nobody can afford it. Sigma makes some with their funky "Foveon" sensors.
The seriously nice, and remarkably affordable, ones are the "Micro Four Thirds" cameras from Olympus and Panasonic. They have ordinary Four Thirds sensors inside, just like in Olympus DSLRs. You can use either compact Micro Four Thirds lenses on them (both Olympus and Panasonic make some, and more are coming) or standard Four Thirds lenses.
Night vision goggles (Score:4, Insightful)
Right now, night vision goggles give a very grainy, tinged image. Clearing that up could have millions of applications.
Re: (Score:2)
Wow ... is that like the photoshop filter that can take photos taken with the lens cap on and convert them to full colour pictures?
PS: Unless it is the goggles that are painted pitch black ... does that make them any blacker?
Re: (Score:2)
Pitch black night vision goggles? Wow ... is that like the photoshop filter that can take photos taken with the lens cap on and convert them to full colour pictures?
I think "pitch black night vision goggles" is a term-of-art for night vision goggles that can produce usable images at light levels that would APPEAR pitch black to an unaided eye - though there are enough photons available that with sufficient amplification you don't need added illumination.
Re: (Score:2)
Wow ... is that like the photoshop filter that can take photos taken with the lens cap on and convert them to full colour pictures?
Take a look at the pictures on the right:
http://en.wikipedia.org/wiki/Black_body#Radiation_emitted_by_a_human_body
With enough sensitivity, everything gives off infrared radiation, even things we would normally think of as pitch black. Certainly there is already enough for soldiers to operate at night without any artificial lighting at all, and I'm guessing this could make night vision much better. The lens cap is different: no light really is no light. But even in the absence of sun, moon, stars, fire and artificial light it is never totally dark, just pitch dark.
Re:Night vision goggles (Score:5, Interesting)
> With enough sensitivity everything gives off infrared radiation...
Actually it does so with no sensitivity at all, just by being hotter than absolute zero. However, to detect infrared your sensor must not only be sensitive to it, it must also be significantly colder than the object you are trying to image; otherwise it will just detect its own emissions.
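For a sense of scale, a quick Wien's-law check (standard displacement constant, approximate temperatures): a warm body and an uncooled sensor peak at almost the same wavelength, which is why the sensor mostly sees its own glow unless it is cooled.

WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

for label, temp_k in [("human skin", 307.0), ("uncooled sensor", 293.0)]:
    peak_um = WIEN_B / temp_k * 1e6
    print(f"{label}: thermal emission peaks near {peak_um:.1f} um")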
Re: (Score:2)
I don't know but they're sure going to be hard to find in the dark :)
Re: (Score:2)
Don't care about more pixels (Score:3, Interesting)
Re: (Score:3, Informative)
Re: (Score:2)
One approach would be to use mirror lenses, like a reflecting telescope. Most such telescopes these days have a central obstruction for the secondary mirror, which reduces contrast and gives unappealing donut-shaped bokeh. However, there are a couple of interesting offset-mirror designs that avoid this problem, such as the Schiefspiegler and Yolo. While most are designed to be very slow (f/10 or longer), they ...
Re: (Score:2)
They make those. I saw a 500 f/6.3 (actual light-gathering ability) in a shop the other day, but most are 500 f/8. They give rather bizarre "donut" bokeh and tend to lack contrast, but they are essentially free from chromatic aberration and quite small for their length.
Can't get that much aperture, though.
I'd like a modern mirror lens, actually. Real multicoatings and autofocus, and you're good to go. The motorized aperture isn't really that necessary (if it's 500 f/8, I probably don't want to close it down ...
Re: (Score:2)
Objective truth (Score:2)
Re: (Score:2)
Sounds like Baker-Nunn camera (Score:2)
You say you want more portable glass. However, you're still asking for a 700mm lens. You do realise that in order to have a 700mm lens at f/1, you need an entrance pupil with 700/1 = 700mm worth of diameter? Yup, that's right, 70cm of diameter in order to achieve f/1. Not sure that's ever going to be portable, mate.
To quote Frank Abagnale Jr., "I concur."
From wikipedia: (http://en.wikipedia.org/wiki/Baker-Nunn_camera#Baker-Nunn)
A dozen f/0.75 Baker-Nunn cameras with 20-inch apertures – each weighing 3.5 tons including a multiple axis mount allowing it to follow satellites in the sky – were used by the Smithsonian Astrophysical Observatory to track artificial satellites from the late 1950s to mid 1970s.
20 in × 25.4 mm/inch ≈ 508 mm of aperture at f/0.75, which is in the same league as the 700 mm entrance pupil needed for a 700mm f/1 lens. At 3.5 tons, this is only semi-portable.
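The pupil arithmetic, spelled out (using the standard relation pupil diameter = focal length / f-number; the Baker-Nunn figures are from the Wikipedia quote above):

def entrance_pupil_mm(focal_length_mm, f_number):
    # Entrance pupil diameter for a lens of the given focal length and f-number
    return focal_length_mm / f_number

print(entrance_pupil_mm(700, 1.0))  # 700 mm pupil for the requested 700mm f/1 lens
print(20 * 25.4)                    # ~508 mm: the Baker-Nunn's 20-inch aperture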
finally... (Score:3, Funny)
A camera to take pics of Schrödinger's LOLcat
up or down (Score:2)
Won't this cause other problems? (Score:2)
With silicon, having to pass through narrow gaps should reduce the amount of light coming at the sensor from an unexpected angle as would occur due to lens flare, imperfections in the lens, etc. Without that, I'd expect the clarity of the image to be impacted. Am I missing something, or is this just trading one problem for another?
Also, how does this improve over already commercially available newer CMOS designs [displayblog.com] that push the photo-sensitive material to the front surface?
You don't seem to understand 'gap' (Score:2)
A CMOS sensor is smooth and fairly reflective, so it reflects a considerable fraction of the light. This reflected light does indeed cause flare. The second ...
Re: (Score:2)
I'm not talking about the bandgap. I'm talking about the fact that the surfaces of most CMOS chips have a series of narrow slots [cnet.com.au] through which the light must pass. Call it gaps, call it slots, call it circuit traces, call it whatever. These parts don't have such structures, and that significantly changes the angles of light that these sorts of parts can detect.
And I reiterate the question: do the benefits of absorbing all light (including light from near-parallel angles) outweigh the problems that this ...
Doesn't mean much as long as the optics still suck (Score:3, Insightful)
Re:Doesn't mean much as long as the optics still s (Score:2)
Re: (Score:2)
He means fixed focal length (prime) lenses.
Good DSLR zooms are very good these days. (Score:2)
There are SLR zooms that are amazing optically, easily the equal of a prime. In no particular order:
Olympus 14-35 f/2
Olympus 35-100 f/2
Olympus 90-250 f/2.8
Olympus 11-22 f/2.8-3.5
Olympus 50-200 f/2.8-3.5
Nikon 200-400 f/4
Sigma 300-800 f/5.6 (what a huge thing)
Re: (Score:2, Interesting)
If the sensor gets small enough, the lens can be something other than a refractive solid. Perhaps a drop of liquid in some sort of electrostatic suspension, where problems with the material are far less, and the lens can be focused by reshaping rather than moving.
Re: (Score:2)
I don't know too much about the physics of photography, but it seems to me that the real problem in the picture quality of tiny cameras is that the lenses are terrible.
It seems to anybody who knows anything about the problem with digital cameras that you don't have a clue. And this statement proves it:
Even as things stand now, a older camera with good optics and a 5MP sensor produces much better images than a new camera with cheap optics and a 12MP sensor. It seems to me that sensor isn't the bottleneck anymore.
The reason the old 5MP camera produces a better picture than the 12MP camera is not the optics, it's the size of the individual photosites on the chip. The 5MP camera has photosites that are 2-3 times larger than the 12MP camera's, which means they can collect that much more light, and therefore can have shorter exposure times and/or more accurate color.
That's ...
Re: (Score:2)
Odd, I get crisper pictures with smaller apertures, all else being equal, and I'm pretty sure everybody else in the world does, too. You've got it completely backwards there.
Re:Doesn't mean much as long as the optics still s (Score:4, Interesting)
As an engineer who does astronomical optics rather than a photographer, I can say with absolute certainty that, all else being equal (i.e. the diffraction-limited case), a larger aperture is sharper. This is simply a matter of physics. The smallest resolvable angle is inversely proportional to the diameter of the aperture, due to the wave-like nature of light.
Now, if by 'crisper' you don't mean sharper, but rather a fuzzy measure of how you think it looks, it's not surprising, because smaller lenses of good quality are easier to make and will thus come closer to the ideal diffraction limit. But this isn't a case of all other things being equal, and such a lens won't be as capable.
Re: (Score:2)
Just to clarify, I wasn't referring to the lens diameter. I'm well aware that you get a more focused image with larger optics. I was referring to the diameter of the iris, and was confusing the depth of field with the focus.
Re: (Score:2)
Most SLR lenses aren't diffraction-limited, though. If you go to slrgear.com or dpreview.com and look at performance vs. aperture, you'll notice poorer performance wide open (because of aberrations) and poorer performance stopped down past f/8 (on Four Thirds) or f/16 (on 35mm format) because of diffraction. Most lenses are best somewhere between f/4 and f/8.
Re: (Score:2)
Odd, I get crisper pictures with smaller apertures, all else being equal, and I'm pretty sure everybody else in the world does, too. You've got it completely backwards there.
Most lenses reach their best sharpness around f/8, so you are both right.
At smaller apertures you run into diffraction limitations. At larger apertures you run into narrow depth of field issues, as well as design issues; I believe it is difficult to manufacture a lens that stays accurately aligned at large apertures.
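A rough diffraction check (green light, illustrative pixel sizes) of why stopping down too far costs sharpness: once the Airy disk grows past the pixel pitch, extra pixels just sample blur.

WAVELENGTH_UM = 0.55  # green light, roughly mid-visible

def airy_diameter_um(f_number):
    # Diameter of the Airy disk (to the first dark ring) for a diffraction-limited lens
    return 2.44 * WAVELENGTH_UM * f_number

for n in (2.8, 4, 8, 16):
    print(f"f/{n}: Airy disk ~ {airy_diameter_um(n):.1f} um "
          "(vs. ~5 um DSLR pixels or ~1.5 um phone-camera pixels)")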
Re: (Score:2)
Yeah, that's what I was thinking of. You're right. My brain was spacing out. Mea culpa.
Re: (Score:2)
The Canon 85mm f/1.2 is also a legend. And only about 2 grand.
If these lenses are only 'pretty good', you must be accustomed to the optics in research telescopes ;-)
Re: (Score:2)
85/1.2L is actually pretty soft around the edges, from what I've heard.
But, yes, long tele primes are excellent, as are some midrange macro lenses (Sigma 150/2.8, Olympus 50/2, etc.)
Quantum! (Score:4, Funny)
"This is either a picture of your Aunt Mavis... or not."
Re: (Score:2)
You'll only really know once you observe it.
More in The Economist (Score:3, Informative)
I read a story about this [economist.com] in a recent issue of The Economist. The article focuses more on the other direction -- how quantum dots can be used to enhance LEDs to create more pleasing/efficient/versatile lighting. But it also mentions how they can be used to read light, too; for example, to make better solar panels.
I'm Sitting On the Fence (Score:4, Funny)
I dunno about quantum photography, it's neither here nor there.
Pictures... (Score:2)
They're black! (Score:2)
"Our quantum film even looks like photographic film—an opaque black material that we deposit right on the top layer of our image chip."
This is important. Current digital sensors are reflective, and that results in a specular reflection. This greatly increases flare, since much of the light that strikes the sensor reflects back into the lens, where it can reflect off a lens element back onto the sensor. This is one area where digital has been noticeably worse than film. See PhotoTechEDU Day 4: Contrast, MTF, Flare, and Noise @ http://www.youtube.com/watch?v=tNvFsOvVkOg&feature=channel [youtube.com]. This is the major source of contrast loss at low spatial frequencies.
Re:They're black! (Score:4, Interesting)
If you point any of those cameras toward the sun, you will see flare. This is carefully explained in the video. To suppress flare, you need to stop reflections. On the glass, you can apply multilayer coatings. On the sensor, you can't do that, so you have to live with the reflection. If you have a concave lens element facing toward the camera body, you have a little concave mirror just waiting to reflect the specular reflection of the sun back onto your sensor. If the new sensors are black, they are not going to reflect much - so less flare.
Re: (Score:2)
Re: (Score:2)
They rereleased the 70-200 f/2.8L just recently. Don't think it's because of the reflective-sensor problem, but I can say with certainty (my dad has one of the old ones) that the old design has some issues that come out when shot on APS-C.
Re: (Score:2)
Re: (Score:2)
Fundamentally.
Re: (Score:2)
Actually it allows smaller sensors to work as well as larger sensors, and larger sensors to work better.
It's what you really want (better pictures in a cheap camera), even if you think you want something else (bigger sensors in a cheap camera).
Re: (Score:2)
It's actually both until you watch it.
Quantum is proper in this case (Score:2)
In order to understand how these dots are optically active, you do indeed need quantum mechanics.
To me, 'nano' is just a word for the boundary between the quantum world and the classical world.