Wavy Lenses Extend Depth of Field in Digital Imaging 359
genegeek writes "On Feb 25 CDM Optics was awarded a patent for a new digital imaging system using "Wavefront Coding" that produces images with ten times the depth of field of conventional lenses. The image itself is blurred until processed. Image examples are here."
Re:So (Score:5, Informative)
The advantage of this system over your Canon is that you can get high depth of field and large apertures at the same time. In order to increase the depth of field of your camera, you have to stop down the lens, which means less light. Less light means longer exposures (can't stop the action) or more sensitive film/sensors (more noise).
Instead of stopping down the lens and blocking light, this system only alters the phase of the wavefront, which means all the light energy still gets through.
Extremely clever.
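For a rough sense of what stopping down actually costs, here's a quick back-of-the-envelope sketch (my own illustrative numbers, nothing from CDM): light gathered scales with aperture area, i.e. 1/N^2, so matching exposure at a smaller aperture means a proportionally longer shutter time.

```python
def relative_light(n_stop: float, n_ref: float = 2.8) -> float:
    """Light gathered at f/n_stop relative to f/n_ref.

    Light is proportional to aperture area, which scales as 1/N^2.
    """
    return (n_ref / n_stop) ** 2

def exposure_time(base_time: float, n_stop: float, n_ref: float = 2.8) -> float:
    """Shutter time needed at f/n_stop to match base_time at f/n_ref."""
    return base_time / relative_light(n_stop, n_ref)

# Stopping down from f/2.8 to f/11 for extra depth of field costs
# roughly 4 stops: about 1/15th the light, so ~15x the exposure time.
print(relative_light(11.0))
print(exposure_time(1 / 250, 11.0))
```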
Re:So (Score:3, Informative)
Last time I checked, it was a hell of a lot easier to do photo processing tricks in Photoshop than in a darkroom, and with experience and skill the two kinds of work can be hard to tell apart. The only exception I can think of is "push"-type processing, which takes advantage of being able to stretch or alter the dynamic range of your medium (film or photo paper) beyond its ratings. Since the site appears slashdotted, what exactly is it about the new lens that prevents any additional processing?
It's a php page serving images (Score:3, Informative)
I hope the heatsinks work!
Space Tech Spinoff Again! (Score:3, Informative)
I couldn't help but think back to the problem with the Hubble Space Telescope [nevada.edu], wherein after the launch they discovered that the mirror had not been properly ground to specification.
very cool (Score:5, Informative)
Here's a bit of background: in photography or laser scanning (point-by-point photography, basically), you always have a trade-off between depth-of-field and aperture size (as any photographer knows). Bigger aperture means shallow depth-of-field. However, a smaller aperture means lots of wasted light (imagine closing the aperture in your camera), and this means longer exposure times, and more importantly more NOISE in your images. This is true for digital, film, or photodetector.
So the "holy grail" is to keep the aperture open but still have high depth-of-field. This system depends on changing the phase of the light, instead of the amplitude (which is what you do when you stop down a lens to a smaller aperture). That way, no light energy is blocked and wasted.
Since the phase is changed, the resulting image on the CCD or film is fuzzy and has to be "decoded". You can think of it as "encoding" the wavefront in a special way that preserves the depth of field, capturing the image, and then "decoding" it into a sharp picture. It is really amazing. I hope it shows up in consumer cameras someday, it could completely change consumer photography since most "snapshot photographers" don't care about depth of field or all that stuff. It will also be great for medical and industrial imaging.
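The "encode then decode" idea can be illustrated with a toy simulation: blur an image with a known point-spread function, then invert the blur with a Wiener filter. This is only a sketch of the general principle -- the real system's PSF comes from a cubic phase mask and is engineered to be insensitive to focus error, which a plain Gaussian is not.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))  # a sharp toy "scene"

# Stand-in PSF: a small Gaussian blur kernel, normalized to sum to 1.
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 1.0**2))
psf /= psf.sum()

# "Encode": convolve the scene with the PSF (circularly, via the FFT).
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

# "Decode": Wiener deconvolution with the known PSF. The constant k
# regularizes the inversion; larger values suppress noise but soften
# the result.
k = 1e-9
W = np.conj(H) / (np.abs(H) ** 2 + k)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# The restoration error is far smaller than the blur it removed.
print(np.abs(blurred - scene).mean(), np.abs(restored - scene).mean())
```

In the real system the decode step has to work with a single fixed filter across a large range of object distances, which is exactly what the depth-invariant PSF buys you.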
A system I worked on was sort of a hybrid between shading the aperture (instead of stopping the light abruptly, it fades gradually to black at the edge) and phase changes. Lots of people have worked on this problem over the years, but these guys really stripped it down to its essence and came up with a highly optimized solution.
Re:Digital photography needs LESS DoF, not MORE. (Score:1, Informative)
It's *always* possible to give up DoF by choice.
Re:Gimme a break (Score:4, Informative)
I've blown 6MP images up to 20"x30". They look great. Good enough that people gush about how great they look when they buy them from us, at least. While I don't have access to an 11MP camera, I can't imagine that 30"x40" would be too much of a stretch.
Keep in mind that I'm talking about images from a $5000 camera, not a piece-o'-crap point-and-shoot.
Quick Depth of Field tutorial (Score:4, Informative)
This technology doesn't take a fundamentally blurred image and sharpen it; instead it looks like it uses very precisely waved lenses to create interference in the light coming through the lens, which is then digitally deconstructed to provide a sharp image with a VERY deep DoF. I can't get to their site to read up on this, but I'd guess there's probably some sort of differential-focus setup (2 lenses, focused at either end of your DoF, generating interference) and a lot of Fourier transforms. But that's just an educated guess based on what I know about optics and waveforms - YMMV, my $0.02, caveat emptor, IANAL, and I haven't had PhysChem in a year. Feel free to add any other disclaimers I left out.
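For the tutorial-minded, conventional (non-wavy) depth of field is easy to compute from the standard thin-lens approximations. A sketch using the usual hyperfocal-distance formulas (a 0.03 mm circle of confusion is a common assumption for 35mm; these are textbook formulas, not anything specific to this technology):

```python
def hyperfocal(f_mm: float, n: float, coc_mm: float = 0.03) -> float:
    """Hyperfocal distance in mm for focal length f_mm, f-number n,
    and circle of confusion coc_mm (0.03 mm is typical for 35mm)."""
    return f_mm ** 2 / (n * coc_mm) + f_mm

def dof_limits(f_mm: float, n: float, s_mm: float, coc_mm: float = 0.03):
    """Near/far limits of acceptable sharpness when focused at s_mm."""
    h = hyperfocal(f_mm, n, coc_mm)
    near = s_mm * (h - f_mm) / (h + s_mm - 2 * f_mm)
    if s_mm >= h:
        # Focused at or beyond the hyperfocal distance: far limit is infinity.
        return near, float("inf")
    far = s_mm * (h - f_mm) / (h - s_mm)
    return near, far

# 50mm lens at f/8 focused at 3 m: sharp from roughly 2.3 m to 4.2 m.
print(dof_limits(50, 8, 3000))
```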
Re:So (Score:4, Informative)
I've made 20"x30"s from this camera with no complaints. They weren't razor-sharp, but then again neither are 35mm prints at that size. Yours will be a bit sharper, but mine will have no grain and better color. Which one is better is a matter of opinion. And against Canon's 11MP, you wouldn't have a prayer.
Or, let's take wide-angle pictures. With the crop factor on your Nikon D1X, how can you get any wider than, say, 32mm (35mm equivalent)?
I have a 17mm lens (17-35mm F/2.8 AFS), which is 25mm equivalent on the D1X. If I went down to Nikon's rectilinear 14mm, I'd get 21mm equivalent. That's certainly wide enough for almost any application.
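The crop-factor arithmetic above is just multiplication; a trivial sketch (1.5x is the commonly quoted figure for the D1X -- check your own body's specs):

```python
def equiv_focal(focal_mm: float, crop: float) -> float:
    """35mm-equivalent focal length for a sensor with the given crop factor."""
    return focal_mm * crop

print(equiv_focal(17, 1.5))  # 25.5 -- the "25mm equivalent" above
print(equiv_focal(14, 1.5))  # 21.0 -- the "21mm equivalent" above
```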
Re:So (Score:3, Informative)
It's been shown in side-by-side tests of large prints that 10-11MP is far superior to 35mm film. Even though 35mm is technically able to hold more information than that, the grain of the film makes the images come out looking worse.
Re:Digital photography needs LESS DoF, not MORE. (Score:5, Informative)
I've never met a consumer-grade digital camera with a decent aperture range or depth of field. IMHO the new "wavy lens" technology can only be of benefit. (Assuming it actually works.)
There's more to life than Photoshop (Score:3, Informative)
There are some interesting HDR (high dynamic range) projects, such as HDRShop [debevec.org], and these formats are also used in several high-end 3D renderers, but I don't think they will become mainstream until Photoshop adopts them.
Unfortunately, Adobe insists on minor updates instead of doing what Photoshop (and Premiere, and several other of their products) needs, which is a complete rewrite.
High-end 3D renderers also have very good "film grain" simulation (film grain is not just random noise, it has very specific characteristics), and other tricks that can make CGI "feel" almost exactly like traditional analog media. But again, this is not something you'll find in Photoshop.
RMN
~~~
Re:Film and digital resolution comparisons (Score:5, Informative)
and: http://www.luminous-landscape.com/reviews/cameras
It's just polite to make such links both active and accurate (extraneous spaces in both links -- probably inserted by slashdot because you tried to submit the URLs as plain text).
HDRI vs RGB (Score:3, Informative)
High Dynamic Range Images use a higher bit depth (12 bits per channel?). Many of the Nikon cameras can save out these 12 bit/channel images, which, with the proper manipulation software (HDRShop, others), can be used for much finer and subtler manipulation.
So, (math skills permitting), I make that out as 4096 levels per channel, as opposed to the current 256/channel in a standard 24-bit image.
It's still an RGB system, but it's a much better RGB system.
The next step is to get manufacturers on board & start making HDRI Video Cards & Monitors.
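The levels-per-channel arithmetic checks out; a minimal sketch, including what gets thrown away when 12 bits are collapsed back down to 8:

```python
def levels(bits: int) -> int:
    """Distinct levels per channel at a given bit depth."""
    return 2 ** bits

def to_8bit(v12: int) -> int:
    """Collapse a 12-bit sample to 8 bits: 16 raw levels become one."""
    return v12 >> 4

print(levels(8))    # 256  (standard 24-bit RGB: 3 x 8 bits)
print(levels(12))   # 4096 (12 bits/channel raw output)
print(to_8bit(4095), to_8bit(4080))  # both 255: distinctions lost
```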
More info from Boulderdaily Camera (Score:2, Informative)
Boulderdailycamera [boulderdailycamera.com]
Boulder startup gets deal with major optics player
By Anthony Lane
For the Camera
A Boulder-based startup, which makes technology that greatly improves the clarity of images through a lens, is poised to grow after signing a deal with one of the world's premier lens and microelectronics makers.
CDM Optics is a private company with sales last year of about $1 million, according to R.C. "Merc" Mercure, CDM's chairman and chief executive.
Next year, sales are expected to double with CDM's new partnership with the optical engineering company Carl Zeiss, a renowned manufacturer of microscopes, lenses and other instruments.
"The world's oldest optical company has joined forces with the most modern," said Ed Dowski, vice-president of CDM Optics.
The moving parts and multiple lenses of microscopes and certain cameras are precisely engineered to control aberrations and to produce a sharp image where someone wants it -- on a piece of paper, a slide or a computer screen.
Over centuries, scientists have devised ways to make sharp images of ever-smaller and more distant objects, but could do little to overcome the unchanging rules governing light and the formation of a focused image.
"There were no revolutionary changes in optics for 200 years," said Dowski.
CDM Optics produces an unusual type of "lens." Added to a standard lens, it produces images that actually appear blurry.
In fact, "There doesn't seem to be any part of the image that is more focused than any other," said Mercure, who was the co-founder of Ball Brothers Research Corp., which became Ball Aerospace.
A uniformly unfocused image may seem an unlikely goal, but after being digitally processed, the result is an image that is entirely in focus.
Mercure holds a poster with four pictures of a pack of crayons. Two were produced with a standard digital camera and the other two with a digital camera equipped with CDM's Wavefront Coding technology.
In one of the images from the standard camera, only a few crayons in the middle of the pack are in focus. To bring more of the crayons into focus, the photographer would have to decrease the size of the hole through which light enters the camera.
In the resulting image, more crayons are in focus, but it appears grainy as a result of less light hitting the camera's digital detector.
The difference between the two pictures produced with CDM's technology is more dramatic. The first is hazy -- it is an unprocessed image that would not ordinarily be seen.
In the second picture, all of the crayons from front to back are in focus without the graininess from the standard camera.
Dowski said applications for the technology that allows lenses to produce such images are numerous.
"You can either make lenses cheaper, sharper or both," he said.
Sharper images may be beneficial for many types of optics. A microscope, for instance, may magnify an object to 100 times its actual size with only a sliver 1 micron thick in focus.
"We can give a microscope up to 15 microns of focus," Mercure said.
One area in which this improved depth of field might be useful is in vitro fertilization. Ordinarily, a doctor produces a great number of embryos and monitors them for several days before implanting several. The goal is to cause a successful pregnancy while minimizing the number of multiple births.
The problem is that after about three days, embryos are difficult to monitor with an ordinary microscope. The embryologist must guess which embryos are most likely to produce a successful pregnancy.
Using Wavefront Coding technology, Mercure said, embryologists should be able to monitor the embryos for four or five days, thus reducing the number of embryos that must be implanted to have the same chance of a successful pregnancy.
The same increase in depth of field
Low-yield (Score:3, Informative)
I doubt it's that bad, since a camera can deal with a sprinkling of 'dead' pixels, while pretty much any defect will kill a CPU.
more images online here (Score:2, Informative)
more images of increased depth [colorado.edu]
More information (Score:2, Informative)
Here is a newspaper article.
http://www.boulderdailycamera.com/busin
and another.
http://www.alteich.com/tidbits/t012802.
and some images.
http://www.colorado.edu/isl/intimages/3c
Re:Digital better than film? P-shaw! (Score:3, Informative)
Low-light: CCDs have been used heavily by astronomers for quite a while due to their exceptional low-light performance. (Esp. when actively cooled.)
IR: For near-IR, current image sensors are excellent. In fact, digital camera manufacturers must use an IR-blocking filter in order to prevent IR sensitivity from being a major problem. Remove this filter and you have an excellent IR camera. Sony image sensors are more IR-sensitive than the average CCD or CMOS imager, which Sony takes advantage of in their NightShot camcorders. (The only major difference between a NightShot capable camcorder and any other camcorder is that the IR blocking filter on a NightShot camera can be moved out of the way without disassembling the camera. The improved IR sensitivity helps, but is nothing compared to simply removing that filter.)
Time-lapse? I can do time-lapse photography with cron and almost any video encoding software, as most can import sequences of still images. To do it with film, you need a camera with a HUGE film carrier and then you must play that film back rapidly in a projector.
And far-IR? I've never heard of film being used for far-IR (thermal) imaging. It's actively cooled electronic imaging all the way... (Often liquid nitrogen cooled, to keep the camera from "seeing itself". Cool film down that much and it will stop working, that's IF you're lucky enough for it not to crack from being too brittle.)
"Economist" article (Score:4, Informative)
http://www.economist.com/science/tq/displayStory.cfm?story_id=1476751 [economist.com]
Re:So (Score:5, Informative)
Don't get me wrong: I *love* my Canon PowerShot G2 (4MP). I've been extremely pleased with the results in a 4x6 format. I've blown some up as large as 8x10 (had them professionally printed and developed) and find that the quality is almost as good as prints made from 100 ISO 35mm film. Having "during the shot" color balancing also makes it much easier to get usable prints without serious headaches. And it's certainly more convenient to have the images digitally available, too.
I also find that my old-school mental block of "don't waste film" is gone, and I now take many more shots than I used to. That gives me a bigger pool of shots to choose from, so I get better final prints. Yes, I know I wasn't supposed to worry about "wasting film" before, but old habits are very hard to break.
Re:Glitter and pepper (Score:3, Informative)
Since the corporate site is still down, the best place to read about this is probably the website of the Imaging Systems Laboratory [colorado.edu] at the University of Colorado at Boulder, which I think is where all this technology was originally developed. Someone else posted that link elsewhere in the comments, but I will post it again here, properly hyperlinked for convenient Slashdotting.
So? (Score:3, Informative)
Only amateurs want "everything from here to infinity" to be in-focus.
The advantages of selective depth-of-field cannot be overstated. The ability to have the background be completely soft and the subject be the only thing in sharp focus (thereby drawing the viewer's attention to it) is a huge advantage of film over digital.
For example, on Attack of the Clones, the guys at ILM actually had to process the images to give them less depth-of-field, because the cameras couldn't get as little depth-of-field as the cinematographer wanted.
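To put numbers on that selective-focus advantage: in the thin-lens model, a point at infinity renders as a blur disc of diameter f^2 / (N * (s - f)) on the sensor when the lens is focused at distance s. A quick sketch with illustrative portrait-lens numbers (my own worked example, not from the article):

```python
def bg_blur_mm(f_mm: float, n: float, s_mm: float) -> float:
    """Blur-disc diameter on the sensor (mm) for a point at infinity,
    with the lens focused at s_mm. Thin-lens approximation."""
    return f_mm ** 2 / (n * (s_mm - f_mm))

# 85mm portrait lens wide open at f/1.8, subject at 2 m:
print(bg_blur_mm(85, 1.8, 2000))  # ~2.1 mm: backgrounds melt away
# A 28mm lens at the same aperture and distance barely blurs at all:
print(bg_blur_mm(28, 1.8, 2000))  # ~0.22 mm
```

This is why long lenses at wide apertures isolate subjects so well, and why short lenses on small sensors (most digicams) struggle to.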