
Wavy Lenses Extend Depth of Field in Digital Imaging

genegeek writes "On Feb. 25, CDM Optics was awarded a patent for a new digital imaging system using 'Wavefront Coding' that produces images with ten times the depth of field of conventional lenses. The captured image itself is blurred until it is processed. Image examples are here."
  • Re:So (Score:5, Informative)

    by Anonymous Coward on Tuesday March 18, 2003 @01:47PM (#5537744)
    This isn't really analog-vs-digital, although digital processing is the easiest way to "decode" the image after it's gone through the fancy lens.

    The advantage of this system over your Canon is that you can get high depth of field and large apertures at the same time. In order to increase the depth of field of your camera, you have to stop down the lens, which means less light. Less light means longer exposures (can't stop the action) or more sensitive film/sensors (more noise).

    Instead of stopping down the lens and blocking light, this only affects the phase of the wavefront which means all the light energy still goes through.

    Extremely clever.
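
    A rough sketch of the light penalty being described here (the f-numbers and base exposure below are example values I picked, not anything from the article): the light gathered falls off as 1/f², so the exposure time needed to collect the same total light grows as f².

    ```python
    # Rough sketch: how exposure time scales as you stop down for more depth of field.
    # The f-numbers and the base exposure at f/2.8 are made-up example values.
    base_f = 2.8          # wide-open aperture
    base_time = 1 / 500   # exposure time at f/2.8, in seconds

    for f in (2.8, 5.6, 11, 22):
        # Light gathered goes as 1/f^2, so the required exposure time goes as f^2.
        t = base_time * (f / base_f) ** 2
        print(f"f/{f:<4} -> {t * 1000:7.1f} ms for the same total light")
    ```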
  • Re:So (Score:3, Informative)

    by elmegil ( 12001 ) on Tuesday March 18, 2003 @01:50PM (#5537764) Homepage Journal
    not have the leeway of doing photo processing tricks in the darkroom.

    Last time I checked, it was a hell of a lot easier to do photo processing tricks with Photoshop than in a darkroom, and with experience and skill the two types of work can be hard to distinguish from each other. The only exception I can think of is "push"-type processing, which takes advantage of being able to stretch or alter the dynamic range of your medium (film or photo paper) beyond its ratings. Since the site appears slashdotted, what exactly is it about the new lens that prevents any additional processing?

  • Re:So (Score:3, Informative)

    by burninginside ( 631942 ) on Tuesday March 18, 2003 @01:53PM (#5537786)
    It takes about 25+ megapixels to simulate 35mm film, about 100 megapixels to simulate medium format film, or 500 megapixels to simulate 4x5" film. For the Internet even 3 MP is fine, but the difference becomes obvious in a gallery-size print.
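
    For anyone curious, the back-of-the-envelope arithmetic behind figures like those looks something like this, assuming film resolves detail roughly like a 4000 dpi scan (my assumption; assume closer to 5000 dpi and you land near the 25/100/500 MP numbers above):

    ```python
    # Pixel counts for common film formats at an assumed 4000 dpi scanning resolution.
    # Frame sizes are nominal and the dpi figure is just one common benchmark.
    DPI = 4000
    MM_PER_INCH = 25.4

    formats = {
        "35mm (24x36 mm)":          (24, 36),
        "medium format (56x70 mm)": (56, 70),
        "4x5 inch sheet":           (4 * MM_PER_INCH, 5 * MM_PER_INCH),
    }

    for name, (h_mm, w_mm) in formats.items():
        pixels = (h_mm / MM_PER_INCH * DPI) * (w_mm / MM_PER_INCH * DPI)
        print(f"{name:28s} ~{pixels / 1e6:5.0f} megapixels")
    ```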
  • by mks113 ( 208282 ) <{mks} {at} {kijabe.org}> on Tuesday March 18, 2003 @01:55PM (#5537805) Homepage Journal
    That wouldn't take long to saturate the processor. If it were flat HTML with images, it would just max out the network.

    I hope the heatsinks work!
  • by 4of12 ( 97621 ) on Tuesday March 18, 2003 @01:59PM (#5537834) Homepage Journal

    I couldn't help but think back to the problem with the Hubble Space Telescope [nevada.edu], where, after launch, they discovered that the mirror had not been properly ground to specification.

  • very cool (Score:5, Informative)

    by Anonymous Coward on Tuesday March 18, 2003 @02:00PM (#5537849)
    Ah yes, I know this system well. I did my master's research in extended depth-of-field optics and came across this research which pretty much blew away what I was working on.

    Here's a bit of background: in photography or laser scanning (point-by-point photography, basically), you always have a trade-off between depth-of-field and aperture size (as any photographer knows). Bigger aperture means shallow depth-of-field. However, a smaller aperture means lots of wasted light (imagine closing the aperture in your camera), and this means longer exposure times, and more importantly more NOISE in your images. This is true for digital, film, or photodetector.

    So the "holy grail" is to keep the aperture open but still have high depth-of-field. This system depends on changing the phase of the light, instead of the amplitude (which is what you do when you stop down a lens to a smaller aperture). That way, no light energy is blocked and wasted.

    Since the phase is changed, the resulting image on the CCD or film is fuzzy and has to be "decoded". You can think of it as "encoding" the wavefront in a special way that preserves the depth of field, capturing the image, and then "decoding" it into a sharp picture. It is really amazing. I hope it shows up in consumer cameras someday, it could completely change consumer photography since most "snapshot photographers" don't care about depth of field or all that stuff. It will also be great for medical and industrial imaging.

    My system was sort of a hybrid between shading the aperture (instead of a sudden stopping of light, it gradually goes to black at the edge) and phase changes. Lots of people have been working on this problem over the years, but these guys really stripped the problem down to the essence and came up with a highly optimized solution.
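
    For the curious, here is a minimal toy of the encode/decode idea in Python: blur a synthetic scene with a KNOWN kernel, then invert it with a Wiener-style filter. The real system uses a purpose-designed phase mask rather than a Gaussian blur, so treat this only as the "capture blurred, decode sharp" concept, not their actual algorithm.

    ```python
    # Toy encode/decode: blur with a known kernel, then undo it with a Wiener filter.
    import numpy as np

    N = 128
    yy, xx = np.mgrid[0:N, 0:N]
    scene = 0.5 + 0.5 * np.sin(2 * np.pi * xx / 16) * np.cos(2 * np.pi * yy / 24)

    # A known blur kernel (Gaussian, sigma = 3 px), centred then shifted for the FFT.
    cy, cx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    psf = np.exp(-(cx ** 2 + cy ** 2) / (2 * 3.0 ** 2))
    psf /= psf.sum()
    H = np.fft.fft2(np.fft.ifftshift(psf))        # transfer function of the blur

    # "Encode": circular convolution via the FFT.
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))

    # "Decode": Wiener filter built from the known H; k is a noise-to-signal guess.
    k = 1e-3
    wiener = np.conj(H) / (np.abs(H) ** 2 + k)
    restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))

    print("RMS error, blurred :", np.sqrt(np.mean((blurred - scene) ** 2)))
    print("RMS error, restored:", np.sqrt(np.mean((restored - scene) ** 2)))
    ```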
  • by Anonymous Coward on Tuesday March 18, 2003 @02:12PM (#5537945)
    There's more to this than meets the eye. :-) The two digicams I've used (an old Agfa and a less old Nikon 990) had problems with CCD noise. This slick new invention allows a camera to use a bigger aperture with shorter exposure times while still providing adequate focus.

    It's *always* possible to give up DoF by choice.

  • Re:Gimme a break (Score:4, Informative)

    by egomaniac ( 105476 ) on Tuesday March 18, 2003 @02:15PM (#5537972) Homepage
    I don't get the sense that you've ever used a good digital camera.

    I've blown 6MP images up to 20"x30". They look great. Good enough that people gush about how great they look when they buy them from us, at least. While I don't have access to an 11MP camera, I can't imagine that 30"x40" would be too much of a stretch.

    Keep in mind that I'm talking about images from a $5000 camera, not a piece-o'-crap point-and-shoot.
  • by caveat ( 26803 ) on Tuesday March 18, 2003 @02:22PM (#5538029)
    Depth of field has to do with focus, but it's not quite so simple. A lens has what's known as a "critical plane of focus", which is the theoretical plane on which an object will be in perfect focus, i.e. raytraced light from this plane will fall perfectly on the film/CCD plane. Practically, there is a certain distance in front of and behind the CPoF where an object will be in acceptable focus; this is the depth of field. It is affected by several factors, but the largest in conventional lenses is the f-stop (the size of the iris aperture behind the lens: smaller opening = less light, longer exposure time, and greater DoF), followed to a much lesser extent by focal length (longer lens = smaller DoF).

    This technology doesn't take a fundamentally blurred image and sharpen it; instead it looks like it uses very precisely waved lenses to create interference in the light coming through the lens, which is then digitally deconstructed to provide a sharp image with a VERY deep DoF. I can't get to their site to read up on this, but I'd guess there's probably some sort of differential-focus setup (2 lenses, focused at either end of your DoF, generating interference) and a lot of Fourier transforms. But that's just an educated guess based on what I know about optics and waveforms - YMMV, my $0.02, caveat emptor, IANAL, and I haven't had PhysChem in a year. Feel free to add any other disclaimers I left out.
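
    To put some rough numbers on the f-stop/DoF relationship, here's a quick calculation using the standard hyperfocal-distance approximation. The 50mm lens, 3m subject distance and 0.03mm circle of confusion are just example values, not anything from the article.

    ```python
    # Depth-of-field limits for a conventional lens (hyperfocal approximation).
    def dof_limits(f_mm, n, subject_mm, coc_mm=0.03):
        """Near/far limits of acceptable focus (in mm) around the plane of focus."""
        h = f_mm ** 2 / (n * coc_mm) + f_mm              # hyperfocal distance
        near = h * subject_mm / (h + (subject_mm - f_mm))
        far = h * subject_mm / (h - (subject_mm - f_mm)) if subject_mm < h else float("inf")
        return near, far

    f_mm, subject_mm = 50, 3000                          # 50 mm lens focused at 3 m
    for n in (2.8, 8, 22):
        near, far = dof_limits(f_mm, n, subject_mm)
        far_txt = "infinity" if far == float("inf") else f"{far / 1000:.2f} m"
        print(f"f/{n}: acceptable focus from {near / 1000:.2f} m to {far_txt}")
    ```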
  • Re:So (Score:4, Informative)

    by egomaniac ( 105476 ) on Tuesday March 18, 2003 @02:23PM (#5538032) Homepage
    Ok, I'll load my 30-year-old Canon with some Kodak Technical Pan film. Let's make 16x20" enlargements and see how we compare, huh?

    I've made 20"x30"s from this camera with no complaints. They weren't razor-sharp, but then again neither are 35mm prints at that size. Yours will be a bit sharper, but mine will have no grain and better color. Which one is better is a matter of opinion. And against Canon's 11MP, you wouldn't have a prayer.

    Or, let's take wide-angle pictures. With the cropping factor on your Nikon D1X, how can you be any wider than, say, 32mm (35mm equivalent)?

    I have a 17mm lens (17-35mm F/2.8 AFS), which is 25mm equivalent on the D1X. If I went down to Nikon's rectilinear 14mm, I'd get 21mm equivalent. That's certainly wide enough for almost any application.
  • Re:So (Score:3, Informative)

    by blaine ( 16929 ) on Tuesday March 18, 2003 @02:23PM (#5538036)
    This is false, because it overlooks an inherent weakness of film: grain.

    It's been shown in side-by-side tests of large prints that 10-11MP is far superior to 35mm film. Despite 35mm technically being able to hold more information than that, the grain of the film makes the images come out looking worse.
  • by mawdryn ( 531994 ) on Tuesday March 18, 2003 @02:48PM (#5538258)
    Having spent seven years as a commercial studio photographer (products and advertising, not portraits), I can say from much personal experience that more depth of field -- and more control over that depth of field -- is a very good thing. Even in a studio environment, where one can typically throw as much light on a subject as could be desired (allowing the use of very small apertures), achieving and controlling high depths of field can be a pain in the butt even for highly-controllable "analog" large format (4x5 and 8x10) cameras.

    I've never met a consumer-grade digital camera with decent aperture range or depth of field. IMHO the new "wavy lens" technology can only be of benefit. (Assuming it actually works.)
  • by Rui del-Negro ( 531098 ) on Tuesday March 18, 2003 @02:53PM (#5538302) Homepage
    The real problem there is dynamic range. Photoshop still works in 8 bits per channel, which is clearly not enough for any sort of exposure / brightness / contrast control. You need at least 16 bits per channel, preferably 32 (in floating-point format). Photoshop can load 16-bpc images but 99% of its tools are disabled until you convert the image down to 8-bpc. In other words: the 16-bpc mode is there just for marketing.

    There are some interesting HDR (high dynamic range) projects, such as HDRShop [debevec.org], and these formats are also used in several high-end 3D renderers, but I don't think they will become mainstream until Photoshop adopts them.

    Unfortunately, Adobe insists on minor updates instead of doing what Photoshop (and Premiere, and several other of their products) needs, which is a complete rewrite.

    High-end 3D renderers also have very good "film grain" simulation (film grain is not just random noise, it has very specific characteristics), and other tricks that can make CGI "feel" almost exactly like traditional analog media. But again, this is not something you'll find in Photoshop.

    RMN
    ~~~
  • http://www.robgalbraith.com/bins/content_page.asp?cid=7-4833-4853 [robgalbraith.com]
    and: http://www.luminous-landscape.com/reviews/cameras/1ds/1ds-field.shtml [luminous-landscape.com]

    It's just polite to make such links both active and accurate (extraneous spaces in both links -- probably inserted by slashdot because you tried to submit the URLs as plain text).

  • by iblink ( 648486 ) on Tuesday March 18, 2003 @03:15PM (#5538462)
    Although Colorado University may never forgive me, this address has links to the research papers as well as more images: http://www.colorado.edu/isl/
  • HDRI vs RGB (Score:3, Informative)

    by NickFusion ( 456530 ) on Tuesday March 18, 2003 @03:15PM (#5538463) Homepage
    That's because Photoshop & most digital cameras only use 24-bit RGB, which is a crappy color space, and one that we're currently stuck with because of our display devices.

    High Dynamic Range Images use a higher bit depth (12 bits per channel?). Many of the Nikon cameras can save out these 12-bit/channel images, which, with the proper manipulation software (HDRShop, others), can be used for much finer and subtler manipulation.

    So (math skills permitting), I make that out as 4,096 levels per channel, as opposed to the current 256 per channel in a standard 24-bit image (quick check below).

    It's still an RGB system, but it's a much better RGB system.

    The next step is to get manufacturers on board & start making HDRI Video Cards & Monitors.
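
    Quick check of that levels-per-channel arithmetic (assuming plain linear coding, ignoring gamma or any log encoding a camera might actually apply):

    ```python
    # Levels per channel and total RGB colours for a few bit depths.
    for bits in (8, 12, 16):
        levels = 2 ** bits
        print(f"{bits:2d} bits/channel -> {levels:6d} levels, {levels ** 3:,} total RGB colours")
    ```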
  • by DoubleD ( 29726 ) on Tuesday March 18, 2003 @03:22PM (#5538512)
    Some more info from
    Boulderdailycamera [boulderdailycamera.com]

    Boulder startup gets deal with major optics player
    By Anthony Lane
    For the Camera

    A Boulder-based startup, which makes technology that greatly improves the clarity of images through a lens, is poised to grow after signing a deal with one of the world's premier lens and microelectronics makers.

    CDM Optics is a private company with sales last year of about $1 million, according to R.C. "Merc" Mercure, CDM's chairman and chief executive.

    Next year, sales are expected to double with CDM's new partnership with the optical engineering company Carl Zeiss, a renowned manufacturer of microscopes, lenses and other instruments.

    "The world's oldest optical company has joined forces with the most modern," said Ed Dowski, vice-president of CDM Optics.

    The moving parts and multiple lenses of microscopes and certain cameras are precisely engineered to control aberrations and to produce a sharp image where someone wants it -- on a piece of paper, a slide or a computer screen.

    Over centuries, scientists have devised ways to make sharp images of ever-smaller and more distant objects, but could do little to overcome the unchanging rules governing light and the formation of a focused image.

    "There were no revolutionary changes in optics for 200 years," said Dowski.

    CDM Optics produces an unusual type of "lens." Added to a standard lens, it produces images that actually appear blurry.

    In fact, "There doesn't seem to be any part of the image that is more focused than any other," said Mercure, who was the co-founder of Ball Brothers Research Corp., which became Ball Aerospace.

    A uniformly unfocused image may seem an unlikely goal, but after being digitally processed, the result is an image that is entirely in focus.

    Mercure holds a poster with four pictures of a pack of crayons. Two were produced with a standard digital camera and the other two with a digital camera equipped with CDM's Wavefront Coding technology.

    In one of the images from the standard camera, only a few crayons in the middle of the pack are in focus. To bring more of the crayons into focus, the photographer would have to decrease the size of the hole through which light enters the camera.

    In the resulting image, more crayons are in focus, but it appears grainy as a result of less light hitting the camera's digital detector.

    The difference between the two pictures produced with CDM's technology is more dramatic. The first is hazy -- it is an unprocessed image that would not ordinarily be seen.

    In the second picture, all of the crayons from front to back are in focus without the graininess from the standard camera.

    Dowski said applications for the technology that allows lenses to produce such images are numerous.

    "You can either make lenses cheaper, sharper or both," he said.

    Sharper images may be beneficial for many types of optics. A microscope, for instance, may magnify an object to 100 times its actual size with only a sliver 1 micron thick in focus.

    "We can give a microscope up to 15 microns of focus," Mercure said.

    One area in which this improved depth of field might be useful is in vitro fertilization. Ordinarily, a doctor produces a great number of embryos and monitors them for several days before implanting several. The goal is to cause a successful pregnancy while minimizing the number of multiple births.

    The problem is that after about three days, embryos are difficult to monitor with an ordinary microscope. The embryologist must guess which embryos are most likely to produce a successful pregnancy.

    Using Wavefront Coding technology, Mercure said, embryologists should be able to monitor the embryos for four or five days, thus reducing the number of embryos that must be implanted to have the same chance of a successful pregnancy.

    The same increase in depth of field ...
  • Low-yield (Score:3, Informative)

    by autopr0n ( 534291 ) on Tuesday March 18, 2003 @03:22PM (#5538518) Homepage Journal
    Some of them are these days (wow! talk about low yield wafers!)

    I doubt it's that bad, since a camera can deal with a sprinkling of 'dead' sensors, while pretty much any defect will kill a CPU.
  • by Anonymous Coward on Tuesday March 18, 2003 @03:31PM (#5538604)
    In case you were /.'d, most of the images from the CDM Optics website are also available here:
    more images of increased depth [colorado.edu]
  • More information (Score:2, Informative)

    by jimwatters ( 110653 ) on Tuesday March 18, 2003 @03:48PM (#5538720) Homepage
    Maybe it's just the same info, because I have not been able to get through to the original links.
    Here is a newspaper article:
    http://www.boulderdailycamera.com/business/tech/27bcdm.html

    and another:
    http://www.alteich.com/tidbits/t012802.htm

    and some images:
    http://www.colorado.edu/isl/intimages/3coloredf.html
  • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Tuesday March 18, 2003 @05:21PM (#5539527) Homepage
    Um, you just named a number of areas where electronic imaging is king.

    Low-light: CCDs have been used heavily by astronomers for quite a while due to their exceptional low-light performance. (Esp. when actively cooled.)

    IR: For near-IR, current image sensors are excellent. In fact, digital camera manufacturers must use an IR-blocking filter in order to prevent IR sensitivity from being a major problem. Remove this filter and you have an excellent IR camera. Sony image sensors are more IR-sensitive than the average CCD or CMOS imager, which Sony takes advantage of in their NightShot camcorders. (The only major difference between a NightShot capable camcorder and any other camcorder is that the IR blocking filter on a NightShot camera can be moved out of the way without disassembling the camera. The improved IR sensitivity helps, but is nothing compared to simply removing that filter.)

    Time-lapse? I can do time-lapse photography with cron and almost any video encoding software, as most can import sequences of still images (a minimal sketch follows below). With film, you'd need a camera with a HUGE film carrier and then you'd have to play that film back rapidly in a projector.

    And far-IR? I've never heard of film being used for far-IR (thermal) imaging. It's actively cooled electronic imaging all the way... (Often liquid nitrogen cooled, to keep the camera from "seeing itself". Cool film down that much and it will stop working, that's IF you're lucky enough for it not to crack from being too brittle.)
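
    For the time-lapse point above, here's the lazy version of the cron approach as a minimal capture-loop sketch. The capture command is a placeholder -- substitute whatever actually drives your camera -- and the numbered frames can then be fed to any video encoder that accepts image sequences.

    ```python
    # Minimal time-lapse capture loop. "my-camera-capture" is a hypothetical
    # stand-in for your real capture tool (gphoto2, a webcam grabber, etc.).
    import subprocess
    import time

    CAPTURE_CMD = ["my-camera-capture", "--output"]   # hypothetical CLI
    INTERVAL_S = 60                                   # one frame per minute
    FRAMES = 240                                      # four hours of frames

    for i in range(FRAMES):
        filename = f"frame_{i:05d}.jpg"
        subprocess.run(CAPTURE_CMD + [filename], check=True)
        time.sleep(INTERVAL_S)
    ```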
  • "Economist" article (Score:4, Informative)

    by JPMH ( 100614 ) on Tuesday March 18, 2003 @06:01PM (#5539820)
    The Economist had a nice descriptive article about wavefront coding a couple of months ago. Interesting stuff.

    http://www.economist.com/science/tq/displayStory.cfm?story_id=1476751 [economist.com]

  • Re:So (Score:5, Informative)

    by plover ( 150551 ) on Tuesday March 18, 2003 @06:10PM (#5539877) Homepage Journal
    Except that weakness turns out to be a strength when dealing with aliasing. The random orientation of the individual grains avoids aliasing issues. Even at a resolution exceeding that of the film grain, a grid of parallel lines (especially parallel or concentric curves) can produce a noticeable moire effect. Also, I've found that angled black and white lines can have noticeable color artifacts (although I understand there's a new CCD technology that's supposed to overcome this problem). The randomness of the grain also seems to provide a "softening" effect that I personally find more pleasing than the regularity of a matrix of pixels.

    Don't get me wrong: I *love* my Canon PowerShot G2 (4MP). I've been extremely pleased with the results in a 4x6 format. I've blown up some as large as 8x10 (had them professionally printed and developed) and find that the quality is almost as good as prints made from ISO 100 35mm film. Having "during the shot" color balancing also makes it much easier to get usable prints without serious headaches. And it's certainly more convenient to me to have the images digitally available, too.

    I also find that my old-school mental block of "don't waste film" is gone, and I now take many more shots than I used to. That gives me a bigger pool of shots to choose from, so I get better final prints. Yes, I know I wasn't supposed to worry about "wasting film" before, but those old habits are very hard to break.

  • Re:MOD PARENT UP! (Score:3, Informative)

    by Hal-9001 ( 43188 ) on Tuesday March 18, 2003 @08:40PM (#5540848) Homepage Journal
    Woot! Another OpSci person reads Slashdot! :-) (Okay, well, technically I'm an alumnus [B.S. optical engineering 2002], but I'll probably come back ;-) )
    From some of their "interactive" pages (namely this page [colorado.edu]), it seems as if they are using the "waviness" (I am still unclear about this) to do some amount of tomography.
    From skimming the website of the Imaging Systems Laboratory at the University of Colorado at Boulder (directed by W.T. Cathey, who wrote one of the standard texts on optical information processing and holography), the way they achieve this depth-of-focus trick is half optical and half digital signal processing. They use a cubic phase filter (which literally could be a specially warped piece of glass immediately after the lens) to distort the wavefront, so the image captured by a CCD or CMOS array is uniformly blurred by this cubic phase. I think the cubic phase that's applied makes the phase errors due to defocus more evident (probably akin to recording the phase by interference in off-axis holography (invented by Emmett Leith [my advisor :-)] and Juris Upatnieks), or measuring wavefront distortion using a Shack-Hartmann wavefront sensor). Since the cubic phase error that was applied is known, it's easy to deconvolve the image to remove its effect, and the phase errors due to defocus probably interact with the cubic phase in a way that's visible in the image spectrum, so a filter can be applied to remove the effect of defocus as well.
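
    If you want to play with the cubic-phase idea, here's a toy pupil-plane simulation (my own demo parameters, not the actual CDM / Cathey-Dowski design). It compares how much the point-spread function changes with defocus for a plain circular aperture versus one with a strong cubic phase term; the cubic-mask PSF stays far more similar across defocus, which is what lets a single digital deconvolution work over a large depth range.

    ```python
    # Toy pupil-plane simulation of a cubic phase mask vs. a plain aperture.
    import numpy as np

    N = 256
    u = np.linspace(-2, 2, N)            # pupil coordinates; aperture has unit radius,
    X, Y = np.meshgrid(u, u)             # so the array is zero-padded by a factor of 2
    aperture = (X ** 2 + Y ** 2) <= 1.0  # circular pupil

    def psf(defocus_waves, cubic_alpha):
        """|FFT(pupil)|^2 for a given defocus (in waves) and cubic strength (radians)."""
        phase = (2 * np.pi * defocus_waves * (X ** 2 + Y ** 2)  # defocus term
                 + cubic_alpha * (X ** 3 + Y ** 3))             # cubic mask term
        pupil = aperture * np.exp(1j * phase)
        p = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
        return p / p.sum()

    def similarity(a, b):
        return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

    for alpha, label in ((0.0, "plain aperture"), (90.0, "with cubic mask")):
        ref = psf(0.0, alpha)
        sims = ["%.3f" % similarity(ref, psf(w, alpha)) for w in (1.0, 2.0, 3.0)]
        print(f"{label:16s}: PSF similarity at 1, 2, 3 waves of defocus = {sims}")
    ```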
  • by Hal-9001 ( 43188 ) on Tuesday March 18, 2003 @09:24PM (#5541058) Homepage Journal
    When I first saw the article it sounded like the post-processing that is done to improve the focus of images that were originally taken out-of-focus. You can extract a lot of features by convolving an image with the inverse of the defocussing transfer function.

    But doing this has a downside: It also brings to a point focus, or nearly so, the light from patches of a certain range of shapes. They weren't originally points - but photographing them defocussed made the same shape blur as a point light source would have, so the post-processing turned them into points. You extract features that would have been unreadable (like a license plate number), but also "sprinkle glitter and pepper" over the image.
    Not only are the original images taken out-of-focus, but they have also been optically distorted by a specially shaped glass plate (this is the actual wavefront coding part). This optical distortion affects in-focus and out-of-focus objects equally, and I think that is what allows them to deconvolve the image without introducing a lot of noise. Even if it does introduce some noise, they can probably filter that out with a weak blurring filter.

    Since the corporate site is still down, the best place to read about this is probably the website of the Imaging Systems Laboratory [colorado.edu] at the University of Colorado at Boulder, which I think is where all this technology was originally developed. Someone else posted that link elsewhere in the comments, but I will post it again here, properly hyperlinked for convenient Slashdotting. ;-)
  • So? (Score:3, Informative)

    by KewlPC ( 245768 ) on Wednesday March 19, 2003 @03:14AM (#5542545) Homepage Journal
    Most photographers want LESS depth-of-field than the current crop of digital cameras provide.

    Only amateurs want "everything from here to infinity" to be in-focus.

    The advantages of selective depth-of-field cannot be overstated. The ability to have the background be completely soft and have the subject be the only thing in sharp focus (thereby drawing the viewer's attention to it) is a huge advantage of film over digital.

    For example, on Attack of the Clones, the guys at ILM actually had to process the images to give them less depth-of-field, because the cameras couldn't get as little depth-of-field as the cinematographer wanted.
