Camera Lets You Shift Focus After Shooting
Zothecula writes "For those of us who grew up with film cameras, even the most basic digital cameras can still seem a little bit magical. The ability to instantly see how your shots turned out, then delete the ones you don't want and manipulate the ones you like, is something we would have killed for. Well, light field cameras could be to today's digital cameras what digital was to film. Among other things, they allow users to selectively shift focus between various objects in a picture after it's been taken. While the technology has so far been inaccessible to most of us, that is set to change with the upcoming release of Lytro's consumer light field camera."
fitrsrsitf (Score:2, Funny)
If you refocus that comment, it reads as "first".
Personally, I'm still waiting for.. (Score:1)
A holocam.
Re: (Score:2)
This was already done years ago in a different manner by Ren Ng (I believe that was his name), who used an extra digital sensor. It's not a holocam; it's a standard digital camera modified to enable changing the focus after the fact.
Re:Personally, I'm still waiting for.. (Score:4, Interesting)
From the sound of it, it basically captures a picture with a Z-buffer -- that is, they capture spatial information and angular information, and the angular information is then matched up to find corresponding objects to assess depth for refocusing.
One nifty thing about pictures and videos with built-in Z-buffers would be that it'd be really easy to render into them. Heck, you could have a camera with a built-in GPU that could do it in realtime as you're recording. :)
One step beyond the Z-buffer would be to then do a reverse perspective transformation and extract polygonal information from the scene. This would be of particular use in video recording, where people moving allows the camera to see what's behind them, hidden sides of their bodies, etc. Then you could not only refocus your image, but outright move the camera around in the scene. Of course, if we get to that point, then we'll start seeing increasing demand for cameras that always capture 360-degree panoramas. Combine this with built-in GPS and timestamping and auto-networking of images (within whatever privacy constraints are specified by the camera's owners), and the meshes captured from different angles by people who don't even know each other could be merged into a more complete scene. In busy areas, you could have a full 3d recreation of said area at any point in time. :) "Let's do a flyover along this path in Times Square on this date at this time..."
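As a toy illustration of "rendering into" a photo that carries a per-pixel Z-buffer, a depth test per pixel is all it takes. This is a minimal sketch with hypothetical NumPy arrays, nothing to do with Lytro's actual data format:

```python
import numpy as np

def composite_by_depth(photo, photo_depth, obj_rgb, obj_depth):
    """Insert a rendered object into a photo using per-pixel depth tests.

    photo:       (H, W, 3) background image
    photo_depth: (H, W)    depth captured alongside the photo (smaller = closer)
    obj_rgb:     (H, W, 3) rendered object, already projected into the same view
    obj_depth:   (H, W)    object depth, np.inf where the object is absent
    """
    obj_wins = obj_depth < photo_depth   # object is in front of the captured scene
    out = photo.copy()
    out[obj_wins] = obj_rgb[obj_wins]
    return out
```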
Re: (Score:2)
Given the demo in the video from the article, you should be able to both create an image-based 3D scene from the data and generate a stereo pair. On his laptop he was wiggling the perspective back and forth within the limits of the area that the camera captured. Being able to change the focus and depth of field automatically means that you've got a little bit of what's behind every edge.
Re: (Score:2)
Unfortunately, the video isn't working for me on this computer. However, the ability to change depth of focus in post merely requires a blurring algorithm that's selective by z-buffer value. No new information is needed. It does require that the whole image be "sharp", of course.
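Something like this, as a rough sketch (an all-in-focus image plus a depth map, both hypothetical; a real implementation would model the lens blur much more carefully):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus_from_depth(image, depth, focus_depth, max_sigma=8.0):
    """Fake a shallow depth of field from an all-in-focus (H, W, 3) image plus
    an (H, W) depth map.

    Blur grows with distance from the chosen focus depth; each pixel is taken
    from a Gaussian-blurred copy of the image whose sigma matches that amount.
    """
    # Per-pixel blur amount, scaled into [0, max_sigma]
    sigma_map = np.abs(depth - focus_depth)
    sigma_map = max_sigma * sigma_map / (sigma_map.max() + 1e-9)

    # Precompute a small stack of blurred copies and pick per pixel
    sigmas = np.linspace(0.0, max_sigma, 9)
    stack = np.stack([gaussian_filter(image, sigma=(s, s, 0)) for s in sigmas])
    idx = np.argmin(np.abs(sigmas[:, None, None] - sigma_map[None]), axis=0)
    return np.take_along_axis(stack, idx[None, :, :, None], axis=0)[0]
```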
Re: (Score:2)
A Z-buffer is a possible output from this camera, but it does way cooler things than just depth information. The problem with pure depth is that if you have something like a chain-link fence in focus, you don't know what's behind it. With this camera, it captures 'around' the chain-link fence and sees what's behind it, so you can throw it out of focus.
Re: (Score:2)
Where on Earth did you get that? How is light supposed to travel around obstructions?
Re: (Score:3)
The same way it does in a regular camera. If a fence is close to the lens, keeping it out of focus lets you see through it just fine.
It works because the lens has finite size, and from some parts of the lens you see past the wires in the fence, while from others you do not.
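Back-of-the-envelope, under ideal thin-lens assumptions: the wire's image gets smeared over a region proportional to the aperture diameter and to how far the wire sits from the focused plane, so a thin wire contributes almost nothing to any one pixel. The numbers below are just an illustration:

```python
def occluder_blur_diameter(aperture_mm, focus_dist_mm, occluder_dist_mm):
    """Width (expressed at the focused subject plane) over which a point on an
    out-of-focus foreground occluder is smeared, for an ideal thin lens.

    If this is much larger than the wire thickness, the wire's contribution to
    any given pixel is tiny and the background shows straight through.
    """
    return aperture_mm * abs(focus_dist_mm - occluder_dist_mm) / occluder_dist_mm

# Example: 50 mm f/2 lens (25 mm aperture), focused at 5 m, fence 0.5 m away.
# The fence is smeared over ~225 mm in subject-plane units, so a 2 mm wire
# contributes only about 1% to each pixel it overlaps.
print(occluder_blur_diameter(25.0, 5000.0, 500.0))  # -> 225.0
```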
Re: (Score:2)
Light can be lensed around obstructions; on Earth, for example, in fata morganas.
omg! (Score:4, Funny)
Re: (Score:1)
CSI's magical license plates are now possible!
Re: (Score:2)
Re: (Score:2)
Enhance.
This is too funny! As soon as I read your comment, I realized Blade Runner was WAY ahead of its time. It is still one of the best renditions of a future dystopia on film. We all wish it were like Star Trek, but the truth is far more grim.
The real question is, are we already living in a dystopia? The world is pretty F'ed up.
mother****ing snake in the mother****ing tree (Score:2)
The real question is, are we already living in a dystopia?
Yeah, Satan is the ruler of this system of things. But a new king is coming; be awake and be ready.
Re: (Score:2)
Yeah, Satan is the ruler of this system of things. But a new king is coming; be awake and be ready.
Damn straight. And Cthulhu's reign of terror is going to make Satan's era look like the good old days...
Re: (Score:2)
You mean like some of the things they show being done here?
http://www.hizook.com/blog/2009/06/26/computational-cameras-exploiting-megapixels-and-computers-redefine-modern-camera [hizook.com]
Interesting. (Score:1)
Re: (Score:2)
The underlying concept and algorithms are real, and no doubt there are many proofs of concept in existence. Whether the technology can be commercialised in a year, though, seems a bit of a stretch. I am willing to be proved wrong, of course - sounds very cool!
Re:Interesting. (Score:5, Informative)
Read this paper [stanford.edu] (or at least skim it) - these are called plenoptic cameras.
It doesn't do any particular voodoo. I suppose you could distill it down to the point where the camera is (in function) a compound eye.
Re: (Score:2)
The paper clearly shows that it does not capture any more of the rays than any
Re: (Score:2)
I don't think they are claiming to build a full 3D model of the subject (that woul
Re: (Score:2)
I don't think they are claiming to build a full 3D model of the subject (that would indeed be sci-fi).
I don't think they are claiming that, either.
I do think they are claiming to use additional information usually discarded by conventional light sensors (i.e. CCD),
Not "discarded", it is never measured.
i.e. something that corresponds to the radius of curvature
No. It deals with the position of the focus, either in front of, on, or behind the image plane, which is the microlens array. Focus in front of the array and the resulting image on the sensor is either normal or reversed, I forget which. Behind does the opposite.
but the paper abstract talks about light rays, like the rays in geometric optics).
My comment was about the company website, which talks about "all the light rays", implying that it catches light rays that normal cameras do not. They don't catch
Re: (Score:2)
Um, if they were really claiming to capture "all the light rays", in addition to measuring information from the light ray that effectively allows you to calculate the distance to the object, then they would be claiming to build a 3D model from all visible information. You can't simultaneously not believe that they are claiming to construct a full 3D model and also believe that they are claiming to capture "all the light rays": you have to either think they are making both claims (which amounts to a single c
Re: (Score:2)
You can't simultaneously not believe that they are claiming to construct a full 3D model and also believe that they are claiming to capture "all the light rays":
Yes, I can, because "constructing a 3D model" is what may or may not be done after processing the "full light field", which THEY define as all the rays passing through an object.
And "discarded" and "never measured" is the same thing, too.
Nonsense. "Discarding" information means you had something and got rid of it. "Never measured" means you didn't have that information to start with. Whatever information you did not measure from the photon is NOT MEASURED, it is not information that you are discarding. Otherwise you could simply NOT DISCARD it and you would st
Re: (Score:3)
You might want to look up "light field [wikipedia.org]"; apparently it's a well-defined term within the field (which has some connections to what I'm familiar with and hence was talking about but is formulated differently for different application). In particular, "full light field" is different from "all the light rays".
Measuring the full light fi
Re: (Score:2)
You might want to look up "light field"; apparently it's a well-defined term within the field (which has some connections to what I'm familiar with and hence was talking about but is formulated differently for different application). In particular, "full light field" is different from "all the light rays".
I'm using the definition that Lytro has on their website, which I quoted in my first comment. Again:
Emphasis mine. Wikipedia's definition is essentially the same. The "all the light rays" bit is a natural result of saying "every direction through every point in space". The word "full
Re: (Score:2)
I'm using the definition that Lytro has on their website, which I quoted in my first comment. Again:
I think the missing phrase here is "in the camera". The camera catches the light field in the camera, not the light field of the entire scene. Not that the website makes that clear.
Re: (Score:2)
I would hazard a guess that it records just three colours of light. After all, the underlying digital sensor is based on existing technology found in modern cameras.
Re: (Score:3)
I don't believe that it is. From a cursory second reading of the paper, it's a new type of sensor.
The paper says that the sensor was a Kodak KAF-16802CE. http://www.datasheetarchive.com/KAF-16802CE-datasheet.html#datasheets [datasheetarchive.com] is the datasheet for this chip, and it appears to be a stock Kodak CCD sensor. Nothing particularly new about it at all. The CE part implies it is a color filtered version.
The new part is the microlens array bolted on the front.
Re: (Score:2)
I wonder how long this will be "at least a year away."
According to the video in the article, the company is releasing "a competitively priced consumer camera" in 2011, i.e. no more than six months from now.
Re: (Score:2)
Well, it's a lot easier to commercialize something we already have [wikipedia.org]...
Re:Interesting. (Score:4, Informative)
... demonstrated to be a working principle [stanford.edu].
The paper includes graphics and formulas... a fuck load more detail than the story link given to us...
Re: (Score:2)
So why isn't this technology in the public domain? The basic research was done at Stanford, and IIRC they get about $1.5 billion in federal research grants...
Re:Interesting. (Score:4, Informative)
It's called a Plenoptic Camera [wikipedia.org]. You put a bunch of microlenses on top of a regular sensor. Each lens is the equivalent of a single 2D image pixel, but the many sensor pixels under it capture several variations of that pixel in the light field. Then you can apply different mapping algorithms to go from that sub-array to the final pixel, refocusing the image, changing the perspective slightly, etc. So color-wise it's just a regular camera. What you get is two extra angular dimensions (the image contains 4 dimensions of information instead of 2).
Of course, the drawback is that you lose a lot of spatial resolution since you're dividing down the sensor resolution by a constant. I doubt they can do anything interesting with less than 6x5 pixels per lens, so a 25 megapixel camera suddenly takes 1 megapixel images at best. The Wiki article does mention a new trick that overcomes this to some extent though, so I'm not sure what the final product will be capable of.
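A bare-bones sketch of the decoding step, assuming an idealized raw capture where each microlens sits over an n x n patch of monochrome pixels (real sensors add colour filters, vignetting and sub-pixel alignment on top of this):

```python
import numpy as np

def decode_lightfield(raw, u_size, v_size):
    """Reshape a raw plenoptic capture into a 4D light field.

    raw is (Y*u_size, X*v_size): the sensor, tiled by u_size x v_size patches,
    one patch under each microlens (square lenses, no vignetting, colour
    ignored - all simplifying assumptions).
    """
    Y, X = raw.shape[0] // u_size, raw.shape[1] // v_size
    return raw.reshape(Y, u_size, X, v_size).transpose(1, 3, 0, 2)  # (U, V, Y, X)

def sub_aperture_view(lf, u, v):
    """One slightly shifted perspective: the same pixel position under every
    microlens corresponds to one direction through the main-lens aperture."""
    return lf[u, v]

def conventional_photo(lf):
    """Averaging all directions per microlens reproduces an ordinary photo."""
    return lf.mean(axis=(0, 1))
```

Refocusing then amounts to shifting the sub-aperture views against one another before summing, which is also why the output resolution is set by the microlens count rather than the raw pixel count.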
I want it all (Score:2)
Re: (Score:2)
I think the depth of field in the demo is just there to accentuate the idea that you can focus on different areas. As you say, I am sure you could produce a version with a very deep depth of field if so desired.
Re: (Score:3)
Have you ever tried to reduce the depth of field (DOF) of a photo that has too much DOF (for artistic purposes)? It's not easy at all. If you had a pair or more of images of the same subject from slightly different viewpoints (i.e. of the kind you'd take for "stereoscopic" photography) it might be easier, because at least then you'd have some additional cues as to the distance from the imaging plane of various objects within the scene, and using that it should be possible to create software to use those cues to ref
Re:I want it all (Score:5, Informative)
The website about the camera doesn't have enough details, either, but this paper [stanford.edu] does give a reasonable idea of what's going on.
Re: (Score:3)
Thank you! And I was just going to post a reply to my own message wondering aloud if they manipulated the light at the microlens level. Seems that this is exactly what they're doing:
"This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera."
That would still only give several (two, maybe three depending on the array) planes of focus, though, and at a sacrifice of resolution. Still, pretty cool idea.
Re:I want it all (Score:5, Insightful)
Re: (Score:2)
The sacrifice of resolution isn't really that big a concern. Consumer cameras have far more resolution than they need these days, as the almighty megapixel has been used as a marketing ploy even though increasing pixel density on the CCDs has led to lower image quality overall. My 10 year old 2Mpx Canon still takes better pictures than any of my wife's last 3 compact cameras (4, 5 and 8Mpx Nikons and Canons), especially in low light. I would go so far as to say it doesn't make sense to go much beyond 4Mpx with lenses the size of compact cameras, as details will be lost due to lens quality long before the pixel count causes loss of detail.
I would beg to differ. Your 2 megapixel camera may produce higher quality pictures than your wife's compacts, but that likely has nothing to do with the relative resolutions. The dot pitch on decent CCDs is relatively small, so increasing the number of mega-"pixels" from 2 to 4 is going to have a negligible impact on the amount of light gathered. At best you're going to have increased resolution and sharpness that can be further used by improved processing chips. At worst you're going to reach a point w
Re: (Score:2)
ISTR a little while ago there was a demonstration of a camera that basically used a honeycomb lens like that of a fly.
Re: (Score:3)
Have you ever tried to reduce the depth of field (DOF) of a photo that has too much DOF (for artistic purposes)? It's not easy at all
Bonus! Artiste types love to brag/complain about how difficult/expensive their work was to make.
The non-artsy types don't really care about technical quality or anything other than getting a tolerably viewable "subject standing next to cultural item"
Re: (Score:2)
you speak as if there's no middle ground.
like somebody wanting to take a picture that looks good, but who does not wish to place themselves on a pedestal.
Re: (Score:3)
... then why not just have the whole thing in focus at once. Infinite depth of field.
I watched the video and I believe the guy being interviewed said you can do just that.
Probably not a good consumer product. (Score:2)
Re: (Score:2)
The smaller the sensor, the greater the depth of field, and therefore the easier it is to get sharp images. I would imagine your point and shoot has a pretty small sensor. A professional DSLR on the other hand has a much larger chip.
Re: (Score:2)
Exactly. I shoot motorsports with a Canon DSLR (20D) and a 400mm lens. Not the same thing as pulling out a little point and shoot and pressing the button ;)
Re: (Score:2)
Small point-and-shoot cameras have a very small sensor and a lens with a short focal length. That combination means that they have very large depth of field, which means that on a typical picture, everything or almost everything is in focus. That can be an advantage, but it can also be a disadvantage if you want to, for example, "isolate" an object by focusing on it and have it show sharp and focused against a blurry background.
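The standard thin-lens depth-of-field approximations make the difference concrete. The numbers below are only indicative, and the circle-of-confusion values are the usual "sensor diagonal / 1500" rule of thumb:

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Approximate near/far limits of acceptable sharpness for a thin lens."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = (subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
           if subject_mm < hyperfocal else float('inf'))
    return near, far

# Compact camera: ~6 mm lens on a tiny sensor (CoC ~0.005 mm), f/2.8, subject at 2 m
print(depth_of_field_mm(6.0, 2.8, 2000.0, 0.005))   # roughly 1.1 m .. 8.9 m in focus
# Full-frame DSLR framing the same shot: ~35 mm lens (CoC ~0.03 mm), f/2.8, 2 m
print(depth_of_field_mm(35.0, 2.8, 2000.0, 0.03))   # roughly 1.8 m .. 2.3 m in focus
```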
Re: (Score:3)
Assuming the microlens array isn't part of the lens, you could seemingly reduce the cost and complexity of big telephoto lenses by a lot, which are the most expensi
Re: (Score:2)
Effectively, we've reached the point where it's easier and cheaper and better to move electrons around nuclei, than to move nuclei around other nuclei.
That really brings that 'living in the future' line into... focus.
good future consumer product. (Score:2)
Most people do not need the 12MP photos they take now, so camera makers can offer this or similar microlens-based features to sell higher-megapixel sensors for images that are no larger than 12MP. Initially, I'd imagine they'd want a workaround for when you don't want to use this feature, so they can sell you an 18MP camera where the new mode outputs "small" images that are still plenty large for sharing online.
Focus stacking (Score:2)
Conceptually, it's a little like focus stacking http://prometheus.med.utah.edu/~bwjones/2009/03/focus-stacking/ [utah.edu] only with a compound lens that does all the exposures at once. More examples of focus stacking here: http://prometheus.med.utah.edu/~bwjones/tag/focus-stacking/ [utah.edu]
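A bare-bones version of the merge step, assuming the frames are grayscale and already aligned (real focus-stacking tools also handle alignment and blend seams more gracefully): per pixel, keep whichever frame has the most local detail.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames):
    """Merge a stack of differently-focused, pre-aligned grayscale images.

    Per-pixel sharpness is estimated as the locally averaged absolute Laplacian;
    each output pixel is taken from whichever frame is sharpest there.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])           # (N, H, W)
    sharpness = np.stack([uniform_filter(np.abs(laplace(f)), size=9)
                          for f in stack])                             # (N, H, W)
    best = np.argmax(sharpness, axis=0)                                # (H, W)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```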
Re: (Score:2)
I did not read the details, but the example pictures they provided did seem to have several distinct planes of focus that you could choose. With the size of the pictures, I couldn't tell whether the focus changes if you select two objects that are actually fairly close to each other, but it didn't seem so to me.
Re: (Score:2)
I use focus stacking for my microscopy. However, does (or could) this method "scale up" to objects, or parts of objects, that span a much greater distance, i.e. beyond the mm or sub-mm range I have experience with? (You're in a better position to answer this than me, I think, judging from your post history.) I'm asking because I know that when I stack, say, 50 images, each with a depth of field of 0.5mm, to create an image spanning ~25mm (just as an example), alignment problems become problematic (I'm not talking about stac
Re: (Score:2)
Just adding to my above comment, those numbers I used as an example are not typical. More often than not the final DOF I am after is probably 1mm maximum and each photo in the stack has way less than 0.5mm DOF.
Re: (Score:2)
Absolutely. The algorithms and principles are the same. The issue is that it tends to be more useful when your plane of focus (depth of field) is limited, as it is in microscopy. You can experiment with this with an SLR camera by selecting a wide-open aperture (f/1.2, 1.4 or 1.8 on a 50mm lens for instance). Take pictures of things close, mid and far away and stack the images. Works great.
As for alignment, Photoshop CS5 contains algorithms that also automatically align your images. Very useful.
Re: (Score:2)
Thanks. That is now my experiment for the day ;-)
Big tradeoff (Score:1)
Re: (Score:2)
I don't know how this technology works, but I can also imagine they might take several pictures with a normal CCD, with the lens stepping through a series of focus positions. Then, in software, they could recombine the images and make up for the long shutter time with some kind of smart algorithm that compensates for movement.
Re: (Score:3)
Because a big sensor with a microlens array could be calibrated, you could use Richardson-Lucy deconvolution [wikipedia.org] to recover almost all of the raw resolution of the original sensor if the computing resources are available, in a given plane of focus.
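For reference, the textbook Richardson-Lucy iteration is only a few lines. This is the generic algorithm, not anything specific to a microlens sensor, and the PSF would have to come from the calibration mentioned above:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Generic Richardson-Lucy deconvolution of a blurred, noisy image."""
    estimate = np.full(observed.shape, observed.mean(), dtype=np.float64)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        predicted = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (predicted + 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate
```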
Re: (Score:2)
Isn't this something like how the iPhone 4's wavefront coding camera works?
useful for movies! (Score:3)
For making movies, this would be very useful, because when shooting, it is generally quite difficult to keep focus.
Re: (Score:2)
Yes, but the way it'll be used is that synthetic focus will be applied during post production / editing, and it will end up as a "regular" film. IOW: nothing interactive about the end product, when used for movies.
Re: (Score:2)
Also, having that control after the fact in post production would be great. Got a poor performance from an extra in the background? Well, now there's a very simple way to make it less noticeable. A shot is too confusing and th
Re: (Score:2)
Agreed.
Doing it by hand, with only a single normal camera (Score:2)
I've been doing this for a few years, with one camera taking many views, since I first found out about the research they were doing at Stanford. Here are some scenes around Chicago [flickr.com] which are composites of many photos to generate a synthetic focus. The idea is to capture the scene from many slightly different points of view, and to capture all of the parallax information, which then yields depth.
I haven't been able to make it happen, but it should be possible to combine N pictures to get a bit less than N time
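The synthetic-focus compositing itself can be as simple as shift-and-add, assuming the per-image parallax shifts for the plane you want sharp have already been estimated (which is the hard part):

```python
import numpy as np

def synthetic_focus(images, shifts):
    """Average many hand-held views after shifting each so the chosen depth
    plane lines up; that plane comes out sharp, everything else blurs away.

    images: list of (H, W, 3) arrays from slightly different viewpoints
    shifts: list of (dy, dx) integer pixel shifts aligning the target plane
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (dy, dx) in zip(images, shifts):
        acc += np.roll(np.roll(img.astype(np.float64), dy, axis=0), dx, axis=1)
    return acc / len(images)
```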
Re: (Score:2)
"I haven't be able to make it happen, but it should be possible to combine N pictures to get a bit less than N times the normal resolution. If you had 100 photos that were 8 megapixels each, you should be able to composite them into a 100 megapixel image with the right alignment and extrapolation algorithms."
No, you can't. Using super-resolution and an expensive mount that can shift the picture by EXACTLY half a pixel (or a quarter, or an eighth), you can get better resolution out of multiple shots, but th
Re: (Score:2)
Actually, with Richardson-Lucy deconvolution [wikipedia.org] it's possible to recover the information, as long as you have the positions of the pixels, and the diffusion function to enough precision.
Re: (Score:2)
No, it's not. You can do a little bit, provided you have enough precision moving the camera, but super resolution (no matter what deconvolution algorithm you use, and RL isn't exactly cutting edge) doesn't really buy you much. It's useful in a few niche areas, like microscopy, but other than that it's impractical.
Yes, I did super resolution research a few years ago.
"hyper stereo" cameras (Score:5, Informative)
I think the infamous bullet-dodging scene in the first Matrix movie was shot with a type of hyper-stereo camera, albeit a row of them. The output light field was reconfigured to expand point of view into time.
Re: (Score:2)
Likely not even close, but the first thought that came to mind was the photo stuff from Blade Runner.
Re: (Score:2)
There was a story on /. a year or two back about un-blurring images, but I can't find it now. Basically all the information is there in the image, but what would be one pixel if it were in focus is actually spread over several pixels. Some software was able to gather all the info and bring blurred objects into focus. It wasn't perfect but still very useful.
Lots of red flags, little tech (Score:3)
All the information is about the implications, not about how it actually works or the trade-offs required to get there. They also seem to be going directly to the consumer. There are only two reasons to bypass big-spending pros and prosumers when introducing new technology:
My guess is #2. Exploding the pixel count of the sensor would make the product outrageously expensive. Clearly they are not doing that. So that means the quality suffers as finely adjustable optical focus is replaced by coarse digital focus achievable from the available sensors. We are probably getting camera phone level results. Good enough for Facebook but not something you want to print.
Re: (Score:2)
Look at page 4 of this: http://www.lytro.com/science_inside. You can read the founder's Ph.D. dissertation and I guarantee you'll get your geek on if you can follow it. It's a really excellent piece of work, and at the same time it is written in such a pleasant style that it keeps you curious and interested.
Re: (Score:2)
Don't get me wrong, I've got the whole DSLR thing and still have my medium format film gear, but sometimes I just want to whip out a small camera to get a shot of the kids and it would be nice to know that no matter what, I'll have an in focus shot now or in post processing.
Re: (Score:2)
Or #3... the product is cheap to make so they know the product will appeal to the mass market and want to make a metric shitton of money.
The reason most tech goes to the pros and prosumers first is simply that new tech is usually expensive to produce, so only pros and prosumers can afford it. These early adopters then drive the prices down for everyone else.
If, on the other hand, the tech is cheap enough for the mass market in the first place, there is literally ZERO reason for a company to target the pr
3D Movie (Score:2)
Now I hope that when I watch a 3D movie, the focus of the picture follows what my eyes are focusing on! That would make 3D movies much more enjoyable.
Re: (Score:2)
goatse.
Re:Fake (Score:5, Informative)
No. This is known as plenoptic imaging, and the basic idea behind it is to use an array of microlenses positioned at the image plane, which causes the underlying group of pixels for a given microlens to "see" a different portion of the scene, much in the way that an insect's compound eyes work. Using some mathematics, you can then reconstruct the full scene over a range of focusing distances.
The problem with this approach, which many astute photographers pointed out when we read the original research paper on the topic (authored by the same guy running this company), is that it requires an imaging sensor with extremely high pixel density, yet the resulting images have relatively low resolution. This is because you are essentially splitting up the light coming through the main lens into many, many smaller images which tile the sensor. So you might need, say, a 500-megapixel sensor to capture a 5-megapixel plenoptic image.
Although Canon last year announced the development of a prototype 120-megapixel APS-H image sensor (with a pixel density rivaling that of recent digital compact point-and-shoot cameras, just on a wafer about 20x the area), it is clear that we are nowhere near the densities required to achieve satisfactory results with light field imaging. Furthermore, you cannot increase pixel density indefinitely, because the pixels obviously cannot be made smaller than the wavelength of the light they are intended to capture. And even if you could approach this theoretical limit, you would have significant obstacles to overcome, such as maintaining acceptable noise and dynamic range performance, as well as the processing power needed to record and store that much data. On top of that, there are optical constraints--the system would be limited to relatively slow f-numbers. It would not work for, say, f/2 or faster, due to the structure of the microlenses.
In summary, this is more or less some clever marketing and selective advertisement to increase the hype over the idea. In practice, any such camera would have extremely low resolution by today's standards. The prototype that the paper's author made had a resolution that was a fraction of that of a typical webcam; a production model is extremely unlikely to achieve better than 1-2 megapixel resolution.
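The resolution trade-off is simple arithmetic: divide the sensor pixel count by the number of sensor pixels devoted to each microlens. The figures below are illustrative only; the 10x10 patch matches the 500-to-5-megapixel example above, and the Stanford prototype used roughly 14 sensor pixels per microlens side.

```python
def plenoptic_output_megapixels(sensor_megapixels, pixels_per_microlens_side):
    """Output resolution when each microlens covers an n x n patch of sensor pixels."""
    return sensor_megapixels / pixels_per_microlens_side ** 2

print(plenoptic_output_megapixels(500, 10))   # -> 5.0, as in the example above
print(plenoptic_output_megapixels(16, 14))    # -> ~0.08, i.e. well under 0.1 megapixel
```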
Re: (Score:3)
... the prototype required a 16mp sensor array to produce a 90kp image. Some similar relationship is expected for a production camera.
Less than a 1 megapixel image. That's pretty small - would be OK for web viewing but not for printing. However, unless you 'stack' the images together to get a very large depth of field (which would often look very unreal), printing the image would not get you much aside from deciding what the focal plane would be.
A web gallery, however, would allow you to move the focus in and out at will (as shown in the examples) and might be more commercially viable. Hogan's main complaint
Re: (Score:2)
it is clear that we are nowhere near the densities required to achieve satisfactory results with light field imaging.
Density would just be one way to do it. Slice it up over time, add more sensors and split the light, use some of those 3D sensors, etc. Each of those has its own set of trade-offs, but we're just talking about time here. The VCs likely know that the sensor tech is poised to eliminate those trade-offs, making now the right time to start the company and put out a 1.0 camera.
Re: (Score:2)
Transparent sensors. Every picture is a 3d array instead of a 2d array.
Foveon [foveon.com]
Re: (Score:2)
Don't click his link. It's the Goatse.cx image.
Also known as the poor man's basilisk [infinityplus.co.uk].
Re: (Score:2)
From what I recall of reading about this a year+ ago, the same tech would allow for mathematically changing focus, zoom, pan, tilt in real time on the signal from the camera or after the fact on a recording of the same signal. So, basically, a flat non-moving sensor can now emulate a PTZ camera. I can imagine that with a spherical lens or one of those weird mirrors that lets a regular camera catch a 360 image, it should be possible to make a near holocam. Imagine a movie shot like this and glasses that allow
Re: (Score:2)
The problem is this: this is not how movies work. It'll cost a fortune to make a movie where you can "look around". Shots are usually planned in detail, so if something is out of focus and is a prop/set, it's way cheaper that way. Even in CG movies, there are still digital props, sets, etc, and they are planned according to the needs of script and director's ideas. The level of detail varies and usually is only enough to do the job, doing otherwise would be a waste of money -- if it doesn't end on film, it'
Re: (Score:2)
So what, you expected them to recode their algorithms in flash, send you a source image that's dozens of megabytes in size, and have you wait tens of seconds, possibly minutes, while the whole thing is recalculated after each click? Haha.
Re: (Score:2)
And if it weren't a "faked demo," how exactly would the images be presented to you?
Re: (Score:2)
Are you really that dense? Did you think they were going to create a full fledged web viewer for files of god knows what size so you could get the genuine experience over what 5 demo images could show you?