The Lytro Camera: Impressive Technology and Some Big Drawbacks 220
waderoush writes "The venture backers behind Lytro, the Silicon Valley startup that just released its new light field camera, say the device will upend consumer photography the way the iPhone upended the mobile business. This review takes that assertion at face value, enumerating the features that made the iPhone an overnight success and asking whether the Lytro camera and its refocusable 'living pictures' offer consumers an equivalent set of advantages. The verdict: not yet. But while the first Lytro model may not be an overnight success, light field cameras and refocusable images are just the first taste of a revolution in computational photography that's going to change the way consumers think about pictures."
New medium awaiting new aesthetics and exploration (Score:5, Insightful)
Right now, it seems like the majority of Lytro pictures are technology demos, a fire hydrant in the foreground and a building in the background, or some equivalent, which just invites you to click both and move on. You can just hear the enthusiastic early adopter in the background of these pictures saying "OK, _now_ click the building! Whoa! Cool, huh?!". These shots are, to my mind, the photographic equivalent of arrows or spears coming out towards the audience in early 3D movies. Gimmicks which break the fourth wall, saying "Hey, remember, you're looking at a Lytro (tm) image, not just anything!".
I can't wait for real photographers and artists to actually find situations, styles and aesthetics where Lytro sorts of cameras can be used in a way that both effectively uses the new capabilities of the format _and_ produces something artistically and aesthetically wonderful. I think the technology has a ways to go, but right now, the biggest problem facing Lytro (and light field photography) is that it's a new medium that nobody has a clue how to use effectively.
Until we reach that point where people see a great Lytro picture and actually feel inspired, it's going to be tough to sell what is currently a low-spec camera with one big gimmick. So, if you want Lytro to take off, buy one for the craziest artist you know.
Re:New medium awaiting new aesthetics and explorat (Score:5, Interesting)
My first thought was that it could be great for video; no need to bother with precise focus while shooting if you can refocus when you edit. However, I'm guessing that it would require a huge data rate.
Re: (Score:2)
My first thought was that it could be great for video; no need to bother with precise focus while shooting if you can refocus when you edit. However, I'm guessing that it would require a huge data rate
My thought as well.
I'm curious whether there's a site that has worked out how big a data rate we'd be looking at.
Anyone?
Re:New medium awaiting new aesthetics and explorat (Score:5, Informative)
IIRC, it's an 11 megapixel sensor, to get a 1 megapixel image.
So, not TOO far off from 4k video, to get a low HD quality Lytro video.
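To put rough numbers on the video question, here's a back-of-the-envelope sketch; the 11 MP sensor, 12-bit raw samples, and 30 fps are all assumptions, not published Lytro specs:

```python
# Back-of-the-envelope data rate for raw light field video.
SENSOR_PIXELS = 11_000_000   # ~11 MP light field sensor (assumption)
BITS_PER_SAMPLE = 12         # typical raw sensor bit depth (assumption)
FPS = 30

bytes_per_frame = SENSOR_PIXELS * BITS_PER_SAMPLE // 8
data_rate_mb_s = bytes_per_frame * FPS / 1e6

print(f"{bytes_per_frame / 1e6:.1f} MB/frame, {data_rate_mb_s:.0f} MB/s uncompressed")
# -> 16.5 MB/frame, 495 MB/s uncompressed
```

Roughly half a gigabyte per second uncompressed, which is why a light field camcorder would need a serious storage pipeline (or a compression scheme that doesn't exist yet).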
Re: (Score:2)
My understanding is that this was used (the concept, not the camera) to film some of the 'bullet time' like scenes we see in movies now.
It might be one of those technologies that is just now coming into prosumer and consumer levels of affordability.
Re: (Score:3)
It's actually a useful portmanteau of professional and consumer, distinguishing an area of cost and features above that of a typical consumer product and below that of professional gear. Usually used in reference to serious hobbyists.
Re:New medium awaiting new aesthetics and explorat (Score:4, Interesting)
Interestingly enough, the number of features on a device is as follows:
prosumer features > consumer features > professional features.
The professional wants as few features/settings as possible, but he does want the equipment to be of high quality. I actually created an application called 'Boom Recorder' http://www.vosgames.nl/products/BoomRecorder/ to record audio in the field, for recording dialogue in movies and TV or live performances like concerts.
I created it because I used to be a prosumer and worked on beauty pageants and such. So I designed Boom Recorder for the prosumer market. I failed. Almost no prosumers bought one, because there were too few features; you could only record with it. However the professionals, the ones who make Hollywood blockbusters and big TV productions and handle large events, they are the ones who love it; because it has so few features, it just works.
Another example is video cameras. The prosumer one has lots of features and settings, way more than a consumer camera. But if you look at a professional digital film camera, there are hardly any features on it. I think a professional only wants two knobs on a camera, the shutter angle (which changes the look of the film) and the start/stop button; all other settings which change the look are on the lens.
Re: (Score:2)
It also explains why Unix is more professional than Windows
Re: (Score:3)
Yes, this is beautifully described in http://steve-parker.org/articles/others/stephenson/holehawg.shtml [steve-parker.org]
Re: (Score:3)
It's actually a useful portmanteau of professional and consumer, distinguishing an area of cost and feature above that of a typical consumer and below that of a professional. Usually used in reference to serious hobbyists.
I was always under the impression that prosumer was where high-end consumer and low-end professional markets overlapped such that a "prosumer" piece of equipment could conceivably be used by someone in either category. The Wikipedia article [wikipedia.org] suggests that this may be the case according to some definitions?
For example, I might be wrong, but wouldn't the Nikon D7000 [wikipedia.org] be a "prosumer" device by this definition?
Someone else said that in terms of features "prosumer > consumer > professional", i.e. prosume
Re:New medium awaiting new aesthetics and explorat (Score:4, Insightful)
If it meant a person who has more money than sense, why does it get applied to equipment?
I don't get the connection. But then again I have several grand worth of camera kit, and never plan on making a cent on it (though it would be nice). Why? Because I love the hobby. I know people who spent huge amounts of money on their cars, but will never race/drive professionally either. I know people, as well, who spent huge amounts of money on their computer and hardware, who will never use it for crunching data on anything more important than video games. I could go on, but won't. I don't see a lack of sense there.
There comes a point when pure consumer level stuff won't allow you to do what you want to do anymore, so you have to either quit or pony up some extra cash to get where you want. There is nothing wrong with this. And actually this has helped drive consumer level computer hardware for some time (enthusiast level chips and cards can be considered prosumer, to some extent).
In the future I can see myself spending a bit more on camera gear, when my skill eventually hits the hardware enforced limits, or I branch out into different areas. I have no problem with this, and I don't see it reflecting on my "sense", since I have the cash, and can spend it. If not on something I enjoy, then what should it be spent on?
Re: (Score:2)
The absence of an SD card slot is a huge drawback. Who's gonna fit a light field video stream of decent quality on 8 GB of memory?
Re: (Score:3)
The Lytro takes still pictures, and can take 350 pictures in the 8 GB model, and 750 pictures in the 16 GB model.
Video would be prohibitively large. Aside from storage, it's probably not possible for the camera to take and store 30 FPS of data at 10 M rays per image, which I would guess would be about 10x typical video data rates. They'd need faster sensors, faster RAM, etc., which would push up the complexity and price quite a bit. In comparison, look how much more HD camcorders cost than SD camcorders, a
Re:New medium awaiting new aesthetics and explorat (Score:5, Interesting)
The Lytro takes still pictures, and can take 350 pictures in the 8 GB model, and 750 pictures in the 16 GB model.
Yes. It's kind of ridiculous. Most consumer cameras on the market accept a user-supplied CF or SD card, and the differentiating factor between cameras is normally photographic capability/image quality. Storage is cheap, and 8 GB of flash memory is not $100; a fixed amount of built-in storage is highly irregular. It also means I can't use a card reader to easily transfer data; hooking up USB cables and trying to figure out any driver requirements is quite inconvenient.
The minimum I use these days is 32 gigabyte cards; with only 16 GB, it would actually be necessary to frequently delete pictures to make room for more, instead of just swapping flash cards.
Also, flash cards have limited program-erase cycles, which means the camera has a limited lifetime if used heavily. I suppose the warranty will cover storage failure due to heavy picture-taking wearing out the flash, at least for a while?
Novel compression target (Score:3)
I'd imagine a big part of that is that there aren't any standard compression schemes available for the format, so it probably has to store the data raw or close to it. If you were storing 11MP pictures in raw format, you'd have approximately the same capacity.
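A quick sanity check of those numbers (the ~11 MP sensor and 16-bit-per-sample storage are guesses, not published specs):

```python
# Sanity check of the quoted capacities: 350 shots on 8 GB,
# versus storing ~11 M raw sensor samples per shot.
CARD_BYTES = 8e9
SHOTS = 350
SENSOR_PIXELS = 11_000_000   # assumed sensor resolution

bytes_per_shot = CARD_BYTES / SHOTS        # implied storage per picture
raw_16bit = SENSOR_PIXELS * 2 / 1e6        # MB per shot at 16 bits/sample

print(f"{bytes_per_shot / 1e6:.1f} MB per stored shot vs {raw_16bit:.1f} MB raw")
# -> 22.9 MB per stored shot vs 22.0 MB raw
```

The two figures land within a megabyte of each other, which is consistent with the "raw or close to it" theory.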
Re: (Score:2, Interesting)
Why do we need "focus" at all? Why not have photographs where everything is in focus? Depth of field is an artifact of lenses, whether they're in your eye or in your camera. A light field could change the entire notion of a photograph, away from trying to imitate the eye to creating a visual record of a scene that actually records everything that is there. No need for depth of field at all.
As usual, when the artists ge
Re:New medium awaiting new aesthetics and explorat (Score:5, Informative)
Why do we need "focus" at all? Why not have photographs where everything is in focus? Depth of field is an artifact of lenses, whether they're in your eye or in your camera.
Focus can be used in composition to guide the viewer to the important elements in the story. Just as "left", "up", "down", etc. define the field of view, so does focus.
Re:New medium awaiting new aesthetics and explorat (Score:4, Informative)
Depth of field effects are considered part of the art of photography, much like amplifier distortion is part of the art of playing electric guitar. People pay a great deal for the capacity to get *narrower* depth of field: compare the price of Canon's 85mm f/1.8 and f/1.2 lenses. People most often buy the f/1.2 as a very very narrow depth of field portrait lens, rather than a very very low-light lens. Other lenses are known for the particular way that they throw backgrounds out of focus -- Nikon will even sell you one where you can choose exactly how the background is defocused.
I think this trend in photography is overblown (I don't see the appeal of portraits where half of one eye is out of focus), but there's no doubt that artistic manipulation of depth of field is a big part of the art.
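For the curious, the depth-of-field gap between those two lenses can be estimated with the standard thin-lens formulas; the 0.03 mm circle of confusion (full frame) and 2 m portrait distance are assumptions for illustration:

```python
# Thin-lens depth-of-field estimate for an 85 mm lens at a 2 m
# portrait distance, full-frame circle of confusion c = 0.03 mm.
def depth_of_field(f_mm, N, subject_mm, c_mm=0.03):
    """Return (near, far) limits of acceptable sharpness in mm."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                          # hyperfocal distance
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = subject_mm * (H - f_mm) / (H - subject_mm)
    return near, far

for N in (1.8, 1.2):
    near, far = depth_of_field(85, N, 2000)
    print(f"f/{N}: depth of field ~{far - near:.0f} mm")
# -> f/1.8: depth of field ~57 mm
# -> f/1.2: depth of field ~38 mm
```

Under these assumptions the f/1.2 buys you roughly two centimeters less depth of field than the f/1.8, which is the whole point of the extra money.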
Re: (Score:2)
No technique ever becomes archaic; it becomes an artistic choice, like black-and-white photography. Same with focus, which probably won't ever go away since it's so intrinsic to how our eyes work. I agree that this could be a huge development once artists figure out what to do with it.
Re: (Score:2)
The way the eye focuses is very different from a camera - you actually have less visual resolution where you're "not looking" than in the center of your vision, and you automatically focus your eye to the depth of whatever you're looking at, so while it feels like everything is in focus, it's not all in focus at once, the center is sharp and in focus, and everything else is fuzzy until you look straight at it.
Theoretically the Lytro could do that as well, automatically focusing wherever you look, though of
Re: (Score:2)
Theoretically the Lytro could do that as well, automatically focusing wherever you look, though of course, it would need to know where you're looking, which isn't something normal computers know. Clicking a mouse on the image to focus there isn't as automatic, of course, but it's similar to what we do naturally with our eyes.
Projects like OpenGazer [sourceforge.net] have yet to take off because of the lack of a real "killer app". But tie OpenGazer (or other webcam eyetracker) to this, and you'd have a rather cool tech demo.
Actually, I think they've really missed a trick. Isn't "3D" the big trend this year? Why not launch with a zoomless stereoscopic camera that you point about like a pair of binoculars (hence small LCD requirement), then power the PC end (yes, I know, this device is Mac only for no sensible reason) with eyetracking and 3D TV
Re:New medium awaiting new aesthetics and explorat (Score:4, Interesting)
As others already explained, this would give unusually 'flat' pictures where depth of field has disappeared, and the sense of distance with it; a problem already observed with tiny phone cameras.
This camera seems to go midway, with one microlens per group of pixels; the smaller those groups, the closer you get to your idea.
What I like about this concept is that the software allows for refocusing; they might very well already have a mode for maximum depth of field, i.e. all in focus.
Why don't they make the whole picture sharp? (Score:2)
Re: (Score:2)
Because images where everything is sharp look just like the ones you take with a cheap cell phone.
Re: (Score:2)
They have demoed this capability, though it won't be in the initial software. But I'd expect the effect to be unpleasant, for the same reason that photos taken with long "depth of focus" lenses, where everything is in focus, tend to be irritating - everything being equally in focus is distracting.
Re:New medium awaiting new aesthetics and explorat (Score:4, Insightful)
It's a single photosensor. The lens array and maths are doing the hard work. Therefore, although the data processing requirements may be very data intensive, the actual image should be the same, or very close to the same, as an image taken without the lens array. The maths should be implementable fully in hardware such that all processing can be done on camera at video speeds, so there is no reason that this couldn't be done. The issue would be making a cohesive focal point between frames. Having to focus a film frame-by-frame would take a lot of time and would be something only film studios might be willing to do, but would be too annoying for consumers.
Re: (Score:3)
It would absolutely rule for news and performance photography I guess (or insect macros :D). I'd say it rather increases the opportunity to not miss shots or botch them, but I wouldn't hail this as some radical new medium just yet. I mean, this stuff is already possible with still scenes, a tripod and patience... setting the focus or getting all in focus is nothing new, to put it mildly, and anything that can be done with that is already being done -- but now you can do it on the move, or without knowing wh
Where's the problem? (Score:2)
The real problem with this technology is that there is no problem. For the most part portable compact cameras have sensors so small that your average happy snap is sharp across the range anyway. As for the other end of the spectrum, DSLRs have 50+ AF points and memory cards are so spacious that there's no reason not to re-shoot if you think the focus may be off slightly.
The technology is revolutionary, but it isn't solving any problem. People have been taking tack sharp photos for hundreds of years so why sh
Of two minds? (Score:5, Interesting)
Seems Xconomy can't decide whether they like it or not:
The original title seems to have been "The Lytro Camera is no iPhone but it's revolutionary anyway".
going by the URL fragment:
the-lytro-camera-is-no-iphone-but-its-revolutionary-anyway
The current title is the less positive "The Lytro Camera Is Revolutionary, But It's No iPhone" (Note: Not being an iPhone is a negative in a Stevebot's eyes.)
Re: (Score:2)
but I said that Lytro needs to make some changes such as enlarging the screen before the value of the device will be completely obvious to consumers.
Or they could use a smaller screen and put an eyepiece on it, making it look more or less like a spotter scope. That even gives them legitimate reason to ditch the touchscreen and replace it with a couple of buttons.
Re: (Score:2)
The iPhone isn't an iPhone either :P
How do you use it for pr0n?? (Score:3)
A pity... (Score:5, Insightful)
Unfortunately, though, the move to release it at a (barely) 'consumer toy' price point really led to a product slightly too compromised to be useful: The optics you need for the light field capture eat so much of the sensor's available resolution that the resolution of the images you can get out of the thing is hovering slightly below 1 megapixel. Yes, the ability to spit out that paltry image at all sorts of focuses, after the fact, is damn cool; but for $500, you could get a high end P&S that could iterate through a series of 10MP shots at different focus points in a few seconds at shooting time, netting much of the benefit along with resolutions that wouldn't be ashamed to show up on a $20 webcam.
I'd love to see the same technology applied at a price point and form factor where the sheer sacrifice of available pixels wouldn't be so keenly felt.
Re:A pity... (Score:5, Informative)
Unfortunately, though, the move to release it at a (barely) 'consumer toy' price point really led to a product slightly too compromised to be useful: The optics you need for the light field capture eat so much of the sensor's available resolution that the resolution of the images you can get out of the thing is hovering slightly below 1 megapixel.
I'd love to see the same technology applied at a price point and form factor where the sheer sacrifice of available pixels wouldn't be so keenly felt.
The reason the camera is only 1 megapixel has nothing to do with the optics. The technology requires many pixels in the imager for each pixel in the resulting image. So, the CCD (or CMOS imager, I don't know which it uses) probably has at least 10MP, despite the output of only 1MP.
It's a fundamental limit of the technology, and it'll be a while until we see more than 2 MP using it.
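The arithmetic behind that limit is simple: if each microlens sits over a k x k patch of sensor pixels, the spatial resolution drops by roughly a factor of k squared. (The 3x3 figure below is an illustrative guess, not Lytro's actual microlens layout.)

```python
# Resolution trade-off sketch: each microlens covers a k x k patch of
# sensor pixels, so output resolution drops by k^2.
def lightfield_output_mp(sensor_mp, pixels_per_lens_side):
    """Approximate output megapixels for a given sensor and microlens pitch."""
    return sensor_mp / pixels_per_lens_side ** 2

# ~11 MP sensor with roughly 3x3 sensor pixels behind each microlens
print(f"{lightfield_output_mp(11.0, 3):.1f} MP output")   # -> 1.2 MP output
```

To double the output resolution at the same angular sampling, you'd need to double the sensor pixel count, which is why progress here tracks sensor density.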
Re: (Score:2)
As you say, the technology only works if your sensor has enough pixels behind each microlens (and, since very high resolution digital sensors and the necessary supporting processor and storage are more expensive than very fine-grained polymer microlens arrays, the economic limits on the sensor they could afford presumably drove the
Re: (Score:2)
No it isn't. The reduction happens when the software generates the image for the viewer.
Re: (Score:2)
Yes, right now it is limited by the technology (a full frame sized sensor with 2 micron pixels would be really sweet for this, but I suppose process would be really expensive), but eventually it will be limited by physics itself. For example, if you were to somehow be able to make a sensor array whose pixel pitch dipped way below half the wavelength of the light you are capturing and if you used microlenses at the wavelength of light, you wouldn't really be able to capture any more three-dimensional/refocu
Re: (Score:2)
You get to compute your choice of 1 megapixel images at various focu
Re:A pity... (Score:5, Informative)
No you can't. You just think that because you don't understand how an aperture works.
Camera lenses focus by directing light through a small hole. At the point of focus, any light which bounces off an object then hits the lens will be directed in such a way that it hits the sensor in exactly the same place as it would have if it had bounced exactly at the center of that hole to begin with, meaning all light from that position hits the same place, giving a sharp image. Away from the point of focus, light bounces off the object, then when it hits the lens, it bends either too far, or too little, giving a soft edge. Thus when an image is out of focus, then the light projecting onto the sensor is actually wrong, no amount of sensitivity will fix that. This is why optics and focus have always been the most important part of getting a nice image out of any digital camera.
A light field camera fixes this by capturing the direction of the light and reconstructing an image of where the light actually came from, not just where it hits the sensor. Thus it can calculate a 100% in focus image covering the entire depth range without having to focus. Previously, only a relatively small range of distances could be kept in focus, and for that it was required to have a small aperture and either a long exposure or a grainy image (cellphone style). Now you can have a sharp image with a wide range of focus without motion blur or grain, and that's fantastic.
Resizing a 12 megapixel image to 1 megapixel gives you essentially the same image, with less grain, as if you had stuck a 1 megapixel sensor in to begin with (lower resolution sensors of the same size format give less grain because of the larger size, and hence higher photosensitivity, per pixel). It will never be any better than the image projected onto the sensor to begin with, so it doesn't get you anywhere.
Re: (Score:2)
Yes, the ability to spit out that paltry image at all sorts of focuses, after the fact, is damn cool; but for $500, you could get a high end P&S that could iterate through a series of 10MP shots at different focus points, at time of shooting in a few seconds, netting much of the benefit along with resolutions that wouldn't be ashamed to show up on a $20 webcam.
Do remember that the Lytro captures its image in one instant (okay, technically integrated over a short period of contiguous time), so while for static scenes your approach would work, it wouldn't work all that well with dynamic scenes. Personally, I'd like to see more artistic photos, such as, say, a black balloon covered in starry speckles bursting with a figurine of the baby from the end of 2001 inside.
Re: (Score:3)
> that the impressive-but-evolutionary spec bumps of markedly superior conventional digital cameras don't
That is an excellent point. Specs are getting better, but cameras are not, as you can see in expert reviews. In fact many of the latest generation cameras take worse pictures than the generation before. The specs are lies served to us for marketing purposes, but the functionality suffers.
The Lytro can't compete on specs or image quality (actually the images are pretty bad, if you look closely). B
DPReview has a review (Score:5, Informative)
DP Review [dpreview.com] has a review of this camera. It sounds like it has a long way to go. Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024. I don't know if there's really a way around it, since they're trading resolution for the depth of field refocusing feature.
Re:DPReview has a review (Score:4, Interesting)
DP Review [dpreview.com] has a review of this camera. It sounds like it has a long way to go. Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024. I don't know if there's really a way around it, since they're trading resolution for the depth of field refocusing feature.
But that's still high enough for the vast majority of people's snapshots. 1024x1024 yields a 5"x5" print at 200dpi, while most people seem to be satisfied with 4x6" prints.
It's certainly not going to satisfy a pro or serious amateur, but for everyday snapshots, even the current level of the technology is a big step forward since it can eliminate every out of focus shot (though camera shake is still an issue)
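The print-size arithmetic behind that claim is just pixels divided by dpi:

```python
# Print size from pixel dimensions: inches = pixels / dpi.
def print_size_inches(px_w, px_h, dpi):
    return px_w / dpi, px_h / dpi

w, h = print_size_inches(1024, 1024, 200)
print(f"{w:.2f} x {h:.2f} inches at 200 dpi")   # -> 5.12 x 5.12 inches at 200 dpi
```

At a less forgiving 300 dpi, the same image only covers about 3.4 inches per side, which is where the "2002-era" comparisons come from.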
Re: (Score:2)
It's worse; there's also a lot of noise if the subject is not very well lit. Since they claim this will be (partially) fixed in firmware, maybe they haven't gotten dark-frame subtraction [wikipedia.org] to work yet?
Re: (Score:2)
Dark frame subtraction is only useful (and used) for exposures longer than about one second -- this is true for cameras from the cheapest 1/3.2" sensor on my old Panasonic FZ3 to the rather nice Four Thirds sensor in my DSLR.
More likely, each output pixel requires taking only part of the information from the input pixels (since you've got to do something other than "average them" to get the light field information), exacerbating the noise.
Re:DPReview has a review (Score:4, Interesting)
But that's still high enough for the vast majority of people's snapshots. 1024x1024 yields a 5"x5" print at 200dpi, while most people seem to be satisfied with 4x6" prints.
With no ability to crop or zoom, though. Consumers don't frame their shots very well - so having tons of excess resolution helps pull a decent print out of a crap image. With the current Lytro it's hard to frame shots well.
The Lytro can't fix camera shake, either, and (a) the camera is an unusual, hard-to-hold shape with (b) a crappy LCD. If they took the lightfield guts, and packaged it inside a traditional SLR-style body, they could both make it easier to hold the camera steady, and add a large LCD and real viewfinder.
Re:DPReview has a review (Score:4, Insightful)
But that's where Lytro misses the bus... It's priced above the average consumer's price range, requires more fiddling and diddling, and requires Lytro's proprietary web based software - all to produce a picture that would be the pride of 2002.
It ends up being a solution in search of a problem. Too much for consumers, too little for prosumers and professionals.
Re: (Score:2)
I think the article had it right, sort of: these are bound to be embedded in certain specialty devices. They're great for 3D capture, for example. It's not going to make it as a camera on its own. People who pay a lot for cameras generally lust after shallow depth of field for its artistic effects. People who don't, use whatever camera is at hand, usually the one in their cell phones.
Re: (Score:2)
Binocular 3D is one thing, but you're still going to lose depth of field if you have to open the aperture for low light or high speed photography. Combine the two technologies and stereoscopic light-field cameras will give you the best of both worlds -- you'll be able to reproduce a full humans-eye-view image.
Although I'm not sure what practical applications that would have, except for the archives of our new genocidal alien overlords, who I for one welcome.
Re:DPReview has a review (Score:5, Funny)
DP Review [dpreview.com] has a review of this camera. It sounds like it has a long way to go. Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024.
Low res? No worries, just use the ENHANCE button. Problem solved.
Regards,
David Benton
Crime Scene Investigations, Miami PD
Re: (Score:2)
Low res? No worries, just use the ENHANCE button. Problem solved.
I'm assuming the Enhance button actually replaces the original 1024x1024 image with a 4096x4096 goatse pic. Which would be a good reason for people to call the Lytro a toy camera.
Re: (Score:3)
Well, it's the first generation consumer lightfield camera. The first-gen digital cameras weren't that great either - they were overpriced and underperformed (you were lucky if you got VGA images).
It's a New Kind Of Camera(tm). There's a lot of refinement that can be done, but the first gen
Re: (Score:2)
Due to the way it works, light field cameras will always have a fraction of the resolution of regular cameras and/or poorer low light performance. Really, poor focus isn't usually a problem with modern cameras. Most of the shots people think are poorly focused are probably actually motion blurred because the camera had to use too slow a shutter speed in order to make up for its crappy low light performance. And this camera is worse.
Revolutionary? Yeh right. (Score:5, Interesting)
...that's going to change the way consumers think about pictures.
You're overestimating the average consumer: You believe they think prior to taking a picture. Having gone through enough cell phones abandoned and dropped off at the lost and found before finally pressing 'm' in the phone book and calling their mom to say they lost their phone at my workplace... I can say with a fair degree of confidence most people take pictures of themselves, themselves with friends, more pictures of themselves and... (guys only)... pictures of inanimate objects that they never share or send to anyone. Ever. They're usually things like sign posts, car wheels (not actual cars, this would be too obvious), or random corners of buildings. From this, I can deduce that no actual thinking occurs for at least 95% of your everyday consumer's use of a camera.
Re: (Score:2)
there might be a high correlation between people not thinking before using their cell to take a pic and those who lose their cells. just saying.
anyways, there's not much need to think now with digital photography. each incremental photo costs nothing.
back when i had a film camera, i'd think before each photo i took. they were precious resources.
Re: (Score:2)
Then get a cameraphone, or just set your camera to aperture priority and dial in the smallest aperture (highest f-number) and the highest ISO you can stand.
Re: (Score:2)
Know what the biggest difference is between generic amateur snapshots and wow photos?
Depth of field.
The awesome photos are almost never the ones with everything in focus. But if you really want that, the cheaper your camera the more likely it is to achieve it.
Re: (Score:2)
True. What the average consumer really wants is a small, light camera that can maintain a decent shutter speed in a dark room. In other words, the opposite of what this one does.
What the average photographer wants is a camera that gives him the ability to use shallow depth of field to create great, eye-catching, artistic images. Also the opposite of what this camera does.
It doesn't look like Lytro has much chance of revolutionizing photography.
Re: (Score:3)
The people who spend lots of money on cameras want shallow depth of field and selective focus. That's why they spend even more money on lenses.
Revolutionize crap photos, perhaps (Score:2)
Given the resolution tradeoffs that are inherent in the design, I can see this theoretically "revolutionizing" camera phones or cheap point-and-shoots... perhaps. But I'm not sure I believe even that, given that people won't take a few seconds even now to crop their photos, sharpen them (even automatically), or adjust the white balance. Most people just seem to throw whatever photos they've taken up online - no editing, no triage, no nothing.
I can't see this making a difference with the higher-end market, i
Re: (Score:3)
Nokia announced a phone with a camera sensor that's 41 megapixels. -If- you can combine that sensor with the Lytro lens in a small camera assembly, you'll get sufficient resolution to be used for more than just gimmicks.
Re: (Score:3)
Nokia is already using all those pixels for their own gimmicks... and getting a 5 MP image out.
If you used that sensor with a light field lens array you'd end up with a camera that had truly horrible low light performance. And low light performance is probably THE thing that makes the biggest difference for typical snapshots.
Re: (Score:2)
It's easy enough to do that with a few minutes in Photoshop and a couple of shots, without the $500 low resolution camera. Not that you'd want to very often. Such a thing would scare your brain.
Read thesis (Score:5, Interesting)
For those more interested in the technology, Ren Ng's thesis is available on Lytro's website (at the bottom of the "Science Inside" page). I read much of it the other day after reading an article about the camera in the New York Times. It's a well written thesis and explains the technology both in a few simple ways and more rigorously.
The best explanation to me was that the microlens array effectively reimages the lens onto a small array of pixels under each microlens. (The microlens array is placed at the usual focal plane of the camera, and the number of microlenses is what determines the resolution.) Each pixel therefore sees only a small aperture of the lens, and a small aperture gives a very large depth of field. You could use just one pixel under each microlens to create an image with a large depth of field, but you'd be throwing away a lot of light. You can be more clever, however, and reconstruct from all those small-aperture images the image at any focus. At different focus settings, the light from any location is shared among multiple microlenses (i.e., it's out of focus, so it's blurred at the focal plane). However, it's not out of focus at the pixels, since each pixel only sees a small aperture and has a large depth of field. It's then just a matter of adding the right pixels together to create an in-focus image at any effective focal plane.
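That "adding the right pixels together" step is often described as shift-and-add refocusing: treat the sensor data as a grid of sub-aperture views and sum shifted copies of them. A minimal NumPy sketch of the idea (the array layout and the `alpha` refocus parameter here are illustrative assumptions, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) -- one sub-aperture
    view per (u, v) position on the main lens.
    alpha: synthetic refocus parameter; 0 reproduces the captured
    focal plane, other values refocus nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each sub-aperture view in proportion to its
            # offset from the lens centre, then accumulate.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: 5x5 sub-aperture views of a 64x64 scene.
lf = np.random.rand(5, 5, 64, 64)
img = refocus(lf, alpha=1.0)
print(img.shape)  # (64, 64)
```

With `alpha=0` this is just the average of all the sub-aperture views, i.e. the image as captured; nonzero `alpha` realigns the views so a different depth adds up coherently.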
Poor image quality all around (Score:4, Insightful)
Many people have noticed in the online samples that you can't focus clearly on far-away objects; they sorta get sharper, but not anywhere as sharp as foreground details. So that awesome picture of you on top of a mountain? You'll be nice and sharp, but the background never will be. Kind of spoils it, when the whole point is to be able to click and have one or the other be super sharp, right?
Also, it needs absurd amounts of light according to Gizmodo, or image noise becomes horrendous. Which is not surprising, given how hard Nikon and Canon are pushing the edge of what's possible in their sensors + image processors, and how small the individual lenses are. Great for sunny places. Not so much for indoors.
Gimmick (Score:2, Insightful)
Forget image capture, I want the display. (Score:2)
When will we get a light field display? It's only logical that you'd need both to truly leverage this invention. And seeing as you can get displays with >300 dpi resolution today, it's only a matter of time before displays have enough resolution, and computers have enough processing power to display light field images.
Re: (Score:2)
Um, what exactly would it do?
Re: (Score:2)
It would project a light field, rather than a flat image.
Re: (Score:2)
All screens project a light field -- it's just the light field of a flat surface.
As to recreating the original light field: in theory it's not an impossible task, but it's far, far beyond current technology, as it would involve a near-infinite array of light-producing elements firing light in very specific directions. Photography and vision are the summation of light from particular sources following various paths -- you need to recreate a near-infinity of paths to create an accurate recreation.
Re: (Score:2)
images at different viewing angles simultaneously.

That's great for real fake 3D, where your viewing angle on objects in the image actually changes, as opposed to today's fake fake 3D, which is just fixed-perspective stereoscopy. And this even works for more than one viewer, and without special glasses.

In practice, this means that it needs very fine structures for those directed emitters. Also, the visible resolution will be
Re: (Score:2)
I think there's a point where it's fair to start calling it "real" 3D, since the projected image would be indistinguishable from an actual 3D object.
Re: (Score:2)
The above concept of "one image per viewing angle" simply isn't enough to be indistinguishable from reality; that's why I kept one "fake" in the name. For a real holographic display, image content has to depend on viewing distance as well, to adapt to a changing field of view.

And there's also still the issue of a forced focus point: image content that changes according to viewing angle and viewing distance still doesn't adapt to the actual distance I'm looking at.
It's a
Re: (Score:2)
A light field display would reconstruct an actual light field like the one your eyes normally interact with. It would have depth of field, and the projected image would appear to change as you move around it, and as you move closer to it. You have to remember that in the physical world there is only a light field; your eyes intercept it and process it to generate those other effects (excepting depth of field, which is generated physically by the lens in your eye).
Better 3d? (Score:5, Interesting)
Part of what makes 3d movies look fake is that the viewer cannot focus on anything other than what is "in focus" as per the Director. I imagine it would be possible to use this technology paired with some sort of eye tracking tech (which also exists). This would move us a step closer toward a more realistic immersion.
Re:Better 3d? (Score:4, Insightful)
Ok, but then you have to watch it alone because while I'm focusing on the tree in the background, you're focusing on the foreground...
Re: (Score:2)
Yeah, that crossed my mind. It might be more compelling for gaming, then. I could also envision a room where a bunch of people watch the same movie, each wearing a personal set of glasses with internal screens.
Re: (Score:2)
How is that different from 2D? You can't ignore what the director wants you to see and actually focus on the distant object in the background; it's a basic principle of camera optics.
Re: (Score:2)
It isn't different. That's my point. This camera would allow whatever you are looking at to be in focus.
It's interesting, but ultimately a doomed idea. (Score:5, Insightful)
Let's list some of the significant drawbacks of this first version which we can realistically chalk up as a technology demo:
* Camera is shaped weird and appears awkward to use. If form follows function, I'm not sure what the function is.
* Cheap last-gen LCD display.
* Output is only 1MP (1024x1024).
* Sensor is really small
* Lens is cheap
* Limited depth of field
* Raw light fields have to be sent to Lytro server for processing
* Only a handful of focus points can be chosen
* In focus range is limited
* Photos are converted into lame Flash animations
Now, let's re-imagine this as a serious photographer's tool a few years down the road:
* It's a DSLR with real interchangeable lenses and a huge hi-res LCD display
* Let's say the camera can even magically switch from "classic" to light field mode with a toggle switch.
* Huge full frame sensor allowing light field output at 6+MP with high dynamic range and low noise at high ISOs
* Depth of field choices much broader and limited only by lens chosen
* Effective focus range is much improved
* Raw lightfield processing can be done on your local computer, allowing precise control over number and position of focus layers. Alternately, assuming processing speed is available, perhaps focusing points can be chosen in real-time within the finished image blob.
* Output as multiple jpegs, flash or HTML5, etc.
Now what?
Well, you still have these limitations if you use light fields:
* You're basically giving up some amount of image resolution for the ability to focus after the fact. DSLRs and even consumer cameras already have excellent auto-focus modes that when used properly generally nail focus in decent light. It's not the biggest or even second biggest problem I see in photos online. Bad composition and inadequate lighting are generally much bigger problems.
* If you chose the wrong focus point when shooting, sure you can fix your mistake, but if focus is off due to camera shake or motion blur, you're SOL.
* It's basically useless in images with large depths of field (think large landscapes where everything is essentially in focus)
* Makes no difference on a printed page, except you have one more tweak available during editing.
* Still gimmicky. After everyone has played around with a few of these photos interactively, they're bored and move on.
Re: (Score:2)
"* Huge full frame sensor allowing light field output at 6+MP with high dynamic range and low noise at high ISOs"
Or I can get a regular huge full frame SLR with higher resolution and much better low noise performance. And no, switching the sensor into regular or light field mode isn't going to be easy.
Re: (Score:2)
As a hobbyist photographer (you know, the kind that has spent an inordinate amount of money on equipment and can take some decent pictures but has found out that he has no real talent), I concur completely. A really nice gimmick, but I hardly see any practical value.
Similar effect with simple filter (Score:5, Informative)
The Lytro camera has special optics that basically separate the light entering the lens from different angles. Knowing the rough angle of the light rays allows you to combine them in different ways to change the focus of the image after the fact, as opposed to a traditional camera, in which they are permanently combined as the CCD captures the light at a fixed focus. This comes with a trade-off: light from each set of angles is essentially captured as a separate image, giving you say 12x12 sub-images on the CCD, so the resolution of each sub-image is much lower than you would get using the full CCD for a single image.
Since Ren Ng published his seminal paper making the connection between refocusing a light field and Fourier slice theory, there has been additional work showing that you can achieve the same thing using a simple filter rather than a whole new set of optics. The benefit is that it's cheaper to manufacture, and you can easily swap the filter to adjust the trade-off between image resolution and depth of field, but it comes at the additional cost of a slight loss of total light (due to the filter). Here is one of those papers [umd.edu].
There are two basic approaches. The first heterodynes the light (the filter acts as a multiplication) such that light entering at different angles is shifted to different frequencies. With this approach you get "sub-images" in the frequency domain rather than the spatial domain, which can be separated and recombined in software. The result and trade-offs are essentially the same, but with simpler hardware.
The other treats refocusing as a deconvolution operation: the filter modifies the point-spread function of the camera so that its frequency response doesn't have any zeros, and you don't lose data at those frequencies like you would with a simple rectangular aperture.
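The deconvolution approach can be sketched with standard Wiener deconvolution: if the PSF's spectrum is bounded away from zero, dividing it back out is well-conditioned. A toy NumPy sketch (the particular PSF and SNR value here are made-up stand-ins for a coded aperture, not any published design):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e4):
    """Recover an image from a known point-spread function via
    Wiener deconvolution: F = conj(H) * G / (|H|^2 + 1/SNR).
    A PSF whose spectrum has no zeros keeps this well-conditioned,
    which is the point of the coded-aperture designs above."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F))

# Demo: blur with a broadband PSF (its spectrum stays away from
# zero, standing in for a coded aperture), then recover the scene.
rng = np.random.default_rng(0)
scene = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0], psf[1, 1] = 0.7, 0.1, 0.1, 0.1
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(blurred, psf, snr=1e8)
print(np.max(np.abs(recovered - scene)))  # tiny reconstruction error
```

With a plain rectangular aperture the spectrum does hit zeros (a sinc pattern), and those frequencies are simply gone; no amount of division gets them back, which is the data loss the comment refers to.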
Re: (Score:2)
Linky to that Ren Ng paper on Fourier Slice theory?
Thanks a lot in advance.
just a touch on things to come (Score:3)
1024x1024 (Score:3)
So, perfect for capturing images of stuff like textures for games.
All those pixels and nowhere to go (Score:3)
It's now possible to make imagers with so many pixels that finding some way to use them is a problem. This is one way. Another way is to have more colors. There's a camera with around 100 different color filters, which is interesting for some scientific applications and for machine vision. 3 color sensing is a human eye thing. Some birds have 22 different spectral sensors, which is useful in picking targets through foliage. There's also interest in having more dynamic range, so that you don't have to worry about exposure or lighting as much.
The next thing may be image polarization, by having multiple polarizer orientations per pixel. This would be useful in eliminating glare after the fact.
Depth maps (Score:2)
An alternative technique that could be done with a regular DSLR (with appropriate firmware, of course) could use the full range of focus at a wide aperture to generate a depth map for the image (not necessarily an easy thing to do accurately, but possible with a few tricks). You then take the image with a small aperture to maximize depth of field. Then you could focus the image however you like in post production.
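The depth-map half of that idea is essentially classic depth-from-focus: sweep the focus, measure per-pixel sharpness in each frame, and record which frame wins. A crude NumPy sketch (the Laplacian focus measure and the array shapes are my assumptions, not a specific camera's firmware):

```python
import numpy as np

def depth_from_focus(stack):
    """Crude depth map from a focal stack: for each pixel, pick the
    frame with the highest local sharpness (Laplacian magnitude).

    stack: array of shape (N, H, W) -- N images focused at N depths.
    Returns an (H, W) array of frame indices, a proxy for depth.
    """
    sharpness = []
    for frame in stack:
        # Discrete Laplacian as a simple focus measure.
        lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
               np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
        sharpness.append(np.abs(lap))
    # Index of the sharpest frame at each pixel = depth estimate.
    return np.argmax(np.stack(sharpness), axis=0)

# Toy focal stack: 10 frames of a 48x48 scene.
stack = np.random.rand(10, 48, 48)
depth = depth_from_focus(stack)
print(depth.shape, depth.min() >= 0, depth.max() < 10)  # (48, 48) True True
```

Real implementations need windowed focus measures and regularization to be accurate (textureless regions give no focus signal at all), which is the "few tricks" caveat in the comment.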
Re: (Score:2)
Can't you already do this with traffic cameras? At least that's what they show in movies & TV serials.
Re: (Score:2)
What he was doing was (a) impossible and (b) nothing like a light field camera.
Re: (Score:2)
The actions performed in the movie would be possible with a light field image, though the physical picture he used in the clip appears to be a conventional flat image.
Re: (Score:3)
Re: (Score:3)