
The Lytro Camera: Impressive Technology and Some Big Drawbacks 220

Posted by timothy
from the light-field-of-dreams dept.
waderoush writes "The venture backers behind Lytro, the Silicon Valley startup that just released its new light field camera, say the device will upend consumer photography the way the iPhone upended the mobile business. This review takes that assertion at face value, enumerating the features that made the iPhone an overnight success and asking whether the Lytro camera and its refocusable 'living pictures' offer consumers an equivalent set of advantages. The verdict: not yet. But while the first Lytro model may not be an overnight success, light field cameras and refocusable images are just the first taste of a revolution in computational photography that's going to change the way consumers think about pictures."

  • by wanderfowl (2534492) on Saturday March 10, 2012 @12:08AM (#39309261)

    Right now, it seems like the majority of Lytro pictures are technology demos, a fire hydrant in the foreground and a building in the background, or some equivalent, which just invites you to click both and move on. You can just hear the enthusiastic early adopter in the background of these pictures saying "OK, _now_ click the building! Whoa! Cool, huh?!". These shots are, to my mind, the photographic equivalent of arrows or spears coming out towards the audience in early 3D movies. Gimmicks which break the fourth wall, saying "Hey, remember, you're looking at a Lytro (tm) image, not just anything!".

    I can't wait for real photographers and artists to actually find situations, styles and aesthetics where Lytro sorts of cameras can be used in a way that both effectively uses the new capabilities of the format _and_ produces something artistically and aesthetically wonderful. I think the technology has a ways to go, but right now, the biggest problem facing Lytro (and light field photography) is that it's a new medium that nobody has a clue how to use effectively.

    Until we reach that point where people see a great Lytro picture and actually feel inspired, it's going to be tough to sell what is currently a low-spec camera with one big gimmick. So, if you want Lytro to take off, buy one for the craziest artist you know.

    • by 0123456 (636235) on Saturday March 10, 2012 @12:13AM (#39309291)

      My first thought was that it could be great for video; no need to bother with precise focus while shooting if you can refocus when you edit. However, I'm guessing that it would require a huge data rate.

      • by Taco Cowboy (5327)

        My first thought was that it could be great for video; no need to bother with precise focus while shooting if you can refocus when you edit. However, I'm guessing that it would require a huge data rate

        My thought as well

        I am curious to know if there is a site that can tell us how big the data rate we are looking at

        Anyone ?

      • My understanding is that this was used (the concept, not the camera) to film some of the 'bullet time' like scenes we see in movies now.

        It might be one of those technologies that is just now coming into prosumer and consumer levels of affordability.

      • by mysidia (191772)

        The absence of an SD card slot is a huge drawback. Who's gonna fit a light field video stream of decent quality on 8 GB of memory?

        • by laird (2705)

          The Lytro takes still pictures, and can take 350 pictures in the 8 GB model, and 750 pictures in the 16 GB model.

          Video would be prohibitively large. Aside from storage, it's probably not possible for the camera to take and store 30 FPS of data at 10 M rays per image, which I would guess would be about 10x typical video data rates. They'd need faster sensors, faster RAM, etc., which would push up the complexity and price quite a bit. In comparison, look how much more HD camcorders cost than SD camcorders.
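laird's guess above can be sanity-checked with a back-of-the-envelope calculation. Every number here (rays per frame, bit depth, frame rate) is an assumption for illustration, not a Lytro spec:

```python
# Rough, uncompressed light field video data rate. Assumed figures:
# 10 million rays per frame (the "10 M rays" guess above), 12-bit raw
# samples, 30 frames per second.
RAYS_PER_FRAME = 10_000_000
BITS_PER_RAY = 12
FPS = 30

bytes_per_second = RAYS_PER_FRAME * BITS_PER_RAY / 8 * FPS
print(f"{bytes_per_second / 1e6:.0f} MB/s")   # 450 MB/s

# At that rate an 8 GB camera fills up in well under a minute.
seconds_to_fill_8gb = 8e9 / bytes_per_second
```

Under those assumptions the camera would fill its entire 8 GB in roughly 18 seconds, which supports the point that raw light field video is impractical at this storage size.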

          • by mysidia (191772) on Saturday March 10, 2012 @05:23AM (#39310365)

            The Lytro takes still pictures, and can take 350 pictures in the 8 GB model, and 750 pictures in the 16 GB model.

            Yes, it's kind of ridiculous. Most consumer cameras on the market accept a user-supplied CF or SD card, and the differentiating factor between cameras is normally photographic capability and image quality; storage is cheap, and 8 GB of flash memory is not $100. Having the amount of storage fixed and built into the camera is highly irregular. It also means I can't use a card reader to easily transfer data -- hooking up USB cables and figuring out driver requirements is quite inconvenient.

            The minimum I use these days is 32 GB cards; with only 16 GB, it would actually be necessary to frequently delete pictures to make room for more, instead of just swapping flash cards.

            Also, flash cards have limited program-erase cycles, which means the camera has a limited lifetime if used heavily. I suppose the warranty will cover storage failure from heavy picture-taking wearing out the flash, at least for a while?

          • I'd imagine a big part of that is that there aren't any standard compression schemes available for the format so it probably has to store them raw or close to it. If you were storing 11MP pictures in raw format, you'd have approximately the same capacity.

      • Re: (Score:2, Interesting)

        by PopeRatzo (965947)

        no need to bother with precise focus while shooting if you can refocus when you edit.

        Why do we need "focus" at all? Why not have photographs where everything is in focus? Depth of field is an artifact of lenses, whether they're in your eye or in your camera. A light field could change the entire notion of a photograph, away from trying to imitate the eye to creating a visual record of a scene that actually records everything that is there. No need for depth of field at all.

        As usual, when the artists ge

        • by Anonymous Coward on Saturday March 10, 2012 @01:49AM (#39309645)

          Why do we need "focus" at all? Why not have photographs where everything is in focus? Depth of field is an artifact of lenses, whether they're in your eye or in your camera.

          Focus can be used in composition to guide the viewer to the important elements in the story. Just as "left", "up","down", etc. define the field of view, so does focus.

        • by Entropius (188861) on Saturday March 10, 2012 @02:20AM (#39309749)

          Depth of field effects are considered part of the art of photography, much like amplifier distortion is part of the art of playing electric guitar. People pay a great deal for the capacity to get *narrower* depth of field: compare the price of Canon's 85mm f/1.8 and f/1.2 lenses. People most often buy the f/1.2 as a very, very narrow depth of field portrait lens, rather than as a very low-light lens. Other lenses are known for the particular way that they throw backgrounds out of focus -- Nikon will even sell you one where you can choose exactly how the background is defocused.

          I think this trend in photography is overblown (I don't see the appeal of portraits where half of one eye is out of focus), but there's no doubt that artistic manipulation of depth of field is a big part of the art.

        • No technique ever becomes archaic; it becomes an artistic choice, like black-and-white photography. Same with focus, which probably won't ever go away since it's so intrinsic to how our eyes work. I agree that this could be a huge development once artists figure out what to do with it.

        • by Teun (17872) on Saturday March 10, 2012 @07:46AM (#39310751) Homepage
          Because a single pixel can't be in or out of focus, you no longer need to focus once every pixel has its own 'lens'.
          As others already explained, this would give unusually 'flat' pictures where depth of field has disappeared and the sense of distance with it, a problem already observed with tiny phone cameras.

          This camera seems to go midway with many lenses for groups of pixels, the smaller those groups, the closer you get to your idea.

          What I like about this concept is that the software allows for refocusing; they might very well already have a mode for maximum depth of field, i.e. all in focus.

      • All the information is obviously there since you can "explore" the image. Clicking on any point in the image either brings nearer objects into focus or farther objects into focus. So, obviously, each point in the 2-d image is encoded with additional information that associates that point with a nearer focal plane or farther focal plane. So, why not computationally merge / stitch together a bunch of sharpened nearer areas with a bunch of sharpened farther areas to get an overall sharper picture?
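The merge described here is essentially focus stacking: for each pixel, keep the focal slice that is locally sharpest. A minimal sketch, assuming a simple squared-gradient sharpness measure and a synthetic two-slice stack (the data and measure are illustrative, not Lytro's actual pipeline):

```python
import numpy as np

def all_in_focus(stack):
    """Merge a stack of refocused images (shape N x H x W) by picking,
    per pixel, the slice with the highest local contrast, using a
    squared gradient magnitude as a crude sharpness proxy."""
    gy = np.abs(np.diff(stack, axis=1, prepend=stack[:, :1, :]))
    gx = np.abs(np.diff(stack, axis=2, prepend=stack[:, :, :1]))
    sharpness = gx**2 + gy**2
    best = np.argmax(sharpness, axis=0)              # (H, W) slice index
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

# Toy two-slice stack: slice 0 is sharp on the left half of the frame,
# slice 1 on the right half; "sharp" regions alternate 0/1 columns.
sharp = np.tile(np.array([0.0, 1.0] * 4), (8, 1))
blurred = np.full((8, 8), 0.5)
s0 = np.hstack([sharp[:, :4], blurred[:, :4]])
s1 = np.hstack([blurred[:, :4], sharp[:, 4:]])
merged = all_in_focus(np.stack([s0, s1]))
```

In the merged result, the high-contrast columns from both halves survive, which is the "overall sharper picture" the comment asks about; real implementations use smoother sharpness measures to avoid seams.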
        • by ceoyoyo (59147)

          Because images where everything is sharp look just like the ones you take with a cheap cell phone.

          • by laird (2705)

            They have demoed this capability, though it won't be in the initial software. But I'd expect the effect to be unpleasant, for the same reason that photos taken with long depth-of-focus lenses, where everything is in focus, tend to be irritating - everything being equally in focus is distracting.

      • by patlabor (56309) on Saturday March 10, 2012 @06:10AM (#39310487)

        It's a single photosensor. The lens array and maths are doing the hard work. Therefore, although the data processing requirements may be very data intensive, the actual image should be the same, or very close to the same, as an image taken without the lens array. The maths should be implementable fully in hardware such that all processing can be done on camera at video speeds, so there is no reason that this couldn't be done. The issue would be making a cohesive focal point between frames. Having to focus a film frame-by-frame would take a lot of time and would be something only film studios might be willing to do, but would be too annoying for consumers.

    • It would absolutely rule for news and performance photography I guess (or insect macros :D). I'd say it rather increases the opportunity to not miss shots or botch them, but I wouldn't hail this as some radical new medium just yet. I mean, this stuff is already possible with still scenes, a tripod and patience... setting the focus or getting all in focus is nothing new, to put it mildly, and anything that can be done with that is already being done -- but now you can do it on the move, or without knowing wh

    • by Idbar (1034346)
      You see, I think the main failure is thinking that this is something for the photography experts. No, this, to me, is for all those taking candid shots at a party, who later realize half of the pictures are out of focus because they shot through a cheap viewfinder or a poor rendering on a cellphone's display and failed to see it. Professional photographers who know what to frame and focus and do it really fast through the viewfinder of a professional camera may not be the prima
    • The real problem with this technology is that there is no problem. For the most part portable compact cameras have sensors so small that your average happy snap is sharp across the range anyway. As for the other end of the spectrum, DSLRs have 50+ AF points and memory cards are so spacious that there's no reason not to re-shoot if you think the focus may be off slightly.

      The technology is revolutionary, but it isn't solving any problem. People have been taking tack-sharp photos for hundreds of years so why sh

  • Of two minds? (Score:5, Interesting)

    by Compaqt (1758360) on Saturday March 10, 2012 @12:08AM (#39309265) Homepage

    Seems Xconomy can't decide whether they like it or not:

    The original title seems to have been "The Lytro Camera is no iPhone but it's revolutionary anyway".

    going by the URL fragment:
    the-lytro-camera-is-no-iphone-but-its-revolutionary-anyway

    The current title is the less positive "The Lytro Camera Is Revolutionary, But It's No iPhone". (Note: Not being an iPhone is a negative in a Stevebot's eyes.)

    • Re:Of two minds? (Score:5, Informative)

      by waderoush (1271548) on Saturday March 10, 2012 @12:33AM (#39309343) Homepage
      Author here, from Xconomy. I changed the headline to make it shorter and catchier, that's all. I'm not of two minds. I was impressed by the technology, but I said that Lytro needs to make some changes such as enlarging the screen before the value of the device will be completely obvious to consumers.
      • but I said that Lytro needs to make some changes such as enlarging the screen before the value of the device will be completely obvious to consumers.

        Or they could use a smaller screen and put an eyepiece on it, making it look more or less like a spotter scope. That even gives them legitimate reason to ditch the touchscreen and replace it with a couple of buttons.

    • The iPhone isn't an iPhone either :P

  • Usually the first adopters.....
  • A pity... (Score:5, Insightful)

    by fuzzyfuzzyfungus (1223518) on Saturday March 10, 2012 @12:10AM (#39309275) Journal
    The capabilities of light field cameras have that fun 'technology indistinguishable from magic' touch to them that the impressive-but-evolutionary spec bumps of markedly superior conventional digital cameras don't. (It's like playing with your favorite eccentric retro computer from before the Great Standardization: at this point, anything that old is a painfully limited toy; but it is different. Your top-of-the-line screaming monster of a PC, on the other hand, is brutally capable and impressively cheap, but practically point-for-point familiar to the P90 running Windows 95, with all the performance-related numbers bumped by a few decimal places.)

    Unfortunately, though, the move to release it at a (barely) 'consumer toy' price point really led to a product slightly too compromised to be useful: The optics you need for the light field capture eat so much of the sensor's available resolution that the resolution of the images you can get out of the thing is hovering slightly below 1 megapixel. Yes, the ability to spit out that paltry image at all sorts of focuses, after the fact, is damn cool; but for $500, you could get a high end P&S that could iterate through a series of 10MP shots at different focus points at time of shooting, in a few seconds, netting much of the benefit along with resolutions that wouldn't be ashamed to show up on a $20 webcam.

    I'd love to see the same technology applied at a price point and form factor where the sheer sacrifice of available pixels wouldn't be so keenly felt.
    • Re:A pity... (Score:5, Informative)

      by Anonymous Coward on Saturday March 10, 2012 @12:42AM (#39309371)

      Unfortunately, though, the move to release it at a (barely) 'consumer toy' price point really led to a product slightly too compromised to be useful: The optics you need for the light field capture eat so much of the sensor's available resolution that the resolution of the images you can get out of the thing is hovering slightly below 1 megapixel.

      I'd love to see the same technology applied at a price point and form factor where the sheer sacrifice of available pixels wouldn't be so keenly felt.

      The reason the camera is only 1 megapixel has nothing to do with the optics. The technology requires many pixels in the imager for each pixel in the resulting image. So, the CCD (or CMOS imager, I don't know which it uses) probably has at least 10MP, despite the output of only 1MP.

      It's a fundamental limit of the technology, and it'll be a while until we see more than 2 MP using it.

      • I'll admit that 'the optics' was colloquial; but the microlens array is the part of the path where the mapping of multiple CCD pixels to a single available output pixel happens.

        As you say, the technology only works if your sensor has enough pixels behind each microlens (and, since very high resolution digital sensors and the necessary supporting processor and storage are more expensive than very fine-grained polymer microlens arrays, the economic limits on the sensor they could afford presumably drove the
        • by mosb1000 (710161)

          but the microlens optics are the operational location where that reduction happens...

          No it isn't. The reduction happens when the software generates the image for the viewer.

      • by zalas (682627)

        Yes, right now it is limited by the technology (a full frame sized sensor with 2 micron pixels would be really sweet for this, but I suppose process would be really expensive), but eventually it will be limited by physics itself. For example, if you were to somehow be able to make a sensor array whose pixel pitch dipped way below half the wavelength of the light you are capturing and if you used microlenses at the wavelength of light, you wouldn't really be able to capture any more three-dimensional/refocu

    • So you're limited to a 1-megapixel image? If that's the case, I bet I could take a blurry 12-megapixel picture, resize it to 1-megapixel and sharpen it, and it will look just as good. But the camera will cost less.
      • Light field stuff in general isn't (you are always limited to a substantially lower resolution than your available sensor(s) would suggest, because the technology requires multiple sensor pixels to be mapped to each microlens; but getting larger and higher resolution digital sensors is one of those problems that can be solved by writing sufficiently large checks); but this particular camera is. Presumably, to hit the $500 price point.

        You get to compute your choice of 1 megapixel images at various focuses.
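The pixels-per-microlens tradeoff described above can be put in rough numbers. The 3x3 block per microlens is an assumption for illustration; Lytro's actual microlens geometry may differ, but 11 "megarays" was its advertised sensor figure:

```python
# Rough resolution budget for a plenoptic camera: an N-megapixel
# sensor with a k x k block of pixels behind each microlens yields
# roughly N / k^2 megapixels of output.
sensor_megarays = 11          # advertised sensor figure
pixels_per_microlens = 3 * 3  # assumed 3x3 block per microlens

output_megapixels = sensor_megarays / pixels_per_microlens
# ~1.2 MP, consistent with the ~1024x1024 output discussed here.
```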
      • Re:A pity... (Score:5, Informative)

        by donscarletti (569232) on Saturday March 10, 2012 @03:47AM (#39310047)

        I bet I could take a blurry 12-megapixel picture, resize it to 1-megapixel and sharpen it, and it will look just as good.

        No you can't. You just think that because you don't understand how an aperture works.

        Camera lenses focus by directing light through a small hole. At the point of focus, any light which bounces off an object then hits the lens will be directed in such a way that it hits the sensor in exactly the same place as it would have if it had bounced exactly at the center of that hole to begin with, meaning all light from that position hits the same place, giving a sharp image. Away from the point of focus, light bounces off the object, then when it hits the lens, it bends either too far or too little, giving a soft edge. Thus when an image is out of focus, the light projecting onto the sensor is actually wrong; no amount of sensitivity will fix that. This is why optics and focus have always been the most important part of getting a nice image out of any digital camera.

        A light field camera fixes this by capturing the direction of the light and reconstructing an image of where the light actually came from, not just where it hits the sensor. Thus it can calculate a 100% in-focus image covering the entire depth range without having to focus. Previously, only a relatively small range of distances could be kept in focus, and for that it was required to have a small aperture and either a long exposure or a grainy image (cellphone style). Now you can have a sharp image with a wide range of focus without motion blur or grain, and that's fantastic.

        Resizing a 12 megapixel image into 1 megapixel will give you the same image, with less grain, exactly the same image as if you had stuck a 1 megapixel sensor in to begin with (lower resolution sensors of the same size format give less grain because each pixel is larger and gathers more light). It will never be any better than the image projected on the sensor to begin with, so it doesn't get you anywhere.
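The point that downsampling cannot recover focus can be demonstrated on a one-dimensional toy example: box-blur a step edge (a crude stand-in for defocus, with an assumed kernel size), downsample it, and the edge is still spread over several samples, while a sharp edge downsampled the same way has none:

```python
import numpy as np

# A sharp 120-sample step edge, and a box-blurred ("defocused") copy.
step = np.repeat([0.0, 1.0], 60)
kernel = np.ones(15) / 15            # assumed defocus kernel size
blurred = np.convolve(step, kernel, mode="same")

# 4x downsample both edges by average pooling.
pooled_blur = blurred.reshape(-1, 4).mean(axis=1)
pooled_sharp = step.reshape(-1, 4).mean(axis=1)

# Count "soft" samples: values strictly between 5% and 95% of the step.
soft = int(np.sum((pooled_blur > 0.05) & (pooled_blur < 0.95)))
sharp_soft = int(np.sum((pooled_sharp > 0.05) & (pooled_sharp < 0.95)))
# The blurred edge stays several samples wide after downsampling;
# the sharp edge has no intermediate samples at all.
```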

    • by zalas (682627)

      Yes, the ability to spit out that paltry image at all sorts of focuses, after the fact, is damn cool; but for $500, you could get a high end P&S that could iterate through a series of 10MP shots at different focus points, at time of shooting in a few seconds, netting much of the benefit along with resolutions that wouldn't be ashamed to show up on a $20 webcam.

      Do remember that the Lytro captures its image in one instant (okay, technically integrated over a short period of contiguous time), so while for static scenes your approach would work, it wouldn't work all that well with dynamic scenes. Personally, I'd like to see more artistic photos, such as, say, a black balloon covered in starry speckles bursting with a figurine of the baby from the end of 2001 inside.

    • by thsths (31372)

      > that the impressive-but-evolutionary spec bumps of markedly superior conventional digital cameras don't

      That is an excellent point. Specs are getting better, but cameras are not, as you can see in expert reviews. In fact, many of the latest generation cameras make worse pictures than the generation before. The specs are lies served to us for marketing purposes, but the functionality suffers.

      The Lytro can't compete on specs or image quality (actually the images are pretty bad, if you look closely).

  • by AaronW (33736) on Saturday March 10, 2012 @12:10AM (#39309281) Homepage

    DP Review [dpreview.com] has a review of this camera. It sounds like it has a long way to go. Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024. I don't know if there's really a way around it, since they're substituting resolution for the depth of field focus feature.

    • by hawguy (1600213) on Saturday March 10, 2012 @12:22AM (#39309317)

      DP Review [dpreview.com] has a review of this camera. It sounds like it has a long way to go. Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024. I don't know if there's really a way around it, since they're substituting resolution for the depth of field focus feature.

      But that's still high enough for the vast majority of people's snapshots. 1024x1024 yields a 5"x5" print at 200dpi, while most people seem to be satisfied with 4x6" prints.

      It's certainly not going to satisfy a pro or serious amateur, but for everyday snapshots, even the current level of the technology is a big step forward since it can eliminate every out of focus shot (though camera shake is still an issue)
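The print-size arithmetic above, spelled out (200 dpi is the commenter's figure; prints are often judged at 300 dpi, which would give only about 3.4 inches per side):

```python
# Print size for a 1024x1024 image at two common print resolutions.
pixels = 1024
print_at_200dpi = pixels / 200   # 5.12 inches per side
print_at_300dpi = pixels / 300   # ~3.4 inches per side
```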

      • by retchdog (1319261)

        it's worse; there's also a lot of noise if the subject is not very well lit. since they claim this will be (partially) fixed in firmware, maybe they haven't gotten dark-frame subtraction [wikipedia.org] to work yet?

        • by Entropius (188861)

          Dark frame subtraction is only useful (and used) for shutter speeds above one second -- this is true for cameras from the cheapest 1/3.2" sensor on my old Panasonic FZ3 to the rather nice Four Thirds sensor in my DSLR.

          More likely, each output pixel requires taking only part of the information from the input pixels (since you've got to do something other than "average them" to get the light field information), exacerbating the noise.
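For reference, the dark-frame subtraction mentioned above is itself straightforward: expose a second frame with the shutter closed and subtract it, cancelling fixed-pattern (hot-pixel) noise. A schematic sketch on synthetic data:

```python
import numpy as np

def dark_frame_subtract(raw, dark):
    """Subtract a same-exposure dark frame from a raw exposure,
    clipping so no pixel goes negative."""
    return np.clip(raw.astype(np.int32) - dark.astype(np.int32), 0, None)

rng = np.random.default_rng(0)
hot = rng.integers(0, 50, size=(4, 4))   # synthetic fixed-pattern noise
scene = np.full((4, 4), 100)
raw = scene + hot                        # exposure: scene plus hot pixels
dark = hot                               # same exposure, shutter closed
clean = dark_frame_subtract(raw, dark)   # recovers the scene exactly here
```

Real dark frames only cancel the *fixed* pattern; random shot noise, which dominates in the poorly lit shots discussed above, is untouched, which fits Entropius's objection.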

      • by sunderland56 (621843) on Saturday March 10, 2012 @12:37AM (#39309359)

        But that's still high enough for the vast majority of people's snapshots. 1024x1024 yields a 5"x5" print at 200dpi, while most people seem to be satisfied with 4x6" prints.

        With no ability to crop or zoom, though. Consumers don't frame their shots very well - so having tons of excess resolution helps pull a decent print out of a crap image. With the current Lytro it's hard to frame shots well.

        The Lytro can't fix camera shake, either, and (a) the camera is an unusual, hard-to-hold shape with (b) a crappy LCD. If they took the lightfield guts, and packaged it inside a traditional SLR-style body, they could both make it easier to hold the camera steady, and add a large LCD and real viewfinder.

      • by DerekLyons (302214) <`fairwater' `at' `gmail.com'> on Saturday March 10, 2012 @01:14AM (#39309491) Homepage

        But that's where Lytro misses the bus... It's priced above the average consumer's price range, requires more fiddling and diddling, and requires Lytro's proprietary web based software - all to produce a picture that would be the pride of 2002.

        It ends up being a solution in search of a problem. Too much for consumers, too little for prosumers and professionals.

        • by ceoyoyo (59147)

          I think the article had it right, sort of: these are bound to be embedded in certain specialty devices. They're great for 3D capture, for example. It's not going to make it as a camera on its own. People who pay a lot for cameras generally lust after shallow depth of field for its artistic effects. People who don't use whatever camera is at hand, usually the one in their cell phones.

    • by grcumb (781340) on Saturday March 10, 2012 @01:51AM (#39309649) Homepage Journal

      DP Review [dpreview.com] has a review of this camera. It sounds like it has a long way to go. Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024.

      Low res? No worries, just use the ENHANCE button. Problem solved.

      Regards,
      David Benton
      Crime Scene Investigations, Miami PD

      • by mysidia (191772)

        Low res? No worries, just use the ENHANCE button. Problem solved.

        I'm assuming the Enhance button actually replaces the original 1024x1024 image with a 4096x4096 goatse pic. Which would be a good reason for people to call the Lytro a toy camera.

    • DPReview: we haven't 'got it'. So it is what we have suspected all along: a VC trap. The game of Lytro has only one winner, the guy who got the funding.
    • by tlhIngan (30335)

      Due to the way lightfield works, the final resolution is fairly low, in this case only 1024x1024. I don't know if there's really a way around it, since they're substituting resolution for the depth of field focus feature.

      Well, it's the first generation consumer lightfield camera. The first-gen digital cameras weren't that great either - they were overpriced and underperformed (you were lucky if you got VGA images).

      It's a New Kind Of Camera(tm). There's a lot of refinement that can be done, but the first gen

      • by ceoyoyo (59147)

        Due to the way it works, light field cameras will always have a fraction of the resolution of regular cameras and/or poorer low light performance. Really, poor focus isn't usually a problem with modern cameras. Most of the shots people think are poorly focused are probably actually motion blurred because the camera had to use too slow a shutter speed in order to make up for its crappy low light performance. And this camera is worse.

  • by girlintraining (1395911) on Saturday March 10, 2012 @12:12AM (#39309287)

    ...that's going to change the way consumers think about pictures.

    You're overestimating the average consumer: You believe they think prior to taking a picture. Having gone through enough cell phones left abandoned and dropped off at the lost and found before finally pressing 'm' in the phone book and calling their mom to say they lost their phone at my workplace... I can say with a fair degree of confidence most people take pictures of themselves, themselves with friends, more pictures of themselves and... (guys only)... pictures of inanimate objects that they never share or send to anyone. Ever. They're usually things like sign posts, car wheels (not actual cars, this would be too obvious), or random corners of buildings. From this, I can deduce that no actual thinking occurs for at least 95% of your everyday consumer's use of a camera.

    • there might be a high correlation between people not thinking before using their cell to take a pic and those who lose their cells. just saying.

      anyways, there's not much need to think now with digital photography. each incremental photo costs nothing.

      back when i had a film camera, i'd think before each photo i took. they were precious resources.

  • Given the resolution tradeoffs that are inherent in the design, I can see this theoretically "revolutionizing" camera phones or cheap point-and-shoots... perhaps. But I'm not sure I believe even that, given that people won't take a few seconds even now to crop their photos, sharpen them (even automatically), or adjust the white balance. Most people just seem to throw whatever photos they've taken up online - no editing, no triage, no nothing.

    I can't see this making a difference with the higher-end market.

    • by topham (32406)

      Nokia announced a phone with a camera sensor that's 41 megapixels. -If- you can combine that sensor with the Lytro lens in a small camera assembly, you'll get sufficient resolution to be used for more than just gimmicks.

      • by ceoyoyo (59147)

        Nokia is already using all those pixels for their own gimmicks... and getting a 5 MP image out.

        If you used that sensor with a light field lens array you'd end up with a camera that had truly horrible low light performance. And low light performance is probably THE thing that makes the biggest difference for typical snapshots.

  • Read thesis (Score:5, Interesting)

    by Anonymous Coward on Saturday March 10, 2012 @12:35AM (#39309353)

    For those more interested in the technology, Ren Ng's thesis is available on Lytro's website (at the bottom of the "Science Inside" page). I read much of it the other day after reading an article about the camera in the New York Times. It's a well-written thesis and explains the technology both in a few simple ways and more rigorously.

    The best explanation to me was that the microlens array is effectively reimaging the lens onto a small array of pixels under each microlens. (The microlens array is placed at the usual focal plane of the camera, and the number of microlenses is what determines the resolution.) Each pixel therefore sees only a small aperture of the lens. A small aperture gives a very large depth of field. You could just use one pixel under each microlens to create an image with a large depth of field, but you'd be throwing away a lot of light. You can be more clever, however, and reconstruct from all those small-aperture images the image at any focus. At different focuses, the light from any location is shared among multiple microlenses (i.e., it's out of focus - so it's blurred at the focal plane). However, it's not out of focus at the pixels, since remember each pixel only sees a small aperture and has a large depth of field. It's then just a matter of adding the right pixels together to create an in-focus image at any effective focal plane.
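The "adding the right pixels together" step is essentially the shift-and-add refocusing described in Ng's thesis. A toy sketch (view count, image size, and disparity are made up): build a 3x3-view light field of a single point with one pixel of parallax per view, then cancel that parallax to bring the point into focus:

```python
import numpy as np

def refocus(lf, shift):
    """Shift-and-add refocus of a 4D light field lf[u, v, y, x]:
    translate each sub-aperture view in proportion to its offset
    from the central view, then average all views. shift selects
    the synthetic focal plane."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(shift * (u - cu)))
            dx = int(round(shift * (v - cv)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# A 3x3-view light field of one point with 1 pixel of parallax per view.
U = V = 3
lf = np.zeros((U, V, 9, 9))
for u in range(U):
    for v in range(V):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

defocused = refocus(lf, shift=0)   # parallax left in: point smeared 3x3
focused = refocus(lf, shift=-1)    # parallax cancelled: one sharp point
```

Choosing a different shift aligns objects at a different depth instead, which is exactly the "refocus after the fact" behavior of the camera.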

  • by SuperBanana (662181) on Saturday March 10, 2012 @12:41AM (#39309363)

    Many people have noticed in the online samples that you can't focus clearly on far-away objects; they sorta get sharper, but not anywhere as sharp as foreground details. So that awesome picture of you on top of a mountain? You'll be nice and sharp, but the background never will be. Kind of spoils it, when the whole point is to be able to click and have one or the other be super sharp, right?

    Also, it needs absurd amounts of light according to Gizmodo, or image noise becomes horrendous. Which is not surprising, given how hard Nikon and Canon are pushing the edge of what's possible in their sensors + image processors, and how small the individual lenses are. Great for sunny places. Not so much for indoors.

  • Gimmick (Score:2, Insightful)

    by kelemvor4 (1980226)
    Seems like it has gimmick written all over it to me. It's got some optical problems like purple fringe (for example on the shot with the cup of water in the foreground), unacceptably low resolution, and it requires software like flash to view the photos. If they could get the thing into an SLR body so you could put decent optics on the front, and beef up the sensors so the final output resolution could be 10 or more megapixels, then they might have something. As it is, this reminds me of the 3d cameras t
  • When will we get a light field display? It's only logical that you'd need both to truly leverage this invention. And seeing as you can get displays with >300 dpi resolution today, it's only a matter of time before displays have enough resolution, and computers have enough processing power to display light field images.

    • by ceoyoyo (59147)

      Um, what exactly would it do?

      • by mosb1000 (710161)

        It would project a light field, rather than a flat image.

        • All screens project a light field -- it's just the light field of a flat surface.

          As to recreating the original light field: in theory it's not an impossible task, but it's faaaaaaaaaaaaaaaaaar beyond current technology, as it would involve a near-infinite array of light-producing elements firing light in very specific directions. Photography and vision are the summation of light from particular sources following various paths -- you need to recreate a near-infinity of paths to create an accurate recreatio

      • It would emit directed light, and thereby be capable of displaying different
        images at different viewing angles simultaneously.

        That's great for real fake 3D, where your viewing angle on objects in the image
        actually changes, as opposed to today's fake fake 3D, which is just fixed perspective
        stereoscopy.

        And this even works for more than one viewer. And without special glasses.

        In practice, this means that it needs very fine structures for those directed emitters.
        Also, the visible resolution will be
        • by mosb1000 (710161)

          That's great for real fake 3D

          I think there's a point where it's fair to start calling it "real" 3D, since the projected image would be indistinguishable from an actual 3D object.

          • I agree. But it's not there yet.

            The above concept of "one image per viewing angle" simply isn't enough to be indistinguishable
            from reality; that's why I kept one "fake" in the name.

            For a real holographic display, image content has to depend on viewing distance as well,
            to adapt to a changing field of view.

            And there's also still the issue of a forced focus point: image content that changes according
            to viewing angle and viewing distance still doesn't adapt to the actual distance I'm looking at.

            It's a
            • by mosb1000 (710161)

              A light field display would construct an actual light field like the one your eyes normally interact with. It would have depth of field, and the projected image would appear to change as you move around it, and as you move closer to it. You have to remember that in the physical world, there is only a light field; your eyes intercept it and process it to generate those other effects (excepting depth of field, which is generated physically by the lens in your eye).

  • Better 3d? (Score:5, Interesting)

    by dmomo (256005) on Saturday March 10, 2012 @01:29AM (#39309547) Homepage

    Part of what makes 3d movies look fake is that the viewer cannot focus on anything other than what is "in focus" as per the Director. I imagine it would be possible to use this technology paired with some sort of eye tracking tech (which also exists). This would move us a step closer toward a more realistic immersion.

    • Re:Better 3d? (Score:4, Insightful)

      by Gordo_1 (256312) on Saturday March 10, 2012 @01:44AM (#39309607)

      Ok, but then you have to watch it alone because while I'm focusing on the tree in the background, you're focusing on the foreground...

      • by dmomo (256005)

        Yeah. That crossed my mind. It might be more compelling for gaming, then. I could also envision a room where a bunch of people watch the same movie, each wearing a personal set of glasses with internal screens.

    • by Osgeld (1900440)

      How is that different from 2D? You can't ignore what the director wants you to see and actually focus on the distant object in the background; it's a basic principle of camera optics.

      • by dmomo (256005)

        It isn't different. That's my point. This camera would allow whatever you are looking at to be in focus.

  • by Gordo_1 (256312) on Saturday March 10, 2012 @02:18AM (#39309735)

    Let's list some of the significant drawbacks of this first version which we can realistically chalk up as a technology demo:
    * The camera is oddly shaped and appears awkward to use. If form follows function, I'm not sure what the function is.
    * Cheap last-gen LCD display.
    * Output is only 1MP (1024x1024).
    * Sensor is really small
    * Lens is cheap
    * Limited depth of field
    * Raw light fields have to be sent to Lytro server for processing
    * Only a handful of focus points can be chosen
    * In-focus range is limited
    * Photos are converted into lame Flash animations

    Now, let's re-imagine this as a serious photographer's tool a few years down the road:
    * It's a DSLR with real interchangeable lenses and a huge hi-res LCD display
    * Let's say the camera can even magically switch from "classic" to light field mode with a toggle switch.
    * Huge full frame sensor allowing light field output at 6+MP with high dynamic range and low noise at high ISOs
    * Depth of field choices much broader and limited only by lens chosen
    * Effective focus range is much improved
    * Raw light field processing can be done on your local computer, allowing precise control over the number and position of focus layers. Alternatively, assuming the processing speed is available, perhaps focus points can be chosen in real time within the finished image blob.
    * Output as multiple jpegs, flash or HTML5, etc.

    Now what?
    Well, you still have these limitations if you use light fields:
    * You're basically giving up some amount of image resolution for the ability to focus after the fact. DSLRs and even consumer cameras already have excellent auto-focus modes that when used properly generally nail focus in decent light. It's not the biggest or even second biggest problem I see in photos online. Bad composition and inadequate lighting are generally much bigger problems.
    * If you chose the wrong focus point when shooting, sure you can fix your mistake, but if focus is off due to camera shake or motion blur, you're SOL.
    * It's basically useless in images with large depths of field (think large landscapes where everything is essentially in focus)
    * Makes no difference on a printed page, except you have one more tweak available during editing.
    * Still gimmicky. After everyone has played around with a few of these photos interactively, they're bored and move on.

    • by ceoyoyo (59147)

      "* Huge full frame sensor allowing light field output at 6+MP with high dynamic range and low noise at high ISOs"

      Or I can get a regular huge full frame SLR with higher resolution and much better low noise performance. And no, switching the sensor into regular or light field mode isn't going to be easy.

    • by quintesse (654840)

      As a hobbyist photographer (you know, the kind that has spent an inordinate amount of money on equipment, can take some decent pictures, but has found out that he has no real talent) I concur completely. A really nice gimmick, but I hardly see any practical value.

  • by pavon (30274) on Saturday March 10, 2012 @02:18AM (#39309737)

    The Lytro camera has special optics that basically separate the light entering the lens by the angle it arrives from. Knowing the rough angle of the light rays allows you to combine them in different ways to change the focal plane of the image, as opposed to a traditional camera, in which they are permanently combined as the CCD captures the light at a set focal plane. This comes with a trade-off: light from each set of angles is essentially captured as a separate image, giving you, say, 12x12 sub-images on the CCD, so the resolution of each sub-image is much lower than you would get using the full CCD for one image.
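The sub-image splitting described above can be sketched with a reshape. A toy NumPy version, assuming a perfectly aligned square grid of n x n pixels under each microlens (real sensors need calibration for lens rotation, vignetting, and hexagonal packing, which this ignores):

```python
import numpy as np

def subaperture_views(raw, n=12):
    """Split a raw plenoptic sensor image into n*n sub-aperture views.

    raw: 2-D array whose height and width are multiples of n; each
    n x n block of pixels sits under one microlens. Returns an array
    of shape (n, n, H//n, W//n): one low-resolution view of the scene
    per angular position on the main lens.
    """
    H, W = raw.shape
    blocks = raw.reshape(H // n, n, W // n, n)
    # Axis order (u, v, y, x): pixel (u, v) under every microlens,
    # collected across the whole sensor, forms one coherent
    # small-aperture image of the scene.
    return blocks.transpose(1, 3, 0, 2)
```

Note how the resolution trade-off the comment describes falls straight out of the shapes: an H x W sensor yields n*n views of only (H/n) x (W/n) pixels each.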

    Since Ren Ng published his seminal paper making the connection between refocusing a light field and the Fourier slice theorem, there has been additional work showing that you can achieve the same thing using a simple filter, rather than a whole new set of optics. The benefit of this is that it is cheaper to manufacture, and you can easily switch out the filter to adjust the trade-off between image resolution and depth of field, but it comes with the additional cost of a slight loss of total light (due to the filter). Here is one of those papers [umd.edu].

    There are two basic approaches. The first heterodynes the light (a filter acts as multiplication) such that light entering at different angles is shifted to different frequencies. So with this approach you get "sub-images" in the frequency domain rather than the spatial domain, which can be separated and recombined in software. The result and trade-offs are essentially the same, but with simpler hardware.

    The other is based on refocusing as a deconvolution operation, but the filter modifies the point-spread function of the camera such that its frequency response doesn't have any zeros, so you don't lose data at those frequencies like you would with a simple rectangular aperture.
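As a rough illustration of that refocusing-as-deconvolution idea, here's a minimal Wiener-style inverse filter in NumPy. The circular-convolution model and the regularization constant `eps` are my simplifications; the coded apertures in those papers exist precisely so the PSF's frequency response has no zeros and this division stays well-conditioned:

```python
import numpy as np

def wiener_deblur(blurred, psf, eps=1e-3):
    """Toy frequency-domain deconvolution with a known point-spread
    function. eps regularizes frequencies where the PSF response is
    weak, trading a little bias for stability."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener-style inverse: conj(H) / (|H|^2 + eps)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

A plain rectangular aperture has zeros in |H|, and no choice of `eps` brings back the frequencies lost there — which is exactly the motivation for the modified PSF.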

  • by planckscale (579258) on Saturday March 10, 2012 @03:15AM (#39309917) Journal
    Imagine when cameras capture an entire event in its full 3D, life-like quality. So you have a dome of some sort; it has millions of high-res cameras with the full Lytro effect, kind of like a retina. And you can almost go back in time when you stick your head in the flexible LED chamber, complete with eye-movement trackers and brain-control motive predictors. Or just use glasses and 3D earphones. Things will focus as you look at them. You could even insert keystrokes into a virtual terminal embedded in the stream. Not unlike Tron or something, because you pull all senses into the stream somehow, in any manner you know of, to play back at some point when the technology can catch up. I've been tripping on how cameras are kind of like time portals -- albeit only into the past -- but the way they catch "reality" and hold it is, to me, a little creepy.
  • by Khyber (864651) <techkitsune@gmail.com> on Saturday March 10, 2012 @03:24AM (#39309943) Homepage Journal

    So, perfect for capturing images of stuff like textures for games.

  • by Animats (122034) on Saturday March 10, 2012 @04:24AM (#39310173) Homepage

    It's now possible to make imagers with so many pixels that finding some way to use them is a problem. This is one way. Another way is to have more colors. There's a camera with around 100 different color filters, which is interesting for some scientific applications and for machine vision. 3 color sensing is a human eye thing. Some birds have 22 different spectral sensors, which is useful in picking targets through foliage. There's also interest in having more dynamic range, so that you don't have to worry about exposure or lighting as much.

    The next thing may be image polarization, by having multiple polarizers per picture. This would be useful in eliminating glare after the fact.

  • An alternative technique that could be done with a regular DSLR (with appropriate firmware, of course) could sweep through the full range of focus at a wide aperture to generate a depth map for the image (not necessarily an easy thing to do accurately, but possible with a few tricks). You then take the actual image with a small aperture to maximize depth of field. Then you could focus the image however you like in post-production.
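A crude sketch of that last step — refocusing an all-in-focus image in post, given a depth map. This is purely illustrative: the layer quantization, the Gaussian-blur defocus model, and the `scipy` dependency are my assumptions, and it ignores occlusion edges entirely:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_refocus(sharp, depth, focus_depth, strength=2.0):
    """Fake a chosen focal plane from an all-in-focus image (sharp)
    and a per-pixel depth map (both H x W). Pixels whose depth is far
    from focus_depth get a progressively blurrier copy -- a crude
    stand-in for lens defocus."""
    dq = np.round(depth).astype(int)      # quantize depth into layers
    out = np.empty_like(sharp)
    for d in np.unique(dq):
        sigma = strength * abs(d - focus_depth)
        layer = gaussian_filter(sharp, sigma) if sigma > 0 else sharp
        out[dq == d] = layer[dq == d]     # copy each layer's pixels
    return out
```

Unlike a true light field, this only ever blurs — it can't recover detail the small-aperture exposure didn't capture, which is why the depth-map pass happens at a wide aperture in the scheme above.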
