Graphics Software Technology

High-Speed Video Using a Dense Camera Array 123

karvind writes "Researchers at Stanford have demonstrated multi-thousand frame-per-second (fps) video using a dense array of cheap 30fps CMOS image sensors. A benefit of using a camera array to capture high speed video is that we can scale to higher speeds by simply adding more cameras. Even at extremely high frame rates, our array architecture supports continuous streaming to disk from all of the cameras. Now we know where to use 100TB tape drives and what to expect in the next sci-fi movie."
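The staggered-trigger idea in the summary can be sketched in a few lines of Python (my own illustration of the general scheme, not code from the Stanford project; the helper names are hypothetical): n cameras at a base rate, each fired with an evenly spaced offset, interleave into an n-times-faster stream.

```python
# Staggered triggering: n cameras at base_fps, each delayed by an equal
# fraction of the frame period, interleave into an n * base_fps stream.
# Hypothetical helpers for illustration; not from the Stanford paper.

def trigger_offsets(n_cameras, base_fps):
    """Start-time offset in seconds for each camera's first frame."""
    frame_period = 1.0 / base_fps
    return [i * frame_period / n_cameras for i in range(n_cameras)]

def effective_fps(n_cameras, base_fps):
    """Combined frame rate of the interleaved stream."""
    return n_cameras * base_fps

offsets = trigger_offsets(52, 30)
print(effective_fps(52, 30))       # 1560 frames per second overall
print(round(offsets[1] * 1e3, 3))  # 0.641 ms between successive frames
```

With the 52 cameras the videos in the article use, this gives roughly 1560 fps; scaling to "multi-thousand fps" is just a matter of adding cameras.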
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Interesting study (Score:5, Interesting)

    by Omniscientist ( 806841 ) * <matt@ba d e cho.com> on Wednesday December 29, 2004 @07:21AM (#11207746) Homepage
    This is a very interesting development. If you watch the movies (especially the one with the balloon popping, I think it's the third movie), you will see that this is an extremely accurate capture of the event. I would be interested to see how this could present itself in a regular consumer atmosphere...multiple cameras would not exactly make the cut. But yes, it does give a good idea of how to use the 100TB tape drives
    • But you could cheaply get intermediate quality video. The multiple CMOS sensors give a rolling image (look at the guys' shoulders and you'll notice the rotation from the multiple POVs) and give slow-mo without (as another poster points out) having a quick enough shutter time for high-speed analysis. But these 30fps things that make up webcams are usually pretty low quality.

      With these you can get more detail than the shitty webcams without shelling out on high end equipment. This has remarkably few uses, but with t
      • Sorry to reply to my own post, but as an e.g. - put two CMOS sensors in the same housing with this software and you've suddenly doubled the fps on your low-end camera. With the right engineering you wouldn't notice the two adjacent POVs.

        These things are cheap as hell, it's much easier to double them up than produce one of twice the quality.
        • Am I mistaken, or is the use of still cameras in a series or an array the next logical extension of the still camera arrays used by PDI - made famous with the circular shots in The Matrix?

          Now they are CMOS, instead of plate cameras...

          • As I recall, the cameras used in the filming of the Matrix were pretty ordinary looking 35mm film cameras, not plate cameras.

            But yeah, this brings "bullet-time" to the masses. Way cool.

      • The trend would probably be using a single lens and splitting the image across multiple sensors. This was actually seen as a big deal when it was first done for the film and the viewfinder in cameras, but now it's relatively simple.
    • Re:Interesting study (Score:3, Informative)

      by dsginter ( 104154 )
      I would be interested to see how this could present itself in a regular consumer atmosphere...

      It is funny that you used the word "atmosphere" but that might be one of the applications: combustion research.

      A friend of mine works at General Motors doing combustion chamber research. With a high-speed camera, he films the combustion in what amounts to an engine with a glass block and cylinder head. They currently film at 900fps with an industrial film-based camera. This is quite expen
      • Diesel is the next big thing, but we've got to reduce emissions on those before they become widely accepted.

        You really think a fossil fuel will be the "next big thing"? And what's new about Diesel anyway? It's been around for generations...
        • Diesel does not have to be a fossil fuel. If my understanding is correct, biodiesel may be used as a direct replacement for petroleum-based diesel.

          It has temperature issues, turning solid at low temperatures, so a mixture with petroleum-based diesel can help keep it liquid at colder temperatures.

          Find out more at http://www.soygold.com/biodiesel.htm [soygold.com]
          and
          http://www.biodieselamerica.org/biosite/index.php?id=3,0,0,1,0,0

          A pure biodiesel that can handle cold would be a nice breakthrough as would a nice low-c
        • Diesel engines are trivially easy to convert to non-fossil fuels, for example vegetable oil. The Diesel engine was originally designed to run on fuels other than diesel oil, which makes it so popular in third world countries.
    • This was captured with my digital 'sideline' camera as I captured the individual frames on film.
      http://www.gotsheep.com/~hirsch/Photos/DCP_0492_320.jpg [gotsheep.com]

      I've found the digital file but not the film that I scanned it from - blowing up eggs is MUCH more fun.

      http://www.gotsheep.com/~hirsch/Photos/EGG_3_crop_RPD_PPost_lut.jpg [gotsheep.com]

      (Slashdot is doing weird things to the links - if %20's show up in them, just remove the spaces)
    • Consumers, eh... So the 100TB would fill up with 1000fps pr0n?
  • by LiquidCoooled ( 634315 ) on Wednesday December 29, 2004 @07:21AM (#11207748) Homepage Journal
    Nothing to see here.

    Quite convenient for a story about a slashdotted camera.
  • Questions (Score:3, Interesting)

    by Anonymous Coward on Wednesday December 29, 2004 @07:33AM (#11207781)
    How do they put all the footage together in the correct 'order', that is to say, where each frame sits in the sequence?

    How can they be sure that none of the cameras capture the same instant of the action?
    • Re:Questions (Score:3, Informative)

      by conteXXt ( 249905 )
      time pulse code (SMPTE timecode, or something like that).

      Same way they sync audio and video in sound studios.

      The video track (or a separate track) carries a pulse carrier. The audio track syncs to that.
    • by eclectro ( 227083 ) on Wednesday December 29, 2004 @07:52AM (#11207843)
      How do they put all the footage together in the correct 'order', that is to say where each frame is in sequence. How can they be sure that none of the cameras capture the same instant of the action?

      You know, there might be a reason why those people are at Stanford [stanford.edu].

    • Re:Questions (Score:3, Insightful)

      by Anonymous Coward
      If you were to capture at 1000fps, then each camera would ideally have a sample window within that 1/1000 sec.

      Are the CMOS sensors designed for 30fps sensitive enough to capture a picture without a long exposure time? I.e., can they handle a 1/1000 sec exposure time without producing a very dark image that is drowned by the noise floor?

      Would the sensor analog circuits have fast enough rise/fall times to have the bandwidth(1) for that kind of frequency response? The overall system bandwidth & sampling rate lim
      • Re:Questions (Score:1, Informative)

        by Anonymous Coward
        The frame rate and exposure time aren't necessarily related. Ideally, exposure time should be 0 (otherwise you'll get motion blur), but for obvious physical reasons it needs to be non-zero. Having overlapping exposures probably isn't that big of a deal as long as you can compensate for motion blur by correlating successive frames.

        In any case, the paper on the site has the following details about their hardware:

        Camera exposure times can be set in multiples of .205msec down to a minimum of .205msec. Timing a
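Taking the 0.205 ms exposure figure quoted above at face value, a quick back-of-envelope check (my arithmetic, not the paper's) shows how exposure time bounds the achievable rate:

```python
# Assumes the 0.205 ms minimum exposure quoted above; the rest is
# back-of-envelope arithmetic, not figures from the paper.
min_exposure = 0.205e-3              # seconds
cams, base_fps = 52, 30
stagger = 1.0 / (cams * base_fps)    # time between successive staggered frames

print(round(stagger * 1e3, 3))       # 0.641 ms between frames at 52 cameras
print(min_exposure < stagger)        # True: exposures need not overlap yet
print(round(1.0 / min_exposure))     # ~4878 fps before overlap is forced
```

That ceiling of roughly 5,000 fps matches the "upper limit on frame rate is about 5K/sec due to the integration time" estimate made further down the thread.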
    • This is exactly why their work is so cool. They've compensated both for the time shift between cameras and the location shift between cameras.
    • I'm more confused about how the video of the event could possibly be viewable. They're basically interlacing 52 frames every 1/30th of a second. How is it that you aren't getting a blur when each 1/30th of a second is taped from 52 different locations? Are the cameras just really really small and placed really far away?
    • Massive forced child labor workshops.
  • Question... (Score:3, Insightful)

    by Jace of Fuse! ( 72042 ) on Wednesday December 29, 2004 @07:34AM (#11207787) Homepage
    It's not that I don't think this is cool, because I can see all kinds of uses for this sort of thing.

    But my question is this...

    Are there any uses for high speed video capture that existing technologies weren't already well suited for, or is this just a cheaper and more readily available option?
    • Re:Question... (Score:5, Interesting)

      by LiquidCoooled ( 634315 ) on Wednesday December 29, 2004 @08:06AM (#11207884) Homepage Journal
      I don't think so.
      The more I look at this, the more I think they are making life difficult for themselves, and the resultant image quality shows.

      Since making my first postings on this discussion, I decided to have a look around at how the professionals handle high speed photography and came up with some nice results.

      There's a company called Photron [photron.com] that has a range of single digital cameras capable of megapixel images at 2000fps.

      In their gallery [photron.com], they even have an example of a water-filled balloon popping, and tbh it looks a lot better than this multi-camera version.

      Agreed, this is a way to do it on the cheap, but because of the spatial issues and timing complexities, it may be more trouble than it's worth, and it may well be wiser to buy a camera from the professionals.
      • Re:Question... (Score:2, Interesting)

        by Anonymous Coward
        So do we have any wagers for how much one of these puppies goes for? 5 digits?

        One tradeoff is that these high speed cameras are typically event driven - once you start them, they record onto local memory (since there is no way of bursting megapixel*kilohertz => gigabytes/second). With the camera array, it is possible to get a continuous stream. Dunno if it is worth anything to anybody though.
      • Re:Question... (Score:3, Insightful)

        Agreed, this is a way to do it on the cheap, but because of the spatial issues and timing complexities, it may be more trouble than it's worth, and it may well be wiser to buy a camera from the professionals.

        First off, that water balloon video, which is 4000fps instead of the ~1600fps camera array video, is really awesome. However, if, for some deranged scientific experiment/research, 4000fps isn't good enough, perhaps you can build an array of 52 professional 4000fps cameras to achieve a whopping 208,000fps.

          • Don't discount that it's going to be more frustrating to use an array of cameras in a scientific sense, because you've introduced a whole slew of new spatial and temporal calibration issues. You can see their uncorrected videos are really messed up - the balloon apparently popping from the wrong side, or the fan looking all wiggy. The corrected videos appear better, but if you need honest-to-god measurements based on those images, there's still significant remaining uncertainty.

          Bottom line: an array of

            • Those distortions have nothing to do with the array; it's a problem that any single cheap camera of the type they used would experience with fast-moving objects. It's explained quite clearly in the article.
              • Please, there's more uncertainty than just the distortions of cheap cameras.

                1) Uncertainty in the positions and orientations of the cameras. With 50-odd cameras interlaced in space and time, those uncertainties combine per frame, and per scanline when using cheap cameras. With a single camera, that uncertainty is limited to one camera and is uniform across all images.

                2) Uncertainty in frame synchronization. The article discusses a calibration technique to adjust for the individual snapshot latencies of each camera.

      • The more I look at this, the more I think they are making life difficult for themselves, and the resultant image quality shows.

        Yeah, they mention Photron in their paper. As nice as that camera is, it can only store a few seconds at 800x600. The system you are looking at will run till you run out of space. The paper is a well written 320kB pdf and more worth your download time than the movies themselves.

        Now, here are a few thoughts of my own. Some of the image quality problems you notice might be a sid

      • It's a cool concept - maybe not for typical high speed applications, but it might have some use in robotics for composite and stereo imaging.
    • Re:Question... (Score:3, Insightful)

      by jolshefsky ( 560014 )
      I can see this being an option for small video producers who'd like to create good-looking slow-motion video. For instance, if you're a local producer of television commercials (or an independent filmmaker who shoots on video) and would like to record something in slow motion without resorting to a stuttering sub-20fps image (from 30fps video), this might be an inexpensive alternative.

      I've always wondered how half-speed video from football games looks so damn good. I assume they're using expensive dou

      • I think that simply solving the synchronization issues (without worrying too much about the alignment or positions) would let an independent video producer do matrix-like bullet-time shots on the cheap. Forget the alignment issues -- the synchronization issues are where the value is.

        Picture a college basketball game video with a bullet-time slam dunk right in the middle of the live TV coverage! A puck-time view of a hockey slapshot, or a tip-of-the bat view of a baseball hitter. I bet sports is going t

  • by eclectro ( 227083 ) on Wednesday December 29, 2004 @07:38AM (#11207794)

    If you are hard up for disk space for this, may I suggest emailing frames to this free email account [hriders.com]

    I know it's a hack, but whatever gets the job done, right??
  • by Anonymous Coward
    will be a 2 hour long film of a pin dropping.
  • by jeif1k ( 809151 )
    First of all, the idea is as old as moving pictures: using sequentially triggered multiple cameras was the first approach for capturing motion sequences ever used. This work, using digital cameras, doesn't actually seem to do much about the problems that arise from such an arrangement.
  • I've seen the bullet hitting the tomato in slomo before..

    What makes this special?
    • by moriya ( 195881 )
      I believe the point in producing this is to show that instead of purchasing expensive high-framerate cameras with sky-high pricetags, one can use an alternate and cheaper solution built from commodity CMOS sensors. So let's try placing a couple of things into perspective.

      Say you require a camera that can record, say, 90fps. To a manufacturer of electronic parts, this can be achieved with a little bit of engineering. Basically, take 3 of those 30fps CMOS sensors, pack them together, set a uniform color correct
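The round-robin scheme this comment describes is easy to picture: frame k of the combined stream comes from camera k mod n. A toy sketch (my own hypothetical helper, not anything from the article):

```python
# Round-robin frame assignment for n staggered sensors (toy sketch).
def source_camera(frame_index, n_cameras=3):
    """Which sensor supplies frame k of the combined 90 fps stream."""
    return frame_index % n_cameras

print([source_camera(k) for k in range(9)])  # [0, 1, 2, 0, 1, 2, 0, 1, 2]
# Each sensor still fires only every 3rd frame, i.e. at its native 30 fps.
```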
  • by GrAfFiT ( 802657 ) on Wednesday December 29, 2004 @07:56AM (#11207856) Homepage
    ..ultra slow motion capture of a melting /.ed server. Hey I can see the individual /.ers GET / packets flowing through the fast ethernet port !
    • Well, that's an idea.

      Galileo made one of the first telescopes and was so worried about what he discovered with it that he didn't even publish what he found as fact, but published the possibilities as a fictional make-believe dialogue between two people.

      That didn't stop the Inquisition from condemning him to lifelong imprisonment.

      The thing is you never know where an invention will lead you, but I sure am glad ideas like yours or those who created the CMOS camera no longer lead to prison sentences.

      It does take some of
    • This is on Stanford's servers; they are just chuckling at the millions of hits per minute they are getting now. I downloaded all the videos at 100+ Kbps during the height of the /. effect.
  • Sensors (Score:3, Interesting)

    by sendorm ( 843943 ) on Wednesday December 29, 2004 @07:57AM (#11207858)
    I think "cheap 30fps CMOS image sensors" simply refers to webcams. From the quality I've seen, they might be on the order of $20 per unit, which makes the whole camera array about $1000. Also, those webcams do not produce thousands of megabytes; even at rates of 1MB/min you can get decent quality, which makes the video stream about 50MB/min.
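The commenter's storage estimate is easy to check (the ~1 MB/min per camera is their assumed figure; the 100TB tie-in is mine, riffing on the story summary):

```python
# Back-of-envelope storage math using the comment's assumed numbers.
cams = 52
per_cam_mb_per_min = 1.0                       # rough MPEG rate per webcam
total_mb_per_min = cams * per_cam_mb_per_min
print(total_mb_per_min)                        # 52.0 MB/min, ~ the comment's 50

tape_mb = 100e6                                # a "100TB tape drive", in MB
print(round(tape_mb / total_mb_per_min / 60))  # ~32051 hours of footage
```

So even continuous streaming at these webcam bitrates would take years to fill one of those drives.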
  • by KrunZ ( 247479 ) on Wednesday December 29, 2004 @07:58AM (#11207863)
    And now we know what the quality aware consumer as a minimum should expect from our beloved video producers:

    No less than 1000 fps facials.

  • Parallax issue (Score:5, Insightful)

    by GrAfFiT ( 802657 ) on Wednesday December 29, 2004 @08:09AM (#11207895) Homepage
    Instead of having an awkwardly swinging background, why wouldn't they use a set of rotating mirrors to sequentially distribute the light to the different sensors from a single entry point?
    • Re:Parallax issue (Score:2, Insightful)

      by Narphorium ( 667794 )
      I was thinking of something very similar, except instead of rotating the mirrors, why not just set them up to provide some sort of kaleidoscope effect?

      Either way, it's still some pretty cool tech.

    • by Anonymous Custard ( 587661 ) on Wednesday December 29, 2004 @10:06AM (#11208411) Homepage Journal
      Instead of having an awkwardly swinging background, why wouldn't they use a set of rotating mirrors to sequentially distribute the light to the different sensors from a single entry point?

      Duh! Because obviously it'd take some kind of super-genius to reconfigurize the franglehum reflectus so as to porta-pride the whoozimotron without disrupting the stratus field generator.

    • Because somebody else already came up with it and patented it. Don't have the patent number handy, but, IIRC, it's a company that does contract work for NASA.
    • Has anyone thought about the fact that cheap CMOS cameras have a particularly long charge time, meaning that by the time you get enough energy onto the CCD to render an image, the object has undoubtedly moved enough to produce a blur (and more than the desired amount of time has probably elapsed before you go on to the next image)? Either an extremely bright light will be needed, or you will get significantly sub-par image quality.
      • IIRC, a CMOS is not a CCD sensor, no?
        • Not that I'm an expert, but CCD is 'charge-coupled device' and CMOS is 'complementary metal oxide semiconductor'. Neither is mutually exclusive, as you can make a CCD out of CMOS. In any event, you probably get the gist of my post.
      • Obviously they have, that was one of the points of their research. Their cameras are capable of taking .205 ms exposures. The trick is that no single camera is ever run faster than 30 fps, meaning the recovery times between frames remain at the same duration as the manufacturer intended.

        Yes, taking .205 ms exposures yields pretty poor quality without sufficient light, so the quick answer is "use plenty of light." Since it's pretty much a specialty item (Sony isn't likely going to offer these in a Handy

    • That's a feature, not a bug! It gives 3-D perspective to the observer.


      Cf: Burning Man photos [burningmanopera.org]

    • why wouldn't they use a set of rotating mirrors to sequentially distribute the light to the different sensors from a single entry point ?

      The big ugly array looks like the best solution, but I'd love to see it done your way. I'd use a set of beam splitters, you know, glass set at 45 degrees, but this has some of the same problems the array does, and you would need lots and lots of light to get a decent image. Early high speed cameras at Los Alamos used film rolled on barrels, each frame with its own len

    • It wouldn't work, because at any one instant in time, several of the cameras are taking a picture at once. This is why in the fan video [stanford.edu], the blades are warped - they are moving as the image is being scanned from the sensor.

      This means that to get an image from a single moment in time, you need to take strips from all the cameras that are taking a picture at one time and splice them together.

      So the difficulty with a rotating mirror system would be splitting the light between several cameras at once. Also, t
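The strip-splicing idea described above can be sketched with a toy NumPy example (my own illustration, not the Stanford pipeline): assume each staggered camera's rolling shutter exposed a different band of rows at the target instant, and stitch those bands into a single frame.

```python
import numpy as np

# Toy strip-splice: camera i is assumed to have exposed row band i at
# the chosen instant (a simplification; the real geometry is messier).
def splice_instant(frames, rows_per_band):
    """frames: list of (H, W) arrays, one per camera, ordered so that
    camera i exposed rows [i*rows_per_band, (i+1)*rows_per_band)."""
    out = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        r0 = i * rows_per_band
        out[r0:r0 + rows_per_band] = frame[r0:r0 + rows_per_band]
    return out

# Four 8x8 "cameras", each filled with its own index for visibility.
frames = [np.full((8, 8), i) for i in range(4)]
spliced = splice_instant(frames, 2)
print(spliced[:, 0])   # [0 0 1 1 2 2 3 3]: two rows from each camera
```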
  • anyone have any idea if this technology will be applicable for robotics, and studying organic motion? i know high speed cameras have been used in the past to study insect motion and stuff, any idea if this will aid in that area of research, or are current cameras already fast enough?
    • These cameras aren't faster than "normal" high speed cameras - it's just like they're built using Walmart webcams.
      Maybe they're cheaper, but I doubt they have the same quality as the real thing.
      1kfps is nice but useless if the quality sucks.
  • ..but why bother (Score:3, Insightful)

    by Anonymous Coward on Wednesday December 29, 2004 @08:19AM (#11207918)
    Reading between the lines, they seem to have custom hardware and (maybe?) an MPEG encoder behind each camera, and a huge amount of software and general hassle to get an unwieldy and inflexible system to work at all. The upper limit on frame rate is about 5K/sec due to the integration time, but they would need about 160 cameras to achieve this continuously, and a hell of a lot of processing to produce sensible output. A lot of effort for something that isn't actually very useful.
    For the same or less money/effort I have no doubt they could have either bought a purpose-made high-speed cam, or built one using something like this chip [micron.com] from Micron, which costs less than $2K and does 500 full-frame megapixel images per second, faster for partial frames. One neat feature is that it can effectively image individual lines at arbitrary places in the frame at 500,000 per second - I'm sure these academic types could do some interesting interpolation tricks with this to synthesise full-frame-like images at pretty high rates instead of messing with a system that doesn't have any realistic practical use.

    • Now think what'll happen when they put some of these babies into that beowulfish cluster.. (other than complication of computation).

    • No, they are just using nice webcams, not the $15 USB ones. From the article, they state that the cameras are Firewire, not USB, have hardware MPEG encoding, and lots of customization, like color calibration and shutter speed. By "consumer webcams", they don't mean Staples bargain-bin cams, they mean $150-per-camera rich person cams.
    • > This chip [micron.com] from Micron, costs less than $2K and does 500 full-frame megapixel images per second.

      Now imagine a Beowulf cluster of these things!

      Seriously now, this made me think of a couple of things. One is that this technique, which a lot of you say is worthless, is actually adaptable, so nobody's stopping you from using better sensors.

      For example, if you use a sensor with a "snapshot" shutter, one that records the whole frame at the same instant, and not the über-cheap ones they used with "
  • Too slow .. (Score:5, Funny)

    by BESTouff ( 531293 ) on Wednesday December 29, 2004 @08:25AM (#11207938)
    I downloaded their sample videos, but they keep playing really slowly. I'm afraid their technology isn't quite ready yet..
  • I think it was at the 1994 Belmont Stakes I saw my dad tie two Nikon cameras together. Each shot at 6 fps (which was pretty good back then) for a total of 12 fps. Nice to see the researchers are picking up on the ideas of the old pros.

    A more recent application is the "bullet time" developed for "The Matrix" movies.

  • This is a neat tool for the amateur scientist who can't afford hundred-thousand-dollar high speed cameras for doing research. Unfortunately, it is just a hack, and the constantly shifting/rolling perspective makes it impractical for research. The builders might consider stacking the camera units so that the lens apertures are closer to the centerline, since a cm or two in focal length won't distort the resultant video as much as a few degrees of divergence.

    Another thought which would make this both competit
  • even more ridiculously drawn-out slow-motion scenes than in Alexander?

  • Judging from the images, they could have benefitted from use of evacuated glass bulbs containing resistance-heated tungsten filaments, arrayed in quantity such that the pictures aren't so *damned dark*. (They can afford 52 CMOS sensors, but where's the friggin lighting?)
  • I can see where this might be developed into some interesting tech but I think they need to come up with a way to overcome the slightly shifted perspective problem. The moving background is interesting but ultimately distracting.

    If they were to channel the optics through a single lens somehow and then divide the light among the many cameras, they'd come up with something much more seamless. I think that would be really REALLY expensive and maybe even impossible. Another possibility would be to create a
    • A spinning mirror behind a lens with all of the cameras trained on the mirror, and synced up with the rate of spin... would allow plenty of cameras in a ring around the mirror, and could be recorded nice and fast, limited only by the speed of the mirror.
    • A big part of the research of the group is to come up with the mathematics to create a "constant" image. Right now I think they're using simple perspective projections to make objects in the plane of the balloon appear in focus.

      I think that it is possible to make objects in a particular depth plane appear non-shifty (even from the same set of sample data). Making the entire background non-shifty would be a matter of properly segmenting the video so that various regions can be mapped to the right depth pl
  • "Now we know where to use 100TB tape drives and what to expect in the next sci-fi movie."

    It will be a galaxy-spanning space epic about the disaster that befalls the new purser on the Star Galleon "WangChung" and his desperate fight to defeat the evil thing that made the space-virus that turns his shipmates into zombies and save the planet Zorkon-9 from a Terrible Fate. The working title is "Faster Wolfenstein! Kill, Kill!", and early reports say that Dave Callahan has been attached as scriptwriter.

  • Technology sufficiently advanced to capture the finer details of my cumshots.

    The women should be so lucky.
  • One reason you haven't had camera arrays capture your body movements and translate them into 3d for cool fighting video games is that the frame rate on cameras was too low. You'd get blurs in frames using a 30fps camera. I wonder if you still get blurs, or if you get an exact picture of where someone is. Street Fighter where you actually punch and dodge would be nice, or some medieval sword game.
  • by peter303 ( 12292 ) on Wednesday December 29, 2004 @11:16AM (#11209032)
    California Governor Leland Stanford employed Eadweard Muybridge to settle a bet over whether a galloping horse ever has all four feet off the ground. Muybridge took the first motion picture by chaining 16 cameras together. The horse farm where this experiment took place is tucked away in a corner of the Stanford campus, which was founded ten years later.
  • Um, hello people! There's been prior art [msn.com] here...
  • FYI, I'm not affiliated with the manufacturer, but I do operate this kind of camera as a part time job. Hopefully it's OK for me to post to this thread, since I'm too far away (Finland) from most of you to sell my services.

    Citius Imaging [citiusimaging.com] manufactures affordable digital high speed cameras. AFAIK, you can get one for under 15000 euros.

    Some sample videos which I have shot can be found here [it-line.fi].

  • It's not a *new* idea. . .
    (think Muybridge)
  • Look at the movie of the fan that hasn't been corrected for CCD shutter trigger sequence. It's caused by the particular order in which the shutters are triggered in the array. If you trigger in a raster pattern (top to bottom, left to right) this distortion can crop up. As the triggered raster lines move down the shutter array, one side of the fan is moving up and one side is moving down. The side which is moving down is moving more slowly in relation to the triggering array than the side which moves up. Th
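The skew this comment describes comes from each scanline being sampled at a slightly different time. A toy model (my own numbers and helper, purely illustrative) shows the size of the effect:

```python
# Rolling-shutter skew (toy model): a vertical edge moving right at
# speed_px_per_s appears sheared because row y is read at time
# y * line_time rather than all at once.
def apparent_x(true_x, row, speed_px_per_s, line_time):
    return true_x + speed_px_per_s * row * line_time

# Assume 480 rows read over a 1/30 s frame and an edge moving 900 px/s.
line_time = (1.0 / 30) / 480
skew = apparent_x(0.0, 479, 900.0, line_time) - apparent_x(0.0, 0, 900.0, line_time)
print(round(skew, 2))   # ~29.94 px of shear from top row to bottom row
```

Blades moving with the readout direction get compressed and blades moving against it get stretched, which is exactly the warping visible in the uncorrected fan video.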
  • by Animaether ( 411575 ) on Wednesday December 29, 2004 @04:51PM (#11212790) Journal
    Sure, it's a lot more expensive, but there are dedicated camera systems that'll do a million frames per second - and more.

    One of the bigger problems, especially with this 'array', has been noted above: exposure time.
    This might be correctable post-shooting, though. As each frame's exposure will overlap the next, whatever is similar in both could be presumed a no-motion area. Gets quite tricky, though.

    And of course the array posted about has parallax issues, etc. etc.

    Here's a fun high-end-ish camera :
    http://www.cordin.com/productsie.html

    The 510 at 25,000,000 fps for example. Only captures 48 frames, but that should be enough for something fun...
    Light travels at ~300,000,000m/s
    In the delta between frames*, light should thus travel 12 meters.
    Over 48 frames, it should travel 576 meters.

    In other words... if you set this camera up, hooked the shutter to a flash so that the flash fires the exact moment the camera starts its run, then you should be able to see the light travel down, say, a hallway.
    Better yet...if the flash is short enough, you should see a 'shelled sphere' sort of shape pass through the hallway, and bounced light bounce off the walls to other objects where the direct light from the flash wouldn't reach.

    Can't say I've seen any real-life animations of this, though. There's a few temporal raytracers that can do this.

    * again: exposure time means there's some blurring. You don't take a picture of a single moment in time. If you did, you would likely get no picture at all as no photon / electron / film-state change would occur to be recorded.
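The light-travel numbers above check out (the camera specs are the commenter's; the arithmetic below is mine):

```python
# Distance light covers between frames of a 25 Mfps camera.
fps = 25_000_000
c = 3.0e8                 # m/s, approximate speed of light
per_frame = c / fps       # metres per inter-frame interval
print(per_frame)          # 12.0
print(per_frame * 48)     # 576.0 over the full 48-frame burst
```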
  • Does anyone know of camera units that can auto-scan a given area, like a playground, and generate images of various sections of the area? I believe one would have to mechanically move the camera to aim at a section, click, and move on to the next.
  • by bennettw ( 844546 ) on Wednesday December 29, 2004 @05:33PM (#11213171)
    I'm glad to see so much discussion of our (the Stanford Graphics Lab) work here! After reading through the discussion, I have a couple points that I'd like to make.

    First, this work is part of a larger research effort. In the past several years, cameras have become cheap, commodity devices, and you still get more processing power for the buck every year. I designed the Stanford Multiple Camera Array (http://graphics.stanford.edu/projects/array [stanford.edu]) not to be a high-speed camera, but to be a research tool for exploring the potential of large numbers of cheap image sensors and plentiful processing. High-speed video is one example of high-performance imaging using an array of cameras. We have also used our array for synthetic aperture photography, using many cameras to simulate a camera with a very large aperture. Such a camera has a very narrow depth of field, a property we exploit to look through partially occluding foreground objects like foliage. We are interested in view interpolation (Matrix-like effects, but with user control over the virtual camera viewpoint), too. If you want to learn more about the array and these applications, check out the links to our papers and my dissertation on the camera array website.

    About the high-speed video work in particular, there are plenty of commercial high-speed cameras that run at higher frame rates than our camera array. If you want a high-speed video camera, I recommend buying one of them. Using an array of cheap cameras has its disadvantages. You have to geometrically and radiometrically calibrate the data from all the different sensors, and in our case, we had to deal with the electronic rolling shutter. One benefit of this work for us was developing accurate and automatic (very important for 100 cameras) calibration methods for our array. An interesting property of the camera array approach is that parallel compression reduces the bandwidth so we can stream continuously. By contrast, as frame rates increase, most high-speed cameras are limited to recording durations that will fit in memory at the camera, usually well under one minute. That said, one could certainly design architectures to compress high-speed video in real-time.

    What's most interesting to me about the high speed work is combining it with other multiple camera methods. One example is spatiotemporal view interpolation--capturing a bunch of images of a scene from different positions and times, then generating new views from positions and times not in the captured data. Think Matrix again, but with user control over the virtual camera view position and time. While the BulletTime setup from Manex captured one specific space-time camera trajectory, my goal is to capture images in a way that would let us create many different virtual camera paths later on. Traditional view interpolation methods use arrays of cameras synchronized to trigger simultaneously so they can reason about shape of the "frozen" scene, then infer how the scene is moving. In my thesis, I discuss how using the high-speed approach of staggered trigger times increases our temporal sampling resolution (effective frame rate) and can enable simpler interpolation methods. The interpolation algorithm I describe is also exactly the correction needed to eliminate the jitter due to parallax in the high-speed video sequences.

    I've described just a few of the applications we've investigated using our camera array, but we hope this is just the tip of the iceberg. We're hard at work on new uses for the cameras, so stay tuned.

  • I couldn't help but notice that this looks like bullet time shot from a circular array, which gave me ideas about improved "3D" recording. Of course the left-right anthropic 3D model is still cool for human viewing (even that might benefit from recording more frames/sec and multiple angles for display). I suggest the Stanford folks consider the possible applications of such an array (or perhaps a circular or spherical array) in robotic vision.
