Cheap 3D Computer Vision? 102
InspectorPraline writes "According to this article at the New York Times [free reg req'd], a tech firm known as Tyzx is developing optics technology with three-dimensional capability -- using two cameras attached by a high-bandwidth connection to a custom processing card inside a PC. The article suggests the system could reach speeds of up to 132 stereo frames per second, which could be very useful in security systems. Of course, the real question is who's behind the cameras, but we can all drool over the other possibilities, right?"
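Two cameras recover depth by triangulation: a point that lands at slightly different horizontal positions in the two images (the disparity) must lie at a depth inversely proportional to that offset. A minimal sketch of the geometry the article's two-camera rig relies on -- the focal length, baseline, and disparity values below are illustrative assumptions, not Tyzx specs:

```python
def depth_from_disparity(disparity_px, focal_px=500.0, baseline_m=0.12):
    """Return depth in metres from a stereo disparity in pixels.

    z = f * B / d: focal length f in pixels, baseline B (camera
    separation) in metres. Larger disparity means a closer point.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 10-pixel disparity with these (assumed) parameters:
print(depth_from_disparity(10))  # 6.0 metres
```

Note how depth resolution degrades with distance: at 20 px disparity the point is 3 m away, but a full pixel of disparity error near 1 px swings the estimate by tens of metres -- one reason stereo rigs are tuned for a particular working range.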
TYZX website, links to details and publications (Score:3, Interesting)
2.5d? (Score:2, Interesting)
or is it merely 2.5D?
Regardless of where the cameras are, isn't there still a plane whose "height" the cameras/software can't determine?
Don't you need a minimum of three cameras for proper 3D?
Casinos will make great use of this technology.. (Score:2, Interesting)
Not exactly state of the art (Score:5, Interesting)
The technology employed (both hardware and software) is limited. CMOS sensors of the type described suffer from poor signal-to-noise ratios as well as interlacing artifacts. Pixel jitter is of major importance in machine vision, and I doubt these sensors offer much clock control over and above the 1-pixel mark (if any).
The matching algorithm described is very primitive, assuming rotation in depth between views doesn't affect the projection of the scene into the image -- ooh, but it does. The consensus matching algorithm is very simple, and whilst it does recognise the problems of illumination variation, it fails to solve them in a manner you could describe as robust. Also, contrary to popular belief, you cannot robustly recover depth from every pixel in the image! There is no evidence that the human vision system does it (without knowledge of the object), so why are people trying? Even if you attempt it, you are going to need some way of telling which data is more accurate than not in order to start using the results. Edges are your best bet, and I didn't see any evidence of preprocessing described in their system (although, to be fair, I only read it briefly).
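Window-based matchers like the one the parent criticises slide a small patch from one image along the corresponding scanline of the other and pick the offset with the best score. A toy sum-of-absolute-differences sketch (all data and parameters made up; this is not Tyzx's actual algorithm) shows exactly the failure mode described -- flat regions match equally well everywhere, so only edges give a confident disparity:

```python
def sad_disparity(left_row, x, right_row, win=1, max_disp=4):
    """Brute-force SAD match of a (2*win+1)-pixel patch along one scanline.

    Returns (best_disparity, margin): margin is the gap between the best
    and second-best SAD scores -- near zero means the match is ambiguous.
    """
    patch = left_row[x - win:x + win + 1]
    scores = {}
    for d in range(max_disp + 1):
        if x - win - d < 0:
            break  # candidate window would fall off the image
        cand = right_row[x - win - d:x + win + 1 - d]
        scores[d] = sum(abs(a - b) for a, b in zip(patch, cand))
    ranked = sorted(scores, key=scores.get)
    best = ranked[0]
    margin = scores[ranked[1]] - scores[best] if len(ranked) > 1 else 0
    return best, margin

# Toy scanline pair: the right view sees the same scene shifted 2 px left.
left  = [0, 0, 9, 9, 9, 0, 0, 0]
right = left[2:] + [0, 0]

print(sad_disparity(left, 3, right))  # textured edge: (2, 9) -- confident
print(sad_disparity(left, 6, right))  # flat region:   (0, 0) -- ambiguous
```

The margin is exactly the sort of per-pixel confidence measure the parent says you need before trusting dense depth output.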
I appreciate that this is supposed to be a cheap system and thus its limitations are probably to be expected. Might be fun to play with for a hundred Euros or so.
For a look at what the state of the art makes possible, you could do worse than take a look at TINA [tina-vision.net], an open source machine vision system with a very sophisticated stereo depth estimation algorithm (we even built a chip to accelerate it!)
application to user interfaces (Score:0, Interesting)
Imagine if your computer could respond to your gestures. It could analyze your posture and tell when you are unhappy with the current performance, thus transferring more processing power to the current job. It could see you waving your arms and know that you want to kill the current process. It could even tell that you have gotten up to get some coffee, and have the screen-saver kick in.
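A sketch of the dispatch layer such a UI would need, assuming the vision system delivers recognised gestures as string labels. Every name here is hypothetical -- no real gesture-recognition API is implied:

```python
# Hypothetical mapping from recognised gestures to UI actions,
# matching the examples above. Labels and actions are assumptions.
ACTIONS = {
    "wave_arms":  "kill current process",
    "slumped":    "boost priority of current job",
    "left_chair": "start screensaver",
}

def handle_gesture(label):
    """Map a recognised gesture label to an action, ignoring unknowns."""
    return ACTIONS.get(label, "no-op")

print(handle_gesture("wave_arms"))  # kill current process
print(handle_gesture("shrug"))      # no-op
```

The hard part, of course, is everything upstream of this table: turning raw stereo depth into reliable gesture labels in the first place.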
This could be the end of the keyboard and mouse...forever! I think it would be really cool to get this "up and running" on Linux. Anyone want to start a SourceForge [sourceforge.net] project for it? If we can pull this off, we might finally bump Linux into the forefront of OS innovation.