Cheap 3D Computer Vision? 102
InspectorPraline writes "According to this article at the New York Times [free reg req'd], a tech firm known as Tyzx is developing optics technology that will have three-dimensional capability -- using two cameras attached by a high-bandwidth connection to a custom processing card inside a PC. The article makes one believe that the system would have a top speed of as much as 132 stereo frames per second, which could be very useful in security systems. Of course, the real question is who's behind the cameras, but we can all drool over the other possibilities, right?"
Yay! (Score:4, Funny)
Re:Yay! Opinion to University Student from Britain (Score:2)
There's always the universal correct answer to that kind of question - porn
Triple-D cups in 3-D
-
TYZX website, links to details and publications (Score:3, Interesting)
2.5d? (Score:2, Interesting)
or is it merely 2.5d
Regardless of where the cameras are, isn't there still a plane whose "height" the cameras/software can't determine?
Don't you need a minimum of three cameras for proper 3D?
Re:2.5d? (Score:1)
Re:2.5d? (Score:4, Funny)
Re:2.5d? (Score:1)
Re:2.5d? (Score:1)
Re:2.5d? (Score:1)
The reason humans view the world in three dimensions is that our eyes are placed far enough apart on our head to provide two different images of one perceived object. Over time, we get used to this and learn to compare the two images; taking that into consideration along with our preconceived notions of how large the viewed object should be (if it looks smaller than we expect, we tend to believe it is far away), we're able to approximate the distance between ourselves and the object being viewed.
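The geometry described above has a standard formulation: for two parallel views a known distance apart, the distance to a point is inversely proportional to how far the point shifts between the two images. A minimal sketch, with entirely hypothetical camera numbers:

```python
# Sketch of depth-from-disparity for an idealized parallel stereo rig.
# The focal length and baseline below are made-up illustration values.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Distance to a point seen by two parallel cameras.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- separation between the two cameras, in meters
    disparity_px -- horizontal shift of the point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# A nearby object shifts more between the two views than a distant one:
near = depth_from_disparity(focal_px=800, baseline_m=0.06, disparity_px=40)  # ~1.2 m
far = depth_from_disparity(focal_px=800, baseline_m=0.06, disparity_px=4)    # ~12 m
print(near, far)
```

This is why wider-spaced cameras (a longer baseline) give better depth resolution at range: the same distance produces a larger, easier-to-measure disparity.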
Security? Nah.... (Score:1, Redundant)
Re:Security? Nah.... (Score:2, Funny)
Re:Security? Nah.... (Score:2)
If I have understood correctly, this is for tracking/sensing movements accurately in three dimensions and being able to record them in binary, not for reproducing images in 3D on a screen with viewing glasses and all that stuff. Indeed, pretty good 3D technology is available, but the pr0n industry relies on the cheapest technology available to make the most money - at least as a general rule. The current massive pr0n market has been enabled by the Internet and digital media, but they ain't going to pony up loads of cash for this kind of technology.
Pr0n is not necessarily for the discerning film critic, after all; it's known rather for hand relief and the titillation of couples who like a bit of that, not for amazing technology and three-dimensional shots. Would you pay more for 3D DVD-quality pr0n? DVD works for the pr0n industry because of the form factor and the ease of pausing cleanly on a frame, at least that's what I reckon ;-)
Re:Security? Nah.... (Score:2, Insightful)
Just think back to the early '80s when audio CDs started hitting the market; lame CD players cost $400 and up, and the discs themselves were hard to find, but CDs eventually gained popularity and eclipsed cassette tapes. The transition took years, and now if you're seen buying a music tape the clerks will wonder where you've been living for the last fifteen years. But fifteen years ago, if you were purchasing a CD, those same clerks (OK, their parents) were probably wondering where you got all the cash for a CD deck/Discman, and pretty much everyone in the street would chat you up about your shiny CD player. The same thing's happening with video; right now we're somewhere in the middle, as DVD is well on its way to widespread acceptance in the home market.
Of course the pr0n industry has little choice but to follow the technology trends. Nowadays everyone wants 4-hour multi-angle ass-to-ass compilations with running commentary by the not-so-great Rocco himself, and since DVD discs are so compact, they can stuff more of them in the bottom dresser drawer underneath their socks.
Casinos will make great use of this technology.. (Score:2, Interesting)
Uses for 3D Computer Vision (Score:3, Insightful)
Fighter planes that don't need radar (but will need scads of cameras all over them -- visible, infrared, and tetrawave)
Computerized athletic officiating (which may finally kill the politics of skating and gymnastics)
Better identity recognition software (now you don't have to face the camera)
Custom-tailored clothing (no more scanning mechanisms)
Automated grocery checkout (the machine identifies the fruits & veggies so that the clerk doesn't have to type in a 4-digit produce code)
Another reason for George Lucas to go back and re-film all 6 episodes into digital 3-D.
Re:Uses for 3D Computer Vision (Score:1)
How the DeepSea chip works (Score:5, Informative)
<clip>
The DeepSea chip is a hardware implementation of the census correspondence algorithm invented by Tyzx staff... The algorithm's key concept is transforming a pixel's absolute numeric intensity value into a bit string that represents the pixel's brightness relative to its neighboring pixels. For each pixel, the DeepSea chip examines the pixel's surrounding area, called a neighborhood. A typical neighborhood is 7x7 pixels centered on the subject pixel. Comparing a subject pixel's intensity to its neighbours', the chip produces a relative intensity map (shown in the document, page 8).
</clip>
Not exactly state of the art (Score:5, Interesting)
The technology employed (both hardware and software) is limited. CMOS sensors of the type described suffer from a poor signal-to-noise ratio as well as interlacing artifacts. Pixel jitter is of major importance in machine vision, and I doubt these sensors offer much clock control beyond the one-pixel mark (if any).
The matching algorithm described is very primitive, assuming that rotation in depth between views doesn't affect the scene's projection into the image - ooh, but it does. The census matching algorithm is very simple, and whilst it does recognise the problem of illumination variation, it fails to solve it in a manner you could describe as robust. Also, contrary to popular belief, you cannot robustly recover depth from every pixel in the image! There is no evidence that the human vision system does it (without knowledge of the object), so why are people trying? Even if you attempt it, you are going to need some way of telling which data is more accurate than the rest in order to start using the results. Edges are your best bet, and I didn't see any evidence of preprocessing described in their system (although, to be fair, I only read it briefly).
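The "edges are your best bet" point can be made concrete: a gradient-magnitude map flags which pixels carry enough local texture for a stereo match to be trusted. A minimal sketch using the standard Sobel kernels (this is a generic confidence heuristic, not anything from the Tyzx system):

```python
# Gradient magnitude as a per-pixel confidence cue for stereo matching:
# flat regions have no texture to match on, strong edges match reliably.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_strength(img, y, x):
    """Sobel gradient magnitude at (y, x) of a 2D grayscale grid;
    large values mark edges where disparity estimates are most reliable."""
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return (gx * gx + gy * gy) ** 0.5

flat = [[5] * 3 for _ in range(3)]
print(edge_strength(flat, 1, 1))  # 0.0 -- no texture, so don't trust a match here
```

Thresholding this value before (or after) matching is one cheap way to report depth only where the data supports it, rather than at every pixel.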
I appreciate that this is supposed to be a cheap system and thus its limitations are probably to be expected. Might be fun to play with for a hundred Euros or so.
For a look at what the state of the art makes possible, you could do worse than take a look at TINA [tina-vision.net], an open-source machine vision system with a very sophisticated stereo depth estimation algorithm (we even built a chip to accelerate it!).
Re:Not exactly state of the art (Score:1)
Our software still needs a lot of improvement, but the ability to produce the 3D images is already there.
The main part of our application is actually how we are sending the video over multicast to several machines, including an SGI Onyx3200. We are using MPEG-4 for the video compression, real-time on high-end AthlonXP systems.
Inexpensive Object Tracking? (Score:2)
Product ideas anyone?
-Pete
Re:application to user interfaces (Score:1)
The "Cyberscope" was quite cheap (Score:3, Informative)
It works with any software, since it attaches to the front of the screen: surface mirrors, and the idea of doing the View-Master trick 'on screen'.
I'll keep mine for a long time.
A description and pictures of it here [nau.edu]
Patent here [uspto.gov] with description.
Re:The "Cyberscope" was quite cheap (Score:2)
Seriously, this is a perfect example of the USPTO issuing patents for trivial things. I can't even imagine calling this an invention; there are so many prior devices that use the same optical principles.
BTW, a similar device is currently available at http://www.pokescope.com/.
Re: (Score:1, Flamebait)
Re:The real question should be.... (Score:1)
Re:The real question should be.... (Score:1)
"The question," said Marc Rotenberg, director of the electronic privacy group, "will always be who's behind the lens?"
Interesting Applications? (Score:1)
Re:Interesting Applications? (Score:1)
It's ok, we all make mistakes. It just teaches us to use the preview button.
One camera and laser distance-o-meter instead? (Score:2)
Take an image, then feed the laser distance-o-meter, which scans the distances and embeds the results with the image data. We could even have a matrix of lasers to measure the distances in a single shot; for example, 8x8 (64) beams would already be good for scanning an area of a few square meters - if the objects we are looking for are bigger than insects, of course.
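The embedding step suggested above can be sketched very simply: given a sparse grid of range readings, assign each image pixel the depth of the nearest laser beam. This is a hypothetical illustration of the idea, not any real device's API:

```python
# Sketch: pair a single camera image with a sparse NxN grid of laser
# range readings by nearest-beam lookup. All names/numbers hypothetical.

def embed_depth(width, height, ranges):
    """ranges: NxN grid of distances (meters) from the laser matrix,
    assumed to cover the same field of view as the image.
    Returns a per-pixel depth map via nearest-beam lookup."""
    n = len(ranges)  # beams per side, e.g. 8 for an 8x8 matrix
    depth = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # map pixel coordinates onto the coarse beam grid
            by = min(y * n // height, n - 1)
            bx = min(x * n // width, n - 1)
            depth[y][x] = ranges[by][bx]
    return depth
```

With only 64 beams the depth map is blocky, of course; smoother results would need interpolation between beams, but for objects bigger than insects the coarse grid is the whole point.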
Re:One camera and laser distance-o-meter instead? (Score:2)
"Square meter" is maybe not the correct term to use here, but what I meant is focusing the camera so that the image taken covers an area of 2 x 2 meters. What's the correct terminology here? I don't even own a camera, so...
Re:One camera and laser distance-o-meter instead? (Score:1)
real life 3d simulator? (Score:1, Funny)
"In event of an emergency in 'real life sim 3000' press [enter] to pause and scroll up the history window to see what went wrong"
I wonder if cheat codes are applicable
Oh no (Score:1)
"We must destroy X10! We must destroy all Internet ad!" - KOMPRESSOR [everything2.com]
Width? Angle? HEADACHES! (Score:1)
The article is about computers/robots seeing in 3D, not us. It will enable much more precise handling of objects in real time, whatever the application might be. (Insert ref to porn here.)
3D glasses have already been with us for generations.
Destoo - reading
Stereo vision is limited (Score:2)
Stereo vision is inherently limited. It requires that objects have sufficient texture so that points on the two stereo images can be correlated. Our depth perception relies on much more than stereo, e.g. common-sense knowledge about the world, intuition about shading and lighting, etc.
Re:Stereo vision is limited (Score:2)
Close one eye. Can you still estimate the distances of objects around you? Of course you can. This demonstrates that there's much more to depth perception than stereo vision.
This does not mean that you did not learn depth cues such as perspective and relative size from other experience, including stereo perception itself. Simply because you have learned that certain shading patterns imply depth does not mean that you did not initially gather that information via stereo vision.
Stereo vision is inherently limited. It requires that objects have sufficient texture so that points on the two stereo images can be correlated. Our depth perception relies on much more than stereo, e.g. common-sense knowledge about the world, intuition about shading and lighting, etc.
Random dot stereograms were invented to disprove this statement. They clearly demonstrate that you do not need features to see in depth. There is a VERY large body of research surrounding these topics. Start with the book by David Marr.
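A random-dot stereogram pair (in Julesz's sense) is easy to construct, which helps make the point above concrete: each image is pure noise with no visible features, yet the correlation between the pair encodes a square floating in depth. A minimal sketch, with made-up sizes:

```python
# Sketch of a Julesz-style random-dot stereogram pair: two noise images,
# identical except that a central square is shifted horizontally in one.
# Neither image alone shows any shape; only the pair encodes the depth.
import random

def random_dot_pair(size=64, square=20, disparity=3, seed=1):
    random.seed(seed)
    left = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]
    right = [row[:] for row in left]  # start as an exact copy
    lo = (size - square) // 2
    for y in range(lo, lo + square):
        # shift the central square leftward by `disparity` pixels
        for x in range(lo, lo + square):
            right[y][x - disparity] = left[y][x]
        # fill the strip the square vacated with fresh noise
        for x in range(lo + square - disparity, lo + square):
            right[y][x] = random.randint(0, 1)
    return left, right
```

Fusing such a pair (e.g. with a stereoscope, or cross-eyed viewing of printed dots) makes the square pop out of the noise, demonstrating depth perception from correlation alone, with no monocular features whatsoever.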
S.V. IS limited (Score:1)
It's relatively easy to test your argument. A person blind in one eye from childhood would never be able to learn stereo vision. Yet it's VERY likely that he is still able to estimate distances.
The argument that he gathered distance information by moving and seeing an object from different angles, constructing 3D (or 2.5D, as some argue) in his head, could be a good one. However, if you provide a photo of some scenery never seen before, this one-eyed viewer should still be able to estimate object boundaries and relative distances.
Simple rules like "object A covering object B is in front of it" play a much more important role than SV. SV is rather an addition to already existing machinery, not its primary tool.
Re:Stereo vision is limited (Score:2)
Yeah, also crazy things like size and objects blocking other objects.
Eye Gaze Tracking (Score:1)
This is similar to some work I did on eye gaze tracking in my senior year at the University of Connecticut. The project page can be viewed here [gbook.org].
I wish I had done more with it; there are more applications for this than just tracking people in public. It can be used to keep the laser in the correct position if a person moves their eye during LASIK eye surgery. It can be used by a paraplegic to operate a computer. And most importantly, it can be used for targeting in Quake 3.
other possibilities (Score:2, Funny)
You mean 3d pr0n?
Riiiight, and those X10 cameras are for surveillance too.
A very cool vision toolkit (Score:2)
A colleague and I are currently in the process of porting portions of EDISON to Java.
Other companies already sell similar stuff (Score:2)
Point Grey (http://www.ptgrey.com/) has external binocular and trinocular stereo units for sale that use FireWire. They don't do the processing on the unit, but they have algorithms that run on standard PCs to process the data for you. Pretty interesting little guys; the computer vision lab where I got my degree (http://cvrr.ucsd.edu) had three of the Triclops camera systems. They have a new one called the Bumblebee that looks to be cheaper and may do processing onboard.
There are Linux SDKs available as well. Note that my version of Mozilla (1.0) doesn't load their page correctly; maybe some messy IE-specific code?
2 camera stereovision? (Score:1)
but, wasn't this all invented in the early 1900s?
History of Cameras [photographer.org.uk]
If so, then why is taking a picture with two cameras and displaying them to people so they have stereoscopic vision so "computationally intensive"? It doesn't seem too difficult to me. (What's really computationally intensive would be rendering the two pictures, but even then it only requires the "camera" to be shifted and two images to be rendered for each frame, so it takes O(f(x)) computation time, where f(x) is the time to render one picture - I'm guessing roughly double the computation time.)
Maybe I am missing something, though.
Re:2 camera stereovision? (Score:2)
Re:2 camera stereovision? (Score:1)
But I still find it confusing as to why this would be more difficult than interpreting one image.
The algorithms would be the same ones used to render 3D images; you just have to compensate for the angle differences between the two overlapping images. Then you should be able to easily obtain a distance using simple trig formulae.
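The "simple trig" really is simple once you have the two bearing angles, which is the part that is actually hard (finding which point in one image corresponds to which point in the other). A sketch of the triangulation step alone, with hypothetical angles:

```python
# Triangulating depth from the bearing angles seen by two cameras a
# known distance apart. The angles are hypothetical example inputs;
# in a real system they come from the (hard) correspondence step.
import math

def triangulate(baseline_m, angle_left, angle_right):
    """Depth of a point from the two bearing angles (radians, measured
    from the baseline joining the cameras) and the baseline length.
    Derivation: with cameras at (0,0) and (B,0) and the point at (x,Z),
    tan(a_left) = Z/x and tan(a_right) = Z/(B-x), which solves to
    Z = B * t1 * t2 / (t1 + t2)."""
    t1, t2 = math.tan(angle_left), math.tan(angle_right)
    return baseline_m * t1 * t2 / (t1 + t2)

# Symmetric case: both cameras see the point at 45 degrees, so it sits
# midway between them at a depth of half the baseline.
print(triangulate(1.0, math.pi / 4, math.pi / 4))
```

The trig is a handful of multiplies per point; the expensive part the article is talking about is establishing those correspondences densely, per pixel, many times a second.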
Re:2 camera stereovision? (Score:2)
Tyzx? Cool. (Score:2)
No, we are not related.
Kryzx
You filthy, filthy people!!! (Score:2)
It's always the same on Slashdot - somebody will eventually end up talking about p0rn...
I'm disgusted!!!
Re:You filthy, filthy people!!! (Score:1)
Re:You filthy, filthy people!!! (Score:1)
Aaah
So? (Score:1)
3d Implants (Score:1)
Are they called security tapes now? (Score:1)
The local video store puts all its p0rn under the "documentary" category. Has the codename been changed, and no one told me?
Obligatory "another method of 3D vision" link (Score:1)