Google Works On Kinect-Like Interface For Android 49
bizwriter writes "A patent filing made public last week suggests that Google may be trying to implement a motion-detection interface, like Microsoft Kinect, for portable electronic gadgets. The patent application is for technology that turns a mobile device's camera into a motion-input system. In other words, it could be goodbye to fingerprints and streaks on the front of your tablet or smartphone. Google could incorporate such a feature into Android in general or keep it as a differentiating advantage for its acquisition of Motorola."
Same stuff, different device (Score:4, Interesting)
From Claim 1 of the patent filing:
Claim 2 then says:
After that there is a lot of refinement: edge detection, direction of movement, the usual definition of a computing device with memory, and finally triggering predetermined actions based on recognized motions.
But look at Claim 2: "... comprises single tapping, double tapping, hovering, holding and swiping." To me, this patent seems to be a simple extrapolation of the gestures Apple made popular with their mobile UI, with the addition of "hovering" (assuming I understand the definition of that word here). Same gestures, different input control.
Is there a significant difference between, say, swiping across a phone's screen and making the same gesture a few inches away? (I'm thinking that if the device interpreted motions from a larger distance, the only thing that would reliably happen is a series of hilarious DoS attacks via interpretive dance.)
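For what it's worth, the "same gestures, different input control" point is easy to sketch in plain Java. This is just an illustration of the idea, not anything from the patent; all class and method names here are hypothetical, and there are no real Android or computer-vision APIs involved:

    import java.util.EnumMap;
    import java.util.Map;

    public class GestureDispatcher {

        // The same five gestures named in Claim 2.
        enum Gesture { SINGLE_TAP, DOUBLE_TAP, HOVER, HOLD, SWIPE }

        interface GestureHandler {
            void onGesture(Gesture gesture, float x, float y);
        }

        private final Map<Gesture, GestureHandler> handlers = new EnumMap<>(Gesture.class);

        void register(Gesture gesture, GestureHandler handler) {
            handlers.put(gesture, handler);
        }

        // Called by whatever recognizes gestures -- a touchscreen event pipeline
        // or a camera-based recognizer doing edge detection and motion tracking.
        // The handlers don't care which one it was.
        void dispatch(Gesture gesture, float x, float y) {
            GestureHandler handler = handlers.get(gesture);
            if (handler != null) {
                handler.onGesture(gesture, x, y);
            }
        }

        public static void main(String[] args) {
            GestureDispatcher dispatcher = new GestureDispatcher();
            dispatcher.register(Gesture.SWIPE,
                    (g, x, y) -> System.out.println("swipe at " + x + "," + y + " -> next page"));

            // Same dispatch call whether the source was the touchscreen...
            dispatcher.dispatch(Gesture.SWIPE, 120f, 400f);
            // ...or a camera watching a hand a few inches above the screen.
            dispatcher.dispatch(Gesture.SWIPE, 118f, 395f);
        }
    }

The novelty claimed would all live in the recognizer feeding dispatch(), not in the gesture vocabulary itself, which is the commenter's point.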
Re:Couple questions... (Score:4, Interesting)
Well, if you were deaf, this would be one easy way to have real-time communication with someone. Getting a table mount for it would be the least of your concerns.