Artist Wants to Replace Lost Eyeball With Webcam

A one-eyed San Francisco artist, Tanya Vlach, wants to replace her missing eye with a Web cam. There has even been talk of her shooting a reality TV show using the video eye. "There have been all sorts of cyborgs in science fiction for a long time, and I'm sort of a sci-fi geek. With the advancement of technology, I thought, 'Why not?'" said Vlach. I'm a bit perplexed that the obvious things you'd want in a cyborg eye (range finder, infrared/low-light vision, and a hypno-ray) aren't discussed in the article.


  • Wireless? (Score:5, Insightful)

    by YrWrstNtmr ( 564987 ) on Monday November 17, 2008 @03:47PM (#25790439)
    FTA:
    "It is possible to build a wireless camera with the dimensions of the eyeball," Want said. The camera, which would be encased in Vlach's prosthesis to avoid moisture, could link wirelessly to a smart phone. The smart phone could send power to the camera wirelessly and relay the camera's video feed by cell phone network to another person.

    Whether cellphone emissions are harmful is as yet unproven either way. But I'd think putting the radiation source right next to your brain, without even the skull material as a blocker, would be a pretty bad idea.

    But, if she wants to be the guinea pig...go for it.
    Who knows...she may spontaneously sprout a 3rd eye.
  • Suggestion (Score:1, Insightful)

    by Anonymous Coward on Monday November 17, 2008 @04:09PM (#25790817)

    The first couple of versions aren't going to work right or be what you want anyway.

    So make a couple of breadboard versions first to try out different feature sets.

    When you have the features you like, make a portable book-size prototype and work the bugs out.

    And then worry about reducing the size to fit your cybernetic eyeball.

    Remember Moore's Law. The size, power requirements, and cost of electronics go down over time (yeah, I know, that's not exactly Moore's Law, but that's the effect of Moore's Law).

  • by name*censored* ( 884880 ) on Monday November 17, 2008 @05:50PM (#25792601)

    Solution: capture at twice the output resolution (e.g., 1600x1200 for an 800x600 video), then correct jitter by moving the video window within the capture frame, using AI to decide whether a movement is jitter or an intentional frame movement (e.g., does the new direction return to near the old one within some time limit? Is the camera focusing on an object it should be locking on to? Does the new position force part of the video frame outside the capture frame?). Basically it's similar to peripheral vision, except that video capture is discrete (either something is shown or it isn't; using various filters on the outskirting pixels looks strange) instead of continually diminishing awareness at the periphery.
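The crop-window idea above can be sketched in miniature. In this illustrative Python sketch (my own, not from the comment), a simple low-pass filter stands in for the proposed AI: the crop window counteracts only the high-frequency part of the measured per-frame camera motion, so jitter is cancelled while a sustained, intentional pan drags the window back toward centre. One axis is shown; the other axis works the same way. The resolutions and the `alpha` constant are assumptions for the example.

```python
CAP_W, OUT_W = 1600, 800          # capture width vs. cropped output width
MARGIN = (CAP_W - OUT_W) // 2     # how far the crop window can slide

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

class CropStabilizer:
    """Slide an OUT_W-wide crop window inside a CAP_W-wide capture
    frame to cancel high-frequency (jittery) camera motion."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha    # low-pass strength: smaller = smoother path
        self.cam = 0.0        # cumulative measured camera motion
        self.smooth = 0.0     # low-pass-filtered camera path

    def update(self, dx):
        """Feed the measured per-frame camera motion dx (e.g. from
        optical flow); returns this frame's crop-window offset."""
        self.cam += dx
        self.smooth += self.alpha * (self.cam - self.smooth)
        # counteract only the high-frequency part of the motion;
        # the clamp keeps the window inside the capture frame
        return clamp(self.smooth - self.cam, -MARGIN, MARGIN)
```

A sustained pan makes `smooth` catch up with `cam`, so the offset drifts back to zero; that decay is the cheap stand-in for the "was that movement intentional?" heuristic the comment proposes.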

  • by Red Flayer ( 890720 ) on Monday November 17, 2008 @06:21PM (#25793115) Journal
    Except that by mapping space using coordinates, we can have knowledge of absolute, as well as relative, location.

    Sure, everything is moving... but without coordinates in an absolute system, it'd be pretty damn difficult to calculate how things are moving.

    A little bit rambling, but I find it annoying when people use transient landmarks when giving me directions (thankfully not an issue anymore, thanks to the internet). "Take the second right after the Mobil station," they say... what if the Mobil station becomes an Exxon station due to their merger? Why can't you just tell me, "Proceed 2.4 miles, then turn right onto Elm Street"? See why absolute coordinates are better?

    What if I give you directions to get to Alpha Centauri using directions relative to Sol, but you're coming from Betelgeuse IV? Relative directions suck.

    If you want to map the universe in a coordinate system, you'd simply add the movement curve and time to the location of an object. So location would be (x, y, z at t=0, t, curve). We'd just need to define the absolute location of (0,0,0,0) -- of course, this is assuming there is no warping of space-time, which is a big assumption... but I think we could adapt for this by compressing/expanding the axes where necessary. Please explain how in the universe you'd use a non-coordinate system to map the universe.
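The proposed "(x, y, z at t=0, t, curve)" record can be sketched directly. This is an illustrative Python sketch of the idea; the linear drift curve and all the numbers are invented for the example, and a real motion curve could be any function of t:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MovingObject:
    origin: Vec3                       # absolute position at t = 0
    curve: Callable[[float], Vec3]     # displacement as a function of t

    def position(self, t: float) -> Vec3:
        """Absolute coordinates at time t: origin plus the curve's
        displacement evaluated at t."""
        x0, y0, z0 = self.origin
        dx, dy, dz = self.curve(t)
        return (x0 + dx, y0 + dy, z0 + dz)

# e.g. a star drifting at constant velocity in the x and z axes
star = MovingObject(origin=(1.0, 2.0, 3.0),
                    curve=lambda t: (0.5 * t, 0.0, -0.1 * t))
print(star.position(10.0))   # (6.0, 2.0, 2.0)
```

Directions between two such objects then reduce to subtracting their `position(t)` vectors at the same t, regardless of where you're coming from.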
