Technology

Cell Phones for the Deaf 270

nitzan writes "Quoting from the article: 'the software translates the voice on the other side of the line into a three dimensional animated face on the computer, whose lips move in real time synch with the voice allowing the receiver to lip read.' Unfortunately this only works with laptops, but a PDA version is in the works." The company website has a demonstration.
This discussion has been archived. No new comments can be posted.

  • by Cap'n Canuck ( 622106 ) on Tuesday November 26, 2002 @03:16PM (#4761579)
    Still no?

    Ok, can you hear me now? Still no?

    Ok....
  • by Anonymous Coward on Tuesday November 26, 2002 @03:16PM (#4761581)
    ...so now we'll all have to learn how to sign "Turn off your fucking phone, asshole!"
  • by tyler_larson ( 558763 ) on Tuesday November 26, 2002 @03:16PM (#4761586) Homepage
    What was wrong with speech to text?
    • by p4ul13 ( 560810 ) on Tuesday November 26, 2002 @03:20PM (#4761637) Homepage
      Rather than have a computer interpret a person's speech, the software basically gives a representation of what the speaker's mouth is doing. This will allow the deaf person watching the device to do their own interpretation of what they see, which I'd imagine is much more reliable than speech-to-text could hope to be.
      • by Ted_Green ( 205549 ) on Tuesday November 26, 2002 @03:35PM (#4761819)
        Actually speech to text is much more reliable.

        Speech to text:

        1. person speaks
        2. software interprets phonetics and converts it into words
        3. deaf person reads the words

        versus

        1. person speaks
        2. software interprets phonetics into picture-based lip movements
        3. deaf person interprets picture-based lip movements

        Point of fact, this is unbelievably dumb and is right up there with converting Russian to German for an English speaker to read.

        • Who says the latter is easier?

          By doing the pictures, you're essentially leaving the last part (converting the phonemes into words and sentences) to the deaf person reading the lips, instead of a computer.

          In order for the computer to do a more reliable job on this last part, it either has to take a long time or the processor has to be really fast. And even with that, the computer is still going to make a lot of mistakes.

          This is certainly nowhere near as brain-dead as you make it out to be.
        • by fishbowl ( 7759 ) on Tuesday November 26, 2002 @04:30PM (#4762340)

          "2. software interprets phonetics and converts it into words"

          is a very different, much more complex problem than:

          "2. software interprets phonetics into picture-based lip movements"

          Consider that for the first example, we need the computer to understand the language, whereas in the second example, all the computer needs is a Fourier transform and Max Headroom anatomy.

          Personally, I think it would be simpler and more effective to put a camera on the phone and transmit an image of the speaker's face.

        • by sakeneko ( 447402 ) on Tuesday November 26, 2002 @07:29PM (#4763883) Homepage Journal
          Point of fact this is unbelievably dumb and is right up there with converting Russian to German for an English speaker to read.

          Very well put!

          I've had deaf friends, one of whom attended Gallaudet University. (Famous liberal arts college for the deaf.) In addition, I lost most of my hearing for some years as a child -- fortunately, I got it back after surgery. I've thought about deafness, and dealt with it.

          Lip-reading works best for people who were hearing at one point and lost some or all of their hearing. I went deaf after I learned to talk, and went deaf slowly, which means I relied heavily upon it. People who have always been deaf often find lip-reading very difficult, or even impossible. When you have no concept of hearing or sound, trying to figure out what meaning is associated with specific lip movements is tough.

          This is true of learning to read, as well. A person who was already speaking, or could read, before going deaf has no real problem with reading. If you can't hear and never have heard, though, the concept of an alphabet and "sounding it out" makes no sense. A congenitally deaf person who wants to learn to read must learn each word as a whole, much as a Chinese or Japanese person who learns to read his/her language must learn each character separately.

          Since a congenitally deaf person faces a humongous task regardless of whether he/she is learning to read lips, or read and write, just which one do you think he/she would rather have to learn? In most cases, learning to read and write is going to be a lot more useful.

          From where I sit, speech to text would work better for most deaf people, congenitally deaf or not.

      • Bingo. Much better to communicate the sounds and let the person draw conclusions. Otherwise it's a game of "Did they really say that? If not, what words sound like the ones I'm seeing spelled out?" Speech to text really sucks, especially over the phone.

        It's interesting that not only do the model's lips move, but there are visual cues in the cheeks, nose, and throat.
      • Why not do both? That way the deaf person has the maximum amount of visual information to work with, especially if both methods (as is implied by my peer replies) are inaccurate.
    • This is clearly a solution for the large population of completely illiterate deaf people, for whom speech-to-text is not an option.
    • David Stork has a chapter on computer lip reading in the book "Hal's Legacy", on A.I. methods. The combination is much more reliable than either audio or visual alone.
    • by N3WBI3 ( 595976 )
      We don't want the deaf people who can't read to be left out ;).

      On a more serious note, not everyone reads lips in English; if you develop this right, it works for any language.

    • Nothing, that's what! Contrary to many of these other posts, speech to text is a much better solution.

      People seem to be forgetting that speech to text is the back-end for this lip service anyway. In order for it to work, speech is interpreted by a computer which then maps the interpreted speech to canned lip movements. The canned lip movements require CPU horsepower to drive the graphics and they need a large screen to be readable. These two reasons are why it is only available on a laptop.

      With the speech-to-text scenario, speech is interpreted by a computer and matched to canned pieces of text. So far, pretty much the same. But now the text can be output to just about any screen, including the text screens of today's cell phones.

      Basically the speech to text would be an automated TTY/TDD system. TTY/TDD has been in use and has proven highly effective for decades.

      To answer your question, there is NOTHING wrong with speech to text. However, you won't draw too many VCs with it. Now, put a computerized talking head on it and extol the greatness of its virtues and you may well be able to sucker in a few VCs. And after all, isn't that what it's all about?
      • "speech to text is a much better solution."

        Of course it is. It's also not current technology.
        The phone is more of an oscilloscope than a speech translator.

        If you want an apples-to-apples comparison, consider the alternatives
        between an animated face or a spectrum analyser, not between an
        animated face or a text display.

    • Speech-to-text works fine for the deaf person "listening" to the phone. But what does the deaf person do when he/she needs to "talk"?

      I know I'm generalizing, and not to be politically incorrect, but don't most deaf people have difficulty speaking "clearly"? So how does the phone deal with that? Or does the deaf user need to type in their response?

      It seems to me that speech-to-text for receiving and text-to-speech for sending is the way to go ... and then the speech part is pretty redundant.

      In which case you've just re-invented the Blackberry.
    • Ok, let's compare... with either speech-to-text or speech-to-graphics, you first start off with speech-to-phonemes. This is highly nontrivial and likely to be a large part of whatever you end up with.

      To do speech-to-text well, you have to know the language being spoken, so that you can pick out the words and try to spell them right (since phoneme-to-letter mappings are not well-defined or predictable for most languages). On top of this, you have to somehow deal with slurrings (people on the phone are not necessarily the best enunciators in the world), slang, names, etc. etc. Then you have to do this for every language that you want to support.

      Speech-to-graphics, on the other hand, is comparatively simple. Humans the world over have a relatively small number of sounds that they use (probably on the order of 300, if you don't count tonal variations but do count every distinction that's made in some language), and the mapping from these onto facial shapes is fairly well understood. There is (in theory) no tweaking needed to make it understand other languages, so when your deaf Chinese friend borrows it to call home it'll work without trouble. Tonal languages could be an issue... but really, deaf people are going to have trouble with that anyway, and this could even help there, since extra cues (a raised or lowered chin, say) could be used to indicate tone.

      It's not so much that there's anything wrong with speech-to-text as that this has the potential to be more right, especially if used in combination with s2t. The fact that no word DB is needed makes it much more likely that s2g will appear on a PDA than s2t, at least in the near future.
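The speech-to-graphics pipeline described above can be sketched in a few lines of Python. This is an illustrative toy, not SpeechView's actual code; the phoneme symbols and viseme names are invented for the example. The point it demonstrates is that once a recognizer has produced phonemes, rendering lips is a pure lookup, with no word database or language model involved:

```python
# Toy sketch of the speech-to-graphics idea: rendering lips from
# phonemes is a table lookup. Phoneme symbols and viseme names here
# are illustrative only, not a real phone set or viseme inventory.

PHONEME_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_teeth",   "v": "lip_teeth",
    "th": "tongue_teeth", "dh": "tongue_teeth",
    "aa": "jaw_open",   "ae": "jaw_open",
    "uw": "lips_round", "ow": "lips_round",
}

def to_visemes(phonemes):
    """Map a phoneme sequence to a viseme sequence, language-agnostically."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

# "thank" rendered as lip shapes -- sounds missing from the table
# just fall back to a neutral mouth.
print(to_visemes(["th", "ae", "ng", "k"]))
```

Swapping languages means swapping nothing: any language whose sounds appear in the table renders the same way, which is why no per-language tweaking would be needed.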



    • > What was wrong with speech to text?

      Speech to text is a much harder problem!

      You can map the articulation to an animated face without needing
      to know the language at all. Now, whether you can do it well enough
      to help a deaf person understand, is an open question.

      All you'd need to do to represent someone saying "Oh" versus "Ooooh" is map the phoneme to a shape.
      I'd imagine Fourier analysis would be one of the more useful tools here.

    • Good question. I remember reading that a good lip-reader can only interpret 50% of what someone else is saying. So even a crappy speech to text system shouldn't have too much trouble beating that.
  • What about TTY? (Score:5, Interesting)

    by genka ( 148122 ) on Tuesday November 26, 2002 @03:16PM (#4761587) Homepage Journal
    I worked with deaf people for a while and they were (and I am sure still are) disappointed that cell phones are not compatible with TTY devices. How difficult is this to do?
    • by mystik ( 38627 ) on Tuesday November 26, 2002 @03:24PM (#4761678) Homepage Journal
      My new Motorola phone, which I purchased this weekend, mentions something about TTY in its menus. I imagine I'd need data service from Verizon, though.
    • Why should they? (Score:4, Insightful)

      by Unknown Poltroon ( 31628 ) <unknown_poltroon1sp@myahoo.com> on Tuesday November 26, 2002 @03:24PM (#4761685)
      I still can't get coverage, or hear the other person clearly, so why should the deaf be different? But I can play 3 different games and send a fucking picture of a duck. Stupid phone companies. It's a fucking phone!! First, fix it so I can hear someone, THEN gimme the damn bowling games.

      OK, this might be a troll. I'm not sure myself. It's definitely a vent. Fucking Sprint. Oh well.

      • Stupid phone companies. It's a fucking phone!! First, fix it so I can hear someone, THEN gimme the damn bowling games.

        Yes, mobile phone service sucks because they've pulled all their best engineers out of the field and made them write bowling games instead.

        Negatory, good buddy. Building fancier handsets is trivial compared to actually improving coverage, and it's not like the two are mutually exclusive. Let's say the phone companies overcame all the various technical problems like interference, desensing, varying topography, etc. That still leaves political problems like unrealistic people in upscale neighborhoods who magically want 100% coverage without any visible antennas and dumb people who apparently think that cell phone towers will steal their souls.

        It would also help if the phone companies would stop taking on massive numbers of new customers when they can't support the ones they have, but I guess the money has to come from somewhere.
    • Hmmm. There's always the Blackberry or relay services.

      But yeah, with just a minute effort... It's slightly surprising that there hasn't been anything mandated by law on this front.
    • Re:What about TTY? (Score:3, Interesting)

      by mgrochmal ( 567074 )
      Speaking as someone who works with trying to get lots of accessibility devices to communicate (for the blind and visually impaired, but similar principles apply), one of the main problems is deciding on a standard, followed by making sure it works with those that won't adhere to said standard.

      Case in point: I recently got a cellphone, so that someone else could have their phone back. I shopped around for a while and settled on one that would be free after rebate. I had it for a few days, and returned it for a more expensive one. First, the phone had an odd number layout, so I had to relearn the key mapping (the keys were part of a curve, instead of straight across). Second, I use a laptop to connect to the Internet, and I occasionally use a cell phone adapter to do it. The phone I bought was incompatible with the connector, and the phone's manufacturer had no immediate plans to make one. Those two reasons, as well as several other factors, prompted a return.

      If the cell phone companies would agree on a single interface, compatibility would be much easier to implement. Not only that, but the TTY device makers need the interface information for all the various brands and models of cell phones. The possibility's there, but there's not much of a chance it'll happen anytime soon.

      • I shopped around for a while and settled for one that would be free after rebate. I had it for a few days, and returned it for a more expensive one.

        Why didn't you just shop around a bit longer and make sure it would suit your needs?
  • by Greedo ( 304385 ) on Tuesday November 26, 2002 @03:17PM (#4761595) Homepage Journal
    No downloadable ring-tones.
  • That's probably the most frightening anthropomorphic mouth I've ever seen animated!

  • Amazingly enough, the deaf can drive. I live near a technical school for the deaf, and quite a few drive. I just hope they don't try to use this cell phone and drive at the same time.
  • If it is anything like the demo they have on their site, this technology is doomed.

    I hope to God they are not using Flash to deliver this product.
    Uhhgg!
  • Complicated (Score:4, Insightful)

    by batboy78 ( 255178 ) on Tuesday November 26, 2002 @03:18PM (#4761617) Homepage
    This just seems complicated; why can't they just improve the speech-to-text capability? It seems like drawing a face with life-like facial movements to enable lip reading is a little beyond the power of a PDA.

    • It's surprising how many people here don't realize that converting a sound to words is harder than converting a sound to pictures of lip movements.

      Converting phonemes into sentences requires context. Right now, speech recognition software "simulates" context by having large word banks and using probability. It tries to guess from sample text what the most likely string of words was. This is really not easy. This is why you have to train with speech recognition software. It tries to build up a database of likely things you'll say. It's like Tivo, sort of. And it can get a sentence totally wrong sometimes (often?). If you have a slow processor, going through all this data can take a REALLY long time.

      Speech-to-text might be out of the scope of today's PDA to do, whereas the lip-synching stuff wouldn't be.
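The "word banks and probability" idea in the comment above can be illustrated with a toy bigram model. The counts and candidate words here are invented for the example; real recognizers use vastly larger models, which is exactly the compute burden being described:

```python
# Toy illustration of context in speech recognition: given two
# candidate transcriptions of the same sounds, a bigram model picks
# the more probable word string. The counts below are invented.

BIGRAM_COUNTS = {
    ("come", "here"): 50, ("can", "hear"): 40,
    ("come", "hear"): 2,  ("can", "here"): 1,
}

def score(words):
    """Product of (smoothed) bigram counts -- a stand-in for probability."""
    s = 1.0
    for a, b in zip(words, words[1:]):
        s *= BIGRAM_COUNTS.get((a, b), 0.1)  # tiny smoothing for unseen pairs
    return s

# Both candidates sound identical; the surrounding word decides.
candidates = [["come", "here"], ["come", "hear"]]
best = max(candidates, key=score)
print(best)
```

Scale the table up to a full vocabulary and this lookup-and-score loop is where the "REALLY long time" on a slow processor goes.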
  • by BWJones ( 18351 ) on Tuesday November 26, 2002 @03:19PM (#4761619) Homepage Journal
    This is a fantastic idea which would enable communication for vast numbers of the hearing impaired; however, if the web site is any indication, the technology needs improvement. I'm pretty good at reading lips, and I was working pretty hard to figure out what was being said with the sound off.

  • Uhhh... (Score:5, Funny)

    by NilObject ( 522433 ) on Tuesday November 26, 2002 @03:19PM (#4761628)
    Being a severely hearing impaired person, I do find the virtual person's "O"'s to be highly disturbing if not graphic. Yikes.
  • She could call her bank to find the nearest drive through ATM with Braille!

    Oh wait, she'd still need to see the cell phone. Never mind. I guess it's a good thing she isn't still here.
  • Presumably this technology does
    speech->text->animated model
    Wouldn't it be simpler to present the text to the user? I would have thought text->human is much higher bandwidth than animated-model->human.
  • by FunkyELF ( 609131 ) on Tuesday November 26, 2002 @03:21PM (#4761648)
    I lived with a deaf room-mate last year. It took me about 2 months to understand what he was saying, and it took him about the same to get used to my lips. Any time he meets someone new, it's very hard for him to read their lips (i.e. every time a new telemarketer tries to prey on the deaf user). Also, it's not just the lips, it's the tongue as well. It'd probably be easier to use speech-to-text software than this stuff... and what about background noise? I doubt this thing works well, if at all.
  • Yeah, sure (Score:3, Interesting)

    by Wind_Walker ( 83965 ) on Tuesday November 26, 2002 @03:22PM (#4761665) Homepage Journal
    I was one of the fortunate ones who got to the company's website before it got Slashdotted, and was able to view the "demonstration" of their software. The demo consists of a mouth saying "Thank You" in various languages. I looked at English and Spanish, the two I know best.

    I sure as hell couldn't tell you what they were saying, even when I knew what words were coming out of their mouth. And this is not to mention cell phone static, distractions, contractions, mumbling, and lots of "ummm" and "uhhhh" that occurs during normal speech. I really don't see how this is a viable communication method.

    Maybe it's because I'm not experienced with lip reading. Maybe people who are deaf are better at it than I am, but I can usually tell what football coaches are saying on the sidelines of games (of course, that's limited to "Bull****" and "You've gotta be ****ing kidding me!", but still...)

  • First of all, I have to say this is a great idea. Just because you are deaf doesn't mean you can't use a cellphone. I have a cousin who is deaf. The last time I talked to her, she was using sign language. She was not reading lips (at least I don't think so). I personally don't know how to read lips. So, is this really going to take off?
    • I think this will be especially useful to people who are partially deaf. For instance, if someone can hear limited frequencies, then sounds like "s" might not be audible. Looking at someone's mouth provides a way to compensate.
  • Ugh (Score:3, Insightful)

    by NilObject ( 522433 ) on Tuesday November 26, 2002 @03:25PM (#4761694)
    I just can not picture myself on a bus looking at this wildly articulate mouth while yelling back: "Can yoo reepeeet dat agaannn???" Yes, I am hearing impaired. I would NEVER touch this thing. I'll stick with 2-way messaging.
  • by pretzel_logic ( 576231 ) <andy@shook.gmail@com> on Tuesday November 26, 2002 @03:25PM (#4761696)
    look at a lip reader and say:

    'I want a fig newton'

    IMHO:
    too many flaws, the investors will back out
    • 'I want to vacuum' is more accurate.

      Don't even need to say that to a lip reader. Try mouthing it to the hot blonde across the room sometime. Everyone can lip-read that phrase.
    • Re:lip reading.. (Score:3, Insightful)

      by Dephex Twin ( 416238 )
      Well, for comparison, see how well a speech recognition program does with the same sentence.

      And unless you just randomly blurted out the sentence, you probably have context in the surrounding sentences (e.g. you are talking about fig newtons, food in general, newton's law, whatever).

      I'd definitely put my money on the lip-reader, frankly.
  • Accurate lip reading is a lot more difficult than sign language. SpeechView would have a much more usable product if they animated signing hands instead of a speaking face. I guess the software would be more complicated since it would involve speech recognition instead of just sound mimicry.
  • Developers are nearing a major breakthrough in 5.1 Surround Spatial Narrative Vision(TM). This amazing new technology targeted towards blind people immerses the viewer in a field which narrates the scenery as an overlay on the movie soundtrack.
  • by Cyclopedian ( 163375 ) on Tuesday November 26, 2002 @03:29PM (#4761749) Journal
    Lip reading is only half the whole "info-stream" that comes out of people's mouths. I know this. I'm deaf (severe to profound sensori-neural hearing loss, since birth) and I'll tell you one thing: lip-reading can give ambiguous results.

    Someone can say "Pot" and yet with the same lip movement, can also say "My". Men with bushy mustaches are a lip-reading disaster.

    For me, I've adapted in my own way: I rely heavily on my hearing aids. The combination of lip-reading and hearing the audio stream from your mouth enables me to achieve at least a 70% success rate (under ideal conditions; if it's a party atmosphere, fudgeddaboutit). I've had hearing aids since I was 1 1/2, and only with extensive speech therapy can I speak well. I'm one of the few deaf-from-birth people that can do it this well. So, from that perspective, I can speak on a phone (as long as I can understand the mangled audio coming out of the receiver, which is 0% of the time).

    Why don't they just focus on speech recognition? A great speech recognition phone would enable deaf people that speak to use phones for near real-time conversations. In addition, such technology can also be (easily?) adapted to foreign language translators for tourists.

    However, until such technology is available at the consumer level, I'm stuck with two-way text messaging devices like the T-Mobile SideKick.

    -Cyc

    • A great speech recognition phone would enable deaf people that speak to use phones for near real-time conversations. In addition, such technology can also be (easily?) adapted to foreign language translators for tourists.

      Actually, this is not so easily done.

      You come across a number of ways to introduce errors. First, recognizing the speech and figuring out the phonemes, with some margin of error. Then, you have to convert these phonemes to text, which requires a lot of computing power to do in real time, and you introduce a lot more errors. Then you have to translate this text. We all know how babelfish is. So you'd end up with a very garbled (probably unintelligible) message.

      I'm not saying this is impossible, just that this is no trivial task.

      In the meantime, I think this technology we see in the article is a step in the right direction.
    • I'm deaf (severe to profound sensori-neural hearing loss, since birth) and I'll tell you one thing: lip-reading can give ambiguous results.
      Someone can say "Pot" and yet with the same lip movement, can also say "My". Men with bushy mustaches are a lip-reading disaster.
      Imagine if every person enunciated consistently and clearly. I know there's still ambiguity, but it wouldn't be nearly as hard. This computerized face doesn't have problems like a bushy moustache, and the pronunciations are precise (of course, it could be better). So you'll be in the best conditions for recognizing what is said (theoretically).

      With the accuracy of speech-to-text these days, the margin of error you get reading those lips might very well be smaller than if a computer tries to make those sounds into words and sentences.
  • Read My Lips (Score:5, Interesting)

    by bytesmythe ( 58644 ) <bytesmythe&gmail,com> on Tuesday November 26, 2002 @03:29PM (#4761750)
    I thought it seemed a little weird at first, but then I checked out the other demos. When I knew what the words were ("Thank you" in English, German, French, Spanish, and Japanese), I could easily tell what was being said.

    I notice a lot of people calling for improved speech-to-text instead, which is a far harder problem than this technology. Speech sounds come out in a continuous flow. Getting a computer to recognize the breaks between words, spell them reliably, etc. is hard enough on a desktop system, much less a PDA -- especially in languages like English, where most vowels in unstressed syllables are rendered vocally as "uh".

    This system simply has to hear a sound, and immediately display an associated... well, not "grapheme", since this isn't writing... maybe "pixeme". It is the graphical equivalent of attempting to spell perfectly phonetically.

    Also, if you didn't notice it, "invisible" sounds that occur on the back of the tongue are indicated by circles on the cheeks (like hard 'g' and 'k'), and nasal sounds are indicated by a darkening of the nose.

    All in all, I think this is an interesting idea. It will be even cooler when they can render different faces so the "avatar" resembles the person to whom you're speaking.

  • Partly, because speech to text isn't very good.

    Speech to text isn't very good because it's very hard to turn phonetics into words. Our ability to understand people relies heavily on context. Knowing what's been said helps you understand what's being said.

    Some will say that speech to text is getting fairly good in English, which is somewhat true. Obviously, though, there are bigger markets in other languages.

    So how does this thing work, if it doesn't do speech to text? It does speech to phonetics, and phonetics to lips.

    For example, it's relatively easy to recognize that someone has said "h -ee- r", but knowing whether that's supposed to be "here" or "hear" is quite difficult.

    This is why the same software works across languages. "Th" is "Th" in any language, and your single algorithm doesn't have to care.

    -Zipwow
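Zipwow's "here"/"hear" example can be made concrete with a toy lookup. The informal phoneme spellings and the tiny homophone table below are invented for illustration; the point is that a speech-to-text system must pick one spelling, while a lip renderer can simply show the sounds and let the reader decide:

```python
# Toy illustration of the homophone problem: one phoneme sequence,
# several valid spellings. Phoneme spellings here are informal, in
# the style of the comment above, not a real phone set.

HOMOPHONES = {
    ("h", "ee", "r"): ["here", "hear"],
    ("t", "uw"):      ["to", "too", "two"],
}

def word_candidates(phonemes):
    """All spellings consistent with a phoneme sequence."""
    return HOMOPHONES.get(tuple(phonemes), [])

# Both "here" and "hear" fit -- choosing one requires context,
# which is exactly the step speech-to-graphics skips.
print(word_candidates(["h", "ee", "r"]))
```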
    • "Speech to text isn't very good because its very hard to turn phonetics into words."

      I wonder why it wouldn't be appropriate to deliver the phonetics themselves?
      In my university language classes, I'm expected to read French from a standard
      phonetic alphabet. If the device in the article is truly mapping phonemes from
      speech, then a representation of those phonemes would be very useful;
      never mind representing them into a given language.
      • Paraphrasing: "why not deliver the phonetics themselves"

        I'd guess that a text-based representation of phonemes isn't as intuitive as one would like. At the very least, it's another skill one would have to learn to use the phone, whereas the user presumably already can do some amount of lip reading.

        In general, though, I've often wondered if sending the phonemes over 'the wire' and reconstructing them as sound on the other side wouldn't work well.

        Granted, the phonemes would probably have to have a lot more detail than they do now to sound right, but it seems one could send a lot of that kind of detail and still take up less space than even a compressed sound wave.

        One application of this approach would be chat programs and game communication. Because the phonemes are being reconstructed, you could reconstruct them with different characteristics than they were recorded with, making them more feminine, or giving them a Scottish brogue, etc.

        -Zipwow
  • Why don't they just turn it into scrolling text?

  • by Zildy ( 32593 ) on Tuesday November 26, 2002 @03:31PM (#4761767)
    Just imagine the confusion of having to read the lips on this thing...

    Wife: "I told my husband 'get a gallon of milk from the store'."

    Husband: "No, what I saw was 'kill my family and urinate in their lifeless armpits'."

    SprintGuy: "Sir, it's the SpeechView. Here, Sprint built the..."
  • Can [bconnex.net] you [bconnex.net] hear [bconnex.net] me [bconnex.net] now [bconnex.net]? Good [bconnex.net]!
  • Just turn on the vibrating feature. Then you can call your deaf girlfriend and say "I just called to say I love you" in morse code...

  • by jhines0042 ( 184217 ) on Tuesday November 26, 2002 @03:39PM (#4761850) Journal
    ... if you have this software running on a phone, then if you are hearing impaired you could get real-time conversation with the other party without having to go through a human being.

    I've spoken with a hearing impaired person on a phone before through a TTY system and it is painfully slow. First you have to say your sentence, and then they send it. Then the other end needs to read it, type in a response, and send it, at which point it is read back to you. Imagine having a conversation over an Instant Messenger, except your secretary is reading the screen and typing for you. (IM for the blind, for example.)

    I agree that we need better voice-to-text and text-to-voice translation. That technology would give us better access for everyone. You could have "hearing" for the hearing impaired (speech to text), "reading" for the vision impaired (text to speech), and you could even have "writing" for those with fine muscle control impairments or who are lacking the necessary limbs for various reasons.

    But this is an interesting approach to solve one of the three problems.
  • So close, but yet so far. Give us this [sighted.com] plus this [chordite.com]. That is, a portable chordal handset with braille output. Then connect it to either a blackberry like device, or one of those AIM cellphones.

    Can you imagine? I walk to work every day with my phone logged into AIM. I chat with people while I walk. I try not to step in potholes. The convenience of chatting and holding the cellphone at my side while waiting for the vibrating alert set me to thinking...

    I dunno. Y'all want a portable SSH client that you don't have to look at in order to use? Without the requirement for a screen, I don't care how big the device is. It goes in my backpack. The input/output is all tactile.

    I wonder how hard it is for sighted folks to learn braille. I wonder how hard it would be to mount braille-like output on a small handheld device. Dunno if that's possible, really.
  • I thought they had these already.

    At least I assumed that the folks speaking at 95 dB into a highly compressed mic did so because they were deaf and unable to hear themselves.

  • I go to karaoke every week, and lately I've been making my own karaoke VCDs of more modern songs.

    I decided to do La La Land by Green Velvet one week, and just for kicks I thought I would make a talking head a la Max Headroom.
    http://www.zeromag.com/images/downloads/videos/try1.avi

    (DivX compressed, BTW; 6 megs)

    Basically I just recorded a second track of my singing without the music, then pumped the wav through the facial animator in truespace 6.

    What I found was that it actually made it a bit easier for me to keep up with the words, because I would watch the lips on my on-screen persona and mimic them myself.

    Anyways, enjoy folks.
  • by egg troll ( 515396 ) on Tuesday November 26, 2002 @04:14PM (#4762158) Homepage Journal
    Reading this makes me realize that my Lightbulbs for the Blind scheme was not crazy! Bundles of cash, here I come!
  • How about we skip the whole idea of text output? This is stupid! It's a complete waste of technology and time. Translating one person's audio into a 3d modeled face. Brilliant ...

    How about they just use a video phone? Or have the audio be displayed in a text output? It has to go through that step anyway.

    3D modeled face ... what the crap.
  • wouldn't it be a lot more practical to just dump it to text???
  • by TekkonKinkreet ( 237518 ) on Tuesday November 26, 2002 @05:05PM (#4762663) Homepage
    Posting late, but wtf.

    By way of introduction: I developed the core coarticulation and other algorithms for lip synching when I worked at a now-defunct company called...wait for it...LIPSinc. We thought the resulting lip synching was pretty damn convincing, so on my own I tested our stuff with a hearing-impaired friend, with mixed results. Anyway, I don't just know a little about this stuff; I know a *lot* about it.

    What these guys have done is map phonemes onto exaggerated visemes (the pictures of the mouth). Not a bad idea at all! Bunch of problems, though. First, there's a data reduction of about 3x in going from sound to video: there are 40-50 distinguishable phonemes, but only 9-16 distinguishable visemes, depending on how you count. This is because the visible part of the face makes up only the end of the vocal tract; a lot of distinctions between sounds occur without the involvement of the lips, like the difference between F and V, while others, like K, can be pronounced with the face in virtually any position. This is part of what makes lip reading so hard with a real person, and why lip readers need a lot of context to pull it off. They also seem to be slowing down the timing, as if they recognized the phonemes and then synthesized each at the same length. This gives you longer to recognize each one, but wrecks the visual prosody (rhythm) of the speech, which is a good cue for where the parts of speech are. Then there's the rest of the face: the eyebrows and head positions help you figure out key words, ends of clauses, tell whether something is a question, etc.
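    To make that roughly 3x reduction concrete, here's a toy sketch of how distinct phonemes collapse onto shared mouth shapes. The viseme classes below are my own illustrative groupings, not LIPSinc's (or anyone's) actual tables:

```python
# Illustrative many-to-one phoneme -> viseme collapse (hypothetical classes,
# not any real product's tables). Several phonemes that sound different
# share one mouth shape, so information is lost before the viewer sees it.
PHONEME_TO_VISEME = {
    # bilabials: lips closed -- B, P, M are visually identical
    "B": "lips_closed", "P": "lips_closed", "M": "lips_closed",
    # labiodentals: F and V look the same; only voicing differs
    "F": "lip_teeth", "V": "lip_teeth",
    # velars: the tongue action is invisible from outside
    "K": "open_neutral", "G": "open_neutral", "NG": "open_neutral",
    # a few vowels for contrast
    "AA": "open_wide", "IY": "spread", "UW": "rounded",
}

def to_visemes(phonemes):
    """Map a phoneme sequence onto the (lossier) viseme sequence."""
    return [PHONEME_TO_VISEME[p] for p in phonemes]

# "bah" vs "pah" vs "mah": three distinct phoneme strings, one viseme string
print(to_visemes(["B", "AA"]))  # -> ['lips_closed', 'open_wide']
print(to_visemes(["P", "AA"]))  # -> ['lips_closed', 'open_wide']
```

    Run it and the first visemes come out identical, which is exactly the ambiguity a lip reader has to resolve from context.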

    Those who say that speech-to-text is superior to lip reading have a point. Good speech recognition contains *more* accurate information than an uninterpreted stream of phonemes (itself 3x richer than a stream of visemes, as I said above), because the machine can do a Viterbi search to find the most likely sequence of words from a continuous stream of phonemes. Words also open up higher NLP functions, so you can do constraint relaxation to test whether "wreck a nice beach" or "recognize speech" fits better in the context.
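    The Viterbi search mentioned here fits in a few lines. This is a toy with made-up probabilities (three hypothetical states, nothing to do with a real recognizer's models), but it shows how a prior over transitions lets the decoder recover a state sequence from acoustically confusable observations:

```python
import math

# Toy Viterbi decode: recover the most likely hidden phoneme sequence from
# noisy observations. All probabilities are invented for illustration; a
# real recognizer uses trained acoustic and language models.
STATES = ["B", "P", "AA"]
START = {"B": 0.4, "P": 0.4, "AA": 0.2}
TRANS = {  # P(next state | previous state), a crude language-model prior
    "B": {"B": 0.1, "P": 0.1, "AA": 0.8},
    "P": {"B": 0.1, "P": 0.1, "AA": 0.8},
    "AA": {"B": 0.45, "P": 0.45, "AA": 0.1},
}
EMIT = {  # P(observation | state): B and P are acoustically confusable
    "B": {"b~": 0.60, "p~": 0.35, "a~": 0.05},
    "P": {"b~": 0.35, "p~": 0.60, "a~": 0.05},
    "AA": {"b~": 0.05, "p~": 0.05, "a~": 0.90},
}

def viterbi(obs):
    """Return the maximum-likelihood state path for an observation sequence."""
    V = {s: math.log(START[s]) + math.log(EMIT[s][obs[0]]) for s in STATES}
    backptr = []
    for o in obs[1:]:
        new_v, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: V[p] + math.log(TRANS[p][s]))
            new_v[s] = V[prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][o])
            ptr[s] = prev
        backptr.append(ptr)
        V = new_v
    path = [max(STATES, key=V.get)]  # best final state, then trace back
    for ptr in reversed(backptr):
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(["b~", "a~", "p~"]))  # -> ['B', 'AA', 'P']
```

    Scale the state set up to real phonemes (or words) and this is the dynamic-programming core of continuous speech recognition.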

    Still, I'd like to see an experiment where the raw phonemes are fed, as text, to the recipient. I think with practice your brain would start to decode the string (it manages with the sound, right?), despite the lack of word boundaries and the errors in phoneme detection (accuracy isn't all that high without text; I think seventy-something percent). Seems like an easier pattern recognition problem than lip reading. Who wants to go get funding?
    • This is because the visible part of the face makes up only the end of the vocal tract; a lot of distinctions between sounds occur without the involvement of the lips, like the difference between F and V, while others, like K, can be pronounced with the face in virtually any position.

      There are visual cues for the "invisible" sounds: the nose darkens, the throat turns blue, and dots appear on the cheeks for various sounds. Watch the demo.

      It'll take practice to get used to, I'm sure, but no more than it takes to pick up Graffiti on PDAs, I imagine.
  • Now there are going to be BLIND drivers swerving all over the road because they're talking on the phone!
  • by dissy ( 172727 )
    This is actually a pretty neat technology.

    I've seen lots of suggestions for speech to text, but if you have had any experience with regular powerful PCs and speech-to-text, you will see why that won't work on even a 2GHz Intel system, let alone a PDA/cell phone.
    (Didn't we just have an Ask Slashdot about this?)

    A wireframe of a face requires only slightly more CPU power than processing a WinAmp visualization (no 32-bit color eye candy here), and I have actually seen a visualization plugin (G-Force, available for WinAmp and iTunes as well as some open-source packages) with a module that draws a face that moves, in a way, with the sound.
    Granted, that was not its design; it was made to look good. But obviously, with a few changes it could be made to accurately simulate a face and mouth for this very purpose.

    I'm all for any technology that makes interacting with a computer easier. While I personally would prefer direct sound into my ears over this, I also consider myself lucky to have that ability compared to those who don't.

    Personally, I'm all for the direct brain connection, but I have a feeling that's a ways off yet :)
  • Experience talks (Score:2, Interesting)

    by Malicious ( 567158 )
    Working in a call center, I get the occasional deaf call.
    It takes tremendous amounts of time, because not only does the translator have to interpret what the customer is saying so that I can hear it, he then has to translate what I say back to the customer. It takes ages, and I'd imagine that with a cell phone, having a computer translate immediately, even if slightly less accurately, would be preferable to having a human slowly (compared to the computer) enter it. Speed vs. ease of comprehension. Pretty common trade-off. To each their own.
  • It's called SMS!! Works for even the non-deaf crowd and doesn't piss as many people off.
  • Every GSM phone made has for years come equipped with SMS capabilities.

    SMS is very popular with the deaf (at least where I come from). It allows them to communicate just as easily with those with good hearing as other deaf people.

    SMS also solves the problem of being globally accepted (just as long as you're not on the North-Western side of the Atlantic pond), and you don't need a special kind of GSM phone to be able to communicate with SMS. Another nice feature is that it works no matter how noisy it is surrounding the sender.
  • Better yet: (Score:3, Insightful)

    by Misch ( 158807 ) on Tuesday November 26, 2002 @06:04PM (#4763170) Homepage
    We have tools like Sprint Relay On-Line [sprintrelayonline.com] that will do text-to-speech... and every state provides confidential relay services [cmu.edu] to begin with. Many states are moving towards making 711 a standard relay number.

    If a deaf person wanted a "cell phone", they'll probably have one from Wynd Communications [wynd.com], a two-way pager with text/e-mail and other services built right into the damn thing. They're all the rage here [ntid.edu]. Screw lip reading over the phone. This technology is pure eye-candy. Nice, but how useful will it really be?
  • From For Hearing People Only:

    "Lipreading involves a high proportion of guesswork and "instant mental replay." Only some 30% of all spoken sounds are visible on the lips. Many sounds, like "b," "p," and "m," are virtually impossible to distinguish by watching the mouth. [...] Anyway, 'lip-reading' is a misnomer. A more accurate term is speechreading. Speechreaders don't just look at the mouth; they read the entire face. [...] They note changes in expression, shoulder shrugs, posture, gestures. [...] Picking up these associational cues is an art in itself." (127-128)

    I'd also like to add that For Hearing People Only, ISBN 0-934016-1-0, is a great source of information about the complex and interesting world of Deaf people and the language of ASL.
