Technology

The Future of Speech Technologies 101

Posted by Zonk
from the you-want-some-toast? dept.
prostoalex writes "PC Magazine is running an interview with two of the research leaders in IBM's speech recognition group, Dr. David Nahamoo, manager of Human Language Technologies, and Dr. Roberto Sicconi, manager of Multimodal Conversational Solutions. They mainly discuss the status quo of speech technologies, which prototypes exist in IBM Labs today, and where the industry is headed." From the article: "There has to be a good reason to use speech, maybe your hands are full [like in the case of driving a car]. ... Speech has to be important enough to justify the adoption. I'd like to go back to one of your original questions. You were saying, 'What's wrong with speech recognition today?' One of the things I see missing is feedback. In most cases, conversations are one-way. When you talk to a device, it's like talking to a 1- or 2-year-old child. He can't tell you what's wrong, and you just wait for the time when he can tell you what he wants or what he needs."
  • by blair1q (305137) on Saturday January 28, 2006 @02:43PM (#14589089) Journal
    I have a solution to the "one-way" communication problem.

    More popups.

    Audio popups!

    Heads-up display popups!

    Holy blackberries! Get me my patent attorney!
  • Oh no! (Score:5, Funny)

    by Ardeocalidus (947463) on Saturday January 28, 2006 @02:48PM (#14589125)
    "Car, brake"

    "I'm sorry, Dave. I'm afraid I can't do that"

  • by backslashdot (95548) on Saturday January 28, 2006 @02:53PM (#14589155)
    mast and the stand can't aches.

    (the future of speech technology must understand context)
  • its been a while (Score:3, Insightful)

    by joe 155 (937621) on Saturday January 28, 2006 @02:53PM (#14589158) Journal
    I've been waiting for years for speach recognition technology to get to an acceptable standard and over that time I've used a couple, the one i got lately (dragonsoft I think) was ok, but they need to come quite a bit further before I'll be adopting all the way.

    I'm looking forward to when I can say "computer, open openoffice for me mate" and it'll go "sure"... That'll be sweet.
    • Ever try Nitrous VoiceFlux? http://www.voiceflux.net/ [voiceflux.net]
    • Re:its been a while (Score:4, Interesting)

      by SoSueMe (263478) on Saturday January 28, 2006 @03:41PM (#14589438)
      Dragon Naturally Speaking from Nuance [nuance.com] is about 75-80% accurate out-of-the-box. It is the other 20-25% that you have to invest the time in to get it to your liking. Even after a few months, you will probably still only reach up to 95% accuracy.
      Using it when you have a cold, sore throat or when you have been indulging in your favorite alcoholic beverage can corrupt your voice profile and set you back considerably.

      Never let someone else use it under your voice profile.

      Will voice rec systems ever be 100% accurate and speaker independent? Maybe, but I don't expect to see it for a long time.
      • This is something that needs to be built in. Are you [user]? Have you been drinking? Do you have a cold? For the second two, you'd probably then apply some kind of clever filter or make the profile looser, and not write changes.
        • Re:its been a while (Score:2, Interesting)

          by eam (192101)
          We use Dragon in a digital dictation system for the radiology department where I work. We moved to the system about 6 years ago.

          We have all the problems mentioned (except drinking). There are also some others that you might not consider. For example:

          As the day wears on, the radiologist will get tired, and the recognition will become worse.

          Also:

          A radiologist who started at 6:30AM will see the sound characteristics of the room change dramatically as more people begin working and activity in the reading roo
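The accuracy percentages quoted in this thread (75-80% out of the box, up to 95% after training) are conventionally measured as word error rate (WER), computed by edit distance between the reference transcript and the recognizer's output. A minimal sketch in Python of the standard dynamic-programming computation; the sample sentences are made up for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic edit-distance table over words rather than characters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two word errors out of five reference words gives a WER of 0.4,
# i.e. "60% accurate" in the loose sense used above.
print(wer("the quick brown fox jumps", "the quick brown box jump"))  # → 0.4
```

Note that "95% accuracy" (WER 0.05) still means roughly one error per twenty words, which is why proofreading remains part of the dictation workflow.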
  • by Nuclear Elephant (700938) on Saturday January 28, 2006 @02:59PM (#14589207) Homepage
    What's wrong with speech recognition today?

    I took a brief poll, and nobody seems to have a problem:

    Bruce: I sure like being inside this fancy computer.
    Vicki: Isn't it nice to have a computer that will talk to you?
    Agnes: Isn't it nice to have a computer that will talk to you?
    Kathy: Isn't it nice to have a computer that will talk to you?

    Except the trinoids, who complained:
    We can not communicate with these carbon units.

    I wasn't sure which Carbon they were talking about.
    • Apple has had speech technology for years!
      • Yes, and Apple's speech recognition technology is many years behind the state of the art. IBM and others had better speech recognition and speech synthesis a decade ago than Apple has today.

        And where exactly is new speech technology supposed to come from inside Apple anyway? They fired all the people who knew anything about speech in the 90's and shut down the labs.
        • Re:MOD PARENT UP (Score:1, Interesting)

          by Anonymous Coward
          That is subjective. IMO, Apple's approach was right. The only commercially "successful" approaches so far have been for dictation. I'd say that's still a niche market. The rate of error for transcription is way beyond the rate of even the most bubble-headed secretary, and that's not even considering specialty terms for particular fields of industry.

          I don't think too many people realized how much more useful Apple's speech technology became when S.I. was teamed with AppleScript. I'm sure IBM's technology is/
  • "In this 10-year time frame, I believe that we'll not only be using the keyboard and the mouse to interact, but during that time we will have perfected speech recognition and speech output well enough that those will become a standard part of the interface."
    • Heh. I think it would be more useful to have eye-movement-controlled cursors.
    • I think mouse and keyboard with screen is far faster than audio recognition/feedback will ever be.
      • And keyboard only is much faster than mouse+keyboard if the system is designed correctly. Except in the case where you are required to point at spots on the screen, such as for editing images, I would rather not use a mouse at all. Word Perfect 5.1 was the king of word processors. Everything could be done as a combination of alt/ctrl/shift and the F keys.
  • by GnomeChompsky (950296) on Saturday January 28, 2006 @03:08PM (#14589257)
    I'm a linguist, and it seems to me that Speech Recognition would be incredibly, incredibly useful in the research that's going on right now into Language Acquisition.

    You see, the problem right now is that there's really not much data that's in the public domain for linguists/psychologists/what-have-you to study, because it's incredibly, incredibly laborious to do longitudinal studies of children's utterances, or of input to the child. People spend hours and hours and hours transcribing 20 minutes of tape. They're understandably reticent to just share their data out of the goodness of their hearts. Even when they do, it's never a large sampling of children-and-their-interlocutors from-birth-to-age-X, it's usually just one child and maybe his or her parents from age 8 months to 3 years.

    So we have arguments about whether or not kids hear certain forms of input (Have you used passive voice with your child recently? Where's your child going to learn subjacency?) that go back and forth between psychologists and linguists, and people perform corpus studies on 3 children and feel that that's representative -- never mind the fact that these three kids were all harvested from the MIT daycare centre, and were the children of grad students or faculty members, and thus may not be representative of the population at large.

    Speech recognition would make it much, much easier to amass large corpora of data for larger samples of the population. It'd make it much more likely for people to share their data. And, what's more, it'd likely be possible to have a phonetic and syntactic-word-stub (for lack of a better word) transcription made from the same recording. We'd have a better idea of how the input determines how language is acquired by children, and what sorts of stages children go through.
    • Very interesting. Since you're a linguist, I wonder if you might address a concern I've had about speech recognition technology in general.

      I've dabbled a bit with Dragon Naturally Speaking in the past (v.7) and frankly found it still too immature to be of much use to me. I find it still far easier to deal with an accurate yet artificial interface (keyboard and mouse) than an inaccurate but more "organic" interface (speech recognition).

      But one of the things that stood out from the experience was the wa

      • You are concerned that people will adapt their speech patterns to be more clear and easier to understand, and that it will catch on?

        I have a problem seeing how we should be worried...

        • If such clarity and ease of understanding comes with the cost of restricting the range of human expression or the future evolution of language, then yes, I would say that's a reason to be concerned. I happen to believe that linguistic idiosyncrasies like slang are an important part of human expression and the ongoing evolution of language, and I suspect the vast majority of linguists would agree.
    • I'm a linguist, and it seems to me that Speech Recognition would be incredibly, incredibly useful in the research that's going on right now into Language Acquisition.

      Err... I'm confused; isn't this research going on exactly to provide speech recognition/transcription systems with the data? So a perfect speech recognition system would make further acquisition unnecessary. What else would you want to collect the data for?
      • Human language acquisition, not machine language acquisition.

        Also, as another linguist, let me add the part the parent forgot: In the literature, the term "language acquisition" is usually distinct from "language learning." Language acquisition is usually the term used to describe the process by which children acquire their "native" language(s). It appears to be very different from language learning, which is what you do if you start studying a foreign language after you have left the so-called "critica

    • I'm of the mind that humans have an innate understanding of certain linguistic building blocks, which we then play around with more as we grow up. An inherited pre-existing structure which our minds expect to experience around us and from which all our languages are derived.

      The linguistic parallel to the collective unconscious.

      If developing artificial speech and hearing with computers takes us closer to this, then I think the results should be extraordinary.

      But it's just my two cents obviously!
    • I work with speech recognition and to me, your comments sound a little misleading. When "people spend hours and hours and hours transcribing 20 minutes of tape" they usually aren't simply transcribing to text. The time is consumed by transcription of all the additional features in the text (ie. time alignment of words and phonemes, prosody, additional syntactic information such as parsing structure or part of speech tags). This is where all the time is spent. There are, of course, automatic processes fo
    • I'm not sure if you are aware of it, but several speech databases are available to researchers. Some have licences with a yearly fee, but some are free of charge.

      Try http://www.cavs.msstate.edu/hse/ies/projects/speech/databases/ [msstate.edu] for a list of some of these, including the CMU kid speech database.
      • Yes. I am aware; it's just that there isn't as much data available as there needs to be in order to be able to say with any confidence that, yes, this is what speech to children looks like, and this is what speech spoken by children looks like. Because like it or not, you have to get your grad students transcribing things for hours in order to get anything out of it. You want to research bilingual acquisition? Fine, but you're probably going to have to do years of legwork to get data for even three children
    • All sorts of scientific research is going to get fantastically easier as we approach the Singularity. If you have a really tremendous amount of data available, then instead of having to go out and collect data in order to answer questions that occur to you, all you need to do is extract your query from the records. You might say to your computer (your computer can talk, naturally, since it's in this thread): "How many times had Joanna been exposed to the subjunctive before she made this utterance?" or "Wh
    • You could establish a speech-recognition@home type of application, in the style of SETI@Home et al. If you create an application that can do some useful, but well defined set of tasks using speech recognition, you could build up a very useful open (as in useable by all) and free (as in beer) data base of everything you just mentioned to aid a linguistic study.

      Set the application to do something useful for the user that downloads and installs the application as an incentive and well-define the task such tha
  • by Anonymous Coward on Saturday January 28, 2006 @03:08PM (#14589259)
    On the other hand, IBM is not actually selling much speech technology.

    Scansoft, who earlier all but cornered the market for Optical Character Recognition (OCR) technology, did the same with speech recognition by acquiring the largest players in this space, SpeechWorks and Nuance. Scansoft changed their name to Nuance as a part of that last acquisition.

    IBM, meanwhile, has been struggling to find a market for their "Superhuman" (sneer) speech reco technology. A few years ago, they sold distribution of their retail desktop product, ViaVoice, to (wait for it) Scansoft. Their commercial product was RS/6000-AIX-only until a couple of years ago, when they ported it to more platforms, including Windows and Linux, and integrated it more tightly with their Rational and WebSphere marketing platforms.

    The current enterprise product sounds really sexy, at least for Rational-WebSphere shops. You can develop your WebSphere VXML application in Eclipse and leverage all those groovy WebSphere services you've built. No (or not much) special skill required!

    The problem is that their target market is Telecom Managers, who face a choice between IBM, with a few hundred ports installed, and Nuance (-ScanSoft-SpeechWorks), with tens- or hundreds-of-thousands of installed speech reco ports. Telecom Managers live in a world where their clients expect six-sigma/five-nines reliability. This is a hard sell to make.

    The question is, how long can IBM keep pouring money into speech R&D and product development in the face of dismal sales? Some in the industry expect the answer is, "Not too much longer." And that, of course, makes nervous enterprise buyers even more nervous and less likely to buy.

    • I want technology that'll run on a cheap single end-user or SOHO box. Too bad there's no money for companies to develop for that.
      • by Anonymous Coward
        I want technology that'll run on a cheap single end-user or SOHO box.

        As I said, Nuance (Scansoft) bought them all up; not just SpeechWorks and Nuance, but Dragon, Lernout & Hauspie, etc. They still sell a bunch of (Windoze) retail SOHO packages for a hundred bucks or two.

        Microsoft has some crappy .NET-based stuff, but I'd give it a pass, if I were you. It's neither SOHO nor enterprise. Not sure what it is...

        It's not really soup yet, but there is also a free solution. See http://www.speech.cs.cmu [cmu.edu]

    • Actually, I think the question with speech R&D is: can IBM afford NOT to keep pouring money in?

      At some point within the next 10-50 years, *someone* is going to develop SR technology that CAN act as a totally natural HCI. The potential profits from this exceed those of MS, possibly on patents alone! At the very least IBM is going to want the patent leverage to be able to take advantage of that technology if they are not the ones who develop it.

      I am often surprised that MS/Apple aren't making a significant inve
      • patents are only useful if they can be leveraged without the use of someone else's patent. in this case, scansoft owns the entire book of speech patents... IBM already even sold them their last couple chapters.

        so even if someone does build the complete natural-linguistic speech recognition, it'd be worthless since scansoft (they've changed names a # of times now) owns a couple of the stages in the stack. you can try to sell it to them or try to buy the rights, but you're just some schmoe with cool technolo
        • I know nothing about the particular details of this deal, but wouldn't it make sense if IBM's sale of the patents also included a reciprocal agreement, that Scansoft would not sue IBM in the future for use of its IP?

          It just seems like IBM, seemingly a company obsessed with creating and preserving intellectual capital, wouldn't so hastily sell off patents that it might one day need, unless there was a catch, like getting access to Scansoft's portfolio as part of the bargain?

          Just speculation, b
  • integration (Score:4, Interesting)

    by caffeinemessiah (918089) on Saturday January 28, 2006 @03:11PM (#14589280) Journal
    personally, i can't wait till they take speech recognition and couple it with natural language processing as a standard part of the desktop interface. it should be quite feasible now that we're seeing affordable 64-bit computing with fast memory and bus speeds. imagine excel with a speech-recognition interface, so instead of typing and filling formulae you would just tell it to "sum the row labeled timing, but only include values greater than 10". ok, back to work...
    • There's enough wittering going on in the office already, thanks.

      Of course, it seems you'll have the advantage of not having to tell it to switch to uppercase no i meant put the letters in uppercase not the word quote uppercase quote shift shift er fuck hey Joe what is it for uppercase huh was that caps lock YOU SAID OK THANKS NO DELETE DELETE THAT.

    • Re:integration (Score:4, Insightful)

      by cagle_.25 (715952) on Saturday January 28, 2006 @03:50PM (#14589497) Journal
      Spot on. Many interfaces today make it difficult to get from the user's idea to the computer's execution. Because we are much more facile at using spoken language to be precise than we are at using mouse+keyboard to be precise, a "G+AUI" (graphical+audio user interface) should, in principle, be much more powerful than a GUI.

      Dragon Naturally Speaking is a baby step in that direction, but it is pretty much limited to single nouns or verbs.

      • Because we are much more facile at using spoken language to be precise than we are at using mouse+keyboard to be precise, a "G+AUI" (graphical+audio user interface) should, in principle, be much more powerful than a GUI.

        I'm not convinced that spoken language is more precise than any other form of interface. In fact, I'd suggest just the opposite.

        When one wishes to communicate anything with precision, writing it down is likely to lead to far better results. For the really demanding material, diagrams, equa

        • This is a reply to both you and the child below.

          The key is in the difference between the words "facile" and "precise." You are absolutely right that written language is more precise, and written language with diagrams even more so, than spoken language. The problem is facility. The time it took me to write this and think about my choice of words is about 10x the time it would have taken for me to explain it verbally.

          In an interface situation in which the computer provides me with reasonable feedback so t

      • Because we are much more facile at using spoken language to be precise than we are at using mouse+keyboard to be precise

        Wow. Where did you get that idea? Most of the non-engineers I have encountered require an interpreter ('consultant') to translate their spoken words into something which is sufficiently precise to enter into a computer. Anybody who's been involved in the analysis/specification stage of a development project will know what I mean.

        They aren't any better at doing it with a keyboard, but they
    • The problem is, even if we get speech recognition, the computer might know which words you are saying, but not what they mean. Assuming the computer understood "sum the row labeled timing, but only include values greater than 10", then your idea of speech recognition would work great. But since computers don't understand that, and it will be a while before they understand arbitrary commands, speech recognition will only be for those who are too lazy to learn how to type.
      • you just bashed the whole field of natural language processing! while its true that computers probably won't be "understanding" words for quite a while (cue the AI discussion), it's quite within the realm of NLP to reasonably accurately tag the parts of speech in your sentence, and then possibly use some heuristic to reason about what was implied. of course, we're talking a very restricted subset of english as usable. you won't be able to say "hey ol' computer boy, howz about.....". simple imperative sent
        • Computers aren't that good at understanding natural language in order to follow commands. This is why we have programming languages, scripting languages, and macros. If computers were anywhere close to being able to understand natural language, then we would have no need for programming. The only thing I've seen work on a computer as far as voice commands go is using your voice to navigate the menus. I realize the usefulness of these technologies for those without the ability to type or use a mouse, but be
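The "very restricted subset of English" the NLP comment above describes can be illustrated with a toy parser: match one fixed imperative pattern, then execute the corresponding spreadsheet operation. A minimal sketch in Python; the grammar, row labels, and data are invented for illustration:

```python
import re

# Toy spreadsheet: rows of values keyed by label (invented example data).
rows = {"timing": [4, 12, 7, 25, 10], "cost": [3, 8, 2]}

def run_command(cmd: str) -> int:
    """Handle exactly one imperative pattern:
    'sum the row labeled X, but only include values greater than N'."""
    m = re.match(
        r"sum the row labeled (\w+), but only include values greater than (\d+)",
        cmd)
    if not m:
        raise ValueError("command falls outside the supported grammar")
    label, threshold = m.group(1), int(m.group(2))
    return sum(v for v in rows[label] if v > threshold)

# The example command from the 'integration' comment: 12 + 25 = 37.
print(run_command("sum the row labeled timing, "
                  "but only include values greater than 10"))  # → 37
```

A single regex is of course nowhere near part-of-speech tagging, but it makes the point of the thread concrete: once the recognizer has produced words, a command only "works" if it lands inside some grammar the application already anticipates.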
  • by RussP (247375) on Saturday January 28, 2006 @03:34PM (#14589400) Homepage
    A few years ago my wife was thinking about studying to become a court reporter. The training is very demanding, and I heard the dropout rate is about 95%, but the pay is good if not great.

    In any case, I warned her about the potential for voice recognition technology to render court reporters obsolete. It probably won't happen, but the mere prospect tipped her in the direction of foregoing the opportunity. Was that a mistake?

    The same concern applies also to medical transcription.
    • by Anonymous Coward
      Being a court reporter, I'd say no. A computer doesn't say "What?" when it doesn't understand the words, and it doesn't tell people not to talk at the same time so that the record's clear. Some courts try video, some try just audio recorders, but so far the results haven't been so good. You need people to operate the machine, people to catalog the recordings, people to transcribe the recordings if necessary. It's just better to have a court reporter there to do all that (and often cheaper).

      The problem w
  • by Animats (122034) on Saturday January 28, 2006 @03:52PM (#14589511) Homepage
    Several good mainstream voice applications are on the way out. Wildfire [savewildfire.com] is gone. TellMe [tellme.com] is laying off people and no longer promoting their public services. These are good systems; you could get quite a bit done on the phone with them, and they had good speaker independent voice recognition. Yet they're gone, or going.

    Try TellMe. Call 1-800-555-TELL. It's a voice portal. Buy movie tickets. Get driving directions. News, weather, stock quotes, and sports. All without looking at the phone. So what's the problem?

    • by mikeylebeau (68519) on Saturday January 28, 2006 @04:58PM (#14589863) Homepage
      You're mistaken about Tellme laying people off; they are doing quite well and are growing. You're right that the voice portal idea is no longer emphasized, but Tellme's making great money selling voice services to enterprise customers.
    • I'll tell you why.

      The problem is rooted in human psychology. For example, when I'm ready to compose my thoughts and ideas to written format, I don't want to be talking aloud in thin-air. I find the prospect of eavesdroppers to be unnerving. Flat out, it makes me feel insecure. As such, I like to keep my thoughts and ideas private on paper via typing in an office environment or out in the public. When I'm ready to be heard, I will send the text document via e-mail or printed format. If asked, I will hold a p
  • Actually... (Score:2, Informative)

    by ijablokov (225328)
    ...the point of our multimodal work is that you can have a two way dialog with the device, as well as have visual feedback to the interaction. See http://ibm.com/pvc/multimodal [ibm.com] for some examples.
  • by Anonymous Coward
    My name is Dr. Sbaitso. I am here to help you. Say whatever is in your mind freely, our conversation will be kept in strict confidence. Memory contents will be wiped off after you leave. So, tell me about your problems.
  • You know this technology will be a big hit in the porn industry when the big man of the area says

    "There has to be a good reason to use speech, maybe your hands are full"

    Now, what if the mouth is full too? Ventriloquism?
  • by Anonymous Coward
    One great thing about keyboards and typing is that it's relatively private. Like phone menus. I hate when they ask me to speak my choice or answer a question or recite my account number. Just let me freakin' type.

    Babblin' all over the place is dumb.

    Instead of speech recognition let's work on better speech synthesis. Here we are in 2006 and the average synthesized voice sounds hardly better than my freakin' Phasor card I had for my Apple // in 1988.
    • Babblin' all over the place is dumb.

      On the other hand, it is a joke killer. Star Trek producers would probably sue IBM if this went mainstream. Nobody will laugh at the scene where Scotty talks to the PC mouse anymore. IBM would ruin their best scene ever.

      Now, if they could make my computer produce coffee and a beautiful babe ready to do anything out of nothing... that would be something. It is something I would be proud to call progress.
    • This was one of the biggest things I thought this article would be about. It would be nice if somebody put some decent work into speech synthesis; then the text-to-speech synthesiser on my PC gaming clan's Ventrilo server wouldn't sound so crappy pronouncing stuff - in some cases funny though, like when it tries to pronounce URLs...

      BTW: yeah, I know the synthesiser is part of the client; it usually uses MS Sam, I think...
  • by Aggrajag (716041) on Saturday January 28, 2006 @04:18PM (#14589644)
    Doctors in Finland are starting to use speech recognition to update patient records. I think it is in testing at the moment, check the following link for details.

    http://www.tietoenator.com/default.asp?path=1;93;16080;163;9862 [tietoenator.com]
  • XP Pro (W/AT&T voices, Office language Bar W/Word, firefox W/Foxy Voice) and OSX are a bit more polished. But I had a similar voice recognition/TTS setup in 1993. And what I concluded was that it is far simpler to interact physically (double click) with a GUI than to tell the computer to double click. What is needed is a different type of interface for speech to take off mainstream. However it is sad that Windows will not read dialog boxes. And that's a pretty obvious useful feature that the Mac ha
  • It's Daleks all the way down.
  • I'm convinced speech technologies have a fantastic future when they are used for improving human communications, like providing an electronic babelfish. However, it looks like most are concentrating on using speech as a way to interact with machines.

    Which is so terribly inefficient and cumbersome. You really don't want to spend the time to socially interact with your coffee machine at 7am.
    Unless it's able to go to the shop, put in exactly the right amount of coffee and is able to turn itself on once it h
  • Good speech recognition would be great for searching audio. We could index webcastings, not only text. It would also be great for reporting meetings and conferences.

  • the company keeps changing, but what was once scansoft (dragon dictate) had a bunch of really big patents. it's my understanding that they did what any true capitalist should do once they gain complete monopoly over something; they sat on it and milked the big fat tit they'd engineered themselves. and that's what they're doing today. just think of the god damned margins on something like that...

    and that's why speech recognition 2006 is the exact same as speech recognition 1997.

    FUCK YOU CAPITALISM. FUCK YOU
  • Bring on the system that learns language in a similar way that a human does... of course it would come out of the box with a reasonable starting point. Then the ultimate backend would be a HAL-like system (2010, not 2001), hopefully not a Skynet-like, Borg, V'Ger, or Trapper Keeper from South Park. V'Ger wouldn't be too bad once it knew about carbon-based infestations.

    Anyone know of a project to simulate human life starting at a fertilized egg? That would be sweet once we understood all of the chemical processe
  • Something that has not been mentioned, because, evidently, no one has actually worked with it, is that it is seriously annoying to work in the proximity of someone USING speech recognition. I worked with a fellow who had speech recognition on his machine and used it for programming. YOU try working on YOUR own code when someone is droning in the background: "for left paren int i equals zero semi-colon i less than mumble mumble delete word delete word ..." ALL DAY LONG! Even with headphones on it sometimes
  • by mandreiana (591979) on Sunday January 29, 2006 @04:23AM (#14592440) Homepage
    speech recognition
    http://www.speech.cs.cmu.edu/sphinx/ [cmu.edu]

    image+speech recognition
    http://sourceforge.net/projects/opencvlibrary/ [sourceforge.net]

    Desktop voice commands
    http://perlbox.sourceforge.net/ [sourceforge.net]

    Others
    http://www.tldp.org/HOWTO/Speech-Recognition-HOWTO/software.html [tldp.org]
    http://www.cavs.msstate.edu/hse/ies/projects/speech/software/ [msstate.edu]

    Do you know about other usable open source speech solutions?
  • ...is the *biggest* problem with speech recognition. I used it extensively for a good period of time, but it's not reliable. Someone walks into the room, some music plays, etc. Speech recognition would greatly benefit from either the computer getting audio & visual input to determine the source, or better yet, adopting the military throat microphones that only pick up vibrations directly from the skin (even whispers)
  • "There has to be a good reason to use speech, maybe your hands are full..."

    "Computer, play video!"
    "Hmm, too much talk..."
    "Computer, fast forward!"
    "Wow, nice!"
    "Computer, resume normal play!"
    "Mmmm"
    "Computer, play that scene again..."

    (Girlfriend comes home)

    "Computer, stop playback! Stop! Shut down!"
  • by zobier (585066)
    Imagine trying to code this way:

    I en tee space main open-parenthesis i en tee space a ar gee cee comma cee aitch a ar asterisk space a ar gee vee open-bracket close-bracket close-parenthesis open-curly-bracket...
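The spelled-out dictation above decodes mechanically once each spoken token maps to a character. A minimal sketch in Python, with a hypothetical token table covering just this utterance (real dictation software uses far richer vocabularies and mode switching):

```python
# Hypothetical mapping from spoken tokens to emitted text, matching the
# comment's spelled-out utterance. "main" is assumed to be spoken whole.
TOKENS = {
    "i": "i", "en": "n", "tee": "t", "a": "a", "ar": "r",
    "gee": "g", "cee": "c", "vee": "v", "aitch": "h",
    "main": "main", "space": " ", "comma": ",", "asterisk": "*",
    "open-parenthesis": "(", "close-parenthesis": ")",
    "open-bracket": "[", "close-bracket": "]",
    "open-curly-bracket": "{",
}

def dictate(utterance: str) -> str:
    """Turn a whitespace-separated stream of spoken tokens into source text."""
    return "".join(TOKENS[word] for word in utterance.split())

spoken = ("i en tee space main open-parenthesis i en tee space a ar gee cee "
          "comma cee aitch a ar asterisk space a ar gee vee open-bracket "
          "close-bracket close-parenthesis open-curly-bracket")
print(dictate(spoken))  # → int main(int argc,char* argv[]){
```

At roughly thirty spoken tokens for thirty-odd characters of C, the comment's point stands: character-level dictation is a poor fit for programming without higher-level commands.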

  • I see great potential for interfaces which make use of whispered speech recognition (referred to in some papers as "non-audible murmurs"). Using a contact microphone that picks up vibrations transmitted through your jawbone rather than ones travelling through the air, you can have effective speech recognition without speaking out loud. This eliminates the problem of annoying your coworkers with loud dictation in a shared office, allows passwords to actually remain secret, and has even been documented to w
