Software Technology

IBM Strives For 'Superhuman' Speech Tech 289

Posted by ScuttleMonkey
from the fansubbing-in-jeopardy dept.
robyn217 writes "IBM unveiled new speech recognition technology today that can comprehend the nuances of spoken English, translate it on the fly, and even create on-the-fly subtitles for foreign-language television programs. One of the projects perpetually monitors Arabic television stations, dynamically transcribing and translating any words spoken into English subtitles. Videos can then be viewed via a web browser, with all transcriptions indexed and searchable."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Which ... (Score:4, Interesting)

    by spiny (87740) on Wednesday January 25, 2006 @05:36AM (#14555797) Homepage Journal
    Which witch blew the blue candle out?
    • by jakeweston (785112)
      To wreck a nice beach...
    • The one that understood context.
    • Not really a problem. Machine translation can already handle many words that are spelled the same but have different meanings (homographs), based on context and position in the sentence. With speech recognition you just have more of those; you have to throw in homonyms, too.

      For a simple example, "blue" in "the blue candle" cannot be a verb.
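The positional constraint described above can be sketched in a few lines. This is only a toy illustration, not how a real recognizer works; the lexicon entries and the single disambiguation rule are invented for the example.

```python
# Toy illustration of positional disambiguation: a word that could be
# several parts of speech is resolved by what precedes it.
# The lexicon and the one rule here are invented for illustration.

LEXICON = {
    "the": {"DET"},
    "blue": {"ADJ", "VERB"},   # "blue" as a verb: to make something blue
    "candle": {"NOUN"},
    "witch": {"NOUN"},
}

def disambiguate(words):
    tags = []
    prev = None
    for w in words:
        options = LEXICON.get(w, {"UNK"})
        # After a determiner, a verb reading is impossible in English,
        # so drop VERB from the candidate set when it is ambiguous.
        if prev == "DET" and len(options) > 1:
            options = (options - {"VERB"}) or options
        tag = sorted(options)[0]
        tags.append((w, tag))
        prev = tag
    return tags

print(disambiguate(["the", "blue", "candle"]))
# "blue" comes out as ADJ, not VERB, because it follows "the"
```

A real system replaces the hand-written rule with statistics over tag sequences, but the principle is the same: position in the sentence prunes the candidate readings.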
    • Re:Which ... (Score:5, Interesting)

      by jcupitt65 (68879) on Wednesday January 25, 2006 @06:41AM (#14556023)
      Or I can wreck a nice beach versus I can recognise speech.

      Sometimes you need rather a large context to disambiguate: is this sentence part of a discussion on shore-front management, or spoken language understanding?

      • I agree with the parent, but will take it one step further:

        So do we. I can recognise the differences and meaning of "Which witch blew the blue candle" written down - but if someone said it to me out of the blue (npi), I'd have to think through it a couple of times to parse it. If said as intended, with matching sounds, relying entirely on context inside the sentence to decipher which word is which, I'd have as much of a problem as a computer. The semantic rules I was taught as a child are what enables
        • Re:Which ... (Score:3, Insightful)

          by mwood (25379)
          Just remember that *you* have a truly enormous and well-filled content-addressable memory, a huge and richly-connected semantic network, and untold numbers of self-adapting heuristics that have been trained all day every day for decades, with more coming into production constantly. It's hard for a machine to match that. Feeding 100,000 distinct pattern matchers in parallel is something most computers just aren't architected to do well. That a machine can do even a passable job of speaker-independent cont
    • This is a fantastic development. It is exactly the kind of thing that 64-bit processors were made for. It is the 'killer app', the best since MP3 and CD-rippers. If it actually works, that is - the high-tech equivalent of 'in-shaa Allah'.

      We should encourage IBM to allow enough of the technology to 'escape' to enable other languages to be translated from speech into English. There should also be some kind of open review of the translation involved. This can help prevent subtle errors in tr
  • Coherency? (Score:5, Insightful)

    by PrinceAshitaka (562972) * on Wednesday January 25, 2006 @05:38AM (#14555810) Homepage
    From the article: "For now, all video processed through Tales is delayed by about four minutes, with an accuracy rate of between 60 and 70 percent" and "The accuracy rate could be increased to 80 percent, Roukos added".

    Still, even at 80 percent, how good is this translation? If that 20% contains the important parts of speech, you could still be left clueless. Even the best machine translations of text I have seen always leave the text a bit garbled and confusticated.

    I don't know how much delay is implied in the phrase "on the fly", but I personally don't think there could ever be real time translation, for the following reason: sentences in different languages have different structures. While in English the verb usually comes second, in other languages (such as German) it often comes last. For the translator to get the second word of a sentence, it would have to wait till the end of what could be a long sentence. This necessarily adds delay.
    • Re:Coherency? (Score:4, Interesting)

      by Yahweh Doesn't Exist (906833) on Wednesday January 25, 2006 @05:48AM (#14555849)
      Yes, there will always be a delay for the reason you state. But that's true even with human translators, yet no one claims that real-time meetings between people via translators are a waste of time.

      Since even "live" broadcasts are usually delayed several minutes for technical and legal reasons anyway, if this technology can get to the state where you're just one or two sentences behind real life, it will be effectively real-time for almost all practical purposes.
    • In what cases is a four minute delay noticeable if the picture and sound are delayed four minutes too? I'd love this for watching movies that are currently completely incomprehensible to me.

      For the 80% part, it's good enough to get the gist of what is said. It won't compete with professional human translators, but it will make translation easily available for those who don't have access to a translator.
      • by sumdumass (711423)
        I'm wondering if this was used during the lead-up to Iraq? "I'm unclear if there are bombs here" ends up getting translated into "there are nuclear bombs here".
    • Re:Coherency? (Score:2, Informative)

      by wizrd_nml (661928)
      For the translator to get the second word of a sentence, it would have to wait till the end of what could be a long sentence. This necessarily adds delay.

      Not necessarily. An on-the-fly translator could translate words as it hears them, filling in the translated words at the correct locations in the sentence. In other words, the sentence doesn't have to be completed in order. It can dynamically expand to fit in new words.

      If you listen to human translators doing on-the-fly translation you'll see this is h
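The "fill in as it hears them" idea above can be sketched as an output buffer keyed by target position, so the translated sentence grows out of order. The example sentence, the position numbers, and the word-for-word gloss are all invented for illustration; real systems estimate the reordering statistically.

```python
# Sketch of incremental reordering: verb-final input (German-style),
# verb-medial output (English-style). Each incoming (target_pos, word)
# pair is slotted into the buffer as it arrives, so the sentence can
# grow out of order and gaps close later.

def incremental_translate(stream):
    buffer = {}          # target position -> translated word
    snapshots = []
    for target_pos, word in stream:
        buffer[target_pos] = word
        # Render whatever we have so far, with gaps shown as "..."
        upto = max(buffer)
        snapshots.append(" ".join(buffer.get(i, "...") for i in range(upto + 1)))
    return snapshots

# "Ich habe den Brief geschrieben" -> "I have written the letter":
# "geschrieben" (written) arrives last in the source but sits at slot 2.
stream = [(0, "I"), (1, "have"), (3, "the"), (4, "letter"), (2, "written")]
for s in incremental_translate(stream):
    print(s)
# Final line: "I have written the letter"
```

The intermediate snapshots ("I have ... the letter") are exactly the partial, gap-filled state a human simultaneous interpreter works around by restructuring or lagging behind.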

    • Re:Coherency? (Score:3, Interesting)

      by dancallaghan (890674)

      but I personally don't think there could ever be real time translation for the following reason. [German]

      You are going to have that problem whether it's a machine doing the translating or a human. As I understand it, interpreters of German get around this by some quick-thinking restructuring of the translated sentence, or they simply lag a half-sentence or so behind.

      The real problem for machine translation is, and always has been, determining the sense of a word from context (indeed I recall a recent S

    • by Ogemaniac (841129) on Wednesday January 25, 2006 @06:44AM (#14556030)
      It is as close to English as any other language. In general, European languages have the same basics as English (such as "the") and are fairly easy to learn and translate. Right now I live in Japan, where the language and its underlying way of thinking basically run in the reverse direction of English. To translate, you are essentially running the whole thing backwards. Worse yet, the fundamental parts of the language are quite different. For example, Japanese does not have articles or prepositions, though it has post-positions that roughly correspond. However, there are fewer of them, so they have "lots of meanings" when translated into English. Translation can be a "#$#, even for a human who understands both languages very well (which is why anime comes off so corny sometimes). There are countless times where there is just no simple way to express a thought in one language that is trivial in the other.
      • Although from the same linguistic family (but English also owes a lot to French and Latin) there are some important grammatical differences. The issue with interpreting German is that the verb (and any negation) may come at the end of the sentence. German can have some very long sentences.

        For a human, the issue is that you can't interpret phrase by phrase, so a human interpreter has quite a lot to do. The interesting thing is that experienced interpreters do this unconsciously.

        I have been an admirin

        • But not to the extent of Japanese. I lived in Austria for a summer, and after just three months, with no prior study, I started "getting" it sometimes. On the other hand, with 2.5 years of university study and ten months of living in Japan, I often have a hard time following the logic of a long sentence - even when it's written and I know all of the words.

          Generally, it is estimated that it takes an English speaker about twice as long to learn a language from the Asian or Arabic groups as it does a European
        • For some entertaining examples, see Mark Twain's "The Awful German Language".
      • I would not say that German is easy.
        Anyway, in Japanese, you forgot the fact that the verb is not even always present in the sentence (it's just guessed from context), and that sometimes, with the exact same sentence, subject and object are switched depending on the context too.
        This requires some training to understand; I still haven't mastered it well, and seeing lots of fansubs shows me that I'm not the only one who hasn't (and I'm not the worst).
        I guess a machine would have a really
    • I don't know how much delay is implied in the phrase "on the fly", but I personally don't think there could ever be real time translation for the following reason...

      Still, the only thing as fast for real-time translation is a human translator, and even then it more or less depends on the skill of the person doing the translating.
  • first? (Score:5, Funny)

    by Anonymous Coward on Wednesday January 25, 2006 @05:39AM (#14555811)
    however the researchers stated "We still can't figure out what Bob Dylan is saying"
  • Nuances (Score:4, Funny)

    by AnonymousYellowBelly (913452) on Wednesday January 25, 2006 @05:43AM (#14555834)
    GB on TV: "We have prevailed"
    Subtitle: "All your base are belongs to us"
  • by Elixon (832904)
    I cannot wait until I buy the first eBabelfish gadget that I can put in my ear so I can understand the spoken language of my Russian colleagues... ;-) :-) I hope that somebody will not consider it "important technology for national security" and restrict it by any means...

    (I'm sure that this eBabelfish is already installed - not in my ear - but at the telecommunication centers...)
  • by pubjames (468013) on Wednesday January 25, 2006 @05:52AM (#14555857)
    I'm afraid this type of technology will be used as an excuse for people not to learn foreign languages, which is a shame.

    It's not until you learn another foreign language that you realise how complex languages are, and how subtle. Learning another language can literally change the way you think about things.

    This type of technology will make people think they completely understand a foreign language, but they won't. Their understanding will be crude, without the subtleties and cultural understanding.

    I can speak English and Spanish fluently, and if I watch an English film with Spanish subtitles I'm always thinking - damn, they missed a good joke there, they got that wrong, etc. (Equally so with a Spanish film with English subtitles). And film subtitles are done by professional translators. God only knows what a terrible job a computer would make of film translation.
    • "It's not until you learn another foreign language that you realise how complex languages are, and how subtle."

      And how weird sometimes. English, for example, loves to use the word "up" in all
      sorts of unsuitable places:

      give up
      shut up
      fed up
      wash up
      fuck up
      laid up
      muck up
      turn up
      free up
      look up
      make up
      put up
      screw up
      hang up
      wrap up
      hold up
      grow up

      Wtf?

      And how come we say "didn't he..." but in longhand it's "did he not..."? Shouldn't
      it be "did not he"? Why does the "not" shift to the other side of the pronoun?
      But then all la
    • I have a friend who works in Japan and he tells me the same. He often goes to watch English films that are subtitled in Japanese and tells me that they completely mistranslate most of the jokes and miss subtle nuances of speech. One example he gave was a scene from 'The Full Monty' (I'm doing this from distant memory so it might not be quite right - in fact, a bad translation :-)

      One of the characters is shouting up to someone in their bedroom window. They don't respond to the shouting and the character says "H
        • Another example of this I saw in a French film recently. A character was overhearing a conversation about a ship being under quarantine. He said "Is it the captain's birthday?" It makes no sense at all in English, but in French it is a play on words and a (feeble) joke. Impossible to translate.

          • Do you remember the joke? I speak French and I can't figure out what it originally was.
            • The (stupid) character assumed that the captain was having a fortieth birthday party - forty being "quarante" in French, so "quarantaine" sounds a bit like a word for a fortieth birthday party. I said it was feeble. But it is an example of a joke that's impossible to translate.
              • That's a stretch, since the right word would be "quarantième" and not "quarantaine". Besides, it's hard to make a sentence that doesn't make it obvious that it's the ship and not the captain that's the subject.

                An example of English/French I saw in a movie:

              - Yeah but...
              - What about my butt ?!

              I don't remember at all how they translated that :)
      • I suspect films are probably translated in one pass, and there is no time to understand the context of each sentence spoken, so it's left to literal translation only

        I think it is more to do with the fact that they have to write the subtitles so that they can be read at the speed of the speech. And so they cannot go into subtleties. In fact often when there is fast dialogue they will miss whole phrases out.
      • and it is usually extremely difficult to translate jokes. Senses of humor are quite different as well. I think this is part of the charm of anime, actually - we are laughing at things that aren't always intended to be funny, while missing half of the jokes that are supposed to be there.
    • I'm afraid this type of technology will be used as an excuse for people not to learn foreign languages, which is a shame.

      I'm not quite sure what you mean here: that people won't bother because of this technology?

      I can't see anyone not wanting to bother learning a language because of this technology. Not unless it was a babelfish/universal translator type technology - i.e. basically invisible. In which case, what's the issue? ;-)

      What are you going to do:
      a) Walk around with a little device which translates with 60-80% accura
        I'm not quite sure what you mean here: that people won't bother because of this technology?

        Perhaps you are not like most people... I often hear monolingual English speakers say there is no point in learning another language because everyone learns English these days. This just gives them another excuse.
    • Learning a foreign language is a net good and the only way to really understand another culture is to experience it. That said, there are a large number of languages and an even larger number of cultures. Do you intend to learn/experience them all?

      Can you see no good in a rough translation for some purposes?

      Calculators have largely eliminated the need (and in some cases the ability) for people to do basic math. Therefore we should eliminate calculators before these people start believing that they comple
      • Can you see no good in a rough translation for some purposes?

        Of course.

        But from the description I think this is being developed for military or intelligence work. In those fields, mistranslations can cause death. And unfortunately I think the current administration is unsophisticated enough to think that machine translation is better than (more expensive) human translation.
        • Ya, I got ya'.

          I almost added "I just hope GWB doesn't decide to fire all his intel linguists based on this post" but it seemed kind of like bashing the Prez and I would never do that...

          Cheers
      • Time to check out that Asimov story about a society where mechanical computation was so pervasive that people no longer learned arithmetic: "The Feeling of Power".
    • There's a really simple reason why film subtitles omit jokes and get things wrong. It is almost never possible to directly translate from one language to another, so subtitles inevitably have to be an approximation of the original speech in order to match the pacing of the original film. They also have to not be too wordy, since the viewer needs to watch the film as well as read the subtitles.

      Language is about more than just words, it's about phrases too. A speaker's choice of words and phrases gives
      • Read the English translation of Lem's _Cyberiad_ before you tell us how impossible it is to translate humor. I'll buy the time-to-read argument, though.
    • I don't necessarily agree. Like most tech it's a tool - the task is up to the user. I find that fansubbed anime helps my Japanese. I'm picking out words and grammar from the flow of speech and simultaneously matching them against the translation. Often I can actually pick out where the translation was fudged or the subtleties were left out. Without the feedback from the subtitles, I wouldn't have that yet.

      On the other hand, there are cases where I just want to read something quickly, and putting the page
  • Ghee... (Score:4, Insightful)

    by Anonymous Coward on Wednesday January 25, 2006 @05:54AM (#14555864)
    Hmm, instantaneous translation from Arabic; wonder who "cough cough echelon cough!" they are marketing this to...?
    • > cough cough echelon cough

      Funny you should mention that. I recall a US government department set up just after 9/11, one of whose projects was a handheld device that could translate from English to Arabic on the fly.

      Only reason I recall this is because the logo of said department was the all-seeing eye shining some kind of beam over the rest of the world. Perhaps someone with a better TFH than me has a link. :)

  • by Viol8 (599362) on Wednesday January 25, 2006 @05:56AM (#14555872)
    ...they should send it to Glasgow on a Saturday night just after the pubs
    have closed.

    "Ye loooiii ahhh me jimmeh??! *belch* C'mere ya wee electrahnich bastid, I'll
    shoo ye!"
  • by YearOfTheDragon (527417) on Wednesday January 25, 2006 @06:00AM (#14555893) Homepage
    Maybe IBM is going to make speech recognition real, but Bill Gates said this was possible a long time ago [mpt.net.nz]. Simply genius.
  • On-The-Fly (Score:5, Informative)

    by Trurl's Machine (651488) on Wednesday January 25, 2006 @06:02AM (#14555901) Journal
    They really do it on the fly? You mean, [on the surface of] [a particular] [insect of a Musca domestica species]?

    I have read a lot of auto-translated documents and they are always good for a laugh in terms of "crapslation cabaret". So far, there is no technology that could auto-translate a text document successfully. The "80% success" is a myth - they just count how many words were found in the vocabulary, not how many of them were put into a good context. A "fly" translated as an insect would be counted as a success!

    Even if you are not a bot but a human being with some knowledge of the other language and culture, it's very easy to involuntarily offend someone or just to make a ridiculous faux-pas. Polish and Czech languages, for example, are very much alike and use common roots for many words, but because of the way both languages evolved, some neutral terms on one side of the border have become offensive on the other side. Czechs evolved a euphemism for sexual intercourse based on the verb "to look for". Poles still use this word when they look for something, which leads to constant crapslation cabaret gags when a Polish tourist appears in a Czech town "looking for a parking lot". Now, auto-translate this...
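The point above about how an "80% success" figure can be counted is easy to demonstrate: a vocabulary-coverage metric scores every recognised word as a hit, while a reference-based comparison penalises a wrong sense. The sentences, vocabulary, and numbers below are invented purely to show the gap between the two scores.

```python
# Two ways to score the same bad translation of "on the fly".
# Coverage asks "was every output word found in the dictionary?";
# a reference comparison asks "was every output word actually right?".
# All data here is made up for illustration.

def coverage_score(output, vocabulary):
    hits = sum(1 for w in output if w in vocabulary)
    return hits / len(output)

def reference_score(output, reference):
    hits = sum(1 for o, r in zip(output, reference) if o == r)
    return hits / len(reference)

vocab = {"done", "on", "the", "insect", "fly", "instantly"}
output = ["done", "on", "the", "insect"]        # "fly" mistranslated as the bug
reference = ["done", "on", "the", "fly"]

print(coverage_score(output, vocab))       # 1.0  - every word is "in vocabulary"
print(reference_score(output, reference))  # 0.75 - but the sense was wrong
```

The same output scores 100% by one metric and 75% by the other, which is exactly the "fly counted as a success" complaint.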
    • Portuguese is spoken in both Portugal and Brasil.

      Still, for example, the slang word used in Portugal for "traffic jam" (bicha) is the slang word in Brasil for "gay".

      Talking about the congestion on the streets of Lisbon takes on a whole new meaning in Brasil.
    • Machine translation is ropey, admittedly, but one of the best for Polish-English translation is
      English Translator 3 (www.techland.pl).
      Earlier versions didn't know the difference between a shower of rain and taking a shower, for instance, although you still need to take care with Polish and polish: the capital P makes a difference.
      It does provide alternative translations, so you can do a basic translation and then apply a more appropriate one.
      It's getting old now, so perhaps there has been an update.
  • by Mostly a lurker (634878) on Wednesday January 25, 2006 @06:13AM (#14555934)
    IBM has been one of the pioneers in speech recognition for a long time. However, indications are that Google (in the lab) [slashdot.org] has been making tremendous progress in translation. While the two companies are bound to be fierce competitors, it would seem they would both have much to gain from cooperation in the area of language recognition and translation.
  • by thbb (200684) on Wednesday January 25, 2006 @06:16AM (#14555949) Homepage
    As has been the case for the past thirty years, descriptions of the system's prowess are still written in the conditional form: "...IBM technology can be used to control computers and devices..." rather than the active form: "is being used"...

    Ben Shneiderman is the person who, in my opinion, best articulates the limits of speech recognition [umd.edu].

    One of my favorite phrases to explain this issue is: "You don't want to speak to a computer, because you can't speak and think at the same time". More precisely, speech production makes use of some modules in our brain which are required for planning too. Hence, you can't plan what to do next as well while you speak, which is a big hurdle in the type of intellectual activity one carries out with a computer.
  • Awful default TTS (Score:4, Insightful)

    by Council (514577) <rmunroe AT gmail DOT com> on Wednesday January 25, 2006 @06:19AM (#14555957) Homepage
    Speech-to-text is cool, but for 30 years they've been predicting it's the next new thing in interfaces, and it's remained a niche thing as it gets better and better. Maybe it'll hit the point where it's flawless and suddenly find new markets, but we'll see.

    What really bothers me is the state of Windows text-to-speech. The TTS that ships with the most popular operating system on Earth is easily trumped in understandability by a small third-party program I downloaded literally TWELVE YEARS AGO. I really wonder if M$ made some pact to give out crappy TTS so as not to stifle sales of some business partner's application.

    This seems pretty ridiculous, but I'm at a loss as to why their text-to-speech programs are of 12-year-old quality.

    I'm glad people are doing good speech research (I know I've seen a demo of good IBM TTS somewhere), but I hope it finds its way into Windows someday.
    • Re:Awful default TTS (Score:2, Informative)

      by wfWebber (715881)
      Then again, if they supplied a version that produced awesome quality voices, they'd be accused of trying to kill their TTS competition.

      That said, in Microsoft Windows Vista (ETA 2019), the default TTS engine will be replaced by a new one sporting Anna [wikipedia.org]. I've heard her in the preview and I have to say, it's one hell of an improvement.
    • Probably BECAUSE speech is a niche market, MS don't want to spend the
      money on making it any better. So long as it sort-of works, the marketing
      droids have something apparently bleeding-edge to waffle on about in the sales
      pitch, knowing full well very few people will use it and discover how crap it
      is, and the ones who do are such a small percentage anyway that they won't care.
  • Seriously, have you heard how some people "talk" these days?
  • American or English? (Score:3, Interesting)

    by squoozer (730327) on Wednesday January 25, 2006 @06:30AM (#14555989)

    I realize that Americans and the British (the English at least ;o)) speak essentially the same language, but I have yet to find any speech recognition software that can get more than roughly 85% of what I say correct. I have a fairly soft, neutral English accent with pretty good enunciation, so I would have expected to be getting a recognition rate in the high 90%s. I'm wondering if, as most of this software is developed in the US, it is tuned specifically to pick up English with a US accent? I realize that you train the software for your voice, but AIUI all you are doing is tuning a basic speech model. Has anyone else had this problem or is it just me?

    • I'm sorry, what?!?!?

      I cannot understand a word you're saying. What's with that accent?
    • Existing speech recognition engines rely on statistical approaches, just like this "miracle" product does, to disambiguate sounds and words, and yes, about 80% accuracy sounds right. Of course this is too low when competing against a keyboard; even though speech recognition could be a lot faster, by the time you've corrected all the mistakes it works out slower - hence the reason it's only used in limited applications.

      I have virtually no accent at all, except for very mild British overtones, yet speech recogniti

      • I have virtually no accent at all, except for very mild British overtones...

        That claim makes no sense whatsoever. You have a regional accent, it just happens to come close to the one you hear around you most commonly. I'm guessing it's a midwest accent, aka "General American", aka the US TV network announcer accent.
  • Oh oh oh. (Score:3, Funny)

    by Anonymous Coward on Wednesday January 25, 2006 @06:33AM (#14556003)
    I think it was about 1996 or maybe 1997 when I attended an IBM demonstration (for retailers) for its speech recognition software. Anyway, the lady who was narrating the text and. talking. like. a. robot. to. do. it. was half-way through when, for no apparent reason, the word uterus appeared in the text.

    So I'm sitting here thinking of how funny it was to the juvenile me back then, and how unfunny it seems right now. Oh well.
  • Not _that_ amazing (Score:2, Interesting)

    by johndoe42 (179131)
    It's been well-known among language researchers that both speech recognition and parsing/comprehension are much easier when applied to a small problem domain. SRI in Palo Alto and CSLI at Stanford, for example, have a number of very impressive speech recognition packages that understand, for example, medicine-related sentences. The dashboard controls just sound like a logical progression of this to faster computers and an even smaller problem domain. They're cool nonetheless.

    The translation, on the other
  • Buyer beware (Score:5, Insightful)

    by 99luftballon (838486) on Wednesday January 25, 2006 @07:04AM (#14556085)
    Speech recognition has long been the land of inflated promises and little return. Anyone remember Lernout & Hauspie and its supposed 15 minutes of learning time?

    Speech recognition is riddled with problems. From a computing side it's enormously processor-intensive and memory-hungry. From a coding side it's very complex code, and the 'learning' process is fraught with problems - surnames, company names and locations are all very poorly recognised.

    So don't rush to buy. Let the labs check it out first.
  • it does what the current generation of speech recognition claims to do. I have yet to find any dictation software that is even remotely accurate, and the voice command software has been pap, at least for me. There is something about my accent that really upsets speech recognition software.

    Nintendogs: I've stopped trying to train my dog; it's never going to happen.
    Apple Speech: Only works if I use a terrible Californian accent. Not worth the embarrassment.
    Nokia: Even with just one voice command, my girlfriends
  • I've actually never used any speech recognition software before today, but today just happens to be the day. I tried out Dragon NaturallySpeaking for the first time, and it is a complete coincidence that this topic should come up. I'm actually dictating this post with Dragon, as we speak. ha ha

    The training process definitely has its ups and downs. The more you work with it, however, the more it becomes attuned to your own speech patterns and, moreover, the quirky words we use every day. I
  • by 0xC2 (896799)
    Although most of the discussion so far has focused on foreign language translation, this technology is about *real-time-audio-to-text* conversion. The feds will be able to monitor, analyze, and record our conversations in real time:

    Monitor all conversation.
    Apply real-time text filters.
    Assign live agents to priority eavesdropping.
    Profit!

    If you could apply a filter to listen in to any call what would it be?
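The "apply real-time text filters" step in the list above could, in its crudest form, be a keyword match over the transcript stream. This is a deliberately simplified sketch; the watch-list terms and the transcript lines are invented for the example.

```python
# Crude sketch of "apply real-time text filters": flag any transcript
# line that matches a watch-list keyword, case-insensitively.
# The watch list and transcript below are invented examples.

import re

WATCHLIST = ["wire transfer", "meeting point", "package"]

def flag(lines, watchlist=WATCHLIST):
    pattern = re.compile("|".join(re.escape(w) for w in watchlist),
                         re.IGNORECASE)
    # Return (line number, line) for every line that trips the filter.
    return [(i, line) for i, line in enumerate(lines) if pattern.search(line)]

transcript = [
    "nothing to report today",
    "the Package arrives at noon",
    "call your mother",
]
print(flag(transcript))
# [(1, 'the Package arrives at noon')]
```

A real system would of course work on noisy recognizer output, which is exactly where the 60-80% accuracy figures discussed elsewhere in the thread start to matter.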
  • We can figure out just what the hell Ozzy Osbourne is saying!
  • Translating Arab TV (Score:3, Informative)

    by Perl-Pusher (555592) on Wednesday January 25, 2006 @08:56AM (#14556481)
    I imagine it is easier to translate repetitive phrases such as "The Zionist oppressor shall be eliminated", "The great Satan America will be destroyed" and "Our martyrs have struck fear in the hearts of the infidels".

    I was in Kuwait and watched Arab TV with English subtitles; it was enlightening to say the least. One long tribute to racism, paid for by the Amir of Qatar. Only on Arab TV will you see such trash as "the Jews are descended from pigs".

  • One of the projects perpetually monitors Arabic television stations, dynamically transcribing and translating any words spoken into English subtitles.

    10 PRINT "DEATH TO AMERICA";
    20 GOTO 10

    RUN
  • So I think there should be a program to resynthesize the "learned" words into the exact average of all the given ways of saying them. I'd love to hear the results; that would be fascinating.

  • ViaVoice Embedded, the product that they're releasing, works on limited-domain problems: for example, tasks related to control of your car's peripherals. When the vocabulary and grammars are constrained, it's possible to achieve very decent accuracy.

    Dictation, however, is a completely different problem. There are far fewer constraints on what can be said, and the system makes errors as it picks through the possible choices. As a result, most dictation software requires training: the system will use your voic
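The limited-domain point above can be illustrated: when the legal utterances form a small fixed set, recognition reduces to snapping a noisy transcription onto the closest allowed phrase, which is far more forgiving than open dictation. The command set and the garbled input are invented; this uses simple string similarity as a stand-in for acoustic scoring.

```python
# Why constrained grammars help: with only a handful of legal commands,
# even a badly garbled transcription usually lands nearest the right one.
# The commands and the noisy input are invented examples, and string
# similarity stands in for a real acoustic/language-model score.

from difflib import SequenceMatcher

COMMANDS = [
    "turn on headlights",
    "turn off headlights",
    "raise temperature",
    "lower temperature",
]

def best_command(heard, commands=COMMANDS):
    # Pick the allowed phrase most similar to what was (mis)heard.
    return max(commands,
               key=lambda c: SequenceMatcher(None, heard, c).ratio())

# A garbled transcription still snaps to the nearest legal command.
print(best_command("turn of headlice"))   # "turn off headlights"
```

With open dictation there is no short list to snap to, which is why the same error rate that is tolerable for car controls is crippling for free text.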
  • by roman_mir (125474) on Wednesday January 25, 2006 @09:57AM (#14556900) Homepage Journal
    When and if it can translate poems [slashdot.org] from language to language, while keeping the style, the nuances, the rhythm, the cultural references, the general idea and the details, then we will know - it is done. Until then, don't hold your breath.

  • What a boon this will be to those anime fansub groups who can't find decent translators, or at least translators who aren't overworked.

  • Ah yes, super-duper speech recognition is right around the corner!

    I've been hearing this every 6 months for about the last, oh, thirty years.

    Given that the state of the art in something much simpler, like automatic language translation, is pitifully inadequate, how likely is it IBM has conquered speech recognition AND translation?

    Har har har.

  • S-to-T in hospitals (Score:2, Interesting)

    by stardancer (665878)
    I know that one hospital in Norway has been experimenting with/testing speech-to-text software for a while, and reports say it's been very successful! (This supports what was said about speech recognition within a tight context in an earlier comment.) I believe the plan is to, at some point, eliminate the need for secretaries transcribing what the doctors dictate, so that ideally the doctors can just speak into a mic and the text automagically appears in the patient's (electronic/digital) journal!

    this of

  • by bdwoolman (561635) on Wednesday January 25, 2006 @10:47AM (#14557367) Homepage
    Here we go:

    I can wreck a nice beach. I can recognize speech.

    Well, Dragon Systems 8 passed the beach test on the first try. Knowing the program, however, I did use pretty clear diction.

    I use Dragon Systems and find it absolutely great. There are a few persistent errors. For example, it frequently fails to get "there" and "their" right on the first try. But the fly-down menu system enables me to quickly correct the problem on the run. Certainly I pick it up on an edit. If IBM has something better than this -- and it sounds like they do -- then it must be pretty darn good. Of course, you have to insert the punctuation verbally. But that comes with a little practice -- provided that you know what to do in the first place.

    It does take a little bit of investment in time. But not nearly as much as learning to type at seventy words a minute, which I can now do in dictation. I have added very little by way of customized commands etc. The program has done a lot of learning on its own.

    Let's try once again: I can't recognize beach. I can recognize speech. Oops. Okay, it failed that time. Let's try one more time: I can wreck a nice beach. I can recognize speech. Well, the phrases have to be enunciated pretty clearly or the program has trouble.

    Which which blew the blue candle. Failed on the second "which" the b*tch.

    Okay, okay. I'll put the laundry in the dryer. No I am not just screwing around on Slashdot again I'm getting some work done down here. Just a minute. Just a MINUTE.

    One trouble. You do have to put the mike to sleep during family discussions.
