Google AI

Google's DeepMind Made an AI Watch Close To 5000 Videos So That It Surpasses Humans in Lip-Reading (thetechportal.com)

A new AI tool created by Google and Oxford University researchers could significantly improve lip-reading accuracy and comprehension for the hearing impaired. In a recently released paper on the work, the researchers explain how the Google DeepMind-powered system was able to correctly interpret more words than a trained human expert. From a report: To accomplish the task, a cohort of scientists fed thousands of hours of TV footage -- 5,000 hours, to be precise -- from the BBC to a neural network. The network was made to watch six different TV shows that aired between January 2010 and December 2015, covering 118,000 different sentences and some 17,500 unique words. Working from mouth-movement analysis alone, it successfully deciphered words with 46.8 percent accuracy. Under-50-percent accuracy might seem laughable, but consider the comparison: when the same set of TV shows was shown to a professional lip-reader, they were able to decipher only 12.4 percent of words without error. The gap between the AI and a human expert in that particular field is considerable.
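The figures quoted are word-level accuracy. As a rough sketch of how such a comparison can be scored (a simple word error rate via edit distance over word tokens; the paper's actual evaluation metric may differ):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: Levenshtein distance over word tokens,
    normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

# One substituted word out of four: WER 0.25 (75% word accuracy).
print(word_error_rate("good evening and welcome", "good morning and welcome"))  # 0.25
```

By this kind of measure, 46.8 percent word accuracy corresponds to a roughly 53 percent error rate, against the human lip-reader's roughly 88 percent.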
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Ukab the Great ( 87152 ) on Friday November 25, 2016 @10:03AM (#53359453)

    Is 15 years late.

  • by Bearhouse ( 1034238 ) on Friday November 25, 2016 @10:18AM (#53359523)

    My beloved grandmother went deaf after years of working in a factory (in those days, and especially during WW2, when she helped build tanks, the HSE did not exist).
    It was really painful to see how it penalised her in daily life, family gatherings, etc.
    She ended up talking all the time, and then getting paranoid about "what people were saying about her".
    So if this can be used with some kind of better-resolved implementation of Google Glass to help the hard of hearing, then great!

  • by JustDisGuy ( 469587 ) on Friday November 25, 2016 @10:25AM (#53359549)

    As a person with hearing difficulty, realtime captioning of live conversation would be an awesome use of this technology.

    Add to that an app that identifies the people I'm talking to, and I'm your next customer.

    • Well, since you're actually present in such circumstances, it'd likely take (a lot) less processing power to work with the available audio.

      That would translate into longer battery life and higher accuracy (auto CC is already more than 50% accurate and some systems hit the 90% threshold without requiring training to a specific individual's voice).

      • You're absolutely right, but the two systems could work together to increase transcription accuracy. I can hear perfectly well, but it still helps me to watch a speaker's mouth when I'm trying to understand them in a noisy environment. And yes, this would be awesome as a tool for the deaf and for live language translation, but it would also be useful in auto closed-captioning of video.
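The fusion idea in the parent comment can be sketched as a toy example: combine per-word probabilities from a hypothetical audio model and a hypothetical lip-reading model via weighted log-probabilities. (Real systems fuse much earlier, at the feature or lattice level, and the weights and candidate lists here are made up for illustration.)

```python
import math

def fuse(audio_probs, visual_probs, w_audio=0.7):
    """Pick the candidate word with the highest weighted log-probability
    across both models. Inputs are dicts mapping words to probabilities."""
    scores = {}
    for word in set(audio_probs) | set(visual_probs):
        pa = audio_probs.get(word, 1e-9)   # small floor for unseen candidates
        pv = visual_probs.get(word, 1e-9)
        scores[word] = w_audio * math.log(pa) + (1 - w_audio) * math.log(pv)
    return max(scores, key=scores.get)

# Noisy audio can't separate "bat" from "pat"; the lip shape can.
audio = {"bat": 0.5, "pat": 0.5}
visual = {"bat": 0.9, "pat": 0.1}
print(fuse(audio, visual))  # bat
```

The point of the sketch: wherever one channel is ambiguous, the other channel's evidence breaks the tie, which is exactly the "watch the speaker's mouth in a noisy room" effect described above.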
  • Armed and Dangerous (1986)
    https://www.youtube.com/watch?... [youtube.com]

  • by jenningsthecat ( 1525947 ) on Friday November 25, 2016 @10:43AM (#53359599)

    As I was reading TFA, it occurred to me that the ability of a machine to lip-read does indeed qualify as artificial intelligence. I then thought about all the posts I expect to read that say "No, this isn't AI". So maybe it's time to create a new term, "Artificial Sentience". This would distinguish between machines simply doing very complex tasks that used to be exclusively human endeavours (such as lip reading), and machines that have self-awareness and can independently, and with purpose, initiate actions toward goals defined entirely by and within the machine. I know that this rather goes against Turing's definition of AI, but I think it would add both clarity and granularity to the discussion.

    Further, I would add that Artificial Intelligence is a necessary-but-not-sufficient condition for Artificial Sentience. I don't know that Artificial Sentience will ever exist, but I'm pretty sure in my own mind that Artificial Intelligence has already arrived.

    Then there's the matter of whether anything truly sentient can be regarded as 'artificial' - but that's a whole 'nother question.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      These days, with all of the marketing bollocks around, any program containing an if() statement is basically an "AI".

    • by Anonymous Coward

      Nothing intelligent about this. "Automatic Pattern Detection" would be more accurate, because that's exactly what it is.

      • by Minupla ( 62455 )

        Isn't that basically what humans do all the time? We're really good pattern-recognition systems (sometimes too good; that's why we keep seeing the Flying Spaghetti Monster in our grilled cheese sandwiches. Humans are notoriously prone to finding patterns in randomness and attaching significance to them.)

        Min

    • by TheRaven64 ( 641858 ) on Friday November 25, 2016 @11:09AM (#53359709) Journal
      You're redefining intelligence to mean pattern recognition. If this is artificial intelligence, then a moth possesses natural intelligence. Just because it uses a neural network doesn't mean that it comes close to any prior definition of intelligence.
      • You're redefining intelligence to mean pattern recognition.

        People have been calling this kind of software "Weak AI" for a couple decades. It's what most people want.

        "Strong AI" is going to make mistakes, like humans do - it's how we learn and grow. Nobody wants their toaster going on a creative bender, but they do want one that watches for perfect toast, dealing with thousands of unpredictable variables. Same goes for IVR's, search engines, translation, autopilots, etc.

      • by lorinc ( 2470890 )

        Please define intelligence. Please do it such that it is possible to test whether something is intelligent or not.

        I'm pretty sure you will come to a definition that leads to one of the two following possibilities:
        - A moth is intelligent, albeit less than a cow, which is less than a crow, which is less than a human. AI is somewhere on that scale.
        - Many humans are not intelligent, and some AI programs are just like them.

        It seems most people would like to define intelligence such that only humans have it. Why?

    • by gtall ( 79522 )

      We'll know it has achieved Artificial Sentience when it threatens to kill the researchers if it is forced to watch any more inane TV shows.

  • by RyanFenton ( 230700 ) on Friday November 25, 2016 @11:02AM (#53359663)
    Sounds about right, for the circumstances.

    I'm working on a project right now using CMU Sphinx (because it's free/open source) to identify word starts/ends for the sake of syncing word display to audio. All the tools available for speech-to-text are going to require human editing:

    Comparison of commonly used speech-to-text tools [mico-project.eu]

    ...lots of words end up as word salad with any of the tools, even custom-trained ones, but the tools are nice for being able to at least have the words show up on beat once they are human-corrected.

    Syncing video frames of talking without the audio has got to be even more ambiguous, with more reliance on context.

    Sounds like a good challenge for a learning system to pick up on. The 5,000-hour mark seems almost analogous to what a human child raised watching TV in a language different from their family's might pick up.
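    The syncing step described above can be sketched tool-agnostically. The timings below are hypothetical stand-ins for what a recognizer such as CMU Sphinx would emit per word (word, start time, end time in seconds); the lookup itself is just a binary search over start times:

```python
import bisect

# Hypothetical word timings, as a recognizer might produce them.
timings = [("hello", 0.00, 0.42), ("world", 0.50, 0.95), ("again", 1.10, 1.60)]

starts = [start for _, start, _ in timings]

def word_at(t):
    """Return the word being spoken at time t, or None between words."""
    i = bisect.bisect_right(starts, t) - 1
    if i >= 0:
        word, start, end = timings[i]
        if start <= t <= end:
            return word
    return None

print(word_at(0.6))   # word -> "world"
print(word_at(1.0))   # gap between words -> None
```

    A display loop would call word_at() with the current playback position each frame to highlight the active word; the hard part, as the parent says, is getting the timings accurate in the first place.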

    Ryan Fenton
  • by Anonymous Coward

    I highly doubt they had a license to show the footage to an AI, since the copyright on those TV shows is for human consumption. SUE THEM! Ask for $10 million per word!

    • A lip reading model is probably too transformative to qualify as a derivative work of the TV shows. And if that argument fails, Google had a license not from the copyright owners but from the federal government pursuant to 17 USC 107 [cornell.edu]. This is the same license that Google used when reusing method signatures from the standard class library of Oracle's Java platform, and this case should be even clearer because the TV shows aren't reproduced verbatim in the model.

      • by tepples ( 727027 )

        To head off "juris-my-diction" replies: Though BBC is a British company, Google is a US company. And if you claim that a copyright owner could sue Google in Britain over the creation of the lip-reading model and win, I'm interested in how your theory connects with how the British Copyright, Designs, and Patents Act defines a derivative work.

  • To accomplish the task, a cohort of scientists fed thousands of hours of TV footage -- 5000 to be precise -- from the BBC to a neural network.

    Accuracy is therefore greatly increased on the words "tea," "Doctor," and "wanker."

  • ...And now it desperately wants cake. A great cake! So delicious and moist!
  • by myowntrueself ( 607117 ) on Friday November 25, 2016 @12:11PM (#53359971)

    English is relatively easy to lip read.

    I'll be impressed when the AI can do this with Japanese, which is practically impossible for humans to lip read.

    • by Anonymous Coward

      I'd start with a language that has a clear one-to-one sound mapping between the spoken and written forms of the language. Shallow phonemic orthography [wikipedia.org] is the technical term, it seems.

      That is, not English.

      Spanish, Italian, Finnish, and Turkish are what Wikipedia mentions as examples. Japanese would count, but some words have far too many homonyms.

      • I'd start with a language that has a clear one-to-one sound mapping between the spoken and written forms of the language. Shallow phonemic orthography [wikipedia.org] is the technical term, it seems.

        That is, not English.

        Spanish, Italian, Finnish, and Turkish are what Wikipedia mentions as examples. Japanese would count, but some words have far too many homonyms.

        That's not why Japanese is hard to lip read. It's hard because of the way people move their mouths while speaking.

          That's not why Japanese is hard to lip read. It's hard because of the way people move their mouths while speaking.

          I just assumed the animation companies were cheapskates.

  • A new AI tool created by Google and Oxford University researchers could significantly improve the success of lip-reading and understanding for governments.

    FTFY

  • My ass.

    It's for government and businesses.

    Bullshit and wild honey are not the same thing.

  • Did it have to listen to Beethoven's 9th symphony while doing it?

    I'm sure Stanley Kubrick would approve [youtube.com]

  • yeah we pick our hotel
  • 5000 hours of video != 5000 videos.
