
Mozilla Updates Common Voice Dataset With 1,400 Hours of Speech Across 18 Languages (venturebeat.com)

Mozilla wants to make it easier for startups, researchers, and hobbyists to build voice-enabled apps, services, and devices. From a report: Toward that end, it's today releasing the latest version of Common Voice, its open source collection of transcribed voice data, which now comprises over 1,400 hours of voice samples from 42,000 contributors across 18 languages, including English, French, German, Dutch, Hakha-Chin, Esperanto, Farsi, Basque, Spanish, Mandarin Chinese, Welsh, and Kabyle. It's one of the largest multi-language datasets of its kind, Mozilla claims -- substantially larger than the Common Voice corpus it made publicly available eight months ago, which contained 500 hours (400,000 recordings) from 20,000 volunteers in English -- and the corpus will soon grow larger still. The organization says that data collection efforts in 70 languages are actively underway via the Common Voice website and mobile apps.
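For a sense of how a corpus like this is typically consumed: Common Voice releases ship as archives of audio clips accompanied by tab-separated metadata files. Below is a minimal sketch of tallying validated clips per contributor from such a file -- the file name (`validated.tsv`) and the column names follow the published releases, but treat the exact layout as an assumption and check it against the release you download:

```python
import csv
import io
from collections import Counter

# A tiny stand-in for a Common Voice "validated.tsv" file. Real releases
# use tab-separated columns that include (among others) client_id, path,
# and sentence; the rows here are invented for illustration.
SAMPLE_TSV = (
    "client_id\tpath\tsentence\n"
    "a1\tclip_0001.mp3\tHello world\n"
    "a1\tclip_0002.mp3\tGood morning\n"
    "b2\tclip_0003.mp3\tBonjour\n"
)

def count_clips_per_contributor(tsv_text):
    """Count how many validated clips each contributor recorded."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return Counter(row["client_id"] for row in reader)

counts = count_clips_per_contributor(SAMPLE_TSV)
# counts["a1"] is 2 and counts["b2"] is 1 for the sample above
print(counts)
```

For a real release you would open the TSV with `open(path, encoding="utf-8", newline="")` instead of the in-memory sample; the counting logic is unchanged.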
  • Having a corpus of transcribed voice recordings available is indeed the most important prerequisite for implementing decent speech recognizers. As for Mozilla's own attempt at this, "DeepSpeech", I have so far heard disturbing things, like it being painfully slower than real time even when utilizing a mid-range GPU. (And we're talking about recognition, not training!)

    Back in the 1990s, our speech recognizers allowed real-time recognition on a Pentium 133 MHz. Admittedly with probably a smaller vocabulary and a hig
    • by rtb61 ( 674572 )

      You only need one voice: the voice of the current user. In reality, too many voices will entirely screw up voice recognition, because there is a lot of overlap in pronunciation -- words can sound like other words when other people pronounce them. So too many voices will create more problems than they solve. They just have to learn to accept per-device voice training, but they don't want that, because then voice commands would be localised rather than broadcast back to home base to be recorded and dat

    • by ImdatS ( 958642 ) on Friday March 01, 2019 @04:12AM (#58198316) Homepage

      If you need voice training data (specifically also for Speech Synthesis) my former company created a dataset that I am now making available on my website. Since the company is closed (and I used to be the CTO/MD), I had decided to release it into the public with a BSD 3-clause license.

      Here is the link: https://www.caito.de/2019/01/t... [caito.de] (M-AILABS Speech Dataset).

      It contains German (237 h), Queen's English (45 h), US English (102 h), Spanish (108 h), Italian (127 h), Ukrainian (87 h), Russian (46 h), Polish (53 h) and French (190 h).

      All details about the structure and how to use it are on the website.

      Have fun.

  • Obligatory Monty Python https://www.youtube.com/watch?... [youtube.com]

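As a quick sanity check on the M-AILABS comment above, the per-language durations it lists can be totted up in a couple of lines (figures copied verbatim from the comment):

```python
# Hours per language as listed in the M-AILABS comment above.
mailabs_hours = {
    "German": 237, "Queen's English": 45, "US English": 102,
    "Spanish": 108, "Italian": 127, "Ukrainian": 87,
    "Russian": 46, "Polish": 53, "French": 190,
}

total = sum(mailabs_hours.values())
print(total)  # 995
```

That puts the M-AILABS corpus at roughly 995 hours across nine languages -- in the same ballpark as the 1,400-hour Common Voice release the story describes.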
