Mozilla Updates Common Voice Dataset With 1,400 Hours of Speech Across 18 Languages (venturebeat.com)
Mozilla wants to make it easier for startups, researchers, and hobbyists to build voice-enabled apps, services, and devices. From a report: Toward that end, today it's releasing the latest version of Common Voice, its open source collection of transcribed voice data, which now comprises over 1,400 hours of voice samples from 42,000 contributors across 18 languages, including English, French, German, Dutch, Hakha-Chin, Esperanto, Farsi, Basque, Spanish, Mandarin Chinese, Welsh, and Kabyle. It's one of the largest multi-language datasets of its kind, Mozilla claims -- substantially larger than the Common Voice corpus it made publicly available eight months ago, which contained 500 hours (400,000 recordings) from 20,000 volunteers in English -- and the corpus will soon grow larger still. The organization says that data collection efforts in 70 languages are actively underway via the Common Voice website and mobile apps.
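Common Voice distributes its clips alongside tab-separated metadata files. As a rough sketch of how such a release could be loaded with the Python standard library (the column names `client_id`, `path`, and `sentence` here are assumptions modeled on recent Common Voice releases, not guaranteed by the article):

```python
import csv
import io

# Sample rows mimicking a Common Voice-style TSV file.
# Column names are assumptions based on recent releases.
SAMPLE_TSV = (
    "client_id\tpath\tsentence\n"
    "abc123\tclip_0001.mp3\tHello world\n"
    "def456\tclip_0002.mp3\tGood morning\n"
)

def load_clips(tsv_text):
    """Parse TSV metadata into a list of (audio path, transcript) pairs."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [(row["path"], row["sentence"]) for row in reader]

clips = load_clips(SAMPLE_TSV)
print(clips)  # → [('clip_0001.mp3', 'Hello world'), ('clip_0002.mp3', 'Good morning')]
```

For a real download you would point the same parser at the `.tsv` file shipped in the language-specific archive.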
Re: (Score:2)
Nefarious uses aside, I'd actually like to see low cost realistic text-to-voice software. I have a novel (link below if anyone's interested) and would love to release an audio book. Making an audio book is crazy expensive, though. It can cost around $2,000 or more. I don't have that kind of money to spend. I made a trial version of my upcoming book using text-to-speech software for my son who likes reading along with audio books. That sounded like a robot reading my book, though. I'd never charge anyone for
Good! Something faster than DeepSpeech needed next (Score:2)
Back in the 1990s, our speech recognizers allowed "real time" recognition on a Pentium-133MHz. Admittedly with probably a smaller vocabulary and a hig
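"Real time" recognition is usually quantified as the real-time factor (RTF), the ratio of processing time to audio duration; a recognizer with RTF below 1.0 keeps up with live speech. A minimal sketch of the computation (the example numbers are illustrative, not benchmarks from the comment):

```python
def real_time_factor(processing_seconds, audio_seconds):
    """RTF = processing time / audio duration.
    RTF < 1.0 means the recognizer keeps up with live audio."""
    return processing_seconds / audio_seconds

# e.g. transcribing a 10-second clip in 4 seconds of compute:
print(real_time_factor(4.0, 10.0))  # → 0.4
```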
Re: (Score:2)
You only need one voice: the voice of the current user. In reality, too many voices will entirely screw up voice recognition, because there is a lot of overlap in how words are pronounced -- words can sound like other words when different people say them. So too many voices will create more problems than they solve. People just have to learn to accept voice training of devices, but vendors don't want that, because then voice commands would be localised rather than broadcast back to home base to be recorded and dat
Re:Good! Something faster than DeepSpeech needed n (Score:5, Interesting)
If you need voice training data (specifically also for speech synthesis), my former company created a dataset that I am now making available on my website. Since the company is closed (I was its CTO/MD), I decided to release it to the public under a BSD 3-clause license.
Here is the link: https://www.caito.de/2019/01/t... [caito.de] (M-AILABS Speech Dataset).
It contains German (237h), Queen's English (45h), US English (102h), Spanish (108h), Italian (127h), Ukrainian (87h), Russian (46h), Polish (53h), and French (190h).
All details about the structure and how to use it are on the website.
Have fun.
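The authoritative layout is on the linked page, but datasets of this kind often ship an LJSpeech-style `metadata.csv` with pipe-separated `id|raw transcript|normalized transcript` lines. As an illustration only (whether M-AILABS uses exactly this layout is an assumption; check the website), such a file could be parsed like this:

```python
# Minimal sketch for parsing LJSpeech-style metadata lines
# (id|raw transcript|normalized transcript). This layout is an
# assumption for illustration, not the documented M-AILABS format.
SAMPLE_METADATA = """\
wav_0001|Dr. Smith arrived at 10 a.m.|Doctor Smith arrived at ten a m
wav_0002|It cost $5.|It cost five dollars
"""

def parse_metadata(text):
    """Return a mapping of utterance id -> normalized transcript."""
    entries = {}
    for line in text.strip().splitlines():
        utt_id, _raw, normalized = line.split("|", 2)
        entries[utt_id] = normalized
    return entries

print(parse_metadata(SAMPLE_METADATA)["wav_0002"])  # → It cost five dollars
```

The normalized column (numbers and abbreviations spelled out) is the one typically fed to a speech synthesis trainer.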
Check carefully (Score:2)
Obligatory Monty Python https://www.youtube.com/watch?... [youtube.com]