


Bilingual News: After Creating AlphaGo, Google DeepMind Wants to Make Siri Sound Like a Real Person

Compiled and released by Qingdao Sinosenior Translation Consulting Co., Ltd. (www.sinosenior.com)  2016-09-20

  

Qingdao Sinosenior Translation Co. (www.sinosenior.com) learned on September 20, 2016: Google’s DeepMind has revealed a new speech synthesis generator that will be used to help computer voices, like Siri and Cortana, sound more human.

谷歌旗下的人工智能公司DeepMind 研制出了一种新型语音合成系统, 该技术可以让Siri 和 Cortana等计算机合成语音听起来更接近真实人声。

Named WaveNet, the model works with raw audio waveforms to make our robotic assistants sound, err, less robotic.

这项名为WaveNet的技术通过研究原始音频波形使机器人助手的声音听起来不那么像机器人。

WaveNet doesn’t control what the computer is saying; instead, it uses AI to make it sound more like a person, adding breathing noises, emotion and different emphasis into sentences.

WaveNet并不会控制计算机的说话内容,它只会应用人工智能技术在句子中添加呼吸声,情感和各种重音,从而使计算机语音听起来更像真人。

Generating speech with computers is called text-to-speech (TTS), and until now it has worked by piecing together short, pre-recorded syllables and sound fragments to form words.

用计算机合成语音的技术叫做“从文本到语音(TTS)”,现存的工作原理是将提前录制好的短音节和声音碎片合成语言。

As the words are taken from a database of speech fragments, it’s very difficult to modify the voice, so adding things like intonation and emphasis is almost impossible. This is why robotic voices often sound monotonous and decidedly different from humans.

由于语言是从语音碎片数据库中提取出来的,声音很难修饰,所以几乎不可能添加声调和重音等因素。这就是为什么机器人语音听起来很生硬,明显和人声不同。
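The concatenative approach described above can be sketched in a few lines. This is an illustrative toy, not a real TTS engine: the fragment database, unit names, and sample values are all hypothetical placeholders.

```python
# Sketch of concatenative text-to-speech: words are assembled by looking
# up pre-recorded fragments and splicing them end to end. The fragment
# data below is fake placeholder audio, not real recordings.

# Hypothetical database: fragment name -> list of audio samples
fragment_db = {
    "hel": [0.1, 0.3, 0.2],
    "lo":  [0.2, 0.0, -0.1],
}

def synthesize(units):
    """Concatenate pre-recorded fragments in order. There is no way to
    reshape intonation or emphasis, which is why the result sounds flat."""
    samples = []
    for unit in units:
        samples.extend(fragment_db[unit])
    return samples

audio = synthesize(["hel", "lo"])  # six samples spliced from two fragments
```

Because each fragment is a fixed recording, the only degree of freedom is which fragments to pick and in what order, which matches the article's point that intonation and emphasis are almost impossible to add.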

WaveNet, however, overcomes this problem by using its neural network models to build an audio signal from the ground up, one sample at a time.

然而WaveNet克服了这个难关,利用神经网络模型从头建立一个音频信号,每次生成一个样本。
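The "one sample at a time" idea is autoregressive generation: each new sample is predicted from everything generated so far. A minimal sketch, with a stand-in toy predictor in place of the real neural network:

```python
# Sketch of autoregressive waveform generation in the WaveNet style:
# every new sample is a function of the samples produced so far.
# predict_next is a hypothetical toy stand-in, not the real model.

def predict_next(history):
    """Toy 'model': a decaying echo of the last sample. A real WaveNet
    runs a deep neural network conditioned on the history instead."""
    if not history:
        return 1.0
    return 0.5 * history[-1]

def generate(num_samples):
    audio = []
    for _ in range(num_samples):
        audio.append(predict_next(audio))  # feed output back as input
    return audio

waveform = generate(4)  # -> [1.0, 0.5, 0.25, 0.125]
```

The key contrast with the concatenative method is that nothing here is pre-recorded; the signal is built sample by sample, so the model is free to shape breathing, emphasis and tone.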

During training, the DeepMind team gave WaveNet real waveforms recorded from human speakers to learn from. Using a type of AI called a neural network, the program then learns from these in much the same way a human brain does.

训练期间,DeepMind团队让WaveNet学习了一些真实记录的人类语音波形。通过一种叫做神经网络的人工智能技术,这个系统可以像人类的大脑一样对这些波形进行学习。
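"Learning from real waveforms" means fitting a model so that it predicts each recorded sample from the ones before it. A deliberately tiny illustration: fit a one-parameter next-sample predictor x[t] ≈ a·x[t-1] to a recorded signal by least squares. The recorded waveform below is fake, and a real WaveNet learns a deep network, not a single coefficient.

```python
# Toy version of training on recorded speech: find the coefficient a
# that best predicts each sample from the previous one (least squares).

def fit_coefficient(waveform):
    """Closed-form least-squares fit of x[t] = a * x[t-1]."""
    num = sum(waveform[t] * waveform[t - 1] for t in range(1, len(waveform)))
    den = sum(waveform[t - 1] ** 2 for t in range(1, len(waveform)))
    return num / den

# A fake "recorded" waveform that decays by a factor of 0.5 each step.
recorded = [1.0, 0.5, 0.25, 0.125]
a = fit_coefficient(recorded)  # recovers 0.5 from the data
```

Once fitted, the same predictor can be run autoregressively to generate new audio in the style of the training recordings, which is the relationship between this paragraph and the previous one.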

The result was that WaveNet learned the characteristics of different voices, could make non-speech sounds such as breathing and mouth movements, and could say the same thing in different voices.

所以WaveNet学习了不同声音的特点,可以发出非语言声音,比如呼吸声和嘴部活动的声音,并且可以用不同的声音说同样的内容。

Despite the exciting advancement, the system still requires a huge amount of processing power, which means it will be a while before the technology appears in the likes of Siri.

虽然这个系统有激动人心的进步,但是它需要很强大的处理能力,这意味着这项技术并不能很快应用到Siri等产品中。

Google’s machine learning unit DeepMind is based in the UK and previously made headlines when its computer beat the Go world champion earlier this year.

Google旗下的机器学习企业DeepMind总部设在英国,之前他们的计算机打败了围棋世界冠军,因而上了头条。

Using deep neural networks to create 16kHz audio seems to be just the tip of the iceberg of what machine learning can be applied to.

机器学习的应用领域非常广泛,利用深度神经网络建立16kHz音频似乎只是其冰山一角。
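The 16 kHz figure also explains the processing-power caveat mentioned above: generating one sample at a time means one model evaluation per sample, so the cost scales directly with the sample rate. A back-of-the-envelope check:

```python
# Rough cost of sample-by-sample generation at the article's 16 kHz:
# every second of audio requires 16,000 sequential model evaluations.

SAMPLE_RATE = 16_000  # samples per second, as stated in the article

def evaluations_needed(seconds):
    """Number of sequential model evaluations for the given duration."""
    return SAMPLE_RATE * seconds

ten_seconds = evaluations_needed(10)  # 160,000 evaluations for 10 s
```

Since each evaluation depends on the previous one, the work cannot simply be parallelised away, which is consistent with the article's note that the technology will take a while to reach assistants like Siri.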
Source: Mirror
