Scientists Invent Device That Turns Brain Signals Into Speech

    Scientists say they have created a new device that can turn brain signals into electronic speech.

    The invention could one day give people who have lost the ability to speak a better way of communicating than current methods.

    The device was developed by researchers from the University of California, San Francisco. Their results were recently published in a study in the journal Nature.

    Scientists created a "brain machine interface" that is implanted in the brain. The device was built to read and record brain signals that help control the muscles that produce speech. These include the lips, larynx, tongue and jaw.

    The experiment involved a two-step process. First, the researchers used a "decoder" to turn electrical brain signals into representations of human vocal movements. A synthesizer then turned those representations into spoken sentences.
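
    The sketch below is only an illustration of that two-step idea, not the researchers' actual system: the electrode count, feature sizes and the simple linear "models" are invented placeholders, and a real synthesizer would produce audible speech rather than a toy waveform.

```python
# Illustrative two-stage pipeline: decode brain signals into vocal-tract
# movements, then synthesize speech from those movements. Placeholder only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 256 electrode channels, 32 articulatory features,
# 80 acoustic features per time step. All of these are assumptions.
N_CHANNELS, N_ARTIC, N_ACOUSTIC = 256, 32, 80

# Stand-in linear "models"; the real system is trained on recorded data.
decoder_weights = rng.normal(size=(N_CHANNELS, N_ARTIC)) * 0.01
synth_weights = rng.normal(size=(N_ARTIC, N_ACOUSTIC)) * 0.01


def decode_movements(brain_signals):
    """Stage 1 ("decoder"): brain signals (time x channels) -> vocal-tract movements."""
    return brain_signals @ decoder_weights


def synthesize_speech(movements):
    """Stage 2 ("synthesizer"): vocal-tract movements -> a toy audio waveform."""
    acoustic = movements @ synth_weights
    # Collapse the acoustic features to one value per time step; a real
    # vocoder would reconstruct audible speech from the full representation.
    return acoustic.mean(axis=1)


# Example: one second of fake neural data sampled 200 times per second.
brain_signals = rng.normal(size=(200, N_CHANNELS))
waveform = synthesize_speech(decode_movements(brain_signals))
print(waveform.shape)  # (200,)
```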

    Other brain-computer interfaces already exist to help people who cannot speak on their own. Often these systems are trained to follow eye or facial movements of people who have learned to spell out their thoughts letter-by-letter.

    But researchers say this method can produce many errors and is very slow, permitting at most about 10 spoken words per minute. This compares to between 100 and 150 words per minute used in natural speech.

    Edward Chang is a professor of neurological surgery and a member of the university's Weill Institute for Neuroscience. He was a lead researcher on the project. In a statement, he said the new two-step method presents a "proof of principle" with great possibilities for "real-time communication" in the future.

    "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," Chang said.

    The study involved five volunteer patients who were being treated for epilepsy. The individuals had the ability to speak and already had electrodes implanted in their brains.

    The volunteers were asked to read several hundred sentences aloud while the researchers recorded their brain activity.

    The researchers used audio recordings of the voice readings to reproduce the vocal muscle movements needed to produce human speech. This process permitted the scientists to create a realistic "virtual voice" for each individual, controlled by their brain activity.
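
    As a rough illustration of that training setup, the sketch below fits a simple least-squares map from synthetic "brain activity" to synthetic "inferred movements"; the data, shapes and the linear model are all assumptions standing in for whatever models the team actually used.

```python
# Illustrative training sketch: learn a map from recorded brain activity to
# vocal-tract movements inferred from the matching audio. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired training data for one volunteer: brain activity recorded
# while reading aloud (time x channels) and movements inferred from the audio
# recordings of those readings (time x features).
brain_activity = rng.normal(size=(5000, 256))
inferred_movements = rng.normal(size=(5000, 32))

# Fit a linear map from brain activity to movements (a stand-in "decoder").
decoder, *_ = np.linalg.lstsq(brain_activity, inferred_movements, rcond=None)

# At use time, new brain activity alone drives the "virtual voice".
new_activity = rng.normal(size=(200, 256))
predicted_movements = new_activity @ decoder
print(predicted_movements.shape)  # (200, 32)
```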

    Future studies will test the technology on people who are unable to speak.

    Josh Chartier is a speech scientist and doctoral student at the University of California, San Francisco. He said the research team was "shocked" when it first heard the synthesized speech results.

    The study reports the spoken sentences were understandable to hundreds of human listeners asked to write out what they heard. The listeners were able to write out 43 percent of sentences with perfect accuracy.
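
    The sketch below shows one simple way such a "perfect accuracy" rate could be tallied from listener transcriptions; the example sentences and the normalization rule are invented for illustration and are not from the study.

```python
# Toy tally of sentences a listener transcribed with perfect accuracy.
def normalize(text):
    """Lowercase, drop punctuation, and split into words before comparing."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).split()


# Invented example pairs: what was synthesized vs. what a listener wrote down.
spoken_sentences = [
    "the ship sailed at dawn",
    "she wore warm fleecy woolen overalls",
]
listener_transcripts = [
    "The ship sailed at dawn.",
    "She wore warm fleecy wooden overalls.",
]

exact = sum(
    normalize(spoken) == normalize(heard)
    for spoken, heard in zip(spoken_sentences, listener_transcripts)
)
rate = exact / len(spoken_sentences)
print(f"{exact} of {len(spoken_sentences)} sentences perfect ({rate:.0%})")
```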

    The researchers noted that - as is the case with natural speech - listeners had the highest success rate identifying shorter sentences. The team also reported more success synthesizing slower speech sounds like "sh," and less success with harder sounds like "b" or "p."

    Chartier admitted that much more research of the system will be needed to reach the goal of perfectly reproducing spoken language. But he added: "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

    I'm Bryan Lynn.