Scientists Create Device to Turn Brain Signals into Speech


05 May, 2019

Scientists say they have created a new device that can turn brain signals into electronic speech.

The invention could one day give people who have lost the ability to speak a better way of communicating than current methods.

The device was developed by researchers from the University of California, San Francisco. Their results were recently published in a study in the journal Nature.

Scientists created a "brain machine interface" that is implanted in the brain. The device was built to read and record brain signals that help control the muscles that produce speech. These include the lips, larynx, tongue and jaw.

The brain machine interface, shown here, was developed by researchers at the University of California, San Francisco, to turn brain signals into electronic speech. (University of California San Francisco)

The experiment involved a two-step process. First, the researchers used a "decoder" to turn electrical brain signals into representations of human vocal movements. A synthesizer then turned the representations into spoken sentences.
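
As a rough sketch of that two-step idea only (the function names, array sizes and the simple linear stand-ins below are invented for illustration and are not the researchers' actual models), the pipeline could be organized like this in Python:

    import numpy as np

    rng = np.random.default_rng(0)

    def decode_articulation(brain_signals, decoder_weights):
        """Step 1 (placeholder): map recorded neural activity to estimated
        vocal-tract movements for the lips, jaw, tongue and larynx."""
        # The study's decoder is a trained neural network; a plain
        # linear map stands in for it here.
        return brain_signals @ decoder_weights

    def synthesize_speech(articulator_movements):
        """Step 2 (placeholder): turn the estimated articulator movements
        into an audio signal."""
        # A real synthesizer models how the vocal tract shapes sound;
        # summing the channels just yields a stand-in 1-D signal.
        return articulator_movements.sum(axis=1)

    # Made-up sizes: 100 time steps of 256-channel neural recordings,
    # decoded into 6 articulator channels.
    signals = rng.random((100, 256))
    weights = rng.random((256, 6))
    audio = synthesize_speech(decode_articulation(signals, weights))
    print(audio.shape)  # (100,): one audio sample per time step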

Other brain-computer interfaces already exist to help people who cannot speak on their own. Often these systems are trained to follow eye or facial movements of people who have learned to spell out their thoughts letter-by-letter.

But researchers say this method can produce many errors and is very slow, permitting at most about 10 spoken words per minute. This compares to between 100 and 150 words per minute used in natural speech.

Edward Chang is a professor of neurological surgery and a member of the university's Weill Institute for Neuroscience. He was a lead researcher on the project. In a statement, he said the new two-step method presents a "proof of principle" with great possibilities for "real-time communication" in the future.

"For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," Chang said.

The study involved five volunteer patients who were being treated for epilepsy. The individuals had the ability to speak and already had electrodes implanted in their brains.

The volunteers were asked to read several hundred sentences aloud while the researchers recorded their brain activity.

The researchers used audio recordings of the voice readings to reproduce the vocal muscle movements needed to produce human speech. This process permitted the scientists to create a realistic "virtual voice" for each individual, controlled by their brain activity.
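
A minimal sketch of that training idea, assuming a hypothetical acoustic-to-articulatory inversion step and a simple least-squares fit in place of the study's neural networks (all names and sizes here are invented for the sketch):

    import numpy as np

    rng = np.random.default_rng(1)

    def infer_articulators_from_audio(audio_features):
        """Hypothetical acoustic-to-articulatory inversion: estimate which
        vocal-tract movements would have produced the recorded audio."""
        # Placeholder: a real inversion model is learned from speech data.
        return audio_features[:, :6]

    def fit_decoder(brain_signals, articulator_targets):
        """Fit a linear map from neural activity to the inferred movements
        (a least-squares stand-in for a trained neural-network decoder)."""
        weights, *_ = np.linalg.lstsq(brain_signals, articulator_targets,
                                      rcond=None)
        return weights

    # Made-up sizes: 1000 time steps, 256 neural channels, 40 audio features.
    brain = rng.random((1000, 256))
    audio_feats = rng.random((1000, 40))
    targets = infer_articulators_from_audio(audio_feats)
    decoder = fit_decoder(brain, targets)  # (256, 6): one column per articulator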

Future studies will test the technology on people who are unable to speak.

Josh Chartier is a speech scientist and doctoral student at the University of California, San Francisco. He said the research team was "shocked" when it first heard the synthesized speech results.

The study reports the spoken sentences were understandable to hundreds of human listeners asked to write out what they heard. The listeners were able to write out 43 percent of sentences with perfect accuracy.

The researchers noted that, as is the case with natural speech, listeners had the highest success rate identifying shorter sentences. The team also reported more success synthesizing slower speech sounds like "sh," and less success with harder sounds like "b" or "p."

Chartier admitted that much more research on the system will be needed to reach the goal of perfectly reproducing spoken language. But he added: "The levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what's currently available."

I'm Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on reports from Reuters, Nature and online sources. Hai Do was the editor.

We want to hear from you. Write to us in the Comments section, and visit 51VOA.COM.

_________________________________________________________________

Words in This Story

interface – n. connection between pieces of electronic equipment

decoder – n. device used to discover the meaning of a coded message

synthesizer – n. electronic machine that creates sounds and music

virtual – adj. something that can be done or seen using computers or the Internet instead of going to a place

accuracy – n. correctness