
Thu January 30, 2014
Science

Researchers Watch As Our Brains Turn Sounds Into Words

Originally published on Fri January 31, 2014 7:19 pm

Researchers are a bit closer to understanding one of the brain's greatest accomplishments: making sense out of spoken language.

An area of the brain that interprets speech contains cells that respond to the dozen or so basic units of sound we use to form words, according to a team from the University of California, San Francisco.

Some of these cells respond specifically to plosives, like the initial "puh" sounds in "Peter Piper picked a peck of pickled peppers," the team found. Other neurons respond to fricative consonants, like the "f" sound in the word "fish."

"We were shocked to see this kind of selectivity," says Dr. Edward Chang, a brain surgeon at UCSF and one of the paper's authors. The team was also surprised that the cells were responding to basic "phonemic features," rather than to phonemes themselves, which are larger chunks of sounds. Phonemic features are "what we would consider the building blocks for speech and language," Chang says.

The study was published Thursday in the journal Science.

This reveals the mechanics behind one of the first steps in processing language in the brain, says David Poeppel, a professor of psychology at New York University who was not involved in the study.

"Imagine how many different things have to happen for you just to understand the sentence, 'I need a cup of coffee,' " Poeppel says. "First of all you have to identify all the different sounds in the background that you don't want. You have to break [the stream of sound] into units. You have to look up the words. You have to combine the words and generate the correct meaning. And each of those parts has its own subroutines."

The study builds on what linguists have already learned about the basic units of speech, Poeppel says. And he says it also shows how much neuroscientists still have to learn about how language works in the brain.

The research was possible thanks to six people who were in the hospital being evaluated for epilepsy surgery and who agreed to let surgeons place a special recording device on the surface of their brains. The device monitored the activity of groups of brain cells in what's called Brodmann area 22, an area known to help people understand words.

The volunteers listened to recorded voices speaking phrases. Eventually the scientists had a record of each volunteer's brain responding to every sound used in the English language. Then the scientists looked to see precisely what the sets of brain cells were doing as each bit of sound passed by.

The results help explain how we can process speech so quickly and accurately, even in a noisy place, or when the speaker has an unfamiliar accent, Chang says. "It's the starting point of thinking about how to build up some better understanding of how language occurs in the brain," he says.

The study also could have practical applications, says Nima Mesgarani, who worked in Chang's lab before moving to Columbia University. For one thing, he says, an impaired ability to process speech sounds seems to be a part of many language and communication disorders, including dyslexia.

Also, understanding how the brain recognizes basic units of speech could lead to more productive conversations with machines, like ATMs and smartphones, Mesgarani says. Artificial speech processing systems like Apple's Siri were actually inspired by research on the brain, he says, adding that this new research could help them become better listeners.

Copyright 2014 NPR. To see more, visit http://www.npr.org/.

Transcript

ROBERT SIEGEL, HOST:

Right now, as I'm speaking, your brain is transforming this stream of sounds into meaningful words and sentences. It's a remarkable achievement, and scientists are just beginning to understand how it happens.

NPR's Jon Hamilton reports on a study that let researchers watch brain cells performing one of the earliest steps in processing speech.

JON HAMILTON, BYLINE: Understanding spoken language is something we're so good at, we don't think it's much of an accomplishment. But David Poeppel of New York University says it is.

DAVID POEPPEL: Imagine for yourself how many different things have to happen for you just to understand the sentence: I need a cup of coffee. First of all you have to identify all the different sounds in the background that you don't want, or the competing speakers. And you have to break it into units. You have to look up the words. You have to combine the words and generate the correct meaning. And each of those parts has its own subroutines.

HAMILTON: Researchers at the University of California, San Francisco, set out to understand just one part of the process. Edward Chang, a brain surgeon at UCSF, says they wanted to know how the brain recognizes the individual sounds that we combine to form words.

EDWARD CHANG: These are what we would consider the building blocks for speech and language.

HAMILTON: So Chang and a team of researchers studied the brains of six people who were in the hospital being evaluated for epilepsy surgery. The team placed electrodes on the surface of each patient's brain, which allowed monitoring of an area that processes speech. Then Chang says they exposed the volunteers to lots and lots of words.

CHANG: What it involves is actually just listening to a long series of sounds, a lot of them just sound like they are clips from movies.

HAMILTON: All kinds of voices saying phrases like...

UNIDENTIFIED MAN: And what eyes they were.

HAMILTON: Eventually, the scientists had a record of each volunteer's brain responding to every sound used in the English language. Then, they studied the data using a sort of slow-motion replay.

UNIDENTIFIED MAN: And what eyes they were.

HAMILTON: This let them see precisely what different brain cells, or neurons, were doing as each bit of sound passed by. And Chang says they realized that some were responding specifically to plosives, like the initial "puh" sounds in "Peter Piper picked a peck of pickled peppers." Meanwhile, other neurons were responding to a particular type of vowel sound.

CHANG: We were shocked to see the kind of selectivity. Those sets of neurons were highly responsive to particular speech sounds.

HAMILTON: Chang says these sounds are what linguists call phonemic features, the most basic components of speech. There are about a dozen of these features. And they can be combined to make phonemes, the sounds that allow us to tell the difference between words like dad and words like bad.

Chang says the finding helps explain how we can process speech so quickly and accurately, even in a noisy place, or when the speaker has an unfamiliar accent.

CHANG: It's the starting point of thinking about how to build up some better understanding of how language occurs in the brain. And that's certainly been a long-term passion and interest of mine.

HAMILTON: The result also could have practical applications.

Nima Mesgarani worked in Chang's lab before moving to Columbia University. He says an impaired ability to process speech sounds seems to be a part of many disorders, including dyslexia.

NIMA MESGARANI: In order to help people who are suffering from various speech and communication disorders, we need to first understand how these processes become impaired. And the first step toward that goal is to understand how they work normally.

HAMILTON: Mesgarani says knowing how the brain identifies speech sounds also could lead to better conversations with machines, like ATMs and smartphones. He says artificial speech processing systems, like Apple's Siri, were actually inspired by research on the brain.

MESGARANI: We've always dreamed of artificial systems that are able to communicate with us, the same way that we communicate with other humans.

HAMILTON: By speaking and listening. He says knowing precisely how the brain does this should eventually make Siri and her cousins better listeners.

The new research appears in the online version of the journal Science.

Jon Hamilton, NPR News.

(SOUNDBITE OF MUSIC)

AUDIE CORNISH, HOST:

This is NPR News. Transcript provided by NPR, Copyright NPR.