Studying how the brain gets meaning from sound

    Maria Geffen


    The Pulse profiled the work of Maria Geffen, a researcher in the department of otorhinolaryngology — ear, nose and throat — at the University of Pennsylvania’s Perelman School of Medicine.

    Scientists understand well the mechanics of hearing. What interests Geffen is the next step: how the brain assigns meaning to sound.

    When someone says the word “hot,” there are endless variations in accent, background noise and speaking rate, and still the brain comes to the same conclusion.

    “Once it sees the signal, the series of zeroes and ones, these electrical impulses, how does it then put it back together into hearing actually the words?” Geffen said. “That’s what my laboratory is trying to solve.”

    “Without understanding how our brain does that, we cannot build better hearing aids or improve the design of cochlear implants,” she said.

    The mechanics of hearing

    “Sound is pressurized vibrations of the air. In the case of speech, they are produced by the vocal cords,” Geffen said. Rustling leaves or two hands clapping disturb the air and send vibrations traveling. Those sound waves hit the eardrum, and a membrane in the inner ear transforms the vibrations into electrical impulses for the brain to interpret.
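    To make the idea concrete: a pure tone is the simplest case of the pressure wave Geffen describes, and digitizing it, the way a microphone or a cochlear implant's front end would, turns it into a sequence of numbers the rest of the system can work on. This is a minimal illustrative sketch, not anything from Geffen's lab; the frequency and sample rate are arbitrary choices.

```python
import numpy as np

def pure_tone(freq_hz=440.0, duration_s=0.5, sample_rate=44100):
    """Sample a pure tone: a sinusoidal air-pressure deviation,
    digitized at a fixed sample rate."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * freq_hz * t)

wave = pure_tone()
print(wave.shape)  # (22050,) pressure samples over half a second
```

    Real sounds like speech or rustling leaves are far messier sums of many such components, which is part of what makes the brain's decoding job hard.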

    Billions of nerve cells called neurons transmit signals to do that work.

    “You can think of them as a set of Christmas lights, where they just go on and off, and you can almost see the lights travel,” Geffen said. “Neurons are connected to each other by wires called axons, and those axons carry the little electrical signals.”

    A stand-in for speech

    At her Laboratory of Auditory Coding, Geffen uses rat vocalizations as a proxy for human speech. Maybe you’ve heard a pet rat squawk or a wild rat shriek, but Geffen says those are usually distress calls. When rats are talking among themselves and “happy,” they communicate using ultrasonic peeps outside the range of human hearing, she said.

    “One of the trills is like a soprano’s aria — a really rapid modulation in frequency,” Geffen said. “We don’t know exactly what they mean.”

    The researchers use custom equipment to record the ultrasonic rat songs, then play them back to the animals.

    For Geffen, the fun begins when she starts to manipulate the rat sounds — morphing them together, adding background noise and measuring how the animals’ behavior changes with each new experiment.

    Her newest technique comes from the field of optogenetics. Some of Geffen’s lab animals are genetically modified so that certain cells can be turned on or off by shining a light on the brain tissue.

    “The neurons have an added function,” Geffen said.

    The researchers flip the light switch while an animal is doing a task and listening to a sound cue. Geffen watches for a nose poke left or a nose poke right, all the while monitoring patterns in the rat’s brain activity.

    “If they do that reliably, we can actually say: Oh, they heard this sound, or they heard that sound,” Geffen said. “And then we can tell: Do those neurons matter, do they not matter, for their perception.”

    Geffen’s lab also uses sounds from nature in its experiments.

    For example, the team has isolated the mathematical property in a sound wave that makes the brain think “water” when it hears the recording of a babbling brook.

    With that knowledge, Geffen can take the same recording, strip out a particular signal, and suddenly the brain comes up with a completely different answer.

    “That doesn’t really sound like anything, maybe static noise on the TV,” Geffen said.

    Finally, she can amp up a completely lab-made sound to trick the brain into thinking it is hearing water.

    “It makes all the difference in perceiving an artificial sound or a gurgling water brook,” Geffen said.
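    The article doesn’t name the mathematical property; in Geffen’s published work with collaborators on water sounds, the key property is scale invariance, meaning the sound is statistically self-similar across timescales. The sketch below is a loose, hypothetical illustration of that idea, not the lab’s actual stimulus-generation code: it sums many short bursts whose durations are drawn log-uniformly, so no single timescale dominates. Setting `scale_invariant=False` collapses everything onto one timescale, closer to the “static noise on the TV” case Geffen mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

def chirp_soup(n_chirps=500, duration_s=2.0, sample_rate=22050,
               scale_invariant=True):
    """Sum short windowed sine bursts at random times.
    scale_invariant=True: burst durations are log-uniform (1-100 ms),
    so the mixture has energy at all timescales -- the self-similar
    statistics associated with water-like percepts.
    scale_invariant=False: one fixed timescale, closer to plain noise."""
    n = int(duration_s * sample_rate)
    out = np.zeros(n)
    for _ in range(n_chirps):
        if scale_invariant:
            dur = 10 ** rng.uniform(-3, -1)  # log-uniform, 1 ms to 100 ms
        else:
            dur = 0.01                       # single fixed timescale
        length = max(int(dur * sample_rate), 2)
        freq = 2.0 / dur  # shorter bursts ring at higher frequencies
        t = np.arange(length) / sample_rate
        burst = np.sin(2 * np.pi * freq * t) * np.hanning(length)
        start = rng.integers(0, n - length)
        out[start:start + length] += burst
    return out / np.max(np.abs(out))

water_like = chirp_soup()
```

    In this picture, “stripping out a particular signal” amounts to destroying the scale-invariant statistics while leaving the overall energy similar, which is why the manipulated recording can stop sounding like water at all.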

    “We could just play a bunch of water sounds and ask: Are the neurons sensitive to those water sounds or not? And what we will find is, yes, a bunch of neurons are sensitive to those water sounds. But that’s not going to tell us how that sensitivity comes about,” she said. “That does not answer the question of how that is done. We really want to figure out how is it that neurons are encoding the meaning of the sounds.”

    Her team is just beginning to crack that code.

    Studying speech and hearing

    Speech is one of the traits that make humans such a successful species, Geffen said, because very few other animals can relay abstract information.

    “To accomplish all the higher-level, brain-powered things that humans can do, we have to put our thoughts into language, and our brain has to understand that language,” she said.

    Humans can tell others in the pack not only that a predator is approaching but from what direction and how fast. Or, we can share other useful information.

    “We can tell somebody: ‘I saw this great dress on sale at Target,'” Geffen said — which evolutionarily isn’t too different from telling a member of the tribe that there are edible berries on the third bush from the left.

    The ‘cocktail party problem’

    In recent years, researchers studying auditory processing have come to understand that some neurons are tuned to specific subsets of sounds. When there is a complex sound scene to figure out, nerve cells in the brain split up the task of making sense of the information.

    Another application of Geffen’s work is understanding the “cocktail party problem.”

    “That’s something that humans do really well,” Geffen said. “We can just walk into a party, music playing, people talking at the same time, and we can just zoom in and have a great conversation with somebody.”

    Sometimes though, people with hearing loss struggle in that kind of acoustical environment. Geffen notices the problem when she’s out with an older relative.

    “It is so difficult for them to talk to me because of all the surrounding sound,” she said. “They can’t hear which of those sounds are my speech and which are the surrounding sounds that the brain needs to filter out.”

    “So imagine you are crossing a busy intersection. You hear the sound of cars approaching; you hear the steps of other people crossing. You look across the street and you see your friend waving and shouting your name. At the same time there’s a bus that’s screeching its brakes. And there’s a car going really fast through the intersection honking at everybody,” Geffen said. “An important task is to suppress the sounds that are not important, and another important thing is to amplify the sounds that are important.”

    “Based on the context under which you hear some sounds, you need to respond to it in a different way,” Geffen said. “I can hear the honking — that’s probably more important, because I don’t want to die right now.”
