The invisible force that trains us to read

Written by Rachael Moeller Gorman

Sound sculpts the brain. Scientists have been studying how the brain encodes sound, and they’re using that code not only to understand hearing problems but also to diagnose and treat reading issues such as dyslexia.

Image courtesy of Nina Kraus.

In a quiet room in downtown Los Angeles, a child wearing black Sennheiser headphones hears a sound. It resembles the background din at a party or a noisy classroom, and soon a soft sentence is murmured within the din: “Sugar is very sweet.” She doesn’t quite hear it, so the experimenter adds a few decibels to the phrase. Then, clarity.

The child is part of a longitudinal study testing how well kids can pick a relevant sound signal out of background noise, one of the most computationally demanding feats our brains perform. It is so difficult, in fact, that no one has yet managed to train a computer to do it.
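To make the experimental setup concrete: presenting a sentence “a few decibels” louder within babble is a matter of mixing the two signals at a chosen signal-to-noise ratio (SNR). Here is a minimal sketch of that operation in Python with NumPy; the function and its details are illustrative assumptions, since the article does not describe the study’s stimulus software.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, babble: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a speech signal into background babble at a target SNR in dB.

    Illustrative only: the study's actual stimulus software is not
    described in the article.
    """
    babble = babble[: len(speech)]              # trim noise to the speech length
    speech_rms = np.sqrt(np.mean(speech**2))
    babble_rms = np.sqrt(np.mean(babble**2))
    # Scale speech so that 20*log10(rms(speech)/rms(babble)) equals snr_db.
    gain = (babble_rms / speech_rms) * 10 ** (snr_db / 20)
    return gain * speech + babble
```

Nudging snr_db up by a few decibels is the same adjustment that carried “Sugar is very sweet” from murmur to clarity.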

Processing sound integrates the brain’s cognitive, sensorimotor, and reward networks; more experience with a variety of complex sounds is directly linked to stronger language ability (1). As Nina Kraus, a neurobiologist at Northwestern University, wrote in a 2016 paper, “Experience in sound tunes the auditory brain.”

“Sound has many features, like pitch, timing, and timbre. It’s a tremendously rich signal,” said Kraus. “But it’s invisible and so people often don’t realize what a powerful impact sound has on our brains.”

Sound affects spoken language and hearing, but Kraus and others have learned that it also affects reading ability. For years now, she has been investigating how to use sound not only to diagnose children who may have problems reading later on but also to treat them before problems develop in the first place.

Visualizing the Brain’s Code

For every sound we hear, our neurons fire in a distinct pattern. One of the most important tools Kraus and her colleagues use to study this activity is a noninvasive electrophysiological measure called the frequency-following response (FFR). The FFR provides a “biological snapshot” of auditory processing: three electrodes on a subject’s scalp record the oscillatory activity of neural ensembles at the intersection of the brain’s cognitive, sensorimotor, and reward networks, time-locked to the sound the subject hears.

The technique has long been used to test hearing sensitivity by presenting a series of rapid clicks or sinusoidal tones, and researchers have since used FFR to investigate the brain’s response to more complex sounds, such as musical notes and speech. Perhaps most interestingly, when Kraus plays back the FFR-recorded brainwaves on a computer, the result sounds eerily like the original sound or word stimulus, just slightly muted (see the website for a demonstration). It’s like attaching speakers to the brain and playing back the neural code for whatever sound was just heard.
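In signal-processing terms, the core of the analysis is time-locked averaging: activity that is phase-locked to the stimulus survives averaging across many presentations, while everything else cancels out, and the averaged waveform tracks the stimulus closely enough to be played back as audio. The sketch below illustrates that idea in Python with NumPy and SciPy; the function names, sampling rate, and file format are assumptions for illustration, not a description of the lab’s actual pipeline.

```python
import numpy as np
from scipy.io import wavfile

FS = 16_000  # sampling rate in Hz (illustrative; EEG systems vary)

def average_ffr(eeg: np.ndarray, onsets: np.ndarray, n_samples: int) -> np.ndarray:
    """Average scalp-recorded epochs time-locked to each stimulus onset.

    Phase-locked, stimulus-evoked activity survives the averaging;
    activity that is not time-locked to the sound cancels out.
    """
    epochs = np.stack([eeg[t : t + n_samples] for t in onsets])
    return epochs.mean(axis=0)

def save_as_audio(ffr: np.ndarray, path: str = "ffr_playback.wav") -> None:
    """Normalize the averaged response and write it out as a WAV file.

    Because the FFR tracks the stimulus waveform so faithfully, the
    playback sounds like a muted copy of the original sound.
    """
    scaled = np.int16(ffr / np.max(np.abs(ffr)) * 32767)
    wavfile.write(path, FS, scaled)
```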

The team has been using FFR to follow the brains of individual children over time in longitudinal studies. Recording a single brain’s response to complex sounds gives researchers a wealth of information. “FFR can tell us a tremendous amount about an individual person, the kinds of bottlenecks and strengths they can have in processing sound,” said Kraus.

“Sound waves carry information with a precision of fractions of milliseconds. This is much, much faster than what we ask our visual system to do,” said Kraus. “The auditory system is a timing expert of the brain. Using FFR, we are able to measure the brain’s response with an equal level of precision and determine how good a job the brain does at processing the individual components of complex sound.”

Reading on the Beat

The auditory system is linked to more than just hearing, however. Kraus and colleagues have been using FFR to investigate its involvement in reading. Reading is a fairly recent development in human evolution, and it makes sense that the ability to read might borrow brain circuits already in place for other skills, such as attention and spoken language.

People with dyslexia, for example, may have an abnormal perception of sound, especially rhythms in speech and sounds that transmit phonemic cues (the vowels or consonants of a word). “If a child is forced to learn with a ‘blurry’ representation of incoming signals, this will create an imprecise phonemic inventory that, in turn, causes problems when these sounds need to be associated with written letters,” Kraus wrote.

One of the richest types of sound is music, and Kraus’ lab has spent the last decade conducting cross-sectional and longitudinal studies of how music molds the nervous system. Study after study shows that musicians have advantages in various types of learning.

So what particular aspect of music is most important to reading?

Adults with dyslexia don’t recognize rhythm in nursery rhymes as well as people without dyslexia do, and children with dyslexia have the same trouble with rhythm in music. The dyslexic brain has been called “in tune but out of time.”

In a 2014 study, Kraus and her team investigated preschoolers who had not yet begun to read. She played an acoustic beat at two speeds and tested whether the kids could drum along to it. Those who could were called “synchronizers,” whereas those who couldn’t were classified as “non-synchronizers.” The team then ran FFRs on the children while they listened to consonant–vowel syllables (such as [ba] or [da]), with and without background noise, to measure the precision of the neural encoding. Synchronizers showed more precise neural encoding in every condition and had stronger pre-reading skills, such as rapidly naming words and auditory short-term memory (3).
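The article does not say exactly how drumming accuracy was scored, but a common way to quantify beat synchronization is with circular statistics: each tap becomes a phase within the beat cycle, and the mean resultant length measures how consistently the taps land at the same phase. A hypothetical sketch:

```python
import numpy as np

def sync_consistency(tap_times: np.ndarray, beat_period: float) -> float:
    """Mean resultant length of tap phases relative to an isochronous beat.

    Returns a value in [0, 1]: near 1 when taps fall at a consistent
    point in the beat cycle (a "synchronizer"), near 0 for random tapping.
    Hypothetical measure; the study's exact criterion is not given.
    """
    phases = 2 * np.pi * (tap_times % beat_period) / beat_period
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Example: taps locked to a 0.5-second beat versus heavily jittered taps.
beat = 0.5
locked = np.arange(20) * beat + 0.02
jittered = np.arange(20) * beat + np.random.uniform(-0.2, 0.2, 20)
print(sync_consistency(locked, beat))    # close to 1.0
print(sync_consistency(jittered, beat))  # noticeably lower
```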

Kraus’ group now runs a large longitudinal study in which the team follows kids from an early age, tests them with FFR, and watches how their reading develops. In 2015, she developed a 30-minute neurophysiological assessment that used FFR to test the precision of a child’s neural coding of consonants against a noisy background. The test predicts which kids will have trouble reading later on: preschoolers with stronger neural processing were stronger readers a year later (4).

“We would like to know at age three, or even earlier, if a child is at risk for struggling to learn to read years before he begins to struggle. That way you can focus on intervention as early as possible,” said Kraus.

Making Music Trains the Brain

Music improves the same aspects of sound processing that language and reading depend on: distinguishing speech syllables, tracking harmonics and timing, and holding up against noise and variability. Music and rhythm training improves auditory–temporal awareness, which researchers think is tied to the timing cues in speech. Again and again, they have found that musicians’ brains encode sound as neural oscillations much more precisely.

Kraus and her group teamed up with The Harmony Project, a Los Angeles initiative that provides free music education, in which kids actively make music, to children from low-income communities. Kraus found that after two years of music training (one year was not enough), the children were significantly better able to detect a sentence such as “sugar is very sweet” amid background noise (5).

In another music training program, this one in Chicago, Kraus and colleagues followed older students entering high school who engaged in either music or paramilitary training. After two years, the FFRs of those in the music group showed more resilience to background noise (again, one year was not enough to have an effect) (6).

Focusing more specifically on kids with dyslexia, psychologists Alessandro Antonietti and Alice Cancer developed a computer program called Rhythmic Reading Training, which pairs reading exercises with a rhythmic background.

“The idea that if we improve the processing of rhythm thanks to music—which has a structural rhythmical connotation—maybe we can also improve language. This is the idea we started from,” said Antonietti, who works at Milan’s Università Cattolica del Sacro Cuore. “What other people did before was to train music apart from language, and we tried to combine instead music and language by training the two aspects together.”

A 2015 trial of 14 junior high school students with dyslexia, who underwent the training twice a week for 30 minutes, found that their reading speed and accuracy improved compared with children who did not receive the intervention. The rhythm gave the words a temporal structure, which the researchers hypothesized contributed to the improvement (7).

Another 2015 trial randomly assigned 24 elementary school children to either a music training group or a painting group and compared them with a control group of students who took neither class. While both groups improved, the music group showed significantly greater improvement on tasks assessing phonological awareness, rhythm, and reading accuracy (8).

The more experience a person has with complex sound, the better their brain processes language, both spoken and written.

“It took two years for us to see the biological impact of the music making—it takes time to change the brain,” said Kraus. “Learning through sound involves our cognitive, sensory, motor, and reward pathways in the brain—how we think about sound, how we feel about it, how we move our mouths and lips and bodies in communication, and how all this impacts how we learn through sound.”