Unlocking the neural code for speech

Written by Tristan Free (Digital Editor)

New research sheds light on the pathways by which the brain encodes speech, information that could prove crucial in developing communication technology for ‘locked-in’ patients.


Recent research led by Marc Slutzky, conducted at Northwestern Medicine and Weinberg College of Arts and Sciences (IL, USA), has uncovered new information about how the brain encodes speech. The study set out to identify new ways to help fully paralyzed patients communicate again, and its findings could prove useful in developing new speech devices.

Speech is composed of distinct sounds, known as phonemes, which are formed by coordinated movements, or gestures, of the lips, tongue and larynx. Prior to this research, the neural pathways responsible for encoding the intended phonemes and the resulting gestures were unclear. Slutzky explained how the team approached the research: “We hypothesized speech motor areas of the brain would have a similar organization to arm motor areas of the brain. The precentral cortex would represent movements of the lips, tongue, palate and larynx, and the higher level cortical areas would represent the phonemes to a greater extent.”

To test this theory, the team monitored the neuronal signals of patients undergoing brain surgery, using electrodes placed on the cortical surface. The conscious patients were instructed to read words from a screen while the researchers noted the times at which they produced phonemes and gestures. By comparing these phonemes and gestures with the recorded brain signals, the team was able to decode which signals resulted in which pattern of speech.
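That decoding step can be pictured as a supervised classification problem: time-aligned neural features are the inputs and the labelled phonemes or gestures are the targets. The sketch below is purely illustrative and is not the team’s actual analysis; the feature matrix, labels, electrode count and classifier are all assumptions made for the example.

```python
# Illustrative sketch only: decoding phoneme/gesture labels from cortical
# features with a simple classifier. The arrays here are simulated; the
# study's actual recordings, features and models are not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels = 200, 64                 # hypothetical trial and electrode counts
X = rng.normal(size=(n_trials, n_channels))    # stand-in neural features per trial
y = rng.integers(0, 4, size=n_trials)          # stand-in labels (e.g. 4 phoneme classes)

# A linear classifier mapping neural activity to the produced speech unit.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated decoding accuracy: {scores.mean():.2f}")
```

With real recordings, above-chance accuracy in a held-out test set would be the signal that the cortical activity carries information about the intended phoneme or gesture.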

The results appeared to validate the team’s hypothesis. Slutzky explained that “the precentral cortex represented gestures to a greater extent than phonemes. The inferior frontal cortex, which is a higher level speech area, represented both phonemes and gestures.”

Slutzky and his team hope that this information will help them move beyond the current assistive devices, which rely on eye or cheek movements to spell out words, toward a device that can directly decode signals from the brain into speech, known as a brain–machine interface.

The information could also prove useful for research into other speech disorders, such as speech apraxia in children and in stroke victims. The immediate focus of the research, however, is the development of an algorithm that can decode the signals for gestures and phonemes and assemble them into full words and sentences, a vital step towards better communication for people who are unable to speak.
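One way to picture that assembly step, offered purely as an illustration rather than the team’s planned algorithm, is to match a stream of decoded phonemes against a pronunciation dictionary. The dictionary entries and phoneme symbols below are invented for the example.

```python
# Illustrative sketch only: assembling a stream of decoded phonemes into words
# by greedy longest-match against a small, invented pronunciation dictionary.
# A real brain-machine interface would rely on far richer language models.
PRONUNCIATIONS = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def assemble_words(phonemes):
    words, i = [], 0
    while i < len(phonemes):
        for length in range(len(phonemes) - i, 0, -1):  # try the longest match first
            candidate = tuple(phonemes[i:i + length])
            if candidate in PRONUNCIATIONS:
                words.append(PRONUNCIATIONS[candidate])
                i += length
                break
        else:
            i += 1                                      # skip an unrecognised phoneme
    return " ".join(words)

decoded = ["HH", "EH", "L", "OW", "W", "ER", "L", "D"]
print(assemble_words(decoded))  # -> "hello world"
```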