Science of Language: Newborns learn speech sounds using same brain regions as adults
A new study found that newborns start recognizing vowel sounds just hours after birth using a speech-processing system very similar to the one adults use to learn and use language
We adults engage in some pretty unnatural behaviors to learn languages.
I personally shove Bluetooth earbuds into my skull and flood my poor brain with hours of the same old podcast (looking at you, Servus. Grüezi. Hallo.) even though I’ve already listened through it twice. I know people who spend hours every day doing workbook exercises. In school, I drove myself nuts memorizing tables of verb conjugations. And a lot of us have phones crowded with apps that have us learn sentences like “she has the holy potato.”
All that makes it easy to forget that language learning is actually pretty natural for humans. And recent research shows that the neural circuits we use to process language as adults seem to start doing their jobs just hours after birth.
Researchers from Shenzhen University in China recently published a study in Nature Human Behaviour showing that newborn babies can be taught, within just hours of birth, to better distinguish natural vowel sounds from the same sounds played backwards. Infants do this using brain areas that are known to be associated with human speech — and they don’t get better at noticing the backwards vowels, which is a good sign that we’re born hard-wired to use our language-learning circuits for language and not for meaningless auditory garbage.
Newborns recognize speech sounds, but do they learn them? And how?
Babies start life already tuned in to human speech. Newborns respond differently to human speech than to other noises, and they can distinguish their mom’s voice from other female voices.
It’s also clear that babies start the more sophisticated task of distinguishing different phonemes — the individual sounds that make up human speech — very early, within the first few hours or days of life. But it’s still not settled whether phoneme recognition is something we’re born with or something that we start learning shortly after birth.
The evidence isn’t clear one way or the other. Hearing organs come online at around 24 weeks of gestation, and there is some interesting work showing that newborns come into the world already preferring the sounds of their native language over foreign ones. This would suggest that the infant brain comes pre-installed with phoneme recognition. However, another study found that while newborns can distinguish between simple vowels right after birth, they only start picking up the differences between trickier sounds after training.
Watching newborn brains learn language
Innate or learned, phoneme recognition clearly starts early. But exactly how that recognition plays out in the infant brain wasn’t clear — do newborns have a unique way of processing speech, or do their brains already respond to human language the way adult brains do?
To find out, researchers measured the brain activity of infants to see if — and how — they could distinguish between natural vowel sounds in their native language, Mandarin, and the same vowels played backwards.
Nerd Alert: How measuring brain activity works
The researchers in this study used a very cool method called Functional Near-Infrared Spectroscopy (fNIRS) to track brain activity, and the nerd in me just couldn’t resist giving a quick explanation. Skip this if you don’t care.
In a nutshell, fNIRS works by shining light into the brain and seeing how much comes back. In this study, each baby was fitted with a cap containing a network of tiny light emitters and sensors called optodes, each of which is either red or blue. The red optodes shine near-infrared light into the scalp; the light passes harmlessly through human tissue, scattering around until some of it bounces back out. The blue optodes measure how much light makes it out of the brain.
Because active brain areas need more oxygen and because fresh, oxygen-loaded blood absorbs lots of near-infrared light, scientists can measure brain activity by watching how much light from the red optodes makes it to the blue ones.
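If you want to see the arithmetic behind that trick, here’s a toy sketch in Python. The function and every number in it are made up for illustration — real fNIRS analysis uses the modified Beer-Lambert law with carefully calibrated absorption coefficients — but the direction of the effect is the point:

```python
# Toy model of the fNIRS principle (placeholder numbers, not real
# physics constants): more brain activity means more oxygen-rich
# blood, which absorbs more near-infrared light, so less light
# makes it from the red optodes to the blue ones.

def absorbed_fraction(delta_hbo, epsilon=1.0, path_cm=3.0, dpf=6.0):
    """Fraction of the source light absorbed for a given rise in
    oxygenated hemoglobin (delta_hbo, in micromolar). epsilon, the
    path length, and the differential pathlength factor (dpf) are
    all illustrative placeholder values."""
    optical_density = epsilon * (delta_hbo * 1e-3) * path_cm * dpf
    return 1 - 10 ** (-optical_density)

# A "busy" brain region soaks up more of the light than a resting one:
resting = absorbed_fraction(delta_hbo=0.5)
active = absorbed_fraction(delta_hbo=2.0)
assert active > resting
```

So when the blue optodes report less light coming back from a spot, the tissue under that spot is probably working harder.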
The researchers “trained” 25 babies to recognize a set of 3 vowels (/ɑː/, /ɔː/ and /iː/) by having them listen, over 5 hours, to 10-minute recordings of the same vowel sound played forwards and then backwards. After training, the babies took a “test” — the scientists played recordings of the vowels they had “learned” both forwards and backwards and watched how their brain activity responded. Two hours after the first test, the babies took the same test again.
For reference, the researchers also gave the same two tests to 25 babies that had either “trained” on vowels that didn’t show up in the test (what scientists call an “active control”) or not trained at all (a “passive control”).
Babies' brains have a little spike in activity after hearing sounds. If that peak is stronger and faster for natural vowels than it is for backwards vowels, that’s a good indication that babies can distinguish between the two. So by comparing the speed and strength of those post-vowel spikes, the researchers could compare how quickly the different training groups responded to natural vs unnatural vowel sounds.
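As a rough sketch of that comparison — the data and function below are invented for illustration, not taken from the paper — you can summarize each post-sound spike by two numbers, its height and its timing, and then compare those between conditions:

```python
import numpy as np

# Hypothetical sketch (made-up data, not from the study): summarize a
# post-sound brain response by its peak height and time-to-peak.

def peak_stats(response, sample_rate_hz=10.0):
    """Return (peak height, time-to-peak in seconds) of one response."""
    i = int(np.argmax(response))
    return response[i], i / sample_rate_hz

# Fake hemodynamic responses over 10 seconds, sampled at 10 Hz.
# The "natural vowel" response spikes higher and earlier.
t = np.arange(0, 10, 0.1)
natural = 1.0 * np.exp(-((t - 4.0) ** 2))   # peaks at 4 s, height 1.0
backward = 0.6 * np.exp(-((t - 6.0) ** 2))  # peaks at 6 s, height 0.6

nat_height, nat_latency = peak_stats(natural)
bwd_height, bwd_latency = peak_stats(backward)

# A stronger, faster spike for the natural vowel is the signature of a
# brain that treats it differently from the backwards version.
assert nat_height > bwd_height and nat_latency < bwd_latency
```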
Training and testing took about 8 hours — something I have to say sounds tough considering these babies had just been born and had to spend that time away from their parents. Parents were compensated for participating in the study and gave written informed consent to participate, and babies that cried were excluded from the study and returned to their parents early.
The language grind starts on day 0
The results seemed to show that — not even a day after birth — babies' brains are hard at work learning to distinguish specific speech sounds, and that their brains learn using neural circuitry similar to the adult language-processing system.
Before training, none of the babies were very good at distinguishing vowel sounds. This was unexpected, since previous studies have shown that babies are born with at least some ability to recognize speech. The researchers think the mismatch might come down to acoustics: fetuses may not hear vowel sounds clearly enough through their mother’s tissue to learn them precisely in utero.
However, babies who trained on the vowels they were tested on did better at distinguishing natural from unnatural vowels. Compared to the active and passive controls, their brain activity spiked much higher and faster for natural vowels than for backwards ones.
The rest period between the two tests, which scientists call “consolidation,” seems to have been important. That’s fancy science speak for “you need sleep to learn” — the babies pretty much all napped during that 2-hour consolidation period, and trained infants were better at distinguishing between forwards and backwards vowels after they had time to sleep on their training.
When the researchers looked at the regions of the brain that were most active in the babies as they distinguished vowel sounds, they saw that many of the same regions important for adult language processing seem to be active in newborns too.
Finally, and in my opinion most interestingly, there was no sign that the babies got better at recognizing the unnatural backwards vowels even after training — it seems that, at least to some degree, we come into the world hard-wired for language.
Language learning — whether you enjoy it or not — seems to be one of the very first things your brain started doing as soon as you were born (and maybe earlier). And you’re still processing language using the same old neural circuits that kicked in when you were born. Besides coordinating your basic bodily functions, you’d be hard-pressed to find something else that your brain has had more time to practice.