The science behind ‘baby talk’


We’ve all heard adults cooing to babies in “baby talk” — that high-pitched, singsong cadence we tend to slip into around infants. The overall effect of baby talk may sound unnatural, but as Princeton neuroscientist Elise Piazza explains, the exaggerated high pitch, repetition, rhythm and even the pauses in baby talk can give babies important acoustical information about how language works.

“All of these cues combine to help highlight the inherent structure in speech, so to help babies to segment this constant stream of noise into the building blocks of language — like syllables, words and sentences,” she says. And as Piazza and her colleagues recently found, baby talk has another quality: Adults talking to babies change the timbre of their voices, too.

Their research was published in October in the journal Current Biology.

Timbre is the unique quality, or “tone color,” of a sound, Piazza explains.

“It’s actually much less understood than pitch or rhythm, but we rely on it constantly to distinguish and enjoy all of the different flavors of sounds around us,” she says. “So, for example, we can easily discern different idiosyncratic celebrity voices, like Barry White, who has a famously velvety voice, or Gilbert Gottfried, who has a much more nasal voice, or maybe Tom Waits with his sort of gravelly growl — even if these three people were all singing the same note with the same rhythm.”

Likewise, if you listen to the different sections of an orchestra tuning at the same pitch, you might notice that the woodwinds sound “reedy,” the brass instruments “buzzy,” and the strings “mellow.”

“These are all timbre descriptors,” Piazza says.
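The idea that two sounds can share a pitch yet differ in timbre can be made concrete with a toy calculation. The sketch below — not the study's analysis, just an illustration — synthesizes two tones with the same fundamental frequency but different harmonic weights (“bright” vs. “mellow”), then computes the spectral centroid, one standard single-number timbre descriptor. The amplitude values are invented for the example.

```python
import math

SR = 8000    # sample rate, Hz (assumed for the toy example)
N = 256      # analysis window length; 250 Hz lands exactly on a DFT bin
F0 = 250.0   # the SAME fundamental pitch for both synthetic "voices"

def synth(harmonic_amps):
    """Additive synthesis: identical pitch, different harmonic weights (= timbre)."""
    return [sum(a * math.sin(2 * math.pi * F0 * (k + 1) * n / SR)
                for k, a in enumerate(harmonic_amps))
            for n in range(N)]

def spectral_centroid(signal):
    """Magnitude-weighted mean frequency: a crude one-number timbre feature."""
    mags, freqs = [], []
    for k in range(1, N // 2):
        # plain DFT bin k (slow but dependency-free)
        re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(signal))
        mags.append(math.hypot(re, im))
        freqs.append(k * SR / N)
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# "Buzzy" vs. "mellow": energy spread into upper harmonics vs. concentrated low
bright = synth([1.0, 0.9, 0.8, 0.7, 0.6])
mellow = synth([1.0, 0.1, 0.05, 0.02, 0.01])

print(spectral_centroid(bright), spectral_centroid(mellow))
```

Both tones have a 250 Hz fundamental, but the bright tone's centroid sits far higher — the kind of difference a listener hears as tone color rather than pitch.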

Timbre helps us quickly get at the gist of sounds, she explains. Hearing just a second or two of music doesn’t give you much information about its beat structure, or whether it’s in a major or minor key. But you can hear timbre. “And similarly with the voice, you can really get a lot of information about someone’s identity from pretty short clips of voices, which you might not be able to get from just, you know, a single note or a single piece of rhythm,” she says.

In their research, Piazza and her colleagues recorded 12 English-speaking mothers as they played with their babies, and as they spoke with an adult researcher. Then, the team used a sophisticated machine learning algorithm to model the shift in timbre between the mothers’ “baby talk” and their everyday language. “It’s so consistent across mothers,” Piazza said in a Princeton press release. “They all use the same kind of shift to go between those modes.”

When the researchers tested mothers speaking in nine other languages, from Cantonese to Hebrew to Polish, they found similar results. “The most remarkable thing was that when we then brought this new group of 12 mothers, also from the central New Jersey area, who spoke this wider range of languages from around the world,” Piazza says, “the same model that we had devised to discriminate these two modes in the English group generalized immediately to this new group of non-English speakers.”
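The pattern Piazza describes — fit a model on one group of speakers, then see whether it separates the two speech modes in a fresh group — can be sketched in miniature. The study's actual features and classifier are not reproduced here; this toy version uses two invented timbre features and a nearest-centroid rule, with the assumption (made up for the sketch) that infant-directed speech shifts both features upward.

```python
import random
import statistics

random.seed(0)

def sample(mode, n=20):
    """Fake recordings: 2-D feature vectors drawn around a per-mode center."""
    base = (0.0, 0.0) if mode == "adult" else (1.0, 1.0)
    return [(random.gauss(base[0], 0.3), random.gauss(base[1], 0.3))
            for _ in range(n)]

def centroid(points):
    """Mean feature vector of a set of recordings."""
    return tuple(statistics.fmean(c) for c in zip(*points))

def classify(x, centroids):
    # nearest-centroid rule: pick the mode whose mean vector is closest
    return min(centroids,
               key=lambda m: sum((a - b) ** 2 for a, b in zip(x, centroids[m])))

# "Train" on one group of speakers...
train = {"adult": sample("adult"), "infant": sample("infant")}
centroids = {m: centroid(pts) for m, pts in train.items()}

# ...then test on a fresh group (new draws standing in for new speakers/languages)
test = [(x, m) for m in ("adult", "infant") for x in sample(m)]
accuracy = sum(classify(x, centroids) == m for x, m in test) / len(test)
print(accuracy)
```

If the two modes really do occupy distinct regions of feature space, a boundary learned from one group of speakers carries over to another — the kind of generalization the researchers observed across languages.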

In fact, we all might shift the timbre of our voices when we communicate with babies, even if we don’t realize it. “We used mothers to keep overall pitch range fairly consistent across participants,” Piazza explained in the press release. “However, I’d predict that our findings would generalize quite well to fathers.”

A previous version of this story misspelled Elise Piazza's name.

This article is based on an interview that aired on PRI’s Science Friday with Ira Flatow.
