In the future, people might really wear their emotions on their sleeves

The World

Picking up on subtle cues in our conversations with other people is tough — and it can be even trickier for people with social anxiety or Asperger’s syndrome.

Scientists at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory have developed a potential solution: a wearable wrist device that can “read” the emotions of a conversation and clue wearers in as to how their partners are feeling. 

The technology isn’t ready to hit the market. But science writer Daniel Oberhaus — who reported on the device for Motherboard — says that someday it could do things like vibrate our phones to signal that our conversation partners are bored, based on vitals like heart rate. Even so, the perfected technology may not save us from awkward pauses entirely.

“When you imagine these things, they're going to be this AI-wearable that both people might have to wear to have it really be effective,” he says. “Which, you know, wearing these things might actually make the situation more socially awkward than it was in the first place. So, I mean, getting some test cases for this is going to be, I think, interesting.”

Oberhaus explains that to develop the artificial intelligence, researchers had 31 undergraduates come into the lab to each tell a story that was just a few minutes long. “And the researchers, while the students were telling the story, would have them wear an armband that would monitor their vitals such as their skin temperature, blood pressure, heart rate, things like that.”

After recording them, the researchers asked the students whether, overall, their tales were happy or sad. They then used that data to train an artificial neural network to recognize whether other stories were happy or sad, Oberhaus says. “It turns out the machine is actually pretty good at doing it. It had about an 83 percent success rate.”
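The article doesn't describe the model itself, but the basic setup — physiological measurements going in, a binary happy-or-sad label per story coming out — can be sketched roughly as follows. Everything here is a stand-in: the feature count, the network shape and the data are invented for illustration, not taken from the MIT study.

    # Illustrative sketch only: synthetic stand-ins for vitals-based features
    # (skin temperature, heart rate, etc.) and a happy/sad label per story.
    # Not the MIT team's actual model, data or code.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    n_stories = 31        # one story per participant, as in the study
    n_features = 16       # hypothetical summary statistics of the vitals signals

    X = rng.normal(size=(n_stories, n_features))   # fake feature vectors
    y = rng.integers(0, 2, size=n_stories)         # fake labels: 0 = sad, 1 = happy

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # A small feed-forward neural network standing in for the trained classifier.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)

    print("held-out accuracy:", clf.score(X_test, y_test))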

Then, the research team split the stories up into five-second segments and trained another algorithm to classify them as positive, negative or neutral. As it turns out, the AI can’t keep up with our emotional roller coasters in real time — yet. In his article, Oberhaus points out that “the neural net's ability to classify the short segments was only 17.9 percent better than if the machine had randomly guessed.”
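The segment-level task differs mainly in what counts as one example: the vitals stream is sliced into five-second windows, and each window gets one of three labels. A rough sketch of that framing, again with synthetic data and hypothetical sampling details rather than anything reported by the researchers:

    # Illustrative sketch: slice a fake vitals time series into 5-second windows
    # and fit a three-class (negative / neutral / positive) classifier.
    # Not the researchers' actual pipeline.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)

    sample_rate_hz = 4                         # hypothetical sensor sampling rate
    window_seconds = 5
    window_len = sample_rate_hz * window_seconds   # 20 samples per window

    signal = rng.normal(size=6000)             # fake multi-minute vitals signal
    n_windows = len(signal) // window_len
    windows = signal[: n_windows * window_len].reshape(n_windows, window_len)

    labels = rng.integers(0, 3, size=n_windows)    # fake labels: 0/1/2 = neg/neutral/pos

    X_train, X_test, y_train, y_test = train_test_split(
        windows, labels, test_size=0.25, random_state=1
    )

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
    clf.fit(X_train, y_train)

    chance = 1.0 / 3.0                         # random guessing over three classes
    print(f"held-out accuracy {clf.score(X_test, y_test):.2f} vs. chance {chance:.2f}")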

“It wasn't so great at picking out smaller chunks in the story, out of context, to determine whether they were happy or sad, but it's getting there,” Oberhaus says.

He explains that right now, the technology is “at a pretty low level of emotional granularity. So, it’s kind of happy or sad.” But he’s interested in seeing what the AI learns to do next.

“A lot of times, when people tell a story, the story might be entirely sad for 90 percent of it. And then right at the end, something happens that makes the story turn out to actually be a happy story. And so I think that, to me, is really the interesting aspect of this, is, how a machine is able to learn this.”

“And the researchers are hoping that eventually once they have a bigger data set, it will be able to be applied with much finer emotional granularity — to the point where, you know, you can tell if a story was exciting or funny.”

This article is based on an interview that aired on PRI's Science Friday.
