First, “fake news” from questionable news sites permeated social media during the 2016 presidential campaign. Now, behold the next trend in skewed reality that experts say could threaten US democracy: fake videos that look authentic because artificial intelligence algorithms graft real people's faces onto other people's bodies. The trend has sparked a debate over how to verify videos shared online.
This phenomenon also began during the presidential campaign. People began splicing videos to make it falsely appear that events had taken place.
"This is just the next level in that," says Samantha Cole, an assistant editor at Vice's tech news site, Motherboard. "How we view things as true, a lot of the times is — if you see a video and you see that person in the video, you can say that [event] happened. And now maybe that's not the case."
Cole wrote a series of stories in December about the now-closed Reddit thread, “deepfakes,” in which users could post fake porn using the faces of celebrities or even their exes or friends (or ex-friends?).
The videos were created using a machine-learning algorithm. It works by taking a data set of hundreds of photos of one person and blending them into existing video footage, so that the person's face is pasted onto another person's body. Recently, an app was released that helps anyone achieve this result.
“We say that anyone can do it, but it does take a lot of patience, some curiosity and a little bit of knowledge about AI to begin with,” Cole says. “So, it's accessible and it's democratized, but I'm not going to say it's easy.”
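In broad strokes, the technique behind these tools is widely reported to be an autoencoder with one shared encoder and a separate decoder per face. Here is a minimal sketch of that idea in PyTorch; the layer sizes, the 64x64 crop size, and the training notes are illustrative assumptions, not the original app's code:

```python
# A minimal sketch of the shared-encoder / per-person-decoder idea behind
# deepfake face swaps. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a latent code shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face from the latent code; one decoder is trained per person."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only to reconstruct person A's face crops
decoder_b = Decoder()  # trained only to reconstruct person B's face crops

# Training (sketched): each decoder learns to reconstruct its own person from
# the *shared* encoder's code, e.g.
#   loss_a = mse(decoder_a(encoder(faces_a)), faces_a)
#   loss_b = mse(decoder_b(encoder(faces_b)), faces_b)

# The swap: encode a frame of person B, decode with person A's decoder.
frame_b = torch.rand(1, 3, 64, 64)     # stand-in for a detected face crop
swapped = decoder_a(encoder(frame_b))  # A's appearance, B's pose and expression
```

The swap works because the shared encoder learns pose and expression common to both people, while each decoder learns only one person's appearance.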
Reddit and the popular pornography site Pornhub have banned deepfake videos. But Cole says that's just a first step, because the videos are “just being driven to more scattered places on the web.”
Aviv Ovadya is the chief technologist at the Center for Social Media Responsibility at the University of Michigan’s School of Information. He says this technology can easily be adapted to audio-only platforms.
“Let's say there's a hot mic of [President] Trump ordering a missile attack on North Korea. It didn't actually happen, but that doesn't mean North Korea won't launch a missile attack back,” Ovadya says.
The technology seems to exist already: a Chinese company has produced an audio-forgery video that creates the illusion that Trump is fluent in Mandarin.
"It's something that we should really worry about,” Ovadya says. “It affects the foundations of our democracy and a lot of our civil institutions."
"I think that for a long time, video has been our gold standard of truth. Maybe not legally, but definitely in our minds. You see something, you say, 'Oh my gosh, that happened.’ You see it for five seconds. You hit "share" and ‘retweet’ and it gets a million shares on Facebook, and then, that is what happened,” Cole explains.
Things that may have previously seemed far-fetched appear true because a video is thought to be indisputable evidence, Cole says, adding, "And that's the scariest part of this."
Siwei Lyu is an associate professor of computer science at the University at Albany, State University of New York, and director of the university’s computer vision and machine-learning lab.
Lyu, who is an expert on digital media forgery, says that while the foundation of this technology has been around for years — mainly used by Hollywood studios — those who worked with it needed special hardware tools, software systems and technical training.
“What has been changed recently are these new artificial intelligence-enabled algorithms that can take a lot of data and bypass a lot of this manual process and a need for technical facilities," Lyu says. People who may not have been able to afford the technology can suddenly access it almost for free.
Lyu is conducting research with a team at Dartmouth College to develop technology that distinguishes real videos from fake ones using physiological signals of the people they feature: namely, heart rate and blood flow. He says his team’s algorithms can detect tiny fluctuations in skin color caused by blood flow. Then, using a separate technology developed by the Massachusetts Institute of Technology, Lyu and his colleagues can amplify those physiological signals to judge a video’s authenticity.
“So, we have some pretty clear results [and they] seem to be promising, but [we're] still exploring this idea,” Lyu says.
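Lyu's pipeline isn't spelled out in the article (and the MIT amplification step is omitted here), but the core intuition can be sketched as a simple frequency check: real skin pulses at the heart rate, so the average color of a face region should carry a periodic component in the normal heart-rate band, while a synthesized face generally won't. The face box, frame rate, band, and threshold below are all assumptions for illustration:

```python
# A toy check in the spirit of the physiological-signal idea: does the mean
# skin color of a face region pulse at a plausible heart rate?
import numpy as np

def pulse_signal(frames, face_box):
    """Mean green-channel intensity inside the face box, one value per frame."""
    x0, y0, x1, y1 = face_box
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def has_plausible_heartbeat(frames, face_box, fps=30.0):
    # Needs several seconds of video to resolve ~0.7 Hz; frames is a list of
    # HxWx3 arrays, and face_box comes from an external face detector.
    signal = pulse_signal(frames, face_box)
    signal = signal - signal.mean()              # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)       # roughly 42-180 beats/minute
    if not band.any():
        return False
    # Real faces concentrate spectral energy in the heart-rate band; a
    # pasted-on face typically shows no such rhythm. The 2x margin is an
    # arbitrary illustrative threshold.
    return spectrum[band].max() > 2.0 * spectrum[1:].mean()
```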
Ovadya says that even once a mainstream system to detect fake videos is established, those who create the videos will eventually come up with new technology to counter it.
“I don't see that as sort of a super long-term solution. I think it's a cat-and-mouse game,” he says.
“And even if you can detect whether or not something has been manipulated, that still needs to be shown in some way when it's being represented on Facebook or YouTube [or] wherever. And so this is sort of a [question of] not just, can you detect it, but also how does this actually affect the ecosystem around which we share information.”
Lyu is working on a media forensics program sponsored by the federal Defense Advanced Research Projects Agency (DARPA) that is poised to assemble teams from academia, industry, government and international partners to devise technical solutions. The holy grail of the project is a kind of crowdsourcing platform where viewers can assess a video's authenticity.
“We as a research [community] are working very hard toward that goal. So, there’s just a little bit [of] hope for this problem … We're working hard; we're trying to fight back [against] this trend. As Aviv just mentioned, there's a cat-and-mouse game, so we keep growing both sides of the war.”
This article is based on an interview on PRI’s Science Friday with Ira Flatow.