Like many freelancers, Rochelle LaPlante is paid by the piece. “So, I have to balance doing it fast enough to make it worth my time, but also make sure I'm doing high-quality work,” she says.
But LaPlante’s job isn’t the writing or design work you might expect in today’s gig economy. She’s an independent content moderator, tasked with keeping unwanted, sometimes graphic content off the social apps and websites we use every day. “So, it's like modern-day piecework, but with the added layer of psychological stress,” she says.
There are human moderators like LaPlante all over the world, screening the user-generated content that swells across the internet every second — and doing so largely invisibly. While artificial intelligence can increasingly spot harmful content like child pornography or extremist messaging online, researcher Sarah Roberts says “the actual social media production cycle is much more complex” than automation can currently handle.
Roberts is an assistant professor of information studies at the University of California, Los Angeles, who writes about the work of content moderators. She explains that many are independent contractors like LaPlante — even some who work on-site at well-known social media companies.
“So, despite the fact that they're working at the headquarters in Silicon Valley for one of these major firms, they still may not have full employee status,” she explains. “And that can actually really matter when it comes to things like health insurance.”
Other moderators, she says, work in call center environments in places like the Philippines, Europe and even Iowa. “So, it's really a global practice, and it stands to reason since social media is a 24-by-7 operation.”
While it’s tough to quantify the number of workers involved in content moderation worldwide, Roberts points to the massive volumes of content we generate on social media to give a sense of the need. “In 2014, YouTube was reporting that it was receiving 100 hours of user-generated content on its platform every minute,” she says.
LaPlante, who’s based in Los Angeles, finds her work through a freelance platform that many companies use to have their text, images and videos moderated. She inspects the content, which is usually user-submitted, to see whether it violates any of the company’s guidelines. The worst images she’s come across? “Child pornography,” she says — no question.
Some companies she works for (anonymously, since the platform doesn’t require companies to disclose their real names) pay her well. “Others just don't. Sometimes it's a penny an image, and I have to make the decision about whether I want to spend my time doing that.”
Two former content moderators at Microsoft are suing the company, claiming they developed post-traumatic stress disorder from the work. In a statement, Microsoft responded that its moderators are provided with filtering technology to distort images, as well as company-mandated psychological counseling.
Roberts says it’s not clear whether Microsoft’s filtering techniques and other mitigation practices were in place for those workers all along, or whether some of the content they encountered was simply too new to be in the filters’ databases. But for her, the case is a reminder that we don’t know how much disturbing content is too much for a human moderator to bear.
“Is it the fact that you could see one too many videos, and that's too much for you to have been exposed to and then you become disabled from the work?” she says.
“Could it happen that you just see one particular video that’s too much for you to take? We don’t know. I think that’s what makes this case so novel. And with some of these blanket statements about best practices and safeguards being in place, it isn’t clear to me that, you know, those are necessarily sufficient.”
LaPlante, who doesn’t receive health insurance or counseling through her work, says she turns to other moderators when the content takes a toll. Together, they’ve formed an informal support network. “It's just a lot of talking about, you know, what did you see today, and how is it difficult,” she says. “And sometimes it's just sharing cat photos and funny videos on YouTube to get through it.”
For her, the most important thing for other internet users to know is that her work isn’t done by computers or artificial intelligence. “When you're scrolling through your Facebook feed, your Twitter feed or whatever social media you're using, and not seeing those images, just to take a moment to realize that it’s humans that are doing that and making them appear that way,” she says.
“And it's not some computer system that's handling it all for you.”
This article is based on an interview that aired on PRI's Science Friday.