Advances in automation technology mean that robots and artificial intelligence programs are capable of performing an ever-greater share of our work, including collecting and analyzing data. For many people, automated colleagues are still just office chatter, not reality, but the technology is already disrupting industries once thought to be just for humans. Case in point: science publishing.
Increasingly, publishers are experimenting with using artificial intelligence in the peer review process for scientific papers. In a recent op-ed for Wired, one editor described how computer programs can handle tasks like suggesting reviewers for a paper, checking an author’s conflicts of interest and sending decision letters.
In 2014 alone, an estimated 2.5 million scientific articles were published in about 28,000 journals (and that’s just in English). Given the glut in the industry, artificial intelligence could be a valuable asset to publishers: The burgeoning technology can already provide tough checks for plagiarism and fraudulent data and address the problem of reviewer bias. But ultimately, do we want artificial intelligence evaluating what new research does — and doesn’t — make the cut for publication?
The stakes are high. Adam Marcus, co-founder of the blog Retraction Watch, has two words for why peer review is so important to science: “Fake news.”
“Peer review is science's version of a filter for fake news,” he says. “It's the way that journals try to weed out studies that might not be methodologically sound, or they might have results that could be explained by hypotheses other than what the researchers advanced.”
The way Marcus sees it, artificial intelligence can’t necessarily do anything better than humans can; it can just do it faster and in greater volume. He cites one system, called statcheck, which researchers developed to quickly flag inconsistencies in reported statistical values.
“They can do, according to the researchers, in a nanosecond what a person might take 10 minutes to do,” he says. “So obviously, that could be very important for analyzing vast numbers of papers.” But as it trawls through statistics, the statcheck system can also turn up a lot of “noise,” or false positives, Marcus adds.
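To give a sense of what a statcheck-style check involves: tools of this kind recompute the p-value implied by a reported test statistic and its degrees of freedom, then flag results where the reported and recomputed values disagree. The short sketch below illustrates that idea only; it is not statcheck’s actual code, and the function name, tolerance and example numbers are hypothetical.

```python
# A minimal sketch of the kind of consistency check a statcheck-style tool
# performs: recompute the p-value implied by a reported t-statistic and its
# degrees of freedom, and flag it if it disagrees with the reported p-value.
# This is an illustration, not statcheck's code; the tolerance and the example
# values below are made up for demonstration.
from scipy import stats

def flag_inconsistent_t_test(t_value: float, df: int, reported_p: float,
                             tolerance: float = 0.005) -> bool:
    """Return True if the reported p-value conflicts with the recomputed one."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p-value
    return abs(recomputed_p - reported_p) > tolerance

# Hypothetical reported result: "t(28) = 2.10, p = .30". The recomputed
# p-value is roughly .045, so this result would be flagged for review.
print(flag_inconsistent_t_test(t_value=2.10, df=28, reported_p=0.30))
```

A mismatch like this doesn’t prove misconduct; as Marcus notes, flagged results are exactly the kind of “noise” that still needs a human to sort out.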
Another area where artificial intelligence could do a lot of good, Marcus says, is in combating plagiarism. “Many publishers, in fact every reputable publisher, should be using right now plagiarism detection software to analyze manuscripts that get submitted. At their most effective, these identify passages in papers that have similarity with previously published passages.”
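One common way such software measures overlap is by breaking text into overlapping word sequences, sometimes called “shingles,” and checking how many a manuscript shares with previously published passages. The sketch below is a simplified illustration of that general approach, not any vendor’s actual method; commercial services compare submissions against enormous databases of published work.

```python
# A rough sketch of one common way text-similarity tools compare a submitted
# passage against previously published text: break each passage into
# overlapping word n-grams ("shingles") and measure their Jaccard overlap.
# Real plagiarism-detection services are far more sophisticated; this only
# illustrates the basic idea.
def shingles(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(passage_a: str, passage_b: str, n: int = 5) -> float:
    a, b = shingles(passage_a, n), shingles(passage_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)  # Jaccard similarity: shared / total shingles

# Passages scoring above some chosen threshold would be surfaced for a human
# editor to inspect, rather than rejected automatically.
```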
But in the case of systems like statcheck and anti-plagiarism software, Marcus says it’s crucial that human oversight remain in place, to make sure a program’s red flags are legitimate. In other words, we need humans to ensure that algorithms aren’t mistakenly keeping accurate science from being published.
Despite his caution, Marcus thinks programs can and should be deployed to keep sloppy or fraudulent science out of print. Researchers recently pored over images published in more than 20,000 biomedical research papers and found that roughly one in 25 of those papers contained inappropriately duplicated images.
“I'd like to see that every manuscript that gets submitted be run through a plagiarism detection software system, [and] a robust image detection software system,” Marcus says. “In other words, something that looks for duplicated images or fabricated images.”
Such technology, he says, is already in the works. “And then [we’d] have some sort of statcheck-like program that looks for squishy data.”
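One way automated image screening can work is by computing a “perceptual hash” for each figure and flagging pairs whose hashes are nearly identical. The sketch below illustrates that approach using the third-party Pillow and ImageHash packages; the file names and distance threshold are hypothetical, and this is a simplification rather than a description of any specific tool Marcus has in mind.

```python
# A small sketch of how duplicated images might be caught automatically:
# compute a perceptual hash for each figure and flag pairs whose hashes are
# nearly identical. Uses the third-party Pillow and ImageHash packages and an
# arbitrary distance threshold; real screening tools are more elaborate, and
# their hits still need human review.
from itertools import combinations
from PIL import Image
import imagehash

def find_near_duplicates(image_paths: list, max_distance: int = 4) -> list:
    """Return pairs of image paths whose perceptual hashes nearly match."""
    hashes = {path: imagehash.phash(Image.open(path)) for path in image_paths}
    return [(a, b) for a, b in combinations(image_paths, 2)
            if hashes[a] - hashes[b] <= max_distance]

# Example with hypothetical file names; flagged pairs would go to an editor,
# not straight to a rejection letter.
# print(find_near_duplicates(["fig1a.png", "fig1b.png", "fig2.png"]))
```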
This article is based on an interview that aired on PRI's Science Friday.