Science fiction writer Daniel H. Wilson likes to see human-robot relations go horribly wrong.
In his first book, Robopocalypse, which was a New York Times best-seller — and might be made into a movie directed by Steven Spielberg — self-driving cars turn against their passengers, domestic robots murder their masters, and a self-aware intelligence named Archos tries to wipe out humanity.
Now, in his new book, Robogenesis, Wilson asks what happens after the inevitable robot rebellion.
“In Robopocalypse, there are a lot of different types of robots. There are exoskeletons and spider tanks and I had a lot of fun exploring all those different physical incarnations,” Wilson says. “In Robogenesis, I went back to how machines think: ‘What would they want? How would you survive in a world with these titanic intellects out there with their own motivations?’”
Wilson says he is always on the lookout for what he calls “nightmare fuel.” The robots being built today have neither the capacity nor the intent to harm humanity, but it’s easy to imagine scenarios in which they could.
For example, there is a real robot called EATR — the Energetically Autonomous Tactical Robot — which forages for plant matter to fuel itself.
“Of course, the press started running with this and they decided that, since it had military funding, it was going to be designed to eat corpses off the battlefield,” Wilson says with a laugh. “It’s not what it’s designed to do, but it’s [good] ‘nightmare fuel.’”
Ironically, as a Ph.D. student in robotics at Carnegie Mellon, Wilson designed helpful robots — the kind that could help an elderly person living alone at home, for example.
Some of the most famous sci-fi stories also gave robots a helping role: Isaac Asimov created the Three Laws of Robotics as governing principles of his science fiction writing. “The Three Laws are the only ways in which rational human beings can deal with robots — or with anything else,” he has written.
“The Three Laws are really fun to think about,” Wilson says, “and I think that they’re great because they get people thinking ethically about robots and how people are going to interact with them.”
In reality, Wilson maintains, the problems facing roboticists are far more mundane.
“It’s more of a consumer product design problem,” he explains. “The question is: You’re going to build a toaster. You’re going to put it in someone’s house. What are all the ways people are going to find to kill themselves with the toaster? Because they’re going to find a lot of ways. It doesn’t really matter what you put in someone’s house, they’re going to find a way to get killed … People are amazingly inventive and suicidal.”
“With a robot you have an even tougher problem,” he says, “because it’s an autonomous tool that may be moving around the environment. So it becomes a lot more difficult to physically make sure that it’s going to be safe. Then there’s also an ethical dimension as to how human beings are going to interact with the tool, especially if it’s a lifelike robot. How is that going to affect development and what’s your obligation to ensure that you’re not warping people’s sense of right and wrong?”
Wilson sees potential for moral and ethical dilemmas in the medical domain, too.
“In Robogenesis I have a character who has prosthetic eyes,” he says. “She’s essentially got retinal implants … [S]he can’t really see human beings the way that we see each other anymore. She sees people the way a machine would see. As a result, she starts to lose her identity as a human being. She starts to experience the world more like a machine and she ends up sympathizing with machines more than people. You’ve got to wonder how incorporating all this technology into our bodies will affect our sense of what it means to be human.”
“Even now,” he continues, “people have changed their feelings about, for instance, getting prosthetic limbs. It’s not something to be ashamed of as much as it used to be. You can get something that’s actually better than the limb that you had before. People are starting to think of the body as something that you can upgrade. It’s a fundamental shift in thinking and it’s only here because of the new technologies available.
“What’s interesting is when a person who has a severe disability gets a piece of technology because they need it and they’re willing to risk a lot in order to have it. But then, instead of restoring them to ‘normal functioning,’ they leapfrog beyond normal functioning — and suddenly the kid with a learning disability is the smartest kid in the class. Thinking about how society is going to react to that is really fascinating, because it’s tough. You don’t want to tell somebody, ‘You have to have a disability because it threatens me that you’re too smart.’”
“Over time, all these sorts of medical treatments get easier and cheaper, and then they proliferate. And then you have kids using ADHD drugs to study for their college courses. So we’ve got that in our future — except with implants.”
One of Wilson’s other books is How to Survive a Robot Uprising, a tongue-in-cheek survival guide to the kind of robot apocalypse he depicts so convincingly in Robopocalypse.
His number one tip? “Always go for the sensors first. They’re the most delicate and the easiest to mess up and that’ll give you time to run away.”
This story is based on an interview that originally aired on PRI's Science Friday.