"If you control the code, you control the world," security adviser Marc Goodman said in a 2012 a TED Talk. But what happens when humans no longer control the code?
Today, coding is being disrupted by something called "machine learning." With traditional coding, an engineer writes specific instructions for a computer program to follow. But with machine learning, a programmer "trains" the computer program to do its job by feeding it a bunch of data. Extremely complicated equations take care of the rest — even the programmers don't totally understand how the process works.
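To make that contrast concrete, here is a minimal sketch of the two approaches, using a toy spam filter as the task. It assumes Python with scikit-learn installed; the library choice, the example messages, and the function names are illustrative and not drawn from the article.

```python
# Traditional coding: the engineer writes the rules explicitly.
def is_spam_rule_based(message):
    # Every decision here is a human-authored instruction that can be read and tweaked.
    keywords = ["free", "winner", "click here"]
    return any(word in message.lower() for word in keywords)

# Machine learning: the engineer supplies labeled examples and lets the
# model infer its own decision rules from the data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_messages = [
    "Free prize, click here now",
    "Winner! Claim your free reward",
    "Lunch at noon tomorrow?",
    "Here are the meeting notes",
]
train_labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
model = MultinomialNB()
model.fit(vectorizer.fit_transform(train_messages), train_labels)

# The learned weights, unlike the keyword list above, were never written by a
# programmer and cannot be read as step-by-step instructions.
test = vectorizer.transform(["Claim your free prize"])
print(is_spam_rule_based("Claim your free prize"), model.predict(test))
```

In the first function, a programmer can point to the exact line that flags a message; in the second, the behavior lives in learned statistical weights, which is the shift Tanz describes below.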
Are we relinquishing our power when we teach machines instead of programming them? Jason Tanz, editor-at-large for WIRED magazine, took a deep dive into machine learning for the publication's June cover story, "The End of Code and Future of A.I."
“The idea for this story came about from a conversation I was having with Andy Rubin, who is one of the founders of Android and a big A.I. geek,” says Tanz. “He was saying that while he’s very excited about the coming age of machine learning, he was a little saddened by it because, as a programmer, he really loved getting under the hood and writing instructions, and having command over this world. Now with machine learning, it’s a much more abstracted kind of control — you can’t, as he put it, cut off a head and look inside and see how the brain works.”
Though he doesn’t believe coding will go away anytime soon, Tanz does believe things in the tech world are changing rapidly.
“The coding-based worldview — where everything is kind of understandable and you can break everything down into parts, and once you understand an algorithm you can tweak it and optimize it — that idea is going to go away more and more as machine learning takes up more and more of the computing that we do,” he says.
How will breakthroughs in artificial intelligence transform human activity and impact decision making in government and society? These are questions that Joshua Cooper Ramo has spent years examining. He's the author of "The Seventh Sense: Power, Fortune and Survival in the Age of Networks."
“We’re sort of entering an era where the initial age of networks that we experience is about to get much more complex,” says Ramo. “First of all, the networks are going to be instant — they’re going to be what we call ‘Zero Latency Networks,’ so things will happen very, very quickly. And part of the nature of that is things will be happening so fast that you’ll need to use A.I. and you’ll need to use machines to solve problems because we want more and more speed.”
When looking back, Ramo says some of the great inventions of the Industrial Revolution — trains, planes and ships, for example — were built to compress space and distance. In the modern era, our drive to create ever-faster networks caters to our desire to compress time.
“In our age, we need this kind of new sense for what it means to be constantly enmeshed. Part of that gets to this very important issue of, ‘Where is the human in the loop?’” he says.
While advances in artificial intelligence can help humans compress time and build ever-faster networks, A.I. may also affect the very fabric of our democracy.
“We’re in the midst of what I think is a profound network election in many ways, and the way in which people are thinking about the world is exactly this issue: The ability to make a distinction between what they see on their social network and what actually goes on in the real world is sort of disappearing," Ramo says.
The evolution of artificial intelligence may also have extreme consequences for the world of science.
“We live in a world with increasing, what I call in the book ‘black boxing’ of systems — things disappear into these algorithms in ways that we can’t understand,” Ramo says. “The Scientific Revolution was really about humans discovering the answers to questions ourselves with our own minds. Where we are today is a point where the machines may be able to solve scientific problems or even commercial problems or maybe economic problems better than any human can, and we won’t know how they got the answer.”
So are we just dumbed-down robots? Ramo says that question can only be answered by looking at power structures.
“The guys who control the Google algorithm and who control the Facebook algorithm, they have more power arguably than any group in human history,” he says. “On what basis are they accountable? How do they make decisions? How are those algorithms programmed? The fundamental problem is most of us can’t understand that.”
As the recent controversy surrounding Facebook shows, algorithms aren’t always objective.
“Everything is a reflection of the ethics, somehow, of its creator, and that’s true for algorithms as much as anything,” says Tanz. “I think there’s real concern in the A.I. community that if there’s only a certain chunk of society that is creating A.I. and embedding it with their understanding of the world, that is going to have a massive influence on how technology develops, and therefore how society develops.”
This story first aired as an interview on PRI's The Takeaway, a public radio program that invites you to be part of the American conversation.