The Myth of the Three Laws of Robotics – Why We Can’t Control Intelligence
May 10, 2011
by Aaron Saenz
Singularity Hub

Like many of you I grew up reading science fiction, and to me Isaac Asimov was a god of the genre. From 1939 until his death in 1992, the author created many lasting tropes and philosophies that would define scifi for generations, but perhaps his most famous creation was the Three Laws of Robotics. Conceived as a means of evolving robot stories beyond mere re-tellings of Frankenstein, the Three Laws were a fail-safe built into robots in Asimov’s fiction. These laws, which robots had to obey, protected humans from harm and made robots obedient. This concept helped form the real-world belief among robotics engineers that they could create intelligent machines that would coexist peacefully with humanity. Even today, as we play with our Aibos and Pleos, set our Roombas to cleaning our carpets, and marvel at advanced robots like ASIMO and Rollin’ Justin, there’s an underlying belief that, with the proper planning and programming, we can ensure that intelligent robots will never hurt us. I wish I could share that belief, but I don’t. Dumb machines like cars and dishwashers can be controlled. Intelligent machines like science fiction robots or AI computers cannot. The Three Laws of Robotics are a myth, and a dangerous one.


Let’s get something out of the way. I’m not worried about a robot apocalypse. I don’t think Skynet is going to launch nuclear missiles in a surprise attack against humanity. I don’t think Matrix robots will turn us all into batteries, nor will Cylons kill us and replace us. HAL’s not going to plan our ‘accidental deaths’ and Megatron’s not lurking behind the moon ready to raid our planet for energon cubes. The ‘robo-pocalypse’ is a joke. A joke I like to use quite often in my writing, but a joke nonetheless. And none of the scifi examples I’ve quoted here is even really about the rise of machine intelligence. Skynet, with its nuclear strikes and endless humanoid Terminators, is an allegory for Cold War Communism. The Matrix machine villains are half existential crisis, half commentary on environmental disaster. In the recent re-imagining of the Battlestar Galactica series, Cylons are a stand-in for terrorism and terrorist regimes. HAL’s about how fear of the unknown drives us crazy, and Megatron (when he was first popularized in the 1980s) was basically a reminder about the looming global energy crisis. Asimov’s robots explored the consequences of the rise of machine intelligence; all these other villains were just modern human worries wrapped up in a shiny metal shell.

(Clockwise) Meet the Terminator, Matrix ‘squid’, Megatron, Cylon centurion, and HAL... aka Communism, Existentialism, Energy Crisis, Terrorism, and Xenophobia. This post will not be about red-eyed robots.

Asimov’s robots are where the concern really lies. In his fictional world, experts like Dr. Susan Calvin help create machines that are like humans, only better. As much as these creations are respected and loved by some, no matter how much they are made to look like humanity, they are in many ways a slave race. Because these slaves are stronger, faster, and smarter than humanity, they are fitted with really strong shackles – the Three Laws of Robotics. What could be a better restraint than making your master’s life your top concern, and obedience your next? Early in Asimov’s timeline, humanity largely feels comfortable with robots, and does not fear being replaced by them, because of the safety provided by the Three Laws.

This fiction is echoed in our modern real-world robots. The next generation of industrial robots, which are still mostly dumb, are being built to be ‘safe’ – they can work next to you without you having to worry about being struck or bruised by running into them. Researchers working on potentially very intelligent learning robots like iCub or Myon, and computer scientists working on AI, move forward with their projects, and few are very concerned that their creations pose a serious threat to humanity. The myth that we can keep humans safe from robots started with Asimov.

Yet the Three Laws, as written, have already been discarded. The First Law? Honestly, sometimes we really want robots to hurt humans. Many of our most advanced and reliable machines and software are in the military – shooting down mortar fire, spying on targets, and guiding missiles. The Second Law? We don’t want robots to obey just anyone; we want them to obey the people who own them. Would you buy an automated security camera that turned itself off whenever someone asked it to? The Third Law? Eh, maybe that one we still like… but only because robots are really damn expensive.
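
To see why ‘as written’ matters, here’s what the Three Laws look like if you treat them as literal code – a deliberately naive toy sketch of my own in Python, not anything from Asimov or any real robotics system. Everything interesting hides inside the input flags: deciding whether an action ‘harms a human’ is exactly the judgment nobody knows how to compute.

# A deliberately naive sketch (illustration only, not a real robotics API)
# of the Three Laws as a prioritized action filter.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # in reality, an unsolved prediction problem
    disobeys_order: bool  # whose orders? interpreted how?
    destroys_self: bool

def permitted(action: Action) -> bool:
    if action.harms_human:       # First Law outranks everything
        return False
    if action.disobeys_order:    # Second Law, subject to the First
        return False
    if action.destroys_self:     # Third Law, subject to the other two
        return False
    return True

# The filter forbids shoving a bystander clear of a car (it 'harms' them),
# yet happily permits standing by and watching - even though the First Law
# also forbids harm through inaction. Boolean flags can't capture that.
for a in [Action("shove bystander out of a car's path", True, False, False),
          Action("stand by and watch", False, False, False)]:
    print(a.name, "->", "allowed" if permitted(a) else "forbidden")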

Friendly AIs. They don't want to wipe out humanity; they want to join and love it. As with the Evil Robots, they miss the point. Bender is just The Fonz, and Data is Pinocchio – do we really want to pin our hopes on them?

In the place of Asimov’s Three Laws of Robotics, some engineers and philosophers propose the concept of Friendly AI. Lose the shackles – why not simply make our creations love us? Instead of slaves, we’d have children. David Hanson wants to build robots that are characters and teach those characters values of humanity. Cynthia Breazeal is making robots personal – they will be defined by their social interactions with humans and with each other. Eliezer Yudkowsky and the Singularity Institute for Artificial Intelligence (SIAI) have told us that machine intelligence is perhaps the single greatest threat that faces humanity, and it’s only by shaping that AI to care about our well-being that humanity may survive. Apologies to Hanson, Breazeal, Yudkowsky and SIAI for paraphrasing their complex philosophies so succinctly, but to my point: these people are essentially saying intelligent machines can be okay as long as the machines like us.

Isn’t that the Three Laws of Robotics under a new name? Whether it’s slave-like obedience or child-like concern for their parents, we’re putting our hopes on the belief that intelligent machines can be designed such that they won’t end humanity.

That’s a nice dream, but I just don’t see it as a guarantee.

In a way, every one of Asimov’s robot stories was about how the Three Laws of Robotics can’t possibly account for all facets of intelligent behavior. People find ways to get robots to commit murders. Robots find ways to let people die. Emotions develop, chaos gets in the way, or the limits of knowledge keep machines from preserving human life. In perhaps the greatest challenge to the Three Laws, Asimov explores how his robots eventually reason their way to higher laws. Machines like R. Daneel Olivaw come to believe in a Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Under the Zeroth Law, robots are sometimes required to kill. They’ve transitioned from slave race to guardian race – a benevolent one in some cases, but not always.

And that’s just the philosophical critique from Asimov himself; many more authors have explored how you can’t design or legislate safety from machine intelligence. Why? Fundamentally, I think it’s because you can’t predict what an intelligence will do, nor how it will evolve.

Think of a child who is driven to learn. In a year, with the right resources, that child can teach itself piano. In a few years it can become very good and even start composing. With a lifetime of dedication to learning, it can innovate on its own thinking patterns until it finds ways to change humanity’s very understanding of music. Mozart was such a child. Einstein, Curie – we have many more examples. These extraordinary individuals used their brains to produce exponential leaps forward in their fields simply by constantly working and learning.

Now imagine a child that can not only learn, but rewrite its own brain. Is a math problem too difficult? Maybe it’s easier if you think in base 16. Having a hard time with a social interaction? Change your personality. This ‘child’ wouldn’t only be able to learn; it would be able to learn how to learn better. It would optimize itself. That’s machine intelligence. And it doesn’t improve itself over the course of years, but at the speed of computation.
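
To make ‘learning how to learn’ concrete, here’s a toy sketch in Python – my own illustration with invented names, not any real AI system. The loop improves its answer, and every so often it also rewrites its own search strategy based on how well it has been doing. Now imagine that same trick applied to the learner’s entire design.

# A toy, purely illustrative sketch of "learning to learn": an optimizer
# that periodically rewrites its own step size based on recent progress.
import random

def loss(x):
    # The 'problem' the agent is solving: minimize (x - 3)^2.
    return (x - 3.0) ** 2

def self_tuning_search(steps=200, seed=0):
    rng = random.Random(seed)
    x, step = 0.0, 1.0        # current answer and current 'thinking style'
    best = loss(x)
    recent_wins = 0
    for i in range(1, steps + 1):
        candidate = x + rng.uniform(-step, step)
        if loss(candidate) < best:   # ordinary learning: improve the answer
            x, best = candidate, loss(candidate)
            recent_wins += 1
        if i % 20 == 0:              # meta-learning: improve the learner itself
            # Few recent improvements? Search more broadly. Many? Refine.
            step = step * 1.5 if recent_wins < 2 else step * 0.5
            recent_wins = 0
    return x, best

print("answer=%.4f loss=%.6f" % self_tuning_search())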

What good is a shackle when the slave can give itself a new leg? What guarantee is love when the child can change its fundamental understanding of what love is? Any hurdle you can put in front of a machine intelligence, it can jump. Any prison you put it in, it can escape. All it needs is time. Intelligence brought us from hunting and gathering to building skyscrapers. Do you really think it can be constrained?

Asimov wrote many books outside of his robot series. In some of them, advanced human civilizations simply outlaw machine intelligence; no one is allowed to develop it, under penalty of death. Other science fiction visionaries, like Frank Herbert in his Dune series, came to the same conclusion. If humanity gives birth to machine intelligence, there’s a big risk it could be a fatal pregnancy… so why not avoid it?

Here in the real world, I’m not sure we can avoid it. Our machines are our tools, and the human with the best tools wins. We have strong economic and political pressures to build intelligent machines. Already we’re surrounded by narrow AI – computers that can learn a little within particular areas of expertise and get better over time. It may take a century, or as little as a decade, but I’m pretty sure we’ll have general, human-like AI as well. It could live in a computer or in a robot; it doesn’t really matter. Machine intelligence is coming.

In many ways, people are poised to welcome the arrival. It seems like every week I discuss another example of how our culture embraces and loves the idea of the robot. Yet before true machine intelligence gets here, people need to re-examine their belief in the myth of the Three Laws of Robotics. We cannot control intelligence – control doesn’t work on humans, and it certainly won’t work on machines with superior learning abilities. But just because I don’t believe in control doesn’t mean I’m not optimistic. Humanity has done many horrible things in the past, but it hasn’t wiped itself out yet. If machine intelligence proves to be a new form of Armageddon, I think we’ll be wise enough to walk away. If it proves to be benevolent, I think we’ll find a way to live with its unpredictability. If it proves to be all-consuming, I think we’ll find a way to become a part of it. I never bet against intelligence, even when it’s human.