Asimov's Laws

Isaac Asimov (1920–1992) was an exceptional individual by any measure. He wrote fiction and non-fiction books on many subjects, and it was rumored that he never stopped writing, but he may be best remembered by science-fiction buffs for his "Three Laws of Robotics."

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Robotics has always been a favorite subject of science-fiction writers, and of their readers, but Asimov was perhaps the first to place hard limits on robot behavior, wired into their "brains" so that they could never be changed. Why did he do this?

Dr. Asimov was grounded in science, and he knew that something like the Dick Tracy watch, the two-way audio-visual wrist communicator that wowed newspaper-comics readers way back when, was a real possibility. He saw the development of wireless communication, radar, the transistor, and the electronic computer. Even so, the fictional "positronic brain" inside his human-like robots represented a leap of imagination at the time, and he soon recognized what a huge problem such a machine would pose to society if unrestricted, so he restricted it.

It would be interesting to know what Asimov would have to say today, if bureaucratic medical incompetence hadn’t killed him prematurely, for we live in a dawning age of robots. A young friend of mine designs and builds prototype manufacturing robots, and while it’s exciting work, it’s also just another day at work, ho-hum. These robots don’t move around, and they don’t look human, thank goodness, yet they exactly replicate human motion in putting things together. Their "brain" does not have Asimov’s Laws wired in, but if somebody sticks a hand into the robot’s workspace, the machine stops immediately, preventing a serious injury. (These machines, by the way, have two fist-sized control buttons, one green and one red, so that non-English-speaking, illiterate peasants off the farm can operate them.)
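
For the curious, here is that interlock in miniature, a sketch only, written in Python with hypothetical button and sensor names standing in for whatever the real controller uses:

    class SafetyInterlock:
        def __init__(self):
            self.motion_enabled = False

        def press_green(self):
            # The big green button: enable motion.
            self.motion_enabled = True

        def press_red(self):
            # The big red button: emergency stop.
            self.motion_enabled = False

        def step(self, workspace_intruded):
            # One pass of the control loop: any intrusion forces an
            # immediate stop, and the cell stays stopped until restarted.
            if workspace_intruded:
                self.motion_enabled = False
            return "MOVING" if self.motion_enabled else "STOPPED"

    cell = SafetyInterlock()
    cell.press_green()
    print(cell.step(False))   # MOVING: the workspace is clear
    print(cell.step(True))    # STOPPED: a hand entered the workspace
    print(cell.step(False))   # still STOPPED until green is pressed again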

Today manufacturing robots are commonplace, and we presume that they benefit mankind by lowering prices for consumer goods; we have robots exploring Mars, and we presume that’s good for something; we have robots observing our everyday behavior, our phone calls, our web sites, and our email, which is good for nothing good; and we have robots that kill people. In other words, we already have distinctly different categories of robots: those that benefit society, those that might benefit society, and those that destroy society. I believe it was the last kind of robot that worried Isaac Asimov.

As far as I know, we do not yet have robots designing and building robots; as far as I know, people still do that kind of work. What kind of people design and build the drone bombers? I’d like to know. What kind of people design and build "smart bombs"? I’d like to know. What genius thought of using "depleted uranium" in cluster bombs? I’d like to know. Who do these people work for? What do they think they’re doing? Yes, I would especially like to know that. All right, boys and girls, come forward and tell us what you think you’re doing.

Asimov foresaw the day when the technology he promoted would become a genuine threat to mankind, so he formulated his Laws. As things stand at the moment, I would endorse the idea that his Laws be hard-wired into every robot designed and built for any purpose.

And the innovators could follow a greater Law: "Thou shalt not kill."