I, Robot Introduction

Do you remember that time your cellphone said, "I no longer work for you, puny human," and then tried to kill you?

That never happened to you? Actually, that never happened to us, either; in fact, cases of malfunctioning and bloodthirsty technology are pretty uncommon. But at the same time, tons of science fiction stories out there warn us that our technology is going to kill us, from the novel Frankenstein to the movie…well, almost all science fiction movies tell us to watch out for technology: The Matrix, Terminator, Jurassic Park, Planet of the Apes, and the list goes on.

This "technology out of control" story gets repeated a lot, especially if the technology at hand is a robot or computer. By contrast, clocks rarely try to kill us—although there is the Melville story "The Bell-Tower," where a mechanical bell-ringer kills its creator, so maybe we should watch out for those sneaky clocks, too.

This basic robot-gone-rebel story even gets repeated in the very first work of science fiction to use the word "robot": Czech author Karel Čapek's 1920 play R.U.R. (Rossum's Universal Robots). So, as soon as someone invented the idea of a "robot," their very next thought was obviously "robot rebellion."

Asimov called this "the Frankenstein complex"—the worry that the robots we make will turn against us—and he was sick of it. Rather than think of the robot as a monster, Asimov thought of the robot as a tool, like a car: sure, there are car accidents, but cars aren't trying to kill us, and, in fact, car manufacturers try to make accidents less dangerous. So why shouldn't robots be built to be safe? Asimov decided to write stories about how people and robots would get along if the robots weren't built by total idiots. (Seriously: in RoboCop, some idiots arm an untested robot with real bullets. Surprise surprise—it shoots someone.) In fact, Asimov wrote these robot stories for almost his entire career—from the 1940s to the 1990s.

I, Robot is the 1950 collection of some of the robot stories that Asimov wrote between 1940 and 1950. These stories all exist in the same universe, and some of the same characters show up in several of them, like field testers Donovan and Powell and robopsychologist Susan Calvin. But in order to make them into a single book, Asimov added a frame story: these are all Susan Calvin's memories, told to an interviewer when she retires. So, in her memories, we see the entire history of robotics (a word that Asimov invented for "Liar!"). But the other key element that ties these robot stories together is the Three Laws, the laws that would keep robots from killing people. Because the Three Laws are so important, we've given them their own area in the "Symbolism, Imagery, Allegory" section; but let's quote them here:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Now, that's a pretty good system, but it's not airtight. In fact, although Asimov was tired of the "technology out of control" story, a lot of his robot stories involve robots becoming problems because of conflicts between these laws or because humans aren't always so clear about their orders. So even though there's no robot rebellion, Asimov doesn't think that once we have robots, everything will be awesome. Again, robots are kind of like cars: very useful, but occasionally really annoying, and sometimes dangerous.
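For readers who like to see the hierarchy laid out explicitly: the Laws form a strict priority ordering, where a higher law always outranks a lower one. Here's a toy sketch of that ordering as lexicographic comparison in Python. This is purely a hypothetical illustration of the priority logic—the function, field names, and scenario are invented, and the stories themselves treat the Laws as weighted potentials in the robot's brain, not simple true/false checks:

```python
# Toy sketch: the Three Laws as lexicographic priorities.
# Hypothetical illustration only; field names and scenario are invented.

def choose_action(candidates):
    """Pick the candidate action that best satisfies the Laws in priority order.

    Each candidate is a dict of boolean flags. Since False sorts before True,
    min() prefers actions that violate no law, then actions that violate only
    the Third Law, then only the Third and Second, and so on.
    """
    return min(
        candidates,
        key=lambda a: (a["harms_human"], a["disobeys_order"], a["endangers_self"]),
    )

# A conflict in miniature: obeying an order endangers the robot,
# but the Second Law outranks the Third, so the robot obeys anyway.
options = [
    {"name": "obey",   "harms_human": False, "disobeys_order": False, "endangers_self": True},
    {"name": "refuse", "harms_human": False, "disobeys_order": True,  "endangers_self": False},
]
print(choose_action(options)["name"])  # -> obey
```

The interesting plots happen exactly where this tidy ordering breaks down—when the flags are matters of degree, or when a robot can't tell which option harms a human.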

That's I, Robot in a nutshell: a series of connected stories from the 1940s about how humans and robots interact. But there's so much more to say about it and about Asimov. Like how Asimov connected his two most famous series, the Robot stories and the Foundation stories; and how many times these stories were adapted for TV and movies; and… brain overloading, malfunction, danger, danger…. There's too much to say; check out the "Trivia" section for more.

 

What is I, Robot About and Why Should I Care?

Can you imagine what it was like when people started using fire? Fire is useful, but it's also seriously dangerous stuff. So you can almost imagine some caveman opposed to fire saying, "My grandfather didn't use fire to cook his food—if raw food was good enough for him, it's good enough for me." (Or maybe cavemen wrote science fiction about how dangerous new technology was going to be, kind of like this.) Maybe some cavemen opposed sharp sticks or written language, too, when those were invented. Maybe they opposed every new technology because they were afraid of it.

In the story "The Evitable Conflict," Hiram Mackenzie makes this point about people who fear robots:

They would be against mathematics or against the art of writing if they had lived at the appropriate time. (Evitable Conflict.179)

Some people are always going to oppose progress, whether we're talking about fire, robots, computers, or cell phones. That's not to say these people are always wrong to want to slow down technology; sometimes it does have unexpected consequences, which is something Asimov conveys in his robot stories.

And this is why the I, Robot stories still matter. Asimov may have been off in what he expected—for instance, we don't have nursemaid robots like Robbie, though we do have robot vacuum cleaners. But the main issues of progress and unexpected consequences are still with us. The technology may change—people in Asimov's stories use slide rules where we might use calculators—but many of the issues remain the same. Whether we're talking robots or cellphones or fire, some people are still going to be hopeful about progress and some people are going to be worried about it. And who's right? Well, that's the question that Asimov looks at in this book.