Study Guide

I, Robot Themes

By Isaac Asimov

  • Technology and Modernization

    I, Robot is a classic science fiction book in the sense that it takes some technology that doesn't exist and asks, "what would life be like if we did have this?" The big technology in I, Robot is robots, of course (not "I," which would be hilarious—"we've invented the first-person pronoun! Oh no, it's attacking us!"); but there are also little glimmers throughout the book of new technology and the ways that people have or haven't adapted to it. For instance, instead of cars we have "gyros"—but marriage and family life are still what they were in the 1940s. Asimov is interested in some technologies and not in others.

    Questions About Technology and Modernization

    1. Are robots different from other forms of technology in these stories? Or do these robots differ from technology in our real lives?
    2. Does the book take a stand on technology and modernization? Is it always presented as a good thing? If not, what makes technology good or bad?
    3. How do different characters deal with technological progress? Does the book provide a full range of responses to technology?
    4. Besides robots, what other technologies does Asimov invent for this book? Are there any patterns to his invented technologies? (For instance, does he seem most interested in weapons or in forms of communication or transportation or medicine?)

    Chew on This

    I, Robot presents us with a very positive view of technological progress, both through the stories and through the dialogue of characters we trust.

    I, Robot is unconvincing as a defense of robotics because, while each story resolves an issue, each story shows us how robots could go wrong.

  • Morality and Ethics

    If you expected a lot of mindless adventure in I, Robot, then you were probably surprised by how much these stories focus on morality and ethics and how few explosions there are. In I, Robot, one of the central questions that robots and humans face is how to live a moral life—except for the robots, this question is already answered, since they have the Three Laws to follow. In fact, the Three Laws almost ensure that the robots are better than people—unless people follow the Three Laws too. Unfortunately, although they seem pretty simple, the Three Laws manage to trip up some robots that try to be moral, which is why US Robots has to keep Susan Calvin and other robopsychologists on the payroll.

    Questions About Morality and Ethics

    1. Are robots really more moral than humans? Are you convinced by Susan Calvin's argument about how the Three Laws map onto human moral systems in "Evidence"?
    2. The Three Laws are presented in a fairly clear manner—something is either allowed (and good) or not allowed (and bad); but what sort of wiggle room do the robots find in these laws? Is there any gray area in morality, or is everything in this book easily divided up into good and bad?
    3. What's the relation between morality and some of the other themes of this book, like Fear and Choice? For instance, the robots don't have a choice about being moral—does that change what you think about their morality?
    4. In the real world, religion is often associated with morality; but religion doesn't seem to play a big role in this book. So is there a relation between morality and religion in this book? Is there an argument in this book about how those two are or aren't connected?

    Chew on This

    Morality in I, Robot is purely a matter of acting correctly—and not at all a matter of thinking, feeling, or believing correctly. As long as you act correctly, it doesn't matter what you think.

    Morality in I, Robot is connected to the positronic brain, which means that morality has a physical effect on the universe (or on the math that describes the universe). In other words, morality isn't an abstract concept, but a concrete part of the world.

  • Fear

    In I, Robot, one of the most common responses to new technology is fear. Asimov doesn't spend a lot of time in these stories looking at people who are afraid of robots but don't understand them, like labor unions and religious people. But we do spend a lot of time with people who understand robots, and even these people sometimes seem a little afraid of them, like Calvin (in "Little Lost Robot") and Powell and Donovan (in "Catch That Rabbit" and "Escape!"). So while it might be tempting to dismiss fear of technology as silly, the book shows that fear is a response even knowledgeable people have to new technology.

    Questions About Fear

    1. Is fear of robots ever reasonable? Or is it always an irrational response in these stories? If it is reasonable, what are people afraid robots will do? Is it just a fear that robots will kill all humans? Or is there something else that robots threaten?
    2. How do different characters respond to fear? It might be interesting to look at characters who don't know about robots (Mrs. Weston), people who do know (Calvin), and even robots themselves (like Robbie).
    3. Is fear ever a useful emotion? Does it ever alert people that something is wrong? Or does it just lead people to the wrong conclusions?

    Chew on This

    In I, Robot, many characters have fears about robots—both people who don't know about robots and those who do—but those fears are never legitimate. So while fear is understandable as a human emotion, it's not a helpful one in these stories.

    In I, Robot, fear is a very human emotion—which is why one of the first times we see fear is when Robbie is afraid of Mrs. Weston. This lets us know that these robots are very human.

  • Foolishness and Folly

    In I, Robot, people can be quite foolish about robots. First, they can be foolish and think that robots are a threat (which is foolish because robots are not a threat… right?); second, some humans can be foolish in how they deal with robots—giving them dangerous orders or ambiguous orders that could be misinterpreted. If I, Robot has one lesson for us about foolishness, it's that we could avoid a great deal of trouble if only we thought a little harder before we spoke. Especially when we're speaking to robots. Be sure to take note.

    Questions About Foolishness and Folly

    1. We think most of the examples of foolishness in this book are human foolishness. But are robots ever foolish? For instance, is Speedy foolish when he's acting drunk, or is that something else? What about The Brain, when he becomes a practical joker?
    2. Is foolishness a tragic flaw or more of a minor problem? Do foolish people get their way ultimately? Is there anything good about foolishness?
    3. Is foolishness related to ignorance? Or are all people likely to be equally foolish? For instance, Grace Weston doesn't understand robots, and she's foolish; but Calvin does know about robots, and she makes some foolish mistakes as well.
    4. Who is the smartest person and who is the most foolish person in this book? How does Asimov tell us who is smart and who is foolish?

    Chew on This

    Fools never win in I, Robot, which is why this book is ultimately a happy book rather than a tragic one.

    Foolishness is a human problem, one that is spread equally among educated and uneducated people in these stories.

  • Language and Communication

    In I, Robot, there are several ways to communicate, but sometimes language leads to miscommunication. For instance, Robbie can't talk and has to communicate through body language (making a "C" in the air when he wants to hear "Cinderella"). Similarly, to communicate that he's a man, Byerley also uses body language (that is, he hits a guy). And when people do talk, we find that speech can sometimes be ambiguous or misinterpreted. So, for instance, Nestor-10 hides because he was told to get lost, which leads to a big problem for Calvin. Language is useful for communication, but it doesn't always work perfectly.

    Questions About Language and Communication

    1. We notice that body language is used to communicate, but are there other forms of communication besides language?
    2. While some people are worried about robots from the beginning, Calvin notes that real opposition begins when US Robots starts making talking, walking robots (Robbie.225). Why do you think people would oppose robots that could talk and walk?
    3. How do different technologies affect the way people communicate? Would these stories be different if the characters had our technology of communication (i.e., cellphones, the internet)?
    4. Robots can learn slang and are affected by the tones that people use. How does it affect our reading when robots are so sensitive to human speech?

    Chew on This

    I, Robot traces the evolution of robots from Robbie to the Machines. One way that robots grow is that they become better and more human in their speech, from non-speaking Robbie to cold-voiced Cutie to warm-voiced Dave, and so on.

    Robots are sensitive to speech, which explains why Susan Calvin is often so cold and reserved—to avoid giving them orders that she doesn't mean to give.

  • Choices

    Robots are constrained by the Three Laws: they don't have a lot of choice about what they can do. For instance, if a robot sees someone in danger, it has to act. In I, Robot, it may seem like Asimov is setting up a distinction between robots and humans: robots have no choice, humans do. But at the end of the book, in "The Evitable Conflict," Susan Calvin points out that humans have always had limits on what they can choose. So it almost seems as if humans don't have much choice either. Does that seem right?

    Questions About Choices

    1. This is the big question: how much free will do individual humans have in these stories? For instance, when Milton Ashe is attracted to a pretty woman, is that a choice he has made? When Bogert is ambitious, has he made a choice to be ambitious?
    2. As a variation on that last question, how much free will does humanity have as a whole here? Do we lose the ability to make choices when the Machines start to help us? Do you agree with Susan Calvin's argument that humanity never had any real ability to control our destiny (Evitable Conflict.225)?
    3. What happens in these stories when human choices conflict? For instance, Bogert's choice to become Director of Research conflicts with Lanning's choice to stay on as Director. How do these conflicts get resolved?
    4. The Three Laws limit the robots' choices in these stories; but what limits human choices? Donovan and Powell discuss rules of behavior, and government laws certainly seem to limit certain options in these stories. Are there other ways that human choices are limited?

    Chew on This

    Since humans are constrained by certain rules, just like robots, I, Robot shows us how robotic we are—or how human our robots are.

    In I, Robot, both humans and robots have the ability to make choices; but once they understand what the best choice is, there's no real choice left to make.

  • Friendship

    "Friendship" is interesting in I, Robot because there are many close relationships in this book—but we're not sure that we'd want to call them friendships. For instance, Gloria calls Robbie a friend, but Robbie is programmed to be a good nursemaid, so is it right to call him a friend? (Or let's make the case simpler: imagine Robbie was a baby-sitter who was paid to hang out with and take care of Gloria; would it be accurate to call him a friend?) Likewise, one of the strongest partnerships in the book is between Powell and Donovan, and we think they're friends, but they spend most of their time fighting. Are they really friends? I, Robot presents us with several relationships like these and makes us answer the question: what does it mean to be friends?

    Questions About Friendship

    1. What is the best friendship in this book? How does Asimov let us know which characters are friends? Does he come out and tell us, or does he show us in some way?
    2. There isn't a lot of friendship in this book, and there aren't a lot of love affairs. How would you describe the relationships in this book? Partnerships? Companions?
    3. Do robots have the capacity to be friends? Who is the friendliest robot in this book? How does it affect our reading process to have robots be some of the main characters in this book? Does this book present an idea of what it means for people (or robots) to be friends?
    4. Notice that robots are referred to by names, not serial numbers, for most of the book. Does this make them seem friendlier and more human? Are there other ways that Asimov presents robots as friendlier than the average robot from other works of science fiction?

    Chew on This

    Friendship in I, Robot is a matter of equality and shared interests. And since robots have different interests and are so superior to humans, humans and robots can never be friends.

    Although friendship seems so important to us in our real lives, Asimov presents a world where friendship doesn't really matter all that much.

  • Power

    I, Robot is very interested in questions of superiority and domination—who has power, who should have power, and what people (or robots) do with that power. Robots certainly seem to have some power, since they are stronger than people; but people have a lot of power as their controllers. (No matter how strong a robot is, it still needs to follow human orders and not harm any humans.) Power in I, Robot also takes some different forms: there's simple strength, but then there's also power that comes from love (for instance, Mrs. Weston has power over Mr. Weston because he loves her) and also from respect (which Susan Calvin should have since she's the best robopsychologist). So power is an important theme here, and it comes in many forms.

    Questions About Power

    1. Is Calvin correct that all life dislikes domination (as she says in "Little Lost Robot")? Does all life want to be free? If so, does that mean that robots are alive? And does it change what we think about how the Three Laws limit robots' choices?
    2. How do different characters respond to power differently? Does power affect the way that characters interact with each other? How do characters deal with the power of others? Do they resist or submit?
    3. I, Robot shows several different forms of power, such as Calvin's scientific expertise and Major-General Kallner's military command. Does this book show us that one type of power is better than another?
    4. What's the relation between power and morality? Do some characters have power because they are moral? Or does morality make characters less powerful by constraining their choices?

    Chew on This

    I, Robot shows power of different forms—military command, scientific respect, physical strength, etc.—in order to make us question the power of robots, which is mostly physical.

    Robots are shown to be more powerful and more moral than humans throughout the stories, so that the reader isn't bothered by the idea of robots taking over at the end of the book.

  • Science

    There are a few different branches of science in this book—astronomy (Mercury, the Sun), physics, and chemistry (Runaround.168), but the most important science is the imaginary science of robopsychology—the understanding of how robots think. Even though we only get one robopsychologist in these stories (the best one), that science underlies a lot of what happens in these stories. Science is also important here because it's not just some abstract search for truth; it's an institutional matter, with people fighting for resources and respect. Asimov worked as a scientist, so he knew that science was conducted by people, with all the usual issues of fear and power that people carry with them. If only we could be as well-adjusted as the robots…

    Questions About Science

    1. Robots may be a good parallel for atomic science that was big in the 1940s (check out the "Symbolism, Imagery, Allegory" page for some thoughts on that); but are there other sciences and technology from that time that seem important in relation to robots? If you were to update these stories, what technology do you think would be a good parallel for robotics?
    2. How does science relate to the other important themes of the book, such as power? Are scientists more powerful than others in the book?
    3. How do different people deal with science and the unknown? For instance, Mrs. Weston doesn't understand robots, but does that stop her from having an opinion on them?
    4. How do people resolve scientific disputes? For instance, Bogert and Lanning disagree on the math in "Liar!"—how do they work out this disagreement?

    Chew on This

    I, Robot shows the behind-the-scenes work of science, such as the hyperspace-drive research in "Escape!", so that we don't take the technology for granted.

    I, Robot focuses on the actions of a business, US Robots and Mechanical Men, Inc., which shows us that science is not some neutral human activity, but is also all tied up in business and politics and human prejudices.

  • Rules and Order

    I, Robot is full of rules that may or may not help people to live full lives: there are the robots' Three Laws, there are the government's rules (first banning robots from Earth, then merging into a single world government), and there are even unwritten rules of human behavior. Sometimes these rules work correctly to help people (like the Three Laws, which at least work some of the time), and sometimes these rules get in people's way. Does this book have a lesson about rules and order?

    Questions About Rules and Order

    1. How do rules and order interact with other themes, such as power? Are rules always supported by power? Or are some rules supported by morality?
    2. How do different characters relate to rules and order? Are there characters who support rules and others who disobey rules more often? For instance, Kallner seems like a rule-supporting character—but then, he's one of the people who wanted the robots built with a modified First Law, so he doesn't totally support all rules. Are there any characters who obey all the rules?
    3. Are there any historical connections about rules and order that help us to understand these stories? For instance, the 1940s were a time when people were worried about the threat of Communist infiltration; does that help us understand the characters' fears of being replaced by robots?
    4. We see several types of rules in this book, including government laws and military rules. What other types of rules do we see and when do we see them? For instance, with Powell and Donovan, since they are equals, do we hear a lot about orders and power? Or do they talk more about traditions and customs?

    Chew on This

    I, Robot gives us several stories where people have the choice to disobey rules and order, but no moral reason to do so.

    I, Robot gives us a system of rules and order that help people to survive with robots—the Three Laws. But we never see US Robots come up with these Three Laws, so we never get to ask the question: "Who comes up with the rules?" That makes the rules seem more neutral.