I, Robot is a classic science fiction book in the sense that it takes some technology that doesn't exist and asks, "what would life be like if we did have this?" The big technology in I, Robot is robots, of course (not "I," which would be hilarious—"we've invented the first-person pronoun! Oh no, it's attacking us!"); but there are also little glimmers throughout the book of new technology and the ways that people have adapted to that technology—or failed to. For instance, instead of cars we have "gyros"—but marriage and family life are still what they were in the 1940s. Asimov is interested in some technologies and not in others.
I, Robot presents us with a very positive view of technological progress, both through the stories and through the dialogue of characters we trust.
I, Robot is unconvincing as a defense of robotics because, while each story resolves its particular crisis, each story also shows us a new way that robots could go wrong.
If you expected a lot of mindless adventure in I, Robot, then you were probably surprised by how much these stories focus on morality and ethics—and how few explosions there are. In I, Robot, one of the central questions that robots and humans face is how to live a moral life—except that for the robots, this question is already answered, since they have the Three Laws to follow. In fact, the Three Laws almost ensure that the robots are better than people—unless the people follow the Three Laws, too. Unfortunately, although they seem pretty simple, the Three Laws manage to trip up some robots that try to be moral, which is why US Robots has to keep Susan Calvin and other robopsychologists on the payroll.
Morality in I, Robot is purely a matter of acting correctly—and not at all a matter of thinking, feeling, or believing correctly. As long as you act correctly, it doesn't matter what you think.
Morality in I, Robot is connected to the positronic brain, which means that morality has a physical effect on the universe (or on the math that describes the universe). In other words, morality isn't an abstract concept, but a concrete part of the world.
In I, Robot, one of the most common responses to new technology is fear. Asimov doesn't spend a lot of time in these stories on people who fear robots without understanding them, like labor unions and religious groups. But we do spend a lot of time with people who understand robots, and even these people sometimes seem a little afraid of them—Calvin (in "Little Lost Robot") and Powell and Donovan (in "Catch that Rabbit" and "Escape!"). It would be tempting, then, to say that fear of technology is silly, but fear remains the response that a lot of people have to new technology.
In I, Robot, many characters have fears about robots—both people who don't know about robots and those who do—but those fears are never legitimate. So while fear is understandable as a human emotion, it's not a helpful one in these stories.
In I, Robot, fear is a very human emotion—which is why one of the first instances of fear we see is Robbie's fear of Mrs. Weston. This lets us know that these robots are very human.
In I, Robot, people can be quite foolish about robots. First, they can be foolish and think that robots are a threat (which is foolish because robots are not a threat… right?); second, some humans can be foolish in how they deal with robots—giving them dangerous orders or ambiguous orders that could be misinterpreted. If I, Robot had one lesson for us about foolishness, it's that we could avoid a great deal of trouble if only we thought a little harder before we spoke. Especially when we're speaking to robots. Be sure to take note.
Fools never win in I, Robot, which is why this book is ultimately a happy book rather than a tragic one. If fools won, it would be a tragedy.
Foolishness is a human problem, one spread equally among educated and uneducated people in these stories.
In I, Robot, there are several ways to communicate, but sometimes language leads to miscommunication. For instance, Robbie can't talk and has to communicate through body language (making a "C" in the air when he wants to hear "Cinderella"). Similarly, to communicate that he's a man, Byerley also uses body language (that is, he hits a guy). And when people do talk, we find that speech can be ambiguous or misinterpreted. So, for instance, Nestor-10 hides because he was told to get lost, which leads to a big problem for Calvin. Language is useful for communicating, but it doesn't always work perfectly.
I, Robot traces the evolution of robots from Robbie to the Machines. One way that robots grow is that they become better and more human in their speech, from non-speaking Robbie to cold-voiced Cutie to warm-voiced Dave, and on.
Robots are sensitive to speech, which explains why Susan Calvin is often so cold and guarded—she wants to avoid giving them orders that she doesn't mean to give.
Robots are constrained by the Three Laws: they don't have a lot of choice about what they can do. For instance, if a robot sees someone in danger, it has to act. In I, Robot, it may seem like Asimov is setting up a distinction between robots and humans: robots have no choice, humans do. But at the end of the book, in "The Evitable Conflict," Susan Calvin points out that humans have always had limits on what they can choose. So it almost seems as if humans don't have much choice either. Does that seem right?
Since humans are constrained by certain rules, just like robots, I, Robot shows us how robotic we are—or how human our robots are.
In I, Robot, both humans and robots have the ability to make choices; but once they understand what the best choice is, there's no real choice left to make.
"Friendship" is interesting in I, Robot because there are many close relationships in this book—but we're not sure that we'd want to call them friendships. For instance, Gloria calls Robbie a friend, but Robbie is programmed to be a good nursemaid, so is it right to call him a friend? (Or let's make the case simpler: imagine Robbie was a baby-sitter who was paid to hang out with and take care of Gloria; would it be accurate to call him a friend?) Likewise, one of the strongest partnerships in the book is between Powell and Donovan, and we think they're friends, but they spend most of their time fighting. Are they really friends? I, Robot presents us with several relationships like these and makes us answer the question: what does it mean to be friends?
Friendship in I, Robot is a matter of equality and shared interests. And since robots have different interests and are so superior to humans, humans and robots can never be friends.
Although friendship seems so important to us in our real lives, Asimov presents a world where friendship doesn't really matter all that much.
I, Robot is very interested in questions of superiority and domination—who has power, who should have power, and what people (or robots) do with that power. Robots certainly seem to have some power, since they are stronger than people; but people have a lot of power as their controllers. (No matter how strong a robot is, it still needs to follow human orders and not harm any humans.) Power in I, Robot also takes some different forms: there's simple strength, but then there's also power that comes from love (for instance, Mrs. Weston has power over Mr. Weston because he loves her) and also from respect (which Susan Calvin should have since she's the best robopsychologist). So power is an important theme here, and it comes in many forms.
I, Robot shows power of different forms—military command, scientific respect, physical strength, etc.—in order to make us question the power of robots, which is mostly physical.
Robots are shown to be more powerful than humans and more moral throughout the stories, so that the reader isn't bothered by the idea of robots taking over at the end of the book.
There are a few different branches of science in this book—astronomy (Mercury, the Sun), physics, and chemistry (Runaround.168), but the most important science is the imaginary science of robopsychology—the understanding of how robots think. Even though we only get one robopsychologist in these stories (the best one), that science underlies a lot of what happens in these stories. Science is also important here because it's not just some abstract search for truth; it's an institutional matter, with people fighting for resources and respect. Asimov worked as a scientist, so he knew that science was conducted by people, with all the usual issues of fear and power that people carry with them. If only we could be as well-adjusted as the robots…
I, Robot shows the behind-the-scenes work of science, such as the hyperatomic drive research in "Escape!", so that we don't take the technology for granted.
I, Robot focuses on the actions of a business, US Robots and Mechanical Men, Inc., which shows us that science is not some neutral human activity, but is also all tied up in business and politics and human prejudices.
I, Robot is full of rules that may or may not help people to live full lives: there are the robots' Three Laws, there are the government's rules (which include first banning robots on Earth and later merging nations into one world government), and there are even unwritten rules of human behavior. Sometimes these rules work correctly to help people (like the Three Laws, which at least work some of the time), and sometimes these rules get in people's way. Does this book have a lesson about rules and order?
I, Robot gives us several stories where people have the opportunity to disobey rules and order, but no moral reason to do so.
I, Robot gives us a system of rules and order that help people to survive with robots—the Three Laws. But we never see US Robots come up with these Three Laws, so we never get to ask the question: "Who comes up with the rules?" That makes the rules seem more neutral.