Compare Speedy ("Runaround") and Herbie ("Liar!"): both robots are caught in dilemmas where the Three Laws conflict. Speedy has to follow an order but also has to protect himself; Herbie can't hurt anyone's feelings, but whether he tells the truth or lies, someone gets hurt. So, they're in similar situations.
Now compare how they're dealt with: Speedy is compared to a drunk and spends his time singing show tunes; Herbie sits alone in his room, reading books about heartache and romance. Both those situations are kind of silly, but doesn't Herbie's situation seem a little more serious and tragic? And doesn't Speedy's situation seem a little funny—a drunken robot singing show tunes? (Asimov could have shown a robot malfunctioning in lots of ways, but he chose to have the robot sing. Everyone looks a little silly when singing, especially show tunes.)
This is why we say that Asimov's tone can be serious or comic largely depending on the character. For instance, Powell and Donovan are often a little ridiculous; but Calvin is never treated as the object of humor.
Still, the overall tone of Asimov's work tends to be pretty objective. Powell and Donovan may be comic figures at times, but that's largely because they seem a little silly (what with all their fighting and joking around with each other). We never get the impression that the story is really taking one side or the other. The story may make us root for Calvin to figure out the problem, as in "Liar!"; but when she drives Herbie insane, the narrator doesn't tell us that it's OK or that she had her reasons. Rather, the narrator doesn't tell us what to think and lets us come to our own conclusions.
Well, the science fiction part is pretty clear: this is a book about an imaginary technology (humanoid robots) and how people deal with it. (This was even more science fiction-y when Asimov wrote these stories in the 1940s, before Honda built ASIMO and before Björk made her weird robot music video.)
But we're also going to say that these stories are often mystery stories of a particular type. They're not crime mysteries (although Asimov also wrote straight-up crime mysteries). The mystery here tends to be a scientific mystery: given what we know about robots (the Three Laws), why are they acting the way they do? Notice also that many of these stories work like classic detective fiction, like the Sherlock Holmes stories: the story only ends after the great detective (here, usually robopsychologist Susan Calvin) explains the mystery's solution to us and the other characters, because we are as clueless as Watson.
Asimov did not like this title. He wanted to name this book Mind and Iron, which is a phrase Calvin uses in the Introduction (Introduction.32); she seems to be saying that people think robots are just "mind and iron," but really robots are much more. But the publishers overruled Asimov and decided to name the book I, Robot. Which is weird because science fiction writer Eando Binder had already written a story in 1939 named "I, Robot," which was about a sympathetic robot. Why would the publishers want to remind people of this other story?
Also, "I, Robot" doesn't really fit this book. "I, ____" is a good title for an autobiography or a confession, but it doesn't make sense for a biography of someone else's life. For instance, the 1934 Robert Graves novel I, Claudius tells the story from Claudius's point of view and the 1939 Binder story "I, Robot" is the robot's own confession. But Asimov's I, Robot is mostly the story of robots as told by Susan Calvin. Shouldn't the publishers have titled this book You, Robot, or Those Robots Over There? Or maybe I, Susan. That might not sound like a good title, but it's a better fit.
There are lots of little endings in this book of nine stories, but let's deal with the last one: Calvin and Byerley discuss how the Machines have started directing human progress. Byerley seems upset by this, but we think Calvin makes a pretty good case—after all, all of us humans are subject to forces that are beyond our control. Sure, we might increase our control over certain parts of our lives (yay penicillin and other antibiotics); but if you've ever had a cold ruin a day, you know that there are some things that are beyond our control even today.
But even so, this ending opens up a lot of questions. We mean, the whole book has stories about how we don't really need to worry about robots rebelling and killing us all and taking over. And then the last story ends with something that could be described as a robot take-over. It's the kind of ending that makes us ask questions—which is our favorite type of ending.
And after that, the narrator tells us that Calvin retires and dies, which is like a big hint to us that even though Susan acts pretty cold and robotic, she is only human after all.
OK, so 1998 isn't the future anymore, but it was the future when Asimov was writing these stories in the 1940s. And in that future, there were going to be robots, a lunar base, manned expeditions to Mercury and Mars, and, instead of cars, gyros (which are probably helicopters, not the delicious Greek meat dish). Oh yeah, there's also no Internet, people don't have cellphones or laptops, and no one is watching cute cats on YouTube, which is definitely the best thing about life in the future.
So, yeah, we need to cut Asimov some slack in imagining the future: science fiction writers get a lot right when they describe the future, but they usually get even more wrong. Which is fine, since the point isn't to guess what will happen in the future, but more to ask, "if X happens, then what?" And the X in Asimov's case is robots. Also, there's a base on the moon (Robbie.62), which we think is pretty awesome. (What sort of sports do people play in low-gravity on the moon?) But Asimov doesn't really talk about the lunar base because he's more interested in showing us how robots work or don't work. Everything in these stories is focused on the issue of robots.
For instance, we get to see the mines on Mercury ("Runaround"), the power converter space station ("Reason"), and even Hyper Base, where they're developing some sort of warp drive ("Little Lost Robot"). But in all of these cases, Asimov doesn't tell us all that much about the settings themselves. Does he give us enough information to imagine what it's like to be on a Space Station? (Like, what does it smell like on a Space Station? Unpleasant, we're guessing.) No, not really. He only gives us enough information about the setting to understand how the robots function in it. So, Mercury is hot and dangerous, which leads to the problem Powell and Donovan have with Speedy and with keeping their headquarters running.
For us, the most interesting and most detailed setting work happens at the beginning of "Robbie" and in "The Evitable Conflict." These two stories make an interesting pair: "Robbie" is a story interested in one household and one little girl, whereas "The Evitable Conflict" is interested in the whole world and all humans. So we get very precise details in "Robbie"—enough to let us imagine a robot in our neighbor's house (since the issue here is one girl and her robot); and in "The Evitable Conflict" we get some big-picture description of the regions of the world that should lead us to thinking about the big picture of robots and humans.
But still, wherever the story is set, the focus is always on how robots and humans get along.
Asimov once said, "Everything I write is intended to enlighten, even my science fiction." And that's why we're giving this a 4 on the Tough-o-Meter: Asimov probably knew a lot of big words, but he doesn't use very many of them because he wants people to understand what he's writing about. He also knew his audience was probably made up of young people, possibly even younger than himself. (When "Robbie" was published, Asimov was only 20 years old.) Add to this the fact that Asimov was an immigrant—he came from Russia when he was three, and he could speak English and Yiddish, and his parents spoke Russian to hide secrets from him—so he was definitely aware of how language can be a barrier to communication. This might help explain why Asimov tries so hard to be clear.
So, why not a lower Tough-o-Meter score? For two reasons: one, Asimov may try to be clear, but the 1940s was a long time ago, and things change. So, Asimov might say something that would be clear to his audience then, but might not be clear to readers today. And two, even though Asimov writes clearly, the ideas that he talks about are pretty big and sometimes hard to grasp.
Asimov once said, "Everything I write is intended to enlighten, even my science fiction." We're with you on that, Mr. Asimov.
In fact, from 1958 to 1991, Asimov wrote almost 400 essays on science for The Magazine of Fantasy and Science Fiction and won a special Hugo Award for "adding science to science fiction." So we know that Asimov really cared about being clear about science and science fiction. (He does sneak a few science facts into these robot stories.)
So many of the robot stories are mysteries in the Sherlock Holmes mold, where one character has the answer and has to explain it to another, and to the readers as well. So what we get is a lot of dialogue where one character lays out the situation to another. There's not a lot of description—maybe the most description in this whole book is the part where Robbie and Gloria play in "Robbie." Most of the stories focus on dialogue between the characters. And that's how Asimov lays out the problem and the solution.
There are a few names for this technique in science fiction, but we like the phrase "As you know, Bob." In the classic, clumsy "As you know, Bob" technique, one character will tell another something that everyone should already know. If you were writing a story about your life for aliens, you might resort to something like, "As you know, Bob, we drive cars on streets." Which is ridiculous—when was the last time you told your friend Bob about cars driving on streets? But Asimov knows that's ridiculous, so he sneaks in a few jokes. For instance, here's Powell and Donovan discussing Dave in "Catch that Rabbit":
"And not just under it—they're part of it."
"I know that—"
"Shut up!" said Powell, savagely, "I know you know it, but I'm just describing the hell of it." (Catch that Rabbit.9-11)
Of course, Powell isn't describing it out loud just for fun or to remind himself about it; he's saying it out loud so that the readers know about it. So even when Asimov is trying to be clear to the audience, he also makes a few jokes about how he's doing it.
We're quoting these laws again because they're so important. Asimov himself noted (in his book Robot Visions), "If all that I have written is someday to be forgotten, the Three Laws of Robotics will surely be the last to go." And that seems to be a pretty fair statement, since the Three Laws show up all over the place in other works of fiction (more examples than you can shake a stick at).
But the reason we're talking about the Three Laws here is that they are all over the place in I, Robot. Mr. Weston hints at the First Law in "Robbie," Speedy is caught in a dilemma between the Second and Third Laws in "Runaround," and in "Reason," Cutie disobeys orders (which is against the Second Law) but mostly in order to uphold the First Law, etc. Not every story focuses on the Three Laws, but every story includes them. In fact, Asimov expects the reader to be so familiar with the Three Laws that by the end of the book, he doesn't need to repeat them. When Calvin is talking to Byerley in "The Evitable Conflict," she talks about the laws without ever telling the reader what those laws are (53).
OK, so let's assume that we're all familiar with what the laws say. What do they mean? Well, we know what the Three Laws mean to Susan Calvin since she tells us explicitly:
The three Rules of Robotics are the essential guiding principles of a good many of the world's ethical systems. (Evidence.138)
So the Three Laws are what makes sure that we have good robots.
But there's something about the Laws that almost everyone gets wrong: people think of the Three Laws as software that's just programmed into the robot's brain—you could program the Laws and have a good robot or not program the Laws and have an evil robot. But check out when Calvin and Peter Bogert discuss the issue in "Little Lost Robot": if you modify the Three Laws, you'd be left with "complete instability, with no nonimaginary solutions to the positronic Field Equations" (64). The Laws aren't just programs; they're a necessary part of how you build a positronic brain. Calvin says so even more clearly in "Evidence": "A positronic brain can not be constructed without" the laws (133). So if you leave the Laws out, you don't get an evil and intelligent robot, but rather a crazy robot, or just a pile of scrap metal.
So, in Asimov's robot stories, the Three Laws are not just a guarantee that the robots are good. They seem to indicate that there's some connection between goodness and stability/sanity—or even between goodness and intelligence. That is, it's impossible to be truly intelligent unless you're truly good.
When Asimov was writing his robot stories, the technology that people were really interested in wasn't robots, but atomic power. And you can see this in Asimov's stories with the way that he uses "atomic" and "hyperatomic." For instance, the robot's energy system is a "tiny spark of atomic energy" (Runaround.38) and Powell and Donovan build a robot in "Reason" with "atomic flare" tools (Reason.156). So the robots are clearly connected to atomic energy because, in some ways, they're standing in for atomic energy. The characters in the stories worry about robots, whereas people who were reading the stories around the time they were published were worried about atomic power—and weapons.
There are a few other times that Asimov uses "atomic" and "hyperatomic" that might make us think about atomic energy and the atomic bomb. For instance, in "Little Lost Robot," Major-General Kallner is in charge of the Hyperatomic Drive program, which should remind us of Brigadier-General Leslie Groves, the man who was in charge of the Manhattan Project. For another example, US Robots is racing against Consolidated to build a Hyperatomic Drive in "Escape!"—and that might make us think about the US and Germany racing to build an atomic bomb.
But here's one thing to keep in mind: although we often think of atomic energy as dangerous (it can be used in bombs, and nuclear power plants can have problems, like what happened at Fukushima), in the 1940s, a lot of people were a lot more optimistic about atomic power. For instance, the Atomic History Museum has collected pictures of products marketed as "atomic" or "nuclear." Atomic energy used to be really cool—people even used to have parties to celebrate atomic tests. So when Asimov uses the word "atomic," we should remember that atomic energy in the 1940s was new and exciting. While it could be used to create weapons, it could also be used to make life better. So in that way, atomic energy really does seem related to the issue of robots: everyone is worried that they'll be dangerous, but they also might just make our lives a bit better.
The Three Laws of Robotics and "atomic" are two symbols that run throughout the book, but are there any symbols that are specific to one or two stories? We think there are, and as an example, we want to look at "fire" and "flames" in "The Evitable Conflict."
Why? Well, first, we noticed that fire comes up a few times in this story, including the beginning and the end of the story. (And if something pops up at the beginning and the end of the story, you probably want to pay attention to it.) The story starts with three paragraphs on Stephen Byerley's fireplace: the fireplace is a "medieval curiosity" (1), but it's also totally modern, "a thoroughly domesticated fireplace" (2) that you can only hear through speakers (3). The fire gets mentioned once or twice during Byerley and Calvin's conversation (35) and it reflects Stephen Byerley's mood (182). And at the end of the story, the fire goes out (228)—and that's the very last line of the story (before we're returned to the frame of the interviewer and Calvin). So, sure, there's a lot of fire here; but what does it mean?
On one hand, fire tends to be associated with heat, light, activity, passion, love—all those good things about life. So if you read a story about how the Machines might be taking over and at the end of the story, a fire goes out, you might think, "Oh, that fire is a symbol for human activity and life. And since it went out, that means that human life is kind of over—so it's a sad ending."
On the other hand, fire may be associated with life, but let's be serious here: fire is a dangerous thing. And it's also represented as old-fashioned, "medieval," in this story. (And this is a good reminder to us: we might know what some symbols mean because that's part of our culture—fire is related to heat and light and love and warmth; but at the same time, sometimes a story will tell us just what a symbol means in the story.) So Byerley might be passionate at that moment, but notice how his passion is leading him in the wrong direction: he's becoming paranoid about the Society for Humanity. Passion, like fire, can be dangerous. So maybe fire is a symbol for our dangerous past; and when the fire goes out at the end of the story, this means that it's actually a happy ending—we've progressed beyond such a dangerous tool and have found safer ways to heat and light our lives.
On the third hand (how many hands do we have?), maybe it's a bit of both: fire is a symbol of something special about humans (we're passionate) but it's also a symbol of how we can be dangerous (we're passionate, sometimes about the wrong things). So maybe this ending isn't totally sad or totally happy, but a mix of both.
So if someone wanted to look at a symbol in one particular story, we might discuss fire in "The Evitable Conflict." Can you think of any symbols in the other stories?
The frame story is told by a first person narrator (the journalist), and he's on the edge of Calvin's story; so it's first person, peripheral narrator.
The stories themselves are told by a third person narrator (no one in the story) that floats around pretty freely most of the time, able to peek into anyone's head; so it's third person, omniscient. Except it doesn't always seem omniscient in the usual sense. Which leads us to…
When Asimov wrote the frame story (the journalist interviewing Susan Calvin), he added the journalist as a first person narrator. So the book starts and ends with pretty clear "I" statements: "I looked at my notes" (Introduction.1); "I never saw Susan Calvin again" (Evitable Conflict.230). In between those two statements, the journalist gets very few lines. So why add the journalist to this book at all?
Well, maybe it's because the journalist character is in the same position as the reader: we're trying to figure out the truth about Calvin and the robots. Or maybe it's because the journalist helps to stitch together these stories into a cohesive narrative by responding to them. For instance, he reacts with "a sort of horror" to the idea that Byerley was a robot (Evidence.300). So, when we say that the journalist is only the peripheral narrator, we mean that he doesn't take part in any of the stories that Calvin tells him; but he may be an important part of how we read this book.
But the journalist is only in the frame, and Susan Calvin tells him most of the stories. However, you've probably noticed that this makes very little sense: Calvin tells stories that don't involve her (she's not on Mercury with Powell and Donovan for "Runaround"), and she tells us about things that she couldn't possibly know. For instance, in "Robbie," Mr. Weston mutters something to himself (Robbie.138). How could Calvin know that? She couldn't know all the things that the stories tell us.
No, what we have here in the stories is straight-up third person style: the stories are told by a narrator who isn't a person in the story (or not even a person at all). But what kind of a third person narrator is it? Is it omniscient—knowing all, including what all the people are thinking? Is it limited omniscient—knowing a bunch, including what one or two people are thinking? Or is it objective—only telling us what we could see and hear if we were there and never telling us what people are thinking?
We've actually had a bit of an argument here at Shmoop over this (no fatalities, but several injuries—we take our literature seriously). And here's why: the narrator basically floats wherever it wants to; for instance, "Escape!" pops back and forth between Powell and Donovan on the spaceship and Calvin and Brain on Earth. The narrator also occasionally goes into multiple people's heads, even just for a peek. For instance: "'I see,' said Robertson, who didn't" (Escape.27). That's pretty omniscient, telling us that Robertson doesn't understand, even when he says he does.
However, although the narrator has moments of being omniscient, we think that most of the stories are told in a limited omniscient POV, focusing on Calvin. For instance, "Liar!" gets into Calvin's head more than any other character, like when Milton Ashe tells her he's marrying someone else.
And there's more. Even though the narrator occasionally tells us what a character is thinking or feeling, the narrator often just tells us the objective facts. For example, in "Reason," Donovan is shocked and angry that Cutie is spreading his religion. But the narrator doesn't tell us that Donovan is shocked and angry; the narrator tells us this: "He came charging down upon them, complexion matching his hair and clenched fists beating the air furiously" (Reason.80). That's just the objective facts of the situation, without looking into anyone's head.
So, we're calling this an omniscient third person narrator, but notice how the narrator sometimes leaves things out and just tells us what we could see, like an objective narrator. In other words, we could say that the narrator treats us like a scientist: we see the external facts of the world and we have to figure out what it all means.
But there's more—and don't worry, this will be the last thing: the narrator may tell us what the humans are thinking or feeling, but never tells us what a robot is thinking or feeling. Because the robots are really the main mystery of these stories—they're what we have to figure out.
Since I, Robot is a collection of short stories, we're going to try something a little different here: we're going to look at "Runaround" and see how it doesn't exactly fit the "Overcoming the Monster" plot. Why that plot? Largely because so many robot stories do fit it, which makes it interesting that Asimov's don't quite. (For more on Booker's Seven Basic Plots, search Shmoop for examples.)
Stage Identification: Robot Speedy isn't working properly and the environment of Mercury could kill Powell and Donovan soon.
Explanation/Discussion: This stage introduces the monster. But notice that the "monster" in "Runaround" isn't an actual monster. It's not like there's a dragon on Mercury or robots planning a revolution. No, the problem here is Mercury itself—if Powell and Donovan don't do anything, it's Mercury that's going to kill them.
Stage Identification: Powell and Donovan travel through the mines and see the wonders of Mercury.
Explanation/Discussion: The Dream Stage is the part of the adventure when everything is going reasonably well. Powell and Donovan certainly have problems here—like the fact that they have old robots that need to be ridden. But overall, their quest to overcome Mercury's environment seems to be going OK. They can't be on the surface for too long, but they've got mines to travel through. This part also includes the dream-like imagery of absolute shadow in the shade and a glittering plain of crystal in the sunlight (86). Mercury is dangerous, but it can be beautiful.
Stage Identification: Speedy is acting drunk, and Powell and Donovan can't order or scare him into coming back.
Explanation/Discussion: The Frustration Stage is all about how the plan to defeat the monster falls apart. Here, Powell and Donovan can't get to Speedy because he's acting funny. And when they cleverly figure out a plan to use chemistry, even that doesn't work (192). And if chemistry doesn't work, what hope is there? This is so frustrating that it might shade into the next stage.
Stage Identification: Powell runs out to get Speedy—even though he might die. And he almost gets rescued by the wrong robot.
Explanation/Discussion: The Nightmare Stage is when things are at their lowest, when the monster is about to kill the protagonist. Here, in order to snap Speedy out of his dilemma between the Second and the Third Law, Powell rushes out into the full heat of the Sun. But what really makes this a nightmare is that one of the older robots goes to help him first (222-224), which a) messes up the plan to get Speedy and b) would kill Powell anyway because the older robots move so slowly. So, if the older robot "saves" Powell, then both Powell and Donovan are dead men.
Stage Identification: Speedy saves everyone.
Explanation/Discussion: Luckily, just in time, Speedy snaps out of his drunkenness, saves Powell, and gets the element they need to overcome the danger of Mercury's environment. That's kind of the definition of a thrilling escape from death.
Now, if you look at "Runaround" as a story about "Overcoming the Monster," it's a little strange, because it's hard to locate the monster: it's mostly the environment of Mercury, but it's also (at least a little bit) the problem in Speedy's programming. And this story is probably the story that best fits this Basic Plot. (Where's the monster in "Reason" when Cutie takes over the space station and does a good job of running it? Or "Escape!" when Brain temporarily kills Powell and Donovan so that they can travel through hyperspace?) "Overcoming the Monster" is a pretty common story in science fiction (Frankenstein might fit this, and that's a classic "dangerous creation" story). But Asimov's robot stories only fit this Basic Plot if you squint. And then what we see is that the monster is usually not the robot, but something else (a conflict in the Three Laws, a poorly-given order, the temperature on Mercury).
Since I, Robot is a collection of short stories, we're going to try something a little different here. Here, we're going to break the plot of the whole book down—but we're going to do so as if this book were all about the robots as a single main character. We're calling this the "Rise of the Robots" plot.
Stage Identification: "Robbie"
Explanation/Discussion: Here's the question that I, Robot asks: can robots and humans live together safely? Even though Robbie is really primitive (he can't even talk), Mrs. Weston is afraid of him. And even though Robbie is all kinds of awesome, robots still get banned from Earth (225). So some people clearly think that robots are dangerous to us humans. Are they?
Stage Identification: "Runaround," "Reason"
Explanation/Discussion: Well, robots aren't dangerous but they can be a little unpredictable. Which is strange, because they have the Three Laws that they're supposed to follow. But these two stories show us that these laws have some wiggle room. This is part of the conflict in the book, because how can we be sure that robots are safe if these Three Laws have wiggle room?
Stage Identification: "Catch that Rabbit," "Liar!"
Explanation/Discussion: We could probably put these two stories under conflict. (Actually, "Catch that Rabbit" is a weird story that doesn't fit anywhere, since the problem there has nothing to do with the Three Laws. Let's forget that story.) But if you think about it, "Liar!" really complicates the whole Three Laws setup by showing us a robot that wants to follow just one law—the First Law, the really big one—and can't. This story also ends on a huge downer, with Susan Calvin purposely driving Herbie insane. So if the question of this book is "can robots and humans live together safely?," "Liar!" seems to say "no, because humans will put robots in impossible situations or destroy them." Oops—humans were the real monsters all along.
Stage Identification: "Little Lost Robot"
Explanation/Discussion: We think "Little Lost Robot" is a climax because this is the story that most seriously deals with the idea of robot rebellion. Calvin worries that the Nestors may be able to harm people. But at the end, even though Nestor-10 wants to attack Calvin, he can't. So, robots seem perfectly safe for people—even really weird robots like the Nestors.
Stage Identification: "Escape!"
Explanation/Discussion: But if robots are safe for humans, what's going on in "Escape!"? The Brain seems to intentionally put humans in danger—and why? For a joke. Of course, it turns out that the Brain didn't put people in danger and everything is OK. (In fact, even though Calvin solves the mystery, it's not like anything bad would have happened if she didn't.) So we can all breathe a little easier.
Stage Identification: "Evidence"
Explanation/Discussion: We know from the earlier stories that robots can get along with humans, but all those situations were under experimental conditions. Here we see the end result of all these stories: the idea that a robot and a human can live side by side in the real world. If Mrs. Weston in "Robbie" was worried about robots replacing us, here we see that robot replacements might not be such a bad thing.
Stage Identification: "The Evitable Conflict"
Explanation/Discussion: And here's the kicker: can robots and humans live together safely? Yes—and, in fact, that might be the only possible way for humans to live safely after all. (Because, remember, we're very dangerous, both to robots and to each other.) Of course, this story raises questions about human freedom and destiny; so, in the end, we have a whole new set of questions.
So there you have it, in nine easy stories: robots start out as useful servants ("Robbie"); they become more complex and more problematic ("Runaround," "Liar!"); and they end up as indispensable partners—or masters ("Evidence," "The Evitable Conflict"). So, I, Robot has an arc (the rise of the robots) even though it's just a collection of stories. But although there's a definite arc to this book, it doesn't totally fit within the Classic Plot Analysis. Which is why you should definitely check out our Three-Act Plot Analysis and Booker's Seven Basic Plots Analysis, where we look at individual stories.
Since I, Robot is a collection of short stories, we're going to try something a little different here. Here, we're going to break two different stories down to show how Asimov uses (and abuses) his plots.
The first act usually ends with the protagonist committed to the plot. So, in "Robbie," when Gloria's mother gets rid of Robbie, Gloria becomes committed to finding him. Notice that Gloria doesn't start the action—her mother does. That seems fairly common in these stories: the protagonists get pulled into the story; they don't start it off.
The second act usually goes from the end of Act I (the protagonist is committed to the plot) to their lowest point—the point when the protagonist is furthest from their goal. In "Robbie," Gloria is pretty much always far from her goal of finding Robbie. But her lowest moment is probably when she goes to the Talking Robot, and not even he can give her an answer (189). In fact, this is the lowest moment for Gloria, but it's also pretty bad for her mother, which helps to remind us that Gloria's mother isn't a villain—she's just a regular person who made a bad choice. So the plot reminds us that this isn't going to be a story about good vs. evil or heroes vs. villains.
The third act is where things get fixed (or totally broken), and that's what happens when Robbie saves Gloria in the factory (217). Notice how the most important action here is Robbie lifting Gloria—which echoes the part at the beginning of the story where Robbie and Gloria are playing. How does it make us feel when Asimov ends the story very similarly to how he began it?
In "Little Lost Robot," Susan Calvin is committed to the plot (find the robot) when Peter Bogert and Major-General Kallner convince her that that's the best course of action. That's not what she wants to do—she thinks the best, simplest solution would be to destroy all the robots (43) rather than let one escape. But the others convince/order her to help them. As with "Robbie," the protagonist here is drawn into the story by other people's actions and external factors.
So Calvin tests the robots twice and can't find the missing one. Is this her lowest point? Actually, her lowest point (we think) happens when she confronts Kallner and Bogert—she tells them she's willing to go public and Kallner tells her that they'll throw her in jail. Great, not only is Calvin confronted with unhelpful (and possibly dangerous) robots; the humans are also unhelpful (and possibly dangerous) to her. Notice that this act also includes Calvin's request that all the robots be destroyed (220).
And what happens in the third act? Calvin figures out how to find the robot, which satisfies Bogert and Kallner, and so everyone is happy. Wait, everyone? The modified Nestors (including Nestor-10, the lost robot) are going to be destroyed, so they're probably not too happy (325). Notice how symmetrical this story is: every act centers on the idea (or the act) of destroying robots. Since science fiction so often worries about robots destroying us, this might give readers the idea that we have it backwards: we don't need to worry about the robots—it's the robots that have to worry about us.
So, those stories are different, and you might want to analyze the others to see if this fits, but notice certain similarities. For instance, the protagonist in Asimov stories tends to get involved because of someone else (or something else, as in the case of "Runaround"). How does that affect your reading of the stories?