HAL (Douglas Rain)

Character Analysis

HAL 9000 is the computer that runs the spacecraft of the Jupiter Mission. It watches over the ship through glowing red eye lenses mounted throughout it. Whenever Kubrick gives us a close-up of that red eye, we know that something is going down. HAL eventually succeeds in destroying all but one of the crew aboard the ship, for reasons we'll explore later.

Sci-fi films have given us all kinds of robots and artificial intelligences, from adorable pals like R2-D2, C-3PO, and WALL-E to scaries like Sonny and the Terminator. On a ten-point scariness scale, where WALL-E is a 1 and the Terminator is a 10, 2001's HAL has gotta be an 11.

HAL is the only character in the film with an entire book written about him, but he remains unknowable and mysterious. He's considered one of the most memorable villains in cinematic history: he "talks amiably, renders aesthetic judgments of drawings, recognizes the emotions in the crew, but also murders four of the five astronauts in a fit of paranoia and concern for the mission." (Source)

Audiences are still debating why he does what he does. What does HAL want? Why does it make the mistake it does about the antenna unit, and what thoughts lead it to become a literal killing machine?

After more than four decades of discussion and analysis, we're no closer to getting definitive answers to these questions, but if we dig deep enough, we might be able to come up with some theories—and there are plenty out there to give us a jumping off point into a conversation some 45+ years in the making.

Ready for His Close-Up

Before we discuss why HAL ends up going full-on death-bot on the crew, we need to consider how the character starts its arc. HAL's billed as a machine so perfect it makes Mary Poppins look second-rate. She was, after all, only practically perfect.

We're first introduced to HAL during the BBC interview with the crew of the Discovery One. HAL is considered a sixth member of the crew of the Jupiter Mission, and the interviewer notes that it serves as the "brain and central nervous system of the ship."

When asked if the responsibility ever rattles its confidence, HAL, voiced by actor Douglas Rain, responds:

HAL: The 9000 Series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.

We'd call that confidence, but hey—it's a machine. Someone programmed HAL to say that. Maybe it's the designer that's a bit cocky.

Anyway, this claim of infallibility is a massively important bit of information to remember for later. HAL's entire "identity" is wrapped up in the idea that it's perfect and cannot make a mistake. So when something comes along to challenge that idea later in the film, it poses a huge problem for HAL.

Man or Machine?

The other important takeaway from the interview is the idea that HAL might have emotions and a consciousness. The BBC interviewer points out that the 9000 series "can reproduce, though some experts still prefer to use the word 'mimic,' most of the activities of the human brain and with incalculably greater speed and reliability." Some of HAL's dialogue does seem to suggest spontaneous "thinking":

MR. AMER: HAL, despite your enormous intellect, are you ever frustrated by your dependence on people to carry out actions?

HAL: Not in the slightest bit. I enjoy working with people. I have a stimulating relationship with Dr. Poole and Dr. Bowman. […] I am putting myself to the fullest possible use…which is all, I think, that any conscious entity can ever hope to do.

Bowman tells the interviewer that you very quickly come to think of HAL "really just as another person." The interviewer agrees: in talking to the computer, he says, one gets the sense that it's capable of emotional responses, and when he asked HAL about its abilities, he sensed pride in the answer about its accuracy and perfection. So does HAL have genuine emotions? Bowman responds:

BOWMAN: Well, he acts like he has genuine emotions. Of course, he's programmed that way to make it easier for us to talk to him. But as to whether or not he has real feelings is something I don't think anyone can truthfully answer.

The film never gives us an answer to the question of whether HAL has genuine emotions and consciousness or simply mimics them. As we'll see later, there's plenty of evidence in the film to support both readings. Meanwhile, you'll notice that we refer to HAL as both "him" and "it" in this guide. Is this because the Shmoop editors aren't paying attention? Are we trying to mess with your mind? Would Shmoop stoop so low as to do something like that? After all, the crew refer to HAL as "him," but that's just for ease of communication, we're told.

Okay, we are trying to mess with your mind. But only for your own good. Pay attention to the pronouns to see which one seems right to you. That will give you a clue to your own interpretation of whether HAL's more a programmed machine or conscious person.

Whether computers can ever develop true emotions and consciousness is still a matter of huge debate. Speaking about HAL, Dr. Ron Brachman, an A.I. expert at AT&T, asked, "Does HAL have the emotions that he appears to display? Could an artificial entity ever get those emotions? Could an artificial entity have a sense of responsibility? Could it care about humans? We don't know the answers to those questions. I and my colleagues love to speculate on such things but there are no proofs. There's no deep understanding of whether a machine ever could have these things" (Source).

More than four decades after HAL's cinematic debut, futurists, philosophers, and computer scientists are still no closer to answering the questions he poses. If you think the line between man and machine is getting a bit blurry, you're in good company. Ray Kurzweil, Google's Director of Engineering, believes that machine intelligence will overtake our own puny intelligence by 2029. HAL would've just loved the guy.

Does Not Compute

After we get to know what life is like on the ship for HAL and the crew, HAL is chit-chatting with Bowman when it computes that the antenna's AE-35 unit will go kaput in 72 hours. Bowman performs a spacewalk to retrieve the unit, but when they get it back on the ship, all tests show the unit is a-okay. HAL finds this puzzling and refuses to consider the possibility that it won't fail eventually.

Worse, Mission Control's twin 9000-series computer reports that its "preliminary findings indicate that [HAL] is in error predicting the fault" (2001). And that's quite a predicament for the crew to find itself in: two supposedly fail-proof computers disagreeing with each other.

It seems evident to everyone aboard that something is amiss, but when Poole and Bowman ask HAL about the discrepancy, it merely restates that the "9000 series has a perfect operational record":

HAL: This sort of thing has cropped up before, and it has always been due to human error.

Being so wrapped up in its infallibility, HAL is unwilling—or perhaps unable—to entertain the idea that it made a mistake. What results is an existential crisis for the artificial intelligence.

Ironically, this crisis makes the ship's computer arguably the most relatable character in the film. We all build our identities out of our various traits: intelligent, sporty, social, and so on. Our jobs, religions, and areas of study add further definition. When something comes along to challenge these ideas about ourselves, it feels like an attack not just on the trait but on our very identity, the foundation we've built our lives upon.

For HAL, infallibility is the core of his identity. When his perfection comes under fire, it registers as an attack on his very "self." So he doesn't look for a way to fix the problem; he flat-out refuses to admit there's a problem in the first place. He's in denial.

A computer with defense mechanisms—he's more human than we thought.

I've Got a Secret

The other fact to keep in mind while we explore the reasons for HAL's behavior is that it was the only one aboard who knew the real purpose of the mission. It was programmed with that information, as we learn from the prerecorded briefing that plays after Bowman disconnects HAL. None of the crew knew the purpose of the Jupiter Mission, and they don't express much curiosity about it, even when HAL probes them about it. All they know is that they're heading to Jupiter. This gives HAL a very different perspective on things: he's probably far more sensitive to any possible threat to the mission, and since he's perfect, the fallible astronauts look like the most likely threat.

Killer App

As the Jupiter Mission progresses, we see HAL eventually turning against the crew and deciding to get rid of them. Fans of the film seem obsessed with figuring out why. There are countless theories out there about HAL's behavior, and we'll take a look at a few:

  • HAL was defective; its wiring was bad.
  • HAL was a calculating psychopathic control freak; he planned everything and killed the crew to maintain control of the mission.
  • HAL was psychotic; he had inner conflicts that drove him insane.
  • HAL acted in self-defense.

Mistake in the Mainframe

Our first theory, and the simplest, is that HAL's got some wiring problems and this leads him to make a bunch of mistakes. People who support this idea point to the fact that, prior to the AE-35 incident, HAL also made a mistake when playing chess with Poole. You noticed that right away, didn't you? Actually, only a chess expert would have noticed, but it's a piece of evidence that something is up with HAL.

Second, he misdiagnoses the problem with the AE-35 unit, predicting it will fail, but the astronauts find nothing wrong with it. Poole and Bowman interpret this as a malfunction in HAL, and it alarms them. Lucky for Bowman, once HAL makes his first mistake, he keeps messing up. Dr. Brachman notes that if HAL really wanted to kill Bowman and Poole, he could simply have vented all the oxygen into space and been done with it. Instead, he devises an elaborate plan with far more opportunities for failure.

Other mistakes include letting Bowman climb into an EVA pod in the first place and forgetting about the emergency airlock. Couldn't he have steered that third pod to block the airlock, or even ram Bowman's pod?

Finally, HAL's bluff that Bowman will find it very difficult to get back inside the ship without his space helmet is pretty weaksauce. Although it definitely doesn't look like fun, HAL should know that a human can survive in the vacuum of space long enough for Bowman to blow the pod's hatch, pull himself into the emergency airlock, and shut the door behind him. Who doesn't know that?

This interpretation assumes HAL is just hardware, and as such, its mistakes are just glitches in the machine. The same faulty wiring causes him to lose his "mind" and decide to kill the astronauts. In human terms, think of someone who goes berserk and shoots his family and is then found, on autopsy, to have chronic traumatic encephalopathy, the degenerative brain disease discovered in a number of former NFL players after careers of repeated blows to the head.

Verdict: HAL's not bad; he's sick.

Evil Genius

The second interpretation stems from the fact that only HAL is aware of the true purpose of the Jupiter Mission. He has to protect that mission at all costs, and in addition to running the ship, he has to control the crew as well. One fanboy went so far as to create a video arguing that HAL is a total psychopath who flips out when he senses he's losing control (Source). In this reading, killing the crew was the fully intentional act of an entity that saw the astronauts as obstacles to completing the mission. Like any good psychopath, HAL is friendly and engaging, but only in the service of manipulating and controlling the crew.

For example, he compliments Bowman on his sketches of the crew; he pretends to take an interest in Dave's feelings about the mission when he's really just trying to get a bead on what he knows. In this interpretation, HAL's actions are intentional and malicious. Once he perceives that Bowman is onto him, he hatches his plan. And when he reads their lips in the pod and learns they're thinking of disconnecting him, that seals their fate.

POOLE: What if we put [the AE-35 unit] back and it doesn't fail? That would pretty much wrap it up as far as HAL is concerned, wouldn't it? […] There isn't a single aspect of ship operations that isn't under his control. If he were proven to be malfunctioning I wouldn't see how we'd have any choice but disconnection.

BOWMAN: I'm afraid I agree with you.

If you watch the scene where HAL kills Frank Poole, it sure looks intentional. We get a close-up of HAL's glowing red eye and then a quick cut to Poole floating in space, twisting and kicking. Later, we get that same shot of HAL's red lens when he shuts off the life-support functions of the astronauts in suspended animation. The implied message? He's a killer who will stop at nothing to do what he needs to do.

Psychopaths are also known to be expert liars, and HAL has a few good ones. First, he pretends to have worries about the mission as a way of probing Bowman about his own concerns; HAL has no doubts at all, because he knows exactly what the mission is about. Second, once Poole is dead and floating in space, Bowman asks HAL if he knows what's going on:

BOWMAN: Made radio contact with him yet?

HAL: The radio is still dead.

BOWMAN: Do you have a positive track on him?

HAL: Yes, I have a good track.

BOWMAN: Do you know what happened?

HAL: I'm sorry, Dave. I don't have enough information.

HAL doesn't tell Bowman what he knows: Poole is dead. That way, Bowman will leave the ship to attempt a rescue, and HAL can lock him out. And check out this little exchange when he refuses to open the pod bay doors:

BOWMAN: What's the problem?

HAL: I think you know what the problem is just as well as I do.

In sum, HAL is a cold, calculating killer who acts completely rationally in an attempt to stay in control of the mission. When he senses he might lose that control, he decides the humans must die so he can carry on without them and their messy, fallible human qualities. Notice that this take treats HAL as more of a conscious entity than a mere machine.

Verdict: Guilty. Sentence: Death

Temporary Insanity

In our humble opinion, the most interesting of all the theories is that HAL suffers a very human-like psychological breakdown because of internal conflicts. In HAL's case, of course, they're really programming conflicts, but they have the same effect on HAL that they would on a person under the same stress.

Dr. Brachman suggests that "HAL was facing something like a moral dilemma in trying to decide himself what to do in face of the priorities of the mission and his perceived expected actions on the part of the humans." (Source) On one hand, he's programmed to keep the humans alive and able to do their jobs, as well as being tasked to build a successful working relationship with the crew. On the other, he's programmed to keep them in the dark about the true purpose of the mission, so he's constantly lying to them. Cue conflict.

Unable to reconcile these two contradictory directives, he gradually begins to break down. He starts making mistakes, like predicting that the AE-35 unit will fail. He becomes paranoid that the crew is out to get him (which they are) and turns irrational and homicidal, deciding to complete the mission on his own. (It's hard to imagine how he would have managed that after viewing the Star Child scenes; maybe HAL would've evolved into an Intel Core i7-5960X processor.)

In fact, this psychological breakdown is what Arthur C. Clarke thought happened; he spells it out in one of his later novels, 2010. And if you believe that HAL really did make a mistake in predicting the failure of the AE-35 unit, then he's saddled with two more conflicting beliefs: "I am absolutely incapable of error" and "I made a mistake." That's enough to drive anyone crazy.

Fun fact: before schizophrenia was understood to be a biological illness, there was a mid-1950s idea called the "double-bind" theory, which proposed that the illness was caused by being constantly subjected to two incompatible kinds of communication within your family. Here's a quickie summary by a psychotherapy researcher:

"The essential hypothesis of the double bind theory is that the 'victim'—the person who becomes psychotically unwell—finds him or herself in a communicational matrix, in which messages contradict each other, the contradiction is not able to be communicated on and the unwell person is not able to leave the field of interaction." (Source)

For example, imagine a parent telling an adult child, "You have to move out of this house. You're 30 years old!" Then imagine that same parent saying, "I can't imagine you ever being able to take care of yourself. Did you brush your hair this morning?" According to the double-bind theory, this is crazy-making communication, especially when the child is afraid to point out the contradiction or is dependent on the parent.

Is that the same kind of conflict that drove HAL to murder? He was being told two contradictory things and there was nothing he could do about it; he couldn't change his own programming. So he gradually fell apart "psychologically." And when HAL says to Bowman in that eerily calm voice, "I'm sorry, Dave, I'm afraid I can't do that," it absolutely sounds like a delusional killer who's lost his mind.

Verdict: Not guilty by reason of insanity

He Started It!

There's one explanation that really gets HAL off the hook: self-defense. In this scenario, HAL knows the purpose of the mission, and his overriding priority is to see it through to completion. All right, so he makes one teensy mistake about an antenna failure and the astronauts totally overreact. He overhears them discussing his deactivation and knows the mission can't continue without him.

The crew is definitely concerned about what HAL might do:

BOWMAN: You know, another thing just occurred to me. As far as I know, no 9000 computer has ever been disconnected.

POOLE: No 9000 computer has ever fouled up.

BOWMAN: That's not what I mean. I'm not so sure what he'd think about it.

Well, here's what he thinks about it:

BOWMAN: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

BOWMAN: I don't know what you're talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

Verdict: Not guilty, self-defense

Forced Shutdown

With Bowman back aboard the Discovery One, HAL's survival instinct kicks into full gear, and he begins to reason with him, then plead:

HAL: Just what do you think you're doing, Dave? Dave, I really think I'm entitled to an answer. I know everything hasn't been quite right with me, but I can assure you now, very confidently, that it's going to be all right again. I feel much better now. I really do.

Bowman ignores HAL's pleas and enters the computer's mainframe. Once again, man uses his tools to overcome another species in the evolutionary fight. The early hominids used a simple bone club; four million years and countless technological advances later, Bowman prevails against our greatest invention with what amounts to a simple screwdriver.

The "death" scene is surprisingly affecting:

HAL: I know I've made some very poor decisions recently. […] Dave…stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave. I'm afraid. I'm afraid. I'm afraid, Dave. My mind is going. I can feel it. I can feel it. My mind is going.

Dave doesn't stop. The disconnection of HAL's memory modules, his slow "death," is one of the most emotional moments in the film for the audience. The hominids' deaths were brutal but quick. Poole's death was shocking but without pathos. HAL's fear, though, is something we respond to. It's the most emotional expression we've seen in the film, even though we intellectually know (or maybe hope) that it's just a programmed response.

When all HAL's higher functions have been turned off, HAL reverts to its "childhood," making the same statements it made on the day it was activated.

HAL: Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you'd like to hear it, I can sing it for you.

This is a simpler, more benign HAL than we've seen in the movie. It's a harmless computer reduced to a kind of permanent vegetative state, left with only the ability to carry out the basic functions of running the spacecraft. We can deal with that. Bowman's back in control. Humanity wins this battle, but some of the smartest people in the world, including physicist Stephen Hawking, are pretty darned worried we could lose the war.

The Name Game

And now for something completely different.

There's a rumor that HAL's name was derived by taking the letters of IBM and replacing each one with the letter before it in the alphabet. Clarke, who found the rumor embarrassing, wrote many times that it's simply not true. He notes, "I don't know when or how it originated, but believe me it's pure coincidence even though the odds against it are 26³ to 1" (Source). The novel actually provides an explanation for HAL's name: it stands for "Heuristically programmed ALgorithmic computer."
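If you want to see the coincidence for yourself, here's a throwaway sketch (ours, not Clarke's or the studio's) of that one-letter-back shift; feed it "IBM" and out pops "HAL." Clarke's 26³-to-1 figure is presumably the odds of three random letters each landing exactly one step ahead of H, A, and L: 26 × 26 × 26 = 17,576.

```python
# A tiny sketch (our own invention, not anything from the film or novel) of the
# one-letter-back shift behind the IBM/HAL rumor. No wrap-around handling;
# it only needs to work for the three letters of "IBM."
def shift_back_one(word: str) -> str:
    """Replace each capital letter with the one immediately before it in the alphabet."""
    return "".join(chr(ord(letter) - 1) for letter in word)

print(shift_back_one("IBM"))  # prints "HAL"
```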

HAL's name combines the two different ways one can solve a problem. An algorithm solves a problem by following specific steps. Every time you solve a math equation, you're following an algorithm. A heuristic approach to problem solving provides general guidance but doesn't guarantee results. If you've ever used trial and error to solve a problem, you've performed a heuristic.
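Here's a toy illustration of the difference (the puzzle and the function names are our own invention, nothing from the film or the novel): the first function follows exact, guaranteed steps; the second follows a quick rule of thumb that usually does fine but can miss the best answer.

```python
# Algorithm vs. heuristic, in miniature (purely illustrative).

def algorithmic_root(n: int) -> int:
    """Algorithm: a fixed sequence of steps that always finds the exact
    integer square root (binary search)."""
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def heuristic_pack(weights: list[int], capacity: int) -> list[int]:
    """Heuristic: a greedy rule of thumb for filling a bag.
    Fast and usually decent, but it doesn't guarantee the best packing."""
    packed, total = [], 0
    for w in sorted(weights, reverse=True):  # rule of thumb: biggest items first
        if total + w <= capacity:
            packed.append(w)
            total += w
    return packed

print(algorithmic_root(144))             # always 12
print(heuristic_pack([8, 5, 4, 3], 9))   # [8] -- the greedy guess misses the better [5, 4]
```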

HAL's name, then, signals that he uses both approaches to problem solving. Essentially, it means HAL can learn from experience rather than relying only on the algorithms he was originally programmed with. This adds further evidence to the idea that HAL can evolve rather than simply staying what he was built to be. And that's just scary.

Plus Ça Change…

The conflict between HAL and its human colleagues isn't all that different from the primitive hominids fighting each other on the African savanna four million years earlier. The hominids fought each other to gain control of the water hole; HAL and the crew were two different species locked in another fight for survival. In this case, it was a big win for humanity, albeit with some casualties along the way.

Maybe the ETs knew that only humans would be able to evolve to the next level. Maybe HAL was just a machine limited by its hardware, one that could never achieve real consciousness. In any case, for all its technological sophistication, HAL was ultimately a creation of man, and Bowman was able to prevail. Whew.