Computers: Binary Data

      Ten's a great number. It must be, otherwise we wouldn't use it so often, right? Right. Thanks to the metric system, even things that used to be measured in base-12 and base-60 are now in base-10.

      Outside of the U.S.

      Except for time.

      And the Super Bowl (when it isn't worried about being called a loser).

      But other than that, pretty much everything gets measured using ten as the base. Let's get one thing straight: when we talk about base-10 counting, we're talking about how every place in a number has ten options for what digit could be there (0 – 9). If that number goes beyond those ten options, it needs to add another place. Base-10 is the good, old-fashioned numbers you're used to working with in math.

      The ones position of any number goes from 0 to 9. When it reaches the next quantity of 10, the tens position of the number increments. After every ten sets of 10, the hundreds position of the number increments. We could keep going, but…we won’t. We'll just say that going from 19 to 20 is the perfect example. The tens position increments from 1 to 2 and the ones position starts over at 0, giving room to increment up to 9 again.
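      That rollover from 19 to 20 can be sketched in a few lines of Python (the function name and the choice of base here are just for illustration):

```python
def digits(n, base=10):
    """Break a number into its place-value digits, least significant first."""
    if n == 0:
        return [0]
    out = []
    while n > 0:
        n, d = divmod(n, base)  # peel off the lowest place, keep the rest
        out.append(d)
    return out

# Going from 19 to 20: the ones place rolls over and the tens place increments.
print(digits(19))  # [9, 1] -> ones digit 9, tens digit 1
print(digits(20))  # [0, 2] -> ones digit starts over at 0, tens digit becomes 2
```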

      Computers don't use decimal (base-10), like…ever. Instead, they usually use binary (base-2) and hexadecimal (base-16).

      The earliest computers were really just jumbo calculators that rocked at solving complex math problems a person would have a hard time completing. Just like their humble predecessors, modern computers break absolutely everything down into numbers, even things that don’t sound super number-y. We're talking about

      • lines of code.
      • photographs. 
      • videos.
      • chemical compounds. 
      • maps.

      Everything they work with is broken down into numbers in some way or another.

      In base-2, instead of working with ten possible digits, the computer gets two: 0 and 1. That's it. Each digit is known as a binary digit (a bit, for short). In binary, the number sequence goes like this for the first ten numbers:

      Decimal:  0  1   2   3    4    5    6    7     8     9
      Binary:   0  1  10  11  100  101  110  111  1000  1001

      Incrementing numbers works exactly the same way as in decimal. Once all the digits have been used up in the ones place, the next position to the left increases by one and the ones place starts over at 0. You just have fewer digits to work with, is all.
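      You can double-check that table (and the rollover pattern) with Python's built-in binary formatting, no math required:

```python
# Print the first ten numbers in both decimal and binary.
for n in range(10):
    print(n, format(n, 'b'))

# The rollover in action: binary 11 is "full," so adding one
# carries into a new place, just like 19 -> 20 in decimal.
print(format(3, 'b'))  # 11
print(format(4, 'b'))  # 100
```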

      Understanding why computers use binary leads back to the fundamental problem early computer scientists faced: how to communicate with a machine. The part of a computer that processes information is made of electrical components. The most basic of those components is a wire. Any individual wire can either be electrically charged or un-charged. Going a step further, if a wire's charged, it's "on." If it isn't, that wire's "off." (It doesn't always have to be that black and white, but you get the idea.)

      Since those options are (sort-of) the only two, you can represent each one with a binary symbol of 0 (off) or 1 (on).

      Computers are just giant masses of electronic components that can only be charged or uncharged, when you get down to it. That means every piece of data has to be broken down into some binary representation before a computer can understand it.

      Even though computers can only think in binary (charged or uncharged, on or off), computer scientists still need to write and share computer concepts, which becomes more than a little tedious if you're only working in binary. Writing one million in base-10 takes seven characters; writing the same value in binary takes 20 (almost three times as many, in case you were wondering). Nobody would go into computer science if they didn't get a shorthand way of handling big masses of 0s and 1s.
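      Don't take our word for it on the one-million comparison; Python can count the characters for you:

```python
million = 1_000_000
print(len(str(million)))          # 7 characters in base-10
print(format(million, 'b'))       # 11110100001001000000
print(len(format(million, 'b')))  # 20 characters in base-2
```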

      That's where hexadecimal, a.k.a. base-16, comes in. It's way simpler to translate binary numbers to hexadecimal (all the cool kids call it "hex" for short) because, unlike ten, sixteen is a power of two: one hex digit covers exactly four binary digits.

      There's just one teeny, tiny problem: hex numbers need 16 digits. In our Arabic-based number system, we only have ten. To fill in for the six missing digits, computer scientists borrow the first six letters in the alphabet. That makes the 16 hex digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.

      Decimal:      9  10  11  12  13  14  15  16  17  18
      Hexadecimal:  9   A   B   C   D   E   F  10  11  12

      It looks like a whole lot of work to convert into hexadecimal, and…it is, but each hex digit takes up a clean four bits of space in the computer, which makes hex numbers

      • much easier to work with than decimal numbers in the computer.
      • much easier to read than a ridiculously long binary number.
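      That four-bits-per-hex-digit relationship is the whole trick: converting binary to hex is just reading off four bits at a time. A quick sketch (the example value is arbitrary):

```python
# One byte, written with an underscore to show the two four-bit groups.
value = 0b1011_1110
print(format(value, '08b'))  # 10111110
print(format(value, '02X'))  # BE -> 1011 is B, 1110 is E

# The same conversion done by hand: chop the bits into groups of four,
# then translate each group into a single hex digit.
bits = '10111110'
hex_digits = ''.join(format(int(bits[i:i + 4], 2), 'X')
                     for i in range(0, len(bits), 4))
print(hex_digits)  # BE
```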

      TL;DR: The computer thinks in binary, but because binary numbers are so annoying to write out, computer scientists use hexadecimal numbers when reading computer information. Whether you're dealing with numbers, code or images, though, everything eventually gets broken down into binary.

      Fancy.