Negative Numbers in Binary

You are encouraged to post any questions about the reading on the course discussion board!

We've dealt with a lot of material related to binary without addressing something you may have been wondering from the beginning: how are negative binary numbers represented in hardware?

Review: Binary to decimal

Recall how we calculate the decimal value of a binary number. We simply multiply the value of each digit by the power of 2 that digit represents. For example, 101 = (1 * 2^2) + (0 * 2^1) + (1 * 2^0) = 5 (in decimal).
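The digit-by-digit calculation above can be sketched in a few lines of Python (the function name here is our own, just for illustration):

```python
def unsigned_value(bits):
    """Interpret a string of binary digits as an unsigned integer
    by summing digit * 2^position, least significant digit first."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(unsigned_value("101"))  # prints 5
```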

Note how this system doesn't account for negative numbers! Since we are only adding positive values together, we will only end up with positive values (or 0). We call this interpretation of binary an unsigned interpretation - none of the values in our interpretation have a different sign! What we will explore next are different signed interpretations that allow us to represent both positive and negative values.

Negative numbers

To start exploring negative numbers in binary, we will first introduce an attempt at encoding negative numbers called Sign and Magnitude. While this is a fairly intuitive encoding, we will explain some of its limitations before exploring a slightly better encoding - Two's Complement. Two's complement is the predominant encoding for negative numbers in computers today, and we will use it for our computer too!

These are just two possible ways to represent negative numbers in binary. You could come up with many other ways of representing negative numbers, and each would have its own set of benefits and downsides. Beyond just thinking about negative binary numbers, choosing how data is encoded is a recurring decision people have to make in their own systems/projects/etc.

Sign and Magnitude

The first encoding we might come up with is known as “Sign and Magnitude”. In this representation, we assign the most significant bit to indicate whether the value is positive or negative, and the rest of the bits represent the value. Some examples are:

binary  decimal
  0000        0
  0010        2
  1010       -2
  1101       -5
  1000       -0 (wut?)
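The table above can be reproduced with a small sketch of the decoding rule (the function name is ours, not a standard library call). Note that Python collapses -0 to 0, which hides the "negative zero" oddity the table points out:

```python
def sign_magnitude_value(bits):
    """Decode a sign-and-magnitude bit string: the most significant
    bit is the sign (1 = negative), the rest are the magnitude."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == "1" else magnitude

print(sign_magnitude_value("0010"))  # prints 2
print(sign_magnitude_value("1010"))  # prints -2
print(sign_magnitude_value("1101"))  # prints -5
```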

This system is nice because it is fairly intuitive, but there are two key issues with it. The first is that we have two representations of 0 (a “positive” 0 and a “negative” 0). Not only is this a bit confusing, but it also wastes a representation and will make checking for 0 in hardware much more complicated. The second issue is that the addition process we used for positive numbers no longer works with negative numbers. For example:

carry:  0 0 1 0 0
_________________
          0 0 1 0     (2)
      +   1 0 1 0     (-2)
      ___________
          1 1 0 0     (-4) <- we really want this to be 0!

This means that we would have to implement two versions of addition - one for positive numbers and one for negative numbers, which introduces a ton of headaches for us as hardware designers!

Two's Complement

Instead, we will use a system called “Two's Complement” to represent negative numbers. Two's complement works by interpreting binary numbers in the same way that you would for unsigned numbers, but giving the most significant digit a negative weight. For example, if our numbers are 4 digits wide, then the binary number 1110 has the value -2:

1110 = (1 * -2^3) + (1 * 2^2) + (1 * 2^1) + (0 * 2^0) = -8 + 4 + 2 + 0 = -2

Note how the most significant digit (index 3) has a negative weight!

Here's a full list of 4-digit binary representations and their unsigned and signed (two's complement) values:

binary  decimal(unsigned)   decimal(two's complement)
  0000                 0                            0
  0001                 1                            1
  0010                 2                            2
  0011                 3                            3
  0100                 4                            4
  0101                 5                            5
  0110                 6                            6
  0111                 7                            7
  1000                 8                           -8
  1001                 9                           -7
  1010                10                           -6
  1011                11                           -5
  1100                12                           -4
  1101                13                           -3
  1110                14                           -2
  1111                15                           -1
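The two's complement column can be computed directly from the "negative weight on the most significant bit" rule. Here's a minimal sketch (our own helper name, for illustration only):

```python
def twos_complement_value(bits):
    """Interpret a bit string as two's complement: the most
    significant bit has weight -2^(n-1), the rest are the usual
    positive powers of 2."""
    n = len(bits)
    value = -int(bits[0]) * 2 ** (n - 1)   # negative weight for the MSB
    value += int(bits[1:], 2)              # remaining bits are unsigned
    return value

print(twos_complement_value("1110"))  # prints -2
print(twos_complement_value("0111"))  # prints 7
print(twos_complement_value("1000"))  # prints -8
```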

Now we only have 1 representation for 0 (who knew this would be something we'd have to worry about?) and it turns out that now we can use the same addition process for both unsigned and signed values!

carry:  1 1 1 0 0
_________________
          0 0 1 0     (2)
      +   1 1 1 0     (-2)
      ___________
          0 0 0 0     (0) <- ahhhh much better
        ^ remember we ignore this carry bit
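Ignoring the carry out of the top bit is the same as doing arithmetic modulo 2^4, which is why one adder works for both interpretations. A quick sketch (the function name is our own):

```python
def add_4bit(a, b):
    """Add two 4-bit values, dropping any carry out of the top bit
    (i.e., arithmetic modulo 2^4). Works for both unsigned and
    two's complement interpretations."""
    return (a + b) & 0b1111

# 0010 (2) + 1110 (-2 in two's complement) wraps around to 0000
print(bin(add_4bit(0b0010, 0b1110)))  # prints 0b0
```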

Another bonus of two's complement is that it is very easy to flip the sign of a number. If we have a value x, all we have to do to get a value with the opposite sign of x is to negate all of the bits (turn 0s into 1s and vice versa) and then add 1. Simple as that! Here's an example; we use the operator ~ to flip all of the bits in a value:

Formula: -x = ~x + 1

Example: x = 5 (assume 4-bit numbers)
x      = 0101
~x     = 1010
~x + 1 = 1011

1011 = (1 * -2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = -8 + 2 + 1 = -5
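The ~x + 1 trick translates directly to code. One caveat in Python: its integers are unbounded, so we mask with 0b1111 to stay within 4 bits (in fixed-width hardware this masking happens for free). The function name is our own:

```python
def negate_4bit(x):
    """Two's complement negation: flip all bits, add 1, and keep
    only the low 4 bits."""
    return (~x + 1) & 0b1111

print(bin(negate_4bit(0b0101)))  # 5 -> prints 0b1011 (-5)
```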

Notice how we can choose to interpret a binary representation as either an unsigned number or as a signed two's complement number. When implementing the ALU, you may also notice that you never specify the sign (positive or negative) of the inputs to your chip. In fact, because our operations are the same for unsigned and signed binary numbers, our computer doesn't need to know which it is working on! Higher level languages that use negative values (such as Java) translate those values into their binary representations, and the hardware just operates on 1s and 0s. This is a really cool and powerful abstraction.

The road ahead

In lecture and the project, we will finish up our Arithmetic Logic Unit, providing the first major component of our computer! While we've only focused on addition so far, we will make it clear how we can cleverly use addition to implement a host of other operations.

Then we will shift our focus to dealing with time in circuits (you'll note we've avoided this issue so far). There are two fundamental questions we will try to answer next: How can we define time in hardware? And how can we develop circuits that have some knowledge of this time?