Reading 4: Negative Numbers in Binary

We've dealt with material related to binary without addressing something you may have been wondering from the beginning: How are negative binary numbers represented in hardware?

Review: Binary to Decimal

Recall how we calculate the decimal value of a binary number. We multiply the value of each digit by the power of 2 that digit represents. For example, 101 = (1 * 2^2) + (0 * 2^1) + (1 * 2^0) = 5.
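
If it helps to see this calculation written as code, here is a minimal sketch in Python (the function name unsigned_value is just for illustration, not something from this course) that walks through the digits and adds up their weighted values:

def unsigned_value(bits):
    # bits is a string of '0'/'1' characters, most significant digit first
    value = 0
    for i, digit in enumerate(reversed(bits)):
        # each digit contributes digit * 2^i, where i is its position from the right
        value += int(digit) * (2 ** i)
    return value

print(unsigned_value("101"))  # 5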

Notice that this system doesn't account for negative numbers. Since we are only adding nonnegative values together, we will only end up with nonnegative values. We call this interpretation of binary an unsigned interpretation. None of the values in our interpretation have a negative sign. What we will explore next are different signed interpretations that allow us to represent both positive and negative values.

Negative Numbers

To start exploring negative numbers in binary, we will first introduce an attempt at encoding negative numbers called Sign and Magnitude. While this is a fairly intuitive encoding, we will explain some of its limitations before exploring a slightly better encoding: Two's Complement. Two's Complement is the predominant encoding for negative numbers in computers today, and we will use it for our computer too.

These are just two possible ways to represent negative numbers in binary. You could come up with many other ways of representing negative numbers, and each would have its own set of benefits and downsides. Beyond negative binary numbers, choosing how data is encoded is a recurring decision we will need to make.

Sign and Magnitude

The first encoding we'll explore is called Sign and Magnitude. In this representation, we assign the most significant bit to indicate whether the value is positive or negative, and the rest of the bits represent the magnitude of the value. Here are some examples:

Binary Decimal
0b0000 0
0b0010 2
0b1010 -2
0b1101 -5
0b1000 -0
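
To make the rule concrete, here is a small Python sketch (the helper name is ours, and the four-bit width is assumed) that decodes a bit pattern under Sign and Magnitude: the most significant bit chooses the sign, and the remaining bits give the magnitude.

def sign_magnitude_value(bits, width=4):
    # the most significant bit is the sign; the remaining bits are the magnitude
    sign_bit = (bits >> (width - 1)) & 1
    magnitude = bits & ((1 << (width - 1)) - 1)
    return -magnitude if sign_bit else magnitude

print(sign_magnitude_value(0b0010))  # 2
print(sign_magnitude_value(0b1101))  # -5
print(sign_magnitude_value(0b1000))  # -0 (Python just prints 0)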

This system is nice because it is fairly intuitive, but there are two key issues with it. The first is that we have two representations of 0 (a positively signed 0 and a negatively signed 0). Not only is this confusing, but it also wastes a representation and will make checking for 0 in hardware more complicated. The second issue is that the addition process we used for nonnegative numbers in the previous reading no longer works with negative numbers. For example:

Carry:  0 0 1 0 0
_________________
          0 0 1 0     (2)
      +   1 0 1 0     (-2)
      ___________
          1 1 0 0     (-4) ← This should be 0

This means that we would have to implement two versions of addition—one for positive numbers and one for negative numbers—which introduces headaches for us as hardware designers.
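
You can see the same failure in a few lines of Python (a sketch under the same four-bit assumption; the mask just keeps the low four bits, like a four-bit adder would):

a = 0b0010                     # 2
b = 0b1010                     # -2 under Sign and Magnitude
raw_sum = (a + b) & 0b1111     # the ordinary binary addition from the previous reading
print(format(raw_sum, '04b'))  # 1100, which Sign and Magnitude reads as -4 rather than 0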

Two's Complement

An improved representation that handles negative numbers as well is called Two's Complement. Two's Complement works by interpreting binary numbers in the same way that you would for positive numbers, but giving the most significant digit a negative weight (instead of just a sign, like in Sign and Magnitude). For example, if our binary numbering system is four digits wide, then the binary number 0b1110 has the value -2:

0b1110 = (1 * -2^3) + (1 * 2^2) + (1 * 2^1) + (0 * 2^0) = -8 + 4 + 2 + 0 = -2

Notice how the most significant digit (index 3) has a negative weight.
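
Written as code, this rule looks almost identical to the unsigned calculation; the only change is the negative weight on the most significant digit. Here is a sketch in Python with an illustrative function name, assuming a fixed width of four digits:

def twos_complement_value(bits, width=4):
    value = 0
    for i in range(width):
        digit = (bits >> i) & 1
        if i == width - 1:
            value += digit * -(2 ** i)   # most significant digit: weight -2^(width-1)
        else:
            value += digit * (2 ** i)    # every other digit: usual positive weight
    return value

print(twos_complement_value(0b1110))  # -2
print(twos_complement_value(0b0111))  # 7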

Here's a full list of four-digit binary representations and their decimal values under the unsigned, Sign and Magnitude, and Two's Complement interpretations:

Binary   Unsigned   Sign and Magnitude   Two's Complement
0b0000   0          0                    0
0b0001   1          1                    1
0b0010   2          2                    2
0b0011   3          3                    3
0b0100   4          4                    4
0b0101   5          5                    5
0b0110   6          6                    6
0b0111   7          7                    7
0b1000   8          -0                   -8
0b1001   9          -1                   -7
0b1010   10         -2                   -6
0b1011   11         -3                   -5
0b1100   12         -4                   -4
0b1101   13         -5                   -3
0b1110   14         -6                   -2
0b1111   15         -7                   -1

Now we only have one representation for 0, and it turns out that we can use the same addition process for both unsigned and Two's Complement values:

Carry:  1 1 1 0 0
_________________
          0 0 1 0     (2)
      +   1 1 1 0     (-2)
      ___________
          0 0 0 0     (0) ← Much better
        ↑ We ignore this carry bit
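
If you want to check a few more cases yourself, a quick Python sketch (our own, assuming four-bit values; the mask drops the extra carry bit just like the example above) gives the results the Two's Complement column of the table predicts:

a = 0b0010                                        # 2
b = 0b1110                                        # -2 in Two's Complement
print(format((a + b) & 0b1111, '04b'))            # 0000, which the table reads as 0
print(format((0b0101 + 0b1101) & 0b1111, '04b'))  # 0010, i.e. 5 + (-3) = 2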

Another benefit of Two's Complement is that it is simple to flip the sign of a number. If we have a value x, all we have to do to get a value with the opposite sign of x is to negate all of the bits (turn 0s into 1s and vice versa) and then add 1. Here's an example, using the operation ~ to flip all of the bits in a value:

Formula: -x = ~x + 1

Example: x = 5 (assume four-bit binary numbering system)
x      = 0b0101
~x     = 0b1010
~x + 1 = 0b1011

0b1011 = (1 * -2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = -8 + 0 + 2 + 1 = -5
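
Python's ~ operator flips bits in the same way, so we can check this directly. The sketch below is ours: Python integers are not four bits wide, so the mask limits the result to four bits, which is an assumption on top of the formula itself.

x = 0b0101                     # 5
neg_x = (~x + 1) & 0b1111      # flip the bits, add 1, keep four bits
print(format(neg_x, '04b'))    # 1011, which Two's Complement reads as -5

# negating again gets us back to 5
print(format((~neg_x + 1) & 0b1111, '04b'))  # 0101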

Notice how we can choose to interpret a binary representation as either an unsigned number or as a signed Two's Complement number. When implementing the ALU, you may also notice that you never specify the sign (positive or negative) of the inputs to your chip. In fact, because our operations are the same for unsigned and signed Two's Complement numbers, our computer doesn't need to worry about which it is working on. Higher level languages that use negative values (such as Java) translate those values into their binary representations, and the hardware just operates on 0s and 1s. This is a cool and powerful abstraction.

The Road Ahead

In lecture and in Project 3, we will be learning about and working on the Arithmetic Logic Unit (ALU), the first major component of our computer. While we've only focused on addition so far, we will explore how we can cleverly use addition to implement several other operations.

Then, we will shift our focus to dealing with timing in circuits. There are two fundamental questions we will answer next: How can we define time in hardware, and how can we develop circuits that have knowledge of this notion of time?