Storing Data Using Memory

You are encouraged to post any questions about the reading on the course discussion board!

You may have heard the term “memory” used when referring to your computer and wondered what exactly it means. In this class, you are actually going to be building memory for us to use in our computer! The focus of this reading is the abstraction of what memory is; in class we will dive deeper into the details of how to implement it. Think of this reading as providing the client's (or user's) view of memory, while what we do in class and in the projects is the implementer's view.

Getting Specific: What Type of Memory?

In this course we will be talking about a very specific type of memory known as Random Access Memory, or RAM. RAM is used for data storage for programs, and is constantly being read from and written to by the CPU. This memory is different from, but related to, other types of storage - caches, disks, SSDs, etc. You don't need to worry about how RAM differs from these other storage mechanisms; we just point it out so that in the future, when you hear the term RAM, you'll know you've actually spent some time learning about and implementing it!

One Large Array

The way we interact with memory is almost exactly the same as the way we interact with an array! There are a bunch of sequential slots for storing data at different indices, or addresses, and we can read or write data to them in any order. The fact that we can access the slots in any order is why it is called random access memory.
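To make the array analogy concrete, here is a tiny sketch that models RAM as a Python list. The names (`ram`, `read`, `write`) and the 8-slot size are made up for illustration; this is a software model, not the hardware chip we will build.

```python
NUM_SLOTS = 8

ram = [0] * NUM_SLOTS  # every slot starts out holding 0

def read(address):
    """Return the value stored in the slot at the given address."""
    return ram[address]

def write(address, value):
    """Store value in the slot at the given address."""
    ram[address] = value

# We can touch slots in any order -- that's the "random access" part.
write(6, 9)
write(2, 5)
print(read(6))  # 9
print(read(2))  # 5
```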

Addresses

An address is the name we give to an index into memory. That is, we specify a slot in memory to interact with by providing an address to the memory chip. Each address has a fixed width, and refers to exactly one slot in memory!

The memory slots all have the same width. In most computers, each slot holds one byte, which is 8 bits. In our computer, we will be interacting with memory that uses 16 bits in each slot! This is because it simplifies a lot of the memory functionality we have to implement.

Size Limitations

Just like everything in hardware, there is a limit to how much memory we can store in our computer. The theoretical limit of how much memory we can have is determined by the address width! Since each address is unique, the number of addresses we can represent with our width is the maximum number of slots we can have for data (since each slot will need its own address).

Remember the formula used to determine how many different values a binary number with N bits has? Now it has become even more relevant! For an address width of N bits, we can have 2^N slots in our memory. Note that this is the theoretical limit - that is, we can have up to 2^N slots in memory. In many computers, the number of possible addresses is often way bigger than the size of memory we would use, but we will not worry about those details in this course.

Note that the address width is not the same as the slot width. That is, we could have an address width of 2 bits (only 4 addresses!), and a slot width of 16 bits. The address width determines the number of slots we have, but it doesn't impose limitations on how many bits are used to store the data in each of those slots.
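The 2^N relationship is easy to check directly. This short sketch (illustrative only) computes the slot count for a few address widths, including the 2-bit example just mentioned; note that the slot width never appears in the formula, since it is independent of the address width.

```python
def num_slots(address_width_bits):
    """Number of addressable slots for an N-bit address: 2**N."""
    return 2 ** address_width_bits

print(num_slots(2))   # 4 slots (the 2-bit example above)
print(num_slots(3))   # 8 slots
print(num_slots(16))  # 65536 slots
```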

Quick Example

Let's walk through a quick example of what memory looks like to hopefully make some of the concepts we've mentioned a bit more concrete (no pun intended). For our example, addresses will be 3 bits wide, and the data stored at each address will be 4 bits wide. If our address is 3 bits wide, that means we have 2^3 = 8 different addresses/slots for data in our memory example. Remember these parameters are not the same as the ones we are using in the computer we are building.

Let's say our memory starts out looking like the following:

Address Value in Memory
000 1000
001 0110
010 0000
011 1101
100 0001
101 0110
110 1010
111 1100

If we access the value memory[3] we are asking for the value that is stored in memory at address 3 (011 in binary). This would correspond to the value 1101.

We might also want to update a value in memory. If we perform the update memory[6] = 9, then our updated memory would look as follows:

Address Value in Memory
000 1000
001 0110
010 0000
011 1101
100 0001
101 0110
110 1001
111 1100

Hopefully it is now clear that memory really does just work like an array! The big difference is that we refer to the slots in memory using fixed-width addresses.

Blocks of Memory

The last thing we will talk about is the notion of blocks within memory. Oftentimes it helps to think of memory in different “chunks”, or what we will refer to as blocks. We will use this notion of blocks to help us build our memory chip!

For our purposes, when we divide memory into blocks, they will always be the same size, and that size must be a power of 2. The reason they need to be a power of 2 is that our overall memory size is going to be a power of 2 (remember that for N width addresses the size of our memory is 2^N). If our block size is also a power of 2, it means we can evenly divide up memory into blocks without overlaps or extra space.

For example, suppose our address width is 8, which means our memory size is 2^8 = 256 slots. Let's say we choose a block size of 64 slots. That means we have 256 / 64 = 4 blocks, broken down as follows:

Block Number Address range
0 0-63
1 64-127
2 128-191
3 192-255
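The table above can be generated by a quick sketch: with 256 slots and a block size of 64, each block covers a contiguous range of 64 addresses (names and loop structure here are illustrative).

```python
MEMORY_SIZE = 256  # 2**8 slots for an 8-bit address width
BLOCK_SIZE = 64

# Each block spans BLOCK_SIZE consecutive addresses, with no
# overlaps and no leftover space, since both sizes are powers of 2.
for block in range(MEMORY_SIZE // BLOCK_SIZE):
    low = block * BLOCK_SIZE
    high = low + BLOCK_SIZE - 1
    print(f"Block {block}: addresses {low}-{high}")
```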

One thing that is pretty neat is that given any address, we can really easily find its block number and its index within a block! Let's look at the above table again, but this time using binary instead of decimal.

Block Number Address range
00 00000000-00111111
01 01000000-01111111
10 10000000-10111111
11 11000000-11111111

Wow those look really similar actually! Notice how for each block, the lower address and the upper address are the same except for the 2 most significant bits. And notice how the two most significant bits correspond to the block number! The rest of the bits tell us which of the 64 slots we are referring to within that block.

This can be generally broken down as follows: given an address width of N and a block size of B, the N - log_2(B) most significant bits correspond to the block number, and the log_2(B) least significant bits of the address (or the rest of the bits) indicate which of the B slots we are referring to within that block.

That was a little mathy, so let's look at it in terms of our example. In our example, N = 8 and B = 64. So log_2(B) = log_2(64) = 6. That means the N - log_2(B) = 8 - 6 = 2 most significant bits correspond to the block number, and the log_2(B) = 6 least significant bits indicate which slot within the block we are referring to. This is exactly what we observed in the table above!
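This split falls right out of shifts and masks, since the block number and the offset occupy separate groups of bits. Here is a sketch for our example's parameters (N = 8, B = 64); the function name `split` is made up for illustration.

```python
OFFSET_BITS = 6                        # log2(64) bits for the offset
OFFSET_MASK = (1 << OFFSET_BITS) - 1   # 0b00111111

def split(address):
    """Split an 8-bit address into (block number, offset in block)."""
    block = address >> OFFSET_BITS     # the N - log2(B) = 2 top bits
    offset = address & OFFSET_MASK     # the 6 bottom bits
    return block, offset

print(split(0b10000101))  # address 133 -> block 2, offset 5
```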

The Road Ahead

In lecture we will begin to talk about how we will implement memory in our computer, which will rely heavily on blocking to make the task more manageable. Once we have memory built, we will start to tie everything we've done so far together by implementing the CPU!