History of Processors

Dongkai Xu


The computer is a complicated, well-tuned invention made of thousands of different components. When we think about computers nowadays, what comes to mind is a combination of a screen, a keyboard, a mouse, and some sort of processing unit. Encapsulated inside that processing unit are millions of parts working together, and at their core is the central processing unit. The CPU is the basic unit that performs simple arithmetic and logic operations and is responsible for carrying out every instruction a program needs. Modern CPUs are capable of performing billions of computations in a single second, yet just a hundred years ago no individual could afford a personal computer. So how did all of this develop?

The very first computers used vacuum tubes as switches and amplifiers. These machines all shared the same property: they were huge, which meant that no individual could ever afford or build one. Vacuum tubes were used for the first half of the twentieth century, but as the need for smaller, more affordable computing units grew, people began developing new technologies, such as the transistor. A transistor controls the flow of current for amplification and switching, or in other words, it can represent 1s and 0s. Back in 1823, the 14th element, silicon, was discovered by Baron Jöns Jacob Berzelius. Through later studies, scientists identified silicon's excellent semiconducting properties, which is the primary reason silicon is used to build transistors. The first transistor, however, was not built until 1947, by John Bardeen, Walter Brattain, and William Shockley, who received the Nobel Prize in Physics for this achievement. Five years later, in 1952, the concept of the integrated circuit was introduced by British radar engineer Geoffrey Dummer. Integrated circuits are packages of transistors, wiring, and other important components, so they are the building blocks of computer hardware. They were later developed by Jack Kilby and Robert Noyce, and the first working circuit was successfully demonstrated in 1958, which marked the beginning of the computing age.

In 1965, Gordon E. Moore published the famous Moore's Law, which predicted that the number of transistors in a dense integrated circuit would double about every two years for the next several decades. This is often interpreted to mean that the processing power of CPUs will double about every two years, since more transistors generally means more computing power. Many questioned this prediction, but to everyone's surprise, Moore's Law still holds reasonably well today. In 1963, Frank Wanlass invented complementary metal-oxide-semiconductor (CMOS) technology, which enabled the production of extremely dense, high-performance integrated circuits. The invention of dynamic random-access memory (DRAM) technology by Robert Dennard in 1967 made it possible to build single-transistor memory cells, which led to low-cost, high-capacity memory. Newer technology has made it possible to produce transistors from a single electron, or from a single atom placed in a silicon crystal. These transistors are only nanometers across, which may allow Moore's Law to continue into the near future. There were roughly 5,000 transistors on an integrated circuit in 1972, and now there are more than 20,000,000,000!
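
To get a sense of how well the two transistor counts quoted above fit a doubling-every-two-years curve, here is a minimal back-of-the-envelope sketch in Python; the choice of 2022 as the comparison year is my own assumption, not a figure from the text:

    import math

    # Check the doubling claim against the two counts quoted in the text:
    # ~5,000 transistors on a chip in 1972 and >20,000,000,000 today.
    start_count, start_year = 5_000, 1972
    recent_count, recent_year = 20_000_000_000, 2022   # assumed comparison year

    doublings = math.log2(recent_count / start_count)          # ~21.9 doublings
    implied_period = (recent_year - start_year) / doublings    # ~2.3 years each

    print(f"Doublings since {start_year}: {doublings:.1f}")
    print(f"Implied doubling period: {implied_period:.2f} years")

The implied doubling period of roughly 2.3 years is close to Moore's original two-year figure, which is one way to see that the law has remained "relatively valid."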

This is only a very brief history of computer processors. Aside from the innovations mentioned above, there are many more: just find a motherboard and look at the various components on it! With the development of nanotechnology, I believe Moore's Law will continue for the next two decades. The remaining problem is lowering the cost of those transistors to production level, and once that is resolved, we will have computers faster than ever before. That's why this joke has some truth to it: if one wants to brute-force an EXP problem, one should just wait, since computing power doubles roughly every two years!
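
The joke can be made concrete with a toy calculation; the workload size, the baseline speed of one billion operations per second, and the strict two-year doubling below are all my own illustrative assumptions:

    # Toy illustration of the "just wait" joke: a job needing 2**80 operations
    # on a machine doing 1e9 ops/second today, with speed doubling every 2 years.
    # Total finish time = years spent waiting + remaining runtime.
    SECONDS_PER_YEAR = 365 * 24 * 3600
    ops_needed = 2 ** 80
    ops_per_second_today = 1e9          # assumed baseline speed

    for wait_years in (0, 10, 20, 30, 40, 50):
        speed = ops_per_second_today * 2 ** (wait_years / 2)   # 2-year doubling
        runtime_years = ops_needed / speed / SECONDS_PER_YEAR
        total = wait_years + runtime_years
        print(f"wait {wait_years:2d} y -> finish after {total:12.1f} y total")

Under these assumptions, starting immediately would take tens of millions of years, while waiting fifty years for faster hardware finishes the whole job in about fifty-one, which is exactly the point of the joke.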

References

  • https://www.computerhope.com/
  • https://en.wikipedia.org/wiki/Moore%27s_law