Propositional Logic gives us tools for precisely describing and analyzing logical statements. In the first of many such connections, we will see that a proposition (logic) can be thought of as a circuit (computation) and vice versa!
Real-world problems are usually given to us in English. In order to apply our new tool, we must first “formalize” the problem by translating the statement into Propositional Logic. Then, we can mechanically write down a circuit that implements the statement or analyze it using a truth table.
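As a small sketch of this workflow (the statement and variable names here are our own illustrative choices, not from the course), we can formalize "If it is raining, then the ground is wet" as r → w and mechanically enumerate its truth table:

```python
from itertools import product

# Hypothetical English statement: "If it is raining, then the ground is wet."
# Formalized as r -> w, where r = "it is raining" and w = "the ground is wet".
def implies(p, q):
    return (not p) or q

# Mechanically build the truth table: one row per assignment of truth values.
table = [(r, w, implies(r, w)) for r, w in product([False, True], repeat=2)]
for r, w, out in table:
    print(r, w, out)
```

The only row where the proposition is false is the one where it rains but the ground is dry, matching the intuitive meaning of the implication.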
In this topic, we will introduce a new way to compare propositions. Two propositions that are not the same can still be “equivalent”. In logic, this means that knowing one is true (or false) tells you that the other is true (or false). In computation, equivalent circuits correspond to different ways of computing the same thing.
We will learn two ways of determining whether propositions are equivalent. One is via a case analysis (truth tables). The other is via a “proof”, which is a sequence of simple steps each of which applies a well-known equivalence rule. Each approach provides us with unique advantages, so both are valuable.
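The truth-table approach can be sketched in a few lines. Here we check one well-known equivalence rule, De Morgan's law, by confirming that both sides agree on every row (the helper names are ours):

```python
from itertools import product

lhs = lambda p, q: not (p and q)       # ¬(p ∧ q)
rhs = lambda p, q: (not p) or (not q)  # ¬p ∨ ¬q

# Two propositions are equivalent iff they have the same truth value
# on every row of the truth table.
equivalent = all(lhs(p, q) == rhs(p, q)
                 for p, q in product([False, True], repeat=2))
print(equivalent)  # True: this is De Morgan's law
```

A proof, by contrast, would derive one side from the other by a chain of named equivalence rules; the truth table is exhaustive but grows exponentially in the number of variables.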
In this topic, we extend the logic we learned in Topic 1, giving it the ability to talk about objects and their properties and to make claims about what facts hold for all, some, or no objects in a given domain. The result is called “Predicate Logic” (or “First-Order Logic”), and it is the logic in which most mathematics and computer science takes place.
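Over a finite domain, quantified claims of this kind can be checked by brute force, which is a useful way to build intuition (the domain and predicate below are our own toy example):

```python
# Predicate Logic claims over a small finite domain, checked exhaustively.
# Domain: the integers 0..9. Predicate Even(x): x is even.
domain = range(10)
even = lambda x: x % 2 == 0

forall_even = all(even(x) for x in domain)  # "For all x, Even(x)" -- false
exists_even = any(even(x) for x in domain)  # "There exists x, Even(x)" -- true

# Nested quantifiers: "for every x there is some y with x + y even".
claim = all(any((x + y) % 2 == 0 for y in domain) for x in domain)
print(forall_even, exists_even, claim)
```

Note that `all` over an inner `any` directly mirrors the ∀x ∃y structure of the claim; over infinite domains like the natural numbers, such checks are no longer possible and proof becomes essential.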
In this topic, we learn a new way to reason about facts in our two logics. Previously, we saw how to show that various facts are equivalent. Here, we will see how to infer new facts from known ones. This includes equivalence as a special case (when we can infer either fact from the other) but is vastly more powerful.
We will begin by writing our proofs formally, which makes them easy to check. Formal proofs are built up using inference rules, and we will learn rules for both Propositional and Predicate Logic. Finally, we will learn how to write English proofs by translating our formal proofs into English. As the course continues, we aim to get more comfortable writing proofs directly in English, without working formally.
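One way to see why an inference rule is trustworthy: the rule is sound exactly when the corresponding implication is a tautology. Here is a quick check, in that spirit, for Modus Ponens (from p → q and p, infer q); the encoding is our own sketch:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Modus Ponens is sound iff ((p -> q) AND p) -> q is true
# under every assignment of truth values to p and q.
sound = all(implies(implies(p, q) and p, q)
            for p, q in product([False, True], repeat=2))
print(sound)  # True
```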
In the remainder of the course, we will look at different kinds of mathematical objects that show up frequently in computer science. In addition to learning about their properties, this will give us settings in which to practice the proof techniques we learned in Topic 4.
In this topic, we look at important properties of numbers. All data in a computer is stored as numbers, and the only operations computers can do on their own are arithmetic calculations. For that reason, numbers arise everywhere in computer science, and we need to understand them well in order to make computers do interesting things for us.
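A representative fact of the kind studied here is that modular arithmetic is compatible with addition: (a + b) mod m depends only on a mod m and b mod m. A quick numeric sanity check (the particular values are arbitrary):

```python
# Modular arithmetic: reducing before or after adding gives the same result.
a, b, m = 17, 28, 12  # arbitrary example values
lhs = (a + b) % m            # reduce after adding
rhs = ((a % m) + (b % m)) % m  # reduce each term first, then add
print(lhs, rhs)  # both are 9
```

This property is what makes fixed-width machine arithmetic (which is arithmetic mod 2^32 or 2^64) behave predictably.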
The next application area we will study is set theory. Sets arise in almost every area of computer science. They also arise in practical programming. In Java, for example, Set and the closely related Map interfaces are two of the most widely used parts of the standard library. In this topic, after defining sets, we will look at important relationships sets can have with one another and important operations that can be performed on sets.
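The core set operations and relationships can be demonstrated directly; we use Python here rather than Java, purely for brevity, with our own example sets:

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A | B)   # union: elements in A or B
print(A & B)   # intersection: elements in both A and B
print(A - B)   # difference: elements of A that are not in B
print(A <= B)  # subset relationship: is every element of A in B?
```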
In this topic, we learn our final inference rule. It takes advantage of the special structure of the natural numbers to allow us to prove for-all claims over the natural numbers in a new and interesting way.
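The rule in question is induction. A classic claim it proves is 0 + 1 + ... + n = n(n+1)/2 for all natural numbers n. Checking small cases, as below, is evidence rather than proof, but it illustrates the shape of the for-all claim that induction establishes over the entire infinite set:

```python
# Brute-force check of "for all n: 0 + 1 + ... + n = n(n+1)/2" on small n.
# This is not a proof -- induction covers all infinitely many n at once.
ok = all(sum(range(n + 1)) == n * (n + 1) // 2 for n in range(100))
print(ok)  # True
```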
Next, we look at how to define more complex data and functions using recursion. Most interesting data and functions in computer science are defined this way. We will see numerous examples, both as part of this topic and later ones.
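As one standard example of recursively defined data and functions (the encoding below, nil as `()` and cons(x, L) as `(x, L)`, is our own illustrative choice), lists can be defined recursively and their length computed by recursion on that definition:

```python
# Lists defined recursively: nil is (), and cons(x, L) is the pair (x, L).
def length(lst):
    if lst == ():            # base case: the empty list has length 0
        return 0
    x, rest = lst            # recursive case: one element plus the rest
    return 1 + length(rest)

nil = ()
L = (1, (2, (3, nil)))  # encodes the list [1, 2, 3]
print(length(L))  # 3
```

Note how the function's two cases mirror the two cases of the data definition exactly; this mirroring is typical of recursive definitions.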
Whenever we meet new kinds of mathematical objects, we need to also learn appropriate ways to reason about them. For recursively defined data and functions, the most natural way to do so is using a new form of induction, called structural induction, which is one of the most widely used tools in CS.
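A claim of the kind typically proved by structural induction is that concatenation adds lengths: length(append(xs, ys)) = length(xs) + length(ys). Using the same recursive list encoding as before (nil = `()`, cons(x, L) = `(x, L)`, our own sketch), we can at least confirm it on examples:

```python
# Recursively defined lists: nil = (), cons(x, L) = (x, L).
def length(lst):
    return 0 if lst == () else 1 + length(lst[1])

def append(xs, ys):
    # Recursion on the structure of xs, matching its recursive definition.
    return ys if xs == () else (xs[0], append(xs[1], ys))

xs = (1, (2, ()))        # [1, 2]
ys = (3, (4, (5, ())))   # [3, 4, 5]
print(length(append(xs, ys)) == length(xs) + length(ys))  # True
```

A structural induction proof of this claim proceeds on xs: the nil case is immediate, and the cons case follows from the induction hypothesis applied to the tail.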
In this topic, we begin our study of theoretical computer science by looking at two ways of defining “languages”, which are simply sets of strings. We will later connect these ways of defining languages to different types of computing machines, whereupon the fact that one way of defining languages has more expressive power tells us that one type of machine is more powerful than another.
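One of those ways of defining a language is the regular expression: the language it defines is simply the set of strings it matches. A small example using Python's `re` module (the particular language, zero or more "01" blocks, is our own choice):

```python
import re

# The regular expression (01)* defines the language of binary strings
# made of zero or more copies of the block "01".
lang = re.compile(r"(01)*")

print(bool(lang.fullmatch("")))      # True: zero blocks
print(bool(lang.fullmatch("0101")))  # True: two blocks
print(bool(lang.fullmatch("011")))   # False: not in the language
```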
Next, we see how to define a language by describing a “machine” that can recognize strings in the language. We will look at a few different kinds of machines and not only relate them to each other but also connect them to the ways of defining languages that we learned in Topic 9. This will eventually lead us to showing that our simple machines are not sufficiently powerful to define some languages we have seen before.
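One such machine is a deterministic finite automaton (DFA): a fixed set of states and a transition table, with no other memory. Here is a minimal sketch of a DFA recognizing binary strings with an even number of 1s (the state names and encoding are ours):

```python
# DFA with two states tracking the parity of the number of 1s seen so far.
# Start state and accepting state: "even".
delta = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def accepts(s):
    state = "even"
    for ch in s:                 # consume the input one symbol at a time
        state = delta[(state, ch)]
    return state == "even"       # accept iff we end in an accepting state

print(accepts("1010"))  # True: two 1s
print(accepts("1011"))  # False: three 1s
```

The machine's fixed, finite memory is exactly what makes it too weak for some languages, foreshadowing the limits mentioned above.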
Are there problems that computers cannot solve? In the last topic, we saw that there are certain languages that cannot be described by any regular expression. What about a similar question for Java? Are there languages that cannot be recognized by any Java program?
Surprisingly, the answer is yes. We will discuss an important sense in which there are “more” languages than there are Java programs. Hence there must be some languages that are not described by any Java program. These languages are called “uncomputable” or “undecidable”. (To make this argument precise, we will need to define what it means for one infinite set to have “more” elements than another.) We will also see Turing's famous example of an uncomputable language: the Halting Problem.
That's all, folks!