This course is about logic. Specifically, this course is about mathematical logic and the role it plays in computer science. Logic in this sense is a field of mathematics, like algebra or number theory. In fact, you can find a number of logicians in the pure math department, and taking this course means you won't be allowed to take their version of a mathematical logic course, PMATH 330.
Because you have to in order to get your degree. Of course, another way of asking this question is "Why are we making you take this course?" I suspect more than one student has wondered this, and I think it's a question worth thinking about.
First, we need to answer the obvious question: What is logic? A typical dictionary definition of logic will tell you that it is the study of reasoning, inference, and deduction. The word logic comes from the Greek word logikos, which means "pertaining to reason" and you may not be surprised to learn that the study of logic stretches all the way back to Ancient Greece with Aristotle.
Consider the following argument.
Is this argument convincing? Is it valid? Is it true?
The above two arguments have the same structure:
That is, we have obtained $q$ from an established fact $p$ and the rule "if $p$ then $q$". The idea here is that we set up some rules and some facts and we can begin to derive some new statements. An interesting question is: what kinds of rules and facts would we need in order to make the kinds of arguments we want to make?
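As a small illustration (a sketch, not part of the formal development of the course), this kind of derivation can be mechanized: start from some facts, and repeatedly apply rules of the form "if $p$ then $q$" to derive new statements. The particular facts and rules below are hypothetical.

```python
# A minimal sketch of forward derivation: from a fact p and a rule
# (p, q) meaning "if p then q", we may derive the new fact q.
facts = {"p"}
rules = [("p", "q"), ("q", "r")]  # hypothetical rules for illustration

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(sorted(facts))  # q is derived from p, and then r from q
```

Notice that this process never asks whether $p$ is "really" true; it only manipulates statements according to the rules, which is exactly the syntactic point of view described next.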
This is what we call the syntactic notion of truth. We say $S$ is true if we can prove it from basic facts $A_1, \dots, A_n$ by following some rules. In other words, truth is determined by manipulating the symbols of statements according to some rules.
Note that our arguments above are valid only in the structural sense. That is, the syntax of the argument is valid, but this says nothing about the truth of the statements themselves.
What if we do want to consider an argument based on the truth of our basic facts rather than the structure of an argument?
This is what we call the semantic notion of truth. We say $S$ is true if every interpretation that makes all of the facts $A_1, \dots, A_n$ true also makes $S$ true. In other words, there is no way to interpret $S$ as false without also falsifying one of the basic facts, so $S$ must be true.
Mathematical logic is concerned with the question of whether these two notions of truth are the same. In doing so, it leads us to some interesting questions outside of computer science. For instance, are all things that we consider as "true" actually provable? Is everything we can prove actually true?
Suppose we wanted to determine whether some statement of mathematical fact was true. If it's true, we should be able to prove it. If it's false, we should be able to find a counterexample. It seems like, at the very least, if we had enough information, we should be able to encode it and let a computer handle the mechanical proof part of it. After all, computers are very good at that sort of thing.
This was exactly the kind of question that logicians were interested in back in 1900. Is it possible to encode the rules of mathematics as logical statements and, given enough rules, just have a machine do the hard work of coming up with proofs? That certainly sounds like the kind of thing we want computers to do nowadays. As we'll see later in the course, it turns out that we can't. In fact, it is this unfortunate development that leads directly into the birth of computer science as a discipline.
This connection is not just a historical note. The relationship between syntax and semantics is one that's central to computer science, as you'll see when you take CS 241. A program (and in fact, all data) is just a sequence of symbols. The meaning and interpretation of the program occurs when it is actually run.
Our goal, as in most theoretical computer science courses, is to formalize all of these notions and study them using mathematical tools. We will formalize the notions of proof and truth, syntax and semantics, and so on, ultimately show that the two notions coincide, and explore the consequences that arise from this fact.
There are roughly four broad topics that this course will cover.
Definition 1.1. A proposition is a declarative statement that is either true or false.
For example,
Not every sentence is a proposition.
Definition 1.2. An atomic proposition is a proposition that cannot be broken down into smaller propositions. A compound proposition is a proposition that is not atomic.
We represent propositions by formulas which are made up of symbols. There are three kinds of symbols that we use for formulas.
We assign propositional variables to represent atomic propositions, and we represent compound propositions by formulas that combine propositional variables with connectives.
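As a sketch of this idea (the tuple encoding here is an illustrative choice, not course notation), a formula can be represented as a propositional variable or as a connective applied to smaller formulas, and its truth value computed from an assignment of truth values to the variables:

```python
def evaluate(formula, assignment):
    """Evaluate a formula encoded as nested tuples,
    e.g. ("and", "p", ("not", "q")) for "p and not q"."""
    if isinstance(formula, str):  # a propositional variable
        return assignment[formula]
    op = formula[0]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {op}")

# "p and not q" under the interpretation p = true, q = false:
print(evaluate(("and", "p", ("not", "q")), {"p": True, "q": False}))  # True
```

The recursive structure of `evaluate` mirrors the definition itself: atomic propositions get their values directly from the assignment, and compound propositions get theirs from their parts.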
Suppose we have the following propositions.
Definition 1.3. We will consider the following connectives.
This set of connectives is due to George Boole in 1854, the namesake of Boolean algebra. Note that this is not an exhaustive list of how to formulate these connectives in natural language. English (and other languages) has a variety of ways to express each of these connectives without using the words "not", "and", "or", "implies", and so on.
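Whatever the natural-language phrasing, each connective is pinned down by its truth table. As a quick sketch, assuming the standard connectives (negation, conjunction, disjunction, implication), the tables can be tabulated mechanically:

```python
from itertools import product

# Truth tables for the standard connectives, one row per
# assignment of truth values to p and q.
rows = []
for p, q in product((False, True), repeat=2):
    rows.append((p, q, not p, p and q, p or q, (not p) or q))

print("p      q      | not p  p and q  p or q  p -> q")
for p, q, n, a, o, i in rows:
    print(p, q, "|", n, a, o, i)
```

The only row worth pausing over is implication: $p \to q$ is false exactly when $p$ is true and $q$ is false, which is why it is computed above as "not $p$, or $q$".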