The rules for conjunction were rather straightforward. We will see that things get a bit more complicated once we move on to other connectives, starting with implication. The implication elimination rule is $$ \frac{\alpha \quad (\alpha \rightarrow \beta)}{\beta} \rightarrow\!e.$$ This rule is called modus ponens, which is Latin for "mode that affirms" (and which was very funny to us back in the mid-2000s, when people still liked to say "pwn" out loud a lot). In Hilbert-style systems of classical deduction, this is the only inference rule used. We implicitly applied this rule in the very first example of an argument that I showed, way back in the first class, where I introduced the idea of arguments of the form $\{p,(p \rightarrow q)\} \vdash q$.
The introduction rule for implication is a bit different and we will need a new idea or two for this.
Example 7.1. We will show that $\emptyset \vdash (p \rightarrow (q \rightarrow p))$.
| 1 | $p$ | assumption |
| 2 | $q$ | assumption |
| 3 | $p$ | reflexivity 1 |
| 4 | $(q \rightarrow p)$ | $\rightarrow\!i$ 2-3 |
| 5 | $(p \rightarrow (q \rightarrow p))$ | $\rightarrow\!i$ 1-4 |
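The same argument can be checked mechanically. Below is a sketch of Example 7.1 in Lean 4 (the hypothesis names are our own choices): each use of $\rightarrow\!i$ becomes a `fun`, and the "copied" $p$ is simply the reuse of a bound variable.

```lean
-- ∅ ⊢ (p → (q → p)): each fun introduces an implication (→i);
-- the underscore discards the unused assumption q.
example (p q : Prop) : p → (q → p) :=
  fun hp _ => hp
```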
So, what just happened here? First of all, notice that we don't have any premises in this sequent. Okay, so that's fine, but what about the box?
The box indicates a subproof. We introduce an assumption and use it for our subproof, but after we are finished with the subproof, we may no longer use the assumption. The box "traps" the assumption inside the proof so that our assumption and anything we derive based on the assumption doesn't get out and contaminate the rest of the proof.
However, note that within the subproof, we may use anything that came before it, as long as the proof box is still open. This is why we "copied" $p$ on line 3. In the slides, this is called the reflexivity rule, $\frac{\alpha}{\alpha}$. Remember, anything that has already appeared can be used again, and this is true even when not considering proof boxes. However, when we are inside a proof box, one way to distinguish what is and what is not available to use is to think about what is "in scope", to borrow some more programming terminology.
This notion of the subproof is baked right into the definition of the implication introduction rule: $$\frac{\boxed{\begin{matrix}\alpha \\ \vdots \\ \beta\end{matrix}}}{(\alpha \rightarrow \beta)} \rightarrow\!i$$
How should we interpret this rule? Remember that the implication $(\alpha \rightarrow \beta)$ is read "if $\alpha$, then $\beta$". One way of considering the $\rightarrow\!i$ rule is that we must make an assumption: if $\alpha$, then... and our subproof concludes with $\beta$. So one can consider the subproof as reasoning out if $\alpha$, then $\beta$.
Example 7.2. We will show that implication is transitive. That is, $\{(p \rightarrow q), (q \rightarrow r)\} \vdash (p \rightarrow r)$.
| 1 | $(p \rightarrow q)$ | premise | |||||||||
| 2 | $(q \rightarrow r)$ | premise | |||||||||
| 3 | $p$ | assumption |
| 4 | $q$ | $\rightarrow\!e$ 3, 1 |
| 5 | $r$ | $\rightarrow\!e$ 4, 2 |
| 6 | $(p \rightarrow r)$ | $\rightarrow\!i$ 3-5 | |||||||||
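As a sanity check, here is a sketch of this transitivity proof in Lean 4 (names are our own): the premises become hypotheses, and each $\rightarrow\!e$ step is a function application.

```lean
-- {(p → q), (q → r)} ⊢ (p → r): apply the premises in turn
-- to the assumption introduced by fun (the →i box).
example (p q r : Prop) (hpq : p → q) (hqr : q → r) : p → r :=
  fun hp => hqr (hpq hp)
```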
One thing to keep in mind is that for an assumption, we may assume any formula we wish. This is very much like what we do when we are trying to prove something: we make an assumption and try to prove something based on it. It may work out, or it may not; the assumption may even turn out to be dumb and useless. However, we have the choice of what to assume and we can assume anything, whether it turns out to be helpful or not.
Example 7.3. We will show that $\{((p \wedge q) \rightarrow r)\} \vdash (p \rightarrow (q \rightarrow r))$.
| 1 | $((p \wedge q) \rightarrow r)$ | premise | ||||||||||||||||||
| 2 | $p$ | assumption |
| 3 | $q$ | assumption |
| 4 | $(p \wedge q)$ | $\wedge i$ 2, 3 |
| 5 | $r$ | $\rightarrow e$ 4, 1 |
| 6 | $(q \rightarrow r)$ | $\rightarrow i$ 3-5 |
| 7 | $(p \rightarrow (q \rightarrow r))$ | $\rightarrow i$ 2-6 | ||||||||||||||||||
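This "currying" argument also has a compact Lean 4 sketch (hypothesis names are our own): the two nested `fun`s are the two uses of $\rightarrow\!i$, and the anonymous constructor `⟨_, _⟩` plays the role of $\wedge i$.

```lean
-- {((p ∧ q) → r)} ⊢ (p → (q → r)): pair the two assumptions
-- with ∧i, then discharge the conjunction against the premise.
example (p q r : Prop) (h : p ∧ q → r) : p → q → r :=
  fun hp hq => h ⟨hp, hq⟩
```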
Perhaps a bit unexpectedly, the disjunction rules are also trickier than we might have first assumed. Disjunction introduction, at least, is pretty straightforward: $$\frac{\alpha}{(\alpha \vee \beta)} \vee\!i \quad \frac{\beta}{(\alpha \vee \beta)} \vee\!i$$
The disjunction elimination rule, on the other hand, is more complicated. Unlike with conjunction, if we have $(\alpha \vee \beta)$, we aren't guaranteed that both of them are true, so we don't just get $\alpha$ or $\beta$ for free. What we need to do is, for both $\alpha$ and $\beta$, work under the assumption that either one is true and prove a common conclusion. Because of this, this rule is also called proof by cases. What does this rule look like then? $$\frac{\begin{matrix} \\ \\ (\alpha \vee \beta) \end{matrix} \quad \boxed{\begin{matrix}\alpha \\ \vdots \\ \gamma \end{matrix}} \quad \boxed{\begin{matrix}\beta \\ \vdots \\ \gamma \end{matrix}}}{\gamma} \vee\!e$$
Example 7.4. Just like with $\wedge$, we will take this opportunity to show that $\vee$ is also commutative. Specifically, we will show $\{(p \vee q)\} \vdash (q \vee p)$.
| 1 | $(p \vee q)$ | premise |
| 2 | $p$ | assumption |
| 3 | $(q \vee p)$ | $\vee i$ 2 |
| 4 | $q$ | assumption |
| 5 | $(q \vee p)$ | $\vee i$ 4 |
| 6 | $(q \vee p)$ | $\vee e$ 1,2-3,4-5 | ||||||
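In Lean 4, proof by cases is `Or.elim`, which takes one function per disjunct, exactly mirroring the two boxes of $\vee\!e$. A sketch (names our own):

```lean
-- {(p ∨ q)} ⊢ (q ∨ p): one case per disjunct, and ∨i is
-- Or.inl / Or.inr (swapped to flip the order).
example (p q : Prop) (h : p ∨ q) : q ∨ p :=
  h.elim (fun hp => Or.inr hp) (fun hq => Or.inl hq)
```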
Example 7.5. While we're at it, we can also show distributivity. That is, $\{(p \wedge (q \vee r))\} \vdash ((p \wedge q) \vee (p \wedge r))$.
| 1 | $(p \wedge (q \vee r))$ | premise | |||||||||
| 2 | $p$ | $\wedge e$ 1 | |||||||||
| 3 | $(q \vee r)$ | $\wedge e$ 1 | |||||||||
| 4 | $q$ | assumption |
| 5 | $(p \wedge q)$ | $\wedge i$ 2, 4 |
| 6 | $((p \wedge q) \vee (p \wedge r))$ | $\vee i$ 5 |
| 7 | $r$ | assumption |
| 8 | $(p \wedge r)$ | $\wedge i$ 2, 7 |
| 9 | $((p \wedge q) \vee (p \wedge r))$ | $\vee i$ 8 |
| 10 | $((p \wedge q) \vee (p \wedge r))$ | $\vee e$ 3,4-6,7-9 | |||||||||
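The distributivity proof, sketched in Lean 4 (names our own): `h.1` and `h.2` are the two $\wedge e$ projections, and the case split on `h.2` is $\vee\!e$.

```lean
-- {(p ∧ (q ∨ r))} ⊢ ((p ∧ q) ∨ (p ∧ r)): split on the
-- disjunct, then pair p back in with ∧i on each side.
example (p q r : Prop) (h : p ∧ (q ∨ r)) : (p ∧ q) ∨ (p ∧ r) :=
  h.2.elim (fun hq => Or.inl ⟨h.1, hq⟩) (fun hr => Or.inr ⟨h.1, hr⟩)
```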
Example 7.6. And let's do associativity as well, since it is surprisingly complicated. We will show that $\{((p \vee q) \vee r)\} \vdash (p \vee (q \vee r))$.
| 1 | $((p \vee q) \vee r)$ | premise | |||||||||||||||||||||||||||
| 2 | $(p \vee q)$ | assumption |
| 3 | $p$ | assumption |
| 4 | $(p \vee (q \vee r))$ | $\vee i$ 3 |
| 5 | $q$ | assumption |
| 6 | $(q \vee r)$ | $\vee i$ 5 |
| 7 | $(p \vee (q \vee r))$ | $\vee i$ 6 |
| 8 | $(p \vee (q \vee r))$ | $\vee e$ 2, 3-4, 5-7 |
| 9 | $r$ | assumption |
| 10 | $(q \vee r)$ | $\vee i$ 9 |
| 11 | $(p \vee (q \vee r))$ | $\vee i$ 10 |
| 12 | $(p \vee (q \vee r))$ | $\vee e$ 1, 2-8, 9-11 | |||||||||||||||||||||||||||
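The nested case analysis shows up directly when the associativity proof is sketched in Lean 4 (names our own): the outer `elim` mirrors the $\vee\!e$ on line 1, and the inner `elim` mirrors the subproof inside the first box.

```lean
-- {((p ∨ q) ∨ r)} ⊢ (p ∨ (q ∨ r)): two levels of ∨e,
-- reassembling with ∨i in each leaf case.
example (p q r : Prop) (h : (p ∨ q) ∨ r) : p ∨ (q ∨ r) :=
  h.elim
    (fun hpq => hpq.elim Or.inl (fun hq => Or.inr (Or.inl hq)))
    (fun hr => Or.inr (Or.inr hr))
```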
Recall that our understanding of the inference rules of natural deduction should be grounded purely on what the symbols mean for a proof and not what they mean in the context of semantic truth. For negation, we arrive at an interesting question of what it means to negate a formula. The classical answer is that if we have a proof of $\neg \varphi$, then we can conclude $\varphi$ is "false". However, in the world of proofs, there is another possible point of view. What if proving $\neg \varphi$ simply means that $\varphi$ doesn't have a proof?
Notice that we haven't framed our rules at all in that way, since, having been beaten over the head with the notion of semantic truth both earlier in this course and earlier in life, we have still been implicitly understanding these inference rules through the lens of classical logic, which assumes true and false. Until now, it hasn't made much of a difference that we could reframe our rules as rules about proof rather than about outcomes of proofs.
For now, we will continue down our current path, keeping the alternative interpretation at the back of our minds.
Our first rules having to do with negation are actually about double negation. $$\frac{(\neg(\neg \alpha))}{\alpha} \neg\neg e \quad \frac{\alpha}{(\neg (\neg \alpha))} \neg\neg i$$
These rules are pretty easy to accept, since we are all familiar with being careful to untangle double negatives in everyday speech. However, strictly speaking, double negation introduction is not an inference rule of our system. It is an example of what we call a derived rule. The idea is that if we have a proof $\Gamma \vdash \alpha$, then, in essence, we can get to $\alpha$ from $\Gamma$. Instead of reproducing the entire proof each time, we can treat it as an inference rule $\frac{\Gamma}{\alpha}$.
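Interestingly, the two directions of double negation are not on equal footing, which a Lean 4 sketch makes visible (names our own): the introduction direction is provable outright, while elimination needs a classical principle.

```lean
-- Double negation introduction: from a proof of α, refute
-- any claimed proof of ¬α.
example (α : Prop) : α → ¬¬α :=
  fun ha hna => hna ha

-- Double negation elimination requires classical reasoning
-- (proof by contradiction, from Lean's core library).
example (α : Prop) : ¬¬α → α :=
  fun hnna => Classical.byContradiction hnna
```

This asymmetry foreshadows the alternative "proofs rather than truth" interpretation mentioned above: a system that drops double negation elimination is intuitionistic rather than classical.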
So what if we wanted to introduce or eliminate a single negation? We will need to make use of another new idea, at least in our current context of proofs.
Recall that a contradiction is a formula that is not satisfiable. Recall also the logical identity $(\alpha \wedge (\neg \alpha)) \equiv F$, which was called "contradiction". The negation elimination rule is based on this idea. $$\frac{\alpha \quad (\neg \alpha)}{\bot} \neg e$$
Here, the symbol $\bot$ denotes a contradiction. Because of this, you can also think of this rule as bottom introduction, $\bot i$. This rule has a counterpart, bottom elimination. $$\frac \bot \alpha \bot e$$
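Both bottom rules have direct Lean 4 counterparts (a sketch; names our own): `False` plays the role of $\bot$, and `False.elim` lets us conclude anything from it.

```lean
-- ¬e (bottom introduction): α together with ¬α yields False,
-- because ¬α is literally a function α → False.
example (α : Prop) (ha : α) (hna : ¬α) : False :=
  hna ha

-- ⊥e (bottom elimination): from False, infer any formula.
example (α β : Prop) (h : False) : β :=
  h.elim
```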
What does this rule mean? Any time a contradiction appears in our proof, we may use it to infer any formula we wish. This is an instance of the principle that from a contradiction, anything follows. We'll see it in action by proving the implication identity.
Example 7.7. We will show $\{((\neg p) \vee q)\} \vdash (p \rightarrow q)$.
| 1 | $((\neg p) \vee q)$ | premise | ||||||||||||||||||
| 2 | $(\neg p)$ | assumption |
| 3 | $p$ | assumption |
| 4 | $\bot$ | $\neg e$ 3, 2 |
| 5 | $q$ | $\bot e$ 4 |
| 6 | $(p \rightarrow q)$ | $\rightarrow i$ 3-5 |
| 7 | $q$ | assumption |
| 8 | $p$ | assumption |
| 9 | $q$ | reflexivity 7 |
| 10 | $(p \rightarrow q)$ | $\rightarrow i$ 8-9 |
| 11 | $(p \rightarrow q)$ | $\vee e$ 1, 2-6, 7-10 | ||||||||||||||||||
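The implication identity also admits a Lean 4 sketch (names our own): `absurd` packages the $\neg e$ and $\bot e$ steps of the first case into one call.

```lean
-- {((¬p) ∨ q)} ⊢ (p → q): in the ¬p case, p and ¬p give a
-- contradiction from which q follows; in the q case, return q.
example (p q : Prop) (h : ¬p ∨ q) : p → q :=
  fun hp => h.elim (fun hnp => absurd hp hnp) (fun hq => hq)
```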
This brings us to our final negation rule, the negation introduction. $$\frac{\boxed{\begin{matrix} \alpha \\ \vdots \\ \bot \end{matrix}}}{(\neg \alpha)} \neg i$$
Again, we can make sense of this rule by considering what we would do if we wanted to try and prove something. First, we make the assumption that $\alpha$. We reason through it and arrive at some kind of contradiction. Therefore, it must not have been the case that $\alpha$ and we can conclude $(\neg \alpha)$.
Example 7.8. We will show $\{(r \rightarrow (p \wedge q)), (\neg q)\} \vdash (\neg r)$.
| 1 | $(r \rightarrow (p \wedge q))$ | premise |
| 2 | $(\neg q)$ | premise |
| 3 | $r$ | assumption |
| 4 | $(p \wedge q)$ | $\rightarrow e$ 3, 1 |
| 5 | $q$ | $\wedge e$ 4 |
| 6 | $\bot$ | $\neg e$ 5, 2 |
| 7 | $(\neg r)$ | $\neg i$ 3-6 |
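Since $\neg \alpha$ behaves like $\alpha \rightarrow \bot$, the $\neg i$ rule is just like $\rightarrow\!i$, which a Lean 4 sketch makes concrete (names our own):

```lean
-- {(r → (p ∧ q)), (¬q)} ⊢ (¬r): ¬i is a function from the
-- assumption r to False, mirroring the boxed subproof.
example (p q r : Prop) (h1 : r → p ∧ q) (h2 : ¬q) : ¬r :=
  fun hr => h2 (h1 hr).2
```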