As stated earlier, most theorems are implications. This corresponds to the inference rule $\rightarrow$-introduction: \[\frac{\boxed{\begin{matrix}p \\ \vdots \\ q\end{matrix}}}{p \rightarrow q} \rightarrow\!i\] What this rule means is that to prove such a statement, we assume $p$ and then use that assumption to show that the conclusion $q$ is true. Again, we use a box to show that $p$ exists as an assumption only inside the box, since it's not one of our given facts.
Let's continue our proof from above. Since we dealt with the universal quantification, we are left with the statement $a \mid b \rightarrow a \mid bc$. This means we need to assume $a \mid b$ is true and we can use this assumption to complete our proof.
Let $a$, $b$, and $c$ be arbitrary integers. We assume that $a \mid b$. So there exists an integer $n$ such that $a \cdot n = b$. Then \begin{align*} bc &= (a \cdot n) \cdot c &\text{substitution} \\ &= a \cdot (n \cdot c) &\text{associativity} \end{align*} We note that $n \cdot c$ is an integer, since the integers are closed under multiplication. Therefore, there exists an integer $n'$ (which is our new name for $n \cdot c$) such that $a \cdot n' = bc$. Therefore, $a \mid bc$.
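The argument above can be spot-checked numerically. This is a sanity check over small integers, not a substitute for the proof, and the helper `divides` is our own name, not notation from the text:

```python
def divides(a, b):
    """True when there exists an integer n with a * n == b (assumes a != 0)."""
    return b % a == 0

# If a | b, the witness n = b // a gives b*c = (a*n)*c = a*(n*c),
# so n' = n * c witnesses a | b*c. Check this on a small grid.
for a in range(1, 10):
    for b in range(0, 50):
        for c in range(0, 10):
            if divides(a, b):
                assert divides(a, b * c)
```

The loop mirrors the structure of the proof: wherever the hypothesis $a \mid b$ holds, the conclusion $a \mid bc$ holds as well.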
From this, we can see that a proof of $p \rightarrow q$ involves (1) assuming that $p$ holds, (2) using that assumption, together with definitions and known facts, to derive $q$, and (3) concluding that $p \rightarrow q$ holds.
Here, step (2) is a bit thin on details, but this is where the clever combining of mathematical facts and knowledge happens, so there's not much we can say regarding the structure of the proof.
You may have noticed we did something sneaky in the above proof. When we applied the definition of divisibility, in a sense, we replaced $a \mid b$ with the statement $\exists n \,(a\cdot n = b)$. This means that we were working with an existentially quantified statement. Here's what happened.
The inference rule for using an existentially quantified statement is \[\frac{\begin{matrix} \quad \\ \quad \\ \exists x P(x) \end{matrix} \quad \boxed{\begin{matrix} P(u), \text{sub $u$ for $x$} \\ \vdots \\ Q \end{matrix}}}{Q} \exists e\] In other words, if we want to use an existentially quantified statement $\exists x P(x)$, we simply choose a specific element $u$ and work with the statement $P(u)$. Notice that, like the other rules we saw earlier, the chosen $u$ exists only within a specific scope, but we can use this fact to arrive at a proof of some other statement $Q$.
However, this is only half the story, because we end up having to arrive at a proof for an existentially quantified statement:
That is, we showed that there exists an appropriate integer and we used this to construct an existentially quantified statement. This is the inference rule \[\frac{P(t)}{\exists x P(x)} \exists i\] In essence, such a proof exhibits a specific witness $t$ and shows that $P(t)$ holds; in our proof above, the witness was $n' = n \cdot c$.
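The witness-passing in the divisibility proof can be sketched concretely. This is an illustration with particular numbers (and helper names of our own choosing), not part of the proof itself:

```python
def divisibility_witness(a, b):
    """Given a | b (with a != 0), return the witness n with a * n == b."""
    assert b % a == 0
    return b // a

# Existential elimination: extract the witness n for a | b.
a, b, c = 3, 12, 5
n = divisibility_witness(a, b)   # a * n == b

# Existential introduction: n' = n * c is the witness for a | b*c,
# i.e. a specific t such that P(t) holds, hence ∃x P(x).
n_prime = n * c
assert a * n_prime == b * c
```

The `∃e` step corresponds to calling `divisibility_witness`; the `∃i` step corresponds to exhibiting `n_prime` and checking $P(n')$.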
Let's go back to induction and wrap up our examination with the inductive case. Here, we have a sentence $\forall k (P(k) \rightarrow P(\operatorname{succ}(k)))$. What we need to do then is to show for an arbitrary element, say $t$, that $P(t) \rightarrow P(\operatorname{succ}(t))$. So now, we are left with an implication that we need to prove. Then our inference rule says that we need to assume $P(t)$ and use that assumption to construct a proof of $P(\operatorname{succ}(t))$.
We can now put all of this together.
Let's prove that addition on the natural numbers is associative. We say that an operation $\circ$ is associative if $m \circ (n \circ p) = (m \circ n) \circ p$ for all $m, n, p$. In other words, the order in which we "execute" the operation on three elements doesn't matter. String and list concatenation are other examples of operations that are associative, while subtraction is an example of an operation that is not associative.
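These examples are easy to see on concrete values. A quick illustration (the specific numbers, strings, and lists are arbitrary choices of ours):

```python
# Addition, string concatenation, and list concatenation are associative:
assert (1 + 2) + 3 == 1 + (2 + 3)
assert ("ab" + "cd") + "ef" == "ab" + ("cd" + "ef")
assert ([1] + [2]) + [3] == [1] + ([2] + [3])

# Subtraction is not — a single counterexample suffices:
assert (1 - 2) - 3 == -4
assert 1 - (2 - 3) == 2
```

Note the asymmetry: a counterexample disproves associativity for subtraction, but the passing checks for addition are not a proof, which is why the induction below is needed.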
For all natural numbers $m, n, p$, $m + (n+p) = (m+n)+p$.
We will prove this by induction on $m$.
Base case. If $m = z$, then we have \begin{align*} m + (n + p) &= z + (n + p) &\text{premise} \\ &= n + p &\text{addition} \\ &= (z+ n) + p &\text{addition} \\ &= (m+ n) + p &\text{premise} \end{align*}
Inductive case. Let $k$ be an arbitrary natural number and assume for all natural numbers $n$ and $p$ that $k + (n + p) = (k + n) + p$. Now let $n$ and $p$ be arbitrary natural numbers and let $m = \operatorname{succ}(k)$. We have \begin{align*} m + (n + p) &= \operatorname{succ}(k) + (n + p) &\text{premise} \\ &= \operatorname{succ}(k + (n + p)) &\text{addition} \\ &= \operatorname{succ}((k + n) + p) &\text{inductive hypothesis} \\ &= \operatorname{succ}(k + n) + p &\text{addition} \\ &= (\operatorname{succ}(k) + n) + p &\text{addition} \\ &= (m + n) + p &\text{premise} \end{align*}
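The recursive definition of addition used in the "addition" steps above can be written out directly. Here is a minimal sketch in Python, where `Z` and `succ` are our own encoding of $z$ and $\operatorname{succ}$ as nested tuples:

```python
Z = "z"                        # the natural number zero

def succ(n):
    """The successor of n, represented structurally."""
    return ("succ", n)

def add(m, n):
    """Recursive addition: z + n = n; succ(k) + n = succ(k + n)."""
    if m == Z:
        return n
    return succ(add(m[1], n))

def from_int(i):
    """Build the Peano representation of a nonnegative int."""
    return Z if i == 0 else succ(from_int(i - 1))

# Spot-check the theorem proved above on small values.
for a in range(4):
    for b in range(4):
        for c in range(4):
            m, n, p = from_int(a), from_int(b), from_int(c)
            assert add(m, add(n, p)) == add(add(m, n), p)
```

Each line of the derivation in the inductive case corresponds to unfolding or folding one step of `add`; the induction in the proof mirrors the recursion in the code.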
A common use for induction that you may have encountered before is for proving properties of sums or sequences of integers. Consider the following.
For every $n \in \mathbb N$, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$.
If you've been taught this formula before, then you've probably been told the story of Gauss solving this problem when he was a child—a story which has been embellished over the last century and a half. However, his teachers in this story, in assigning $n = 100$, gave him an easy out since he only needed to consider one instance of the problem, so his solution does not use induction. We don't have that luxury, since we want to prove that this formula holds for all $n$.
We will prove this by induction on $n$.
Base case. If $n = 0$, then $\sum \limits_{i=0}^0 i = 0$ and $\frac{0 \cdot (0+1)}2 = 0$. Therefore, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$ holds for $n = 0$.
Inductive case. Let $k \in \mathbb N$ be arbitrary and assume that $\sum \limits_{i=0}^k i = \frac{k(k+1)}2$. We will show that $\sum \limits_{i=0}^{k+1} i = \frac{(k+1)(k+2)}2$. Then, we have the following: \begin{align*} \sum_{i=0}^{k+1} i &= \sum_{i=0}^k i + (k+1) \\ &= \frac{k(k+1)}2 + (k+1) & \text{by ind. hyp.} \\ &= (k+1) \cdot \left(\frac k 2 + 1 \right) \\ &= (k+1) \cdot \left(\frac k 2 + \frac 2 2 \right) \\ &= \frac{(k+1)(k+2)}2 \end{align*}
Thus, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$ holds for $n = k+1$. Therefore, $\sum \limits_{i=0}^n i = \frac{n(n+1)}2$ holds for all $n \in \mathbb N$.
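As with the earlier proofs, the closed form can be checked against the literal sum for small $n$. This is only a sanity check; the induction above is what establishes the formula for all $n$:

```python
# Compare the literal sum 0 + 1 + ... + n with the closed form n(n+1)/2.
for n in range(200):
    assert sum(range(n + 1)) == n * (n + 1) // 2
```

In particular, for the $n = 100$ instance of the Gauss story, both sides come out to $5050$.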
Notice that even though we were proving a fact about a sum, we were really performing induction on the structure of the natural numbers. This is typically how induction is learned—as a property of the natural numbers. But recall that the structure of the proof mirrors the structure of the recursive definition of the natural numbers. This suggests that induction can be applied to other recursive structures, beyond just the natural numbers.