So now that we know that our desired integers have to be out there somewhere, we need to show that they're the only ones. Recall that uniqueness is proved as a statement of the form \[\forall y (P(y) \rightarrow x = y)\]
This gives us an idea of how to prove that our integers are unique: we assume that we have two solutions, and then arrive at the conclusion that these two solutions were the same to begin with.
Let $n$ and $d \gt 0$ be arbitrary integers. By Lemma 4.4, there exist integers $q$ and $r$ such that $n = d \cdot q + r$ and $0 \leq r \lt d$. We want to show that $q$ and $r$ are unique.
Suppose that there exist integers $q_1, r_1, q_2, r_2$ such that $n = d \cdot q_1 + r_1$ and $n = d \cdot q_2 + r_2$ with $0 \leq r_1, r_2 \lt d$. Without loss of generality, let $r_1 \geq r_2$ and consider the following system of equations \begin{align*} n &= q_1 \cdot d + r_1 \\ n &= q_2 \cdot d + r_2. \end{align*} Subtracting gives us $0 = (q_1 - q_2) \cdot d + (r_1 - r_2)$, and rearranging gives us $(q_2 - q_1) \cdot d = r_1 - r_2$. Since $q_2 - q_1$ is an integer, we have $d \mid (r_1 - r_2)$ by definition of divisibility. Now, $r_1 \geq r_2$ gives $r_1 - r_2 \geq 0$, and $r_1 \lt d$ together with $r_2 \geq 0$ gives $r_1 - r_2 \lt d$, so $0 \leq r_1 - r_2 \lt d$. But the only multiple of $d$ in this range is $0$, so $r_1 - r_2 = 0$ and therefore $r_1 = r_2$. Then from above, this gives us $(q_2 - q_1) \cdot d = 0$. Since $d \gt 0$, we have $q_2 - q_1 = 0$ and therefore $q_1 = q_2$. Thus, we have shown that $q$ and $r$ are unique.
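As a quick numerical sanity check (a Python sketch, not part of the proof), Python's built-in `divmod` returns exactly this quotient–remainder pair when the divisor is positive, and we can verify the two defining conditions directly:

```python
# divmod(n, d) returns the pair (q, r) with n == d*q + r and 0 <= r < d
# whenever d > 0, matching the Division Theorem. Check a few values of n.
d = 7
for n in [23, -23, 0, 7]:
    q, r = divmod(n, d)
    assert n == d * q + r and 0 <= r < d
    print(f"n={n:3}: q={q}, r={r}")
```

Note that even for negative $n$ the remainder stays in $[0, d)$; for example, $-23 = 7 \cdot (-4) + 5$.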
What does the statement "without loss of generality" mean? Consider the scenario in which we used it: when comparing $r_1$ and $r_2$, we said "without loss of generality" and set $r_1 \geq r_2$. We were really pre-emptively dealing with the question: we said $r_1 \geq r_2$, but what if $r_2 \geq r_1$ instead?
It's true: we had one of two situations on our hands: $r_1 \geq r_2$ or $r_2 \geq r_1$. That is, the particular fact we were working with was a statement of the form $p \vee q$.
How does one make use of a statement of the form $p \vee q$? Unlike a statement of the form $p \wedge q$, we can't say that $p$ is definitely true or $q$ is definitely true. We know that at least one of them is true but we don't know which. So we have to assume each of $p$ and $q$ and show that from either assumption, we can construct an appropriate proof. This is sometimes known as proof by cases. Here, $p$ and $q$ are our cases. There's an inference rule for this: \[\frac{\begin{matrix} \\ \\ p \vee q \end{matrix} \quad \boxed{\begin{matrix}p \\ \vdots \\ r \end{matrix}} \quad \boxed{\begin{matrix}q \\ \vdots \\ r \end{matrix}}}{r} \vee\!e\] What is this saying? Suppose we want to prove some proposition $r$ by using the proposition $p \vee q$. What we need to do is consider the $p$ case and show that we can arrive at $r$ and then consider the $q$ case and show that we can also arrive at $r$.
In other words, we do the following: first, assume $p$ and construct a proof of $r$; then, assume $q$ and construct a proof of $r$; having done both, we may conclude $r$.
So what we needed to do to prove our statement was to consider two cases: $r_1 \geq r_2$ and $r_2 \geq r_1$.
But what does "without loss of generality" mean here? We make the observation that if we carried out a proof for the other case, the proof would look exactly the same! The only things that would change would be the positions of $r_1$ and $r_2$. It's not very interesting to see the exact same proof twice, so mathematicians invented some language, "without loss of generality", to say to the reader that, look, the other case will basically give you the same proof but with the names swapped, so we'll only do it once.
You might recall that all of the numbers we discussed in the Division Theorem actually have names (which is why we use certain letters to name them): in $n = d \cdot q + r$, the integer $n$ is the dividend, $d$ is the divisor, $q$ is the quotient, and $r$ is the remainder.
We are particularly interested in divisors. Why? Because divisibility gives us a way to break a number down into smaller pieces. One thing that is of particular interest is when certain numbers share divisors. This is one way that we can relate numbers to each other.
Let $a,b,c$ be integers. If $c \mid a$ and $c \mid b$, then $c$ is a common divisor of $a$ and $b$. The integer $d$ is the greatest common divisor of $a$ and $b$ if $d$ is a common divisor of $a$ and $b$ and every common divisor of $a$ and $b$ divides $d$. Equivalently, $|c| \leq |d|$ for all common divisors $c$ of $a$ and $b$. The greatest common divisor of $a$ and $b$ is denoted $\gcd(a,b)$.
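To make the definition concrete, here is a small brute-force Python sketch (the function names are my own, chosen just for illustration) that lists the positive common divisors of two integers and picks out the greatest one:

```python
def common_divisors(a, b):
    """All positive common divisors of a and b (assumes a, b not both 0)."""
    bound = max(abs(a), abs(b))
    return [c for c in range(1, bound + 1) if a % c == 0 and b % c == 0]

def gcd(a, b):
    """Greatest common divisor, taken as the largest common divisor."""
    return max(common_divisors(a, b))

print(common_divisors(12, 30))
print(gcd(12, 30))
# Per the definition, every common divisor divides the gcd:
assert all(gcd(12, 30) % c == 0 for c in common_divisors(12, 30))
```

The final assertion checks the "every common divisor divides $d$" form of the definition on this example.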
From the Division Theorem, we get the following result about the gcd.
For all integers $m$ and $n$, there exist integers $a$ and $b$ such that $\gcd(m,n) = a \cdot m + b \cdot n$.
This result is commonly referred to as Bézout's lemma or Bézout's identity, after Étienne Bézout, a French mathematician of the mid-1700s. We'll continue to call it that for lack of a snappier name, but like many other results you'll encounter, it was known well before Bézout.
Working backwards, we quickly find that the result was already known to Bachet de Méziriac, another French mathematician, around 1600. But in fact, we can go even further back: a constructive proof of this result dates to about 500 AD, due to the Indian mathematician Āryabhaṭa, and the first mention of the result appears in Euclid in about 300 BC (whom we will be seeing a lot of).
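In modern terms, that constructive line of argument is the extended Euclidean algorithm, which computes the coefficients $a$ and $b$ directly. Here is a sketch in Python (the function name `extended_gcd` is my own choice):

```python
def extended_gcd(m, n):
    """Return (g, a, b) with g = gcd(m, n) and g == a*m + b*n,
    via the iterative extended Euclidean algorithm (m, n >= 0)."""
    old_r, r = m, n          # remainders
    old_a, a = 1, 0          # coefficients of m
    old_b, b = 0, 1          # coefficients of n
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_a, a = a, old_a - q * a
        old_b, b = b, old_b - q * b
    return old_r, old_a, old_b

g, a, b = extended_gcd(240, 46)
print(g, a, b)
assert g == a * 240 + b * 46
```

Each step maintains the invariant that both remainders are integer combinations of $m$ and $n$, so when the algorithm terminates, the last non-zero remainder (the gcd) comes packaged with its Bézout coefficients.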
Though there is a constructive proof due to Āryabhaṭa found in the textbook, the proof that we'll go through is a non-constructive proof. When we proved the Division Theorem, we showed the existence of integers $q$ and $r$ by showing explicitly how to compute them. Here, we'll use a different tactic, a proof by contradiction.
Let's outline the general logical foundations for this approach. First, we have to ask ourselves what it means to prove that a statement is false. With other propositions we've seen, like $p \wedge q$ or $p \rightarrow q$, we construct their proofs using proofs of $p$ and $q$, the constituent parts of the proposition. However, for a proposition like $\neg p$, we only have the proposition $p$ and it's clear that a proof of $p$ would not let us prove $\neg p$.
Are there things that we know definitely can't be true? Yes! Consider the proposition $p \wedge \neg p$. This proposition can't be true because it can never be the case that both $p$ and $\neg p$ are true. Any proposition of this form is called a contradiction.
For any proposition $p$, the proposition $p \wedge \neg p$ is said to be a contradiction, and is denoted $\bot$.
The approach to proving $\neg p$ is to show that it cannot be the case that $p$ can be proven. What does that look like?
The inference rule for this is \[\frac{\boxed{\begin{matrix}p \\ \vdots \\ \bot \end{matrix}}}{\neg p} \neg i\] In other words, a proof of $\neg p$ is a proof of the implication $p \rightarrow \bot$. One way to think of this is that $\neg p$ is a refutation of $p$.
If $m = n = 0$, then any $a$ and $b$ work. So suppose $m$ and $n$ are not both zero, and consider the set $S = \{am + bn : a, b \in \mathbb{Z}\}$ of all integers of the form $am + bn$. Since $m$ or $n$ is non-zero, $S$ contains a positive integer (for instance, $m^2 + n^2$). Let $d$ be the least positive integer in $S$, and fix integers $a$ and $b$ such that $d = am + bn$.
First, we will show that $d$ is a common divisor of $m$ and $n$. Suppose it isn't; that is, assume that $d$ is not a common divisor of $m$ and $n$. Without loss of generality, say $d \nmid m$. Then by the Division Theorem, there exist integers $q$ and $r$ such that $m = q \cdot d + r$ and $0 \leq r \lt d$; since $d \nmid m$, we have $r \neq 0$, so $0 \lt r \lt d$. Then we have \begin{align*} r &= m - q \cdot d \\ &= m - q \cdot (am + bn) \\ &= (1-qa)m + (-qb)n. \end{align*} This gives us that $r$ is a positive integer of the form $am+bn$. But since $0 \lt r \lt d$, this contradicts our choice of $d$ as the least positive integer of that form. Therefore, $d$ must be a common divisor of $m$ and $n$.
Now that we know $d$ is a common divisor, it remains to show that $d$ is the greatest common divisor of $m$ and $n$. So consider any common divisor $c$ of $m$ and $n$. Then there exist integers $s$ and $t$ such that $m = cs$ and $n = ct$. This gives us \begin{align*} d &= am + bn \\ &= acs + bct \\ &= c \cdot (as + bt). \end{align*} Therefore, $c \mid d$, and hence $|c| \leq |d| = d$. Thus, $d = \gcd(m,n)$.
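The key object in the proof, the least positive integer of the form $am + bn$, can be checked numerically for small inputs. Here is a brute-force Python sketch; the bounded coefficient range is an assumption that happens to suffice for these values:

```python
from math import gcd

m, n = 12, 30
# Enumerate a*m + b*n over a bounded window of coefficients; for these
# small inputs the least positive combination already appears here.
values = {a * m + b * n for a in range(-10, 11) for b in range(-10, 11)}
d = min(v for v in values if v > 0)
print(d, gcd(m, n))
assert d == gcd(m, n)
```

For $m = 12$ and $n = 30$, the least positive combination is $6 = (-2) \cdot 12 + 1 \cdot 30$, which is indeed $\gcd(12, 30)$.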
Here, it's worth noting a subtle assumption that we made. Our assumption in the proof above was that $d$ was not a common divisor of $m$ and $n$. This is a proposition of the form $\neg p$. We then used this assumption to arrive at a contradiction. According to our rules, this means that we now have a proof of $\neg \neg p$.
Are $p$ and $\neg \neg p$ the same thing? In classical logic, this is the case. But consider the following interpretation for inference rules. We've said that an inference rule says that if we have proofs of the statements on the top, this allows us to construct a proof for the statement on the bottom. This is the intuitionist or constructivist interpretation of logic.
If we think a few steps more, we see that in this interpretation, we can view this as a computation. An inference rule describes consuming proofs of a certain type in order to construct a proof of a certain type. Since the proofs are necessary ingredients for the computation and construction of a new proof, we really need to have such proofs in hand.
So the question we need to think about is this: Is having a proof of $\neg \neg p$ good enough to use as a proof of $p$? What we really have is a proof that it can't be the case that there's a proof of $\neg p$; in other words, a refutation of $\neg p$. But a refutation of $\neg p$ does not hand us a proof of $p$ itself, so we are still without a proof of $p$.
Another way is to consider the question of the law of excluded middle, which is the statement $p \vee \neg p$. In classical logic, this is obviously true—in fact, it's an axiom. It must be the case that $p$ is either true or false. But the intuitionist interpretation of this statement is that it must be the case that we have a proof of $p$ or a proof of $\neg p$. This is not true—there are many statements for which we have no proof either way.
So a proof system for classical logic is one that includes an inference rule for LEM, while intuitionists would reject it. \[\frac{}{p \vee \neg p} \mathrm{LEM}.\] Using LEM, one can then derive the rule \[\frac{\neg \neg p}{p} \neg\neg e,\] which has the name "proof by contradiction".