Savitch's theorem shows us that we don't need a lot of space in order to deterministically simulate nondeterministic computations. An obvious corollary of Savitch's theorem is that $\mathbf{PSPACE} = \mathbf{NPSPACE}$.
Theorem. (Savitch) For any function $f : \mathbb N \to \mathbb R$, where $f(n) \geq n$, $$\mathbf{NSPACE}(f(n)) \subseteq \mathbf{SPACE}(f^2(n)).$$
Proof. Let $N$ be a nondeterministic Turing machine that decides a language $A$ in space $f(n)$. We want to construct a deterministic TM that decides $A$.
Consider the procedure $\mathrm{Yield}(c_1,c_2,t)$, which takes as input two configurations of $N$, $c_1$ and $c_2$, and an integer $t$. $\mathrm{Yield}(c_1,c_2,t)$ accepts if $N$ can go from $c_1$ to $c_2$ in at most $t$ steps on some branch of its computation and rejects if not. We assume that $t$ is a power of 2 for convenience. We implement $\mathrm{Yield}(c_1,c_2,t)$ recursively: if $t = 1$, accept if $c_1 = c_2$ or if $c_1$ yields $c_2$ in a single step of $N$, and reject otherwise; if $t > 1$, then for each configuration $c_m$ of $N$ using at most $f(n)$ tape cells, run $\mathrm{Yield}(c_1,c_m,\frac t2)$ and $\mathrm{Yield}(c_m,c_2,\frac t2)$, accepting if both accept for some $c_m$ and rejecting if no such $c_m$ exists. Then we can construct a deterministic machine that simulates $N$ on input $w$ by running $\mathrm{Yield}(c_{\mathrm{start}}, c_{\mathrm{accept}}, 2^{d \cdot f(n)})$, where $c_{\mathrm{start}}$ is the start configuration of $N$ on $w$ and $c_{\mathrm{accept}}$ is its unique accepting configuration.
Here, we do a few things. First, we modify the machine $N$ so that it erases its tape before it accepts, so that there is a unique accepting configuration. Then, we choose some constant $d$ such that $N$ has no more than $2^{d \cdot f(n)}$ configurations that use at most $f(n)$ tape cells. Since a computation that runs longer than this must repeat a configuration, this also provides an upper bound on the running time of $N$.
What is the space complexity of this machine? When $\mathrm{Yield}$ calls itself recursively, it stores the current stage number and the values $(c_1,c_2,t)$ on a stack so that they can be restored once the recursive call returns. Each level of recursion therefore uses $O(f(n))$ space, and each level halves $t$. Since $t$ starts at $2^{d \cdot f(n)}$, the depth of the recursion is $O(\log 2^{d \cdot f(n)}) = O(f(n))$. This gives us a total space bound of $O(f^2(n))$. $$\tag*{$\Box$}$$
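The recursion in the proof can be sketched in code. The following is a minimal illustration on a toy directed graph of configurations; the graph, the configuration names, and the helper `yields_in_one_step` are assumptions made for illustration, standing in for the actual configurations and transition relation of $N$.

```python
# A sketch of Savitch's recursive reachability test on a hypothetical
# configuration graph. In the real proof, `configs` would range over all
# configurations of N using at most f(n) tape cells.

def yields_in_one_step(graph, c1, c2):
    """True if c1 equals c2 or c1 yields c2 in a single step."""
    return c1 == c2 or c2 in graph.get(c1, ())

def yield_reachable(graph, configs, c1, c2, t):
    """True if c2 is reachable from c1 in at most t steps (t a power of 2).

    The recursion depth is log2(t) and each stack frame stores only a
    constant number of configurations, mirroring the O(f(n)) space used
    per level in the proof."""
    if t == 1:
        return yields_in_one_step(graph, c1, c2)
    # Guess every possible midpoint configuration c_m and check both halves.
    return any(
        yield_reachable(graph, configs, c1, cm, t // 2)
        and yield_reachable(graph, configs, cm, c2, t // 2)
        for cm in configs
    )
```

Note that the midpoints are checked one at a time and their space is reused, which is exactly why the simulation is space-efficient despite taking exponential time.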
We can consider $\mathbf{PSPACE}$-completeness in the same way as $\mathbf{NP}$-completeness.
A language $B$ is $\mathbf{PSPACE}$-complete if $B$ is in $\mathbf{PSPACE}$ and every $A \in \mathbf{PSPACE}$ is polynomial time reducible to $B$.
You may notice that we use polynomial time reducibility rather than a space-bounded reducibility. This is because the point of a hardness result is to show that a problem is at least as difficult as the other problems in the class. For that to be meaningful, we want the reduction to be easy relative to the problems being compared: if computing the reduction itself were difficult, the reduction would tell us nothing about the hardness of the target problem. Later on in the course, we'll see settings where even polynomial time reductions are too powerful to give a meaningful notion of hardness.
Just as we did with $\mathbf{NP}$-completeness, we need to find a first problem that every problem in $\mathbf{PSPACE}$ reduces to.
A quantified boolean formula allows the use of the universal ($\forall$) and existential ($\exists$) quantifiers. A fully quantified formula is a formula in which every variable is within the scope of a quantifier. A fully quantified formula is always either True or False.
The problem $\mathrm{TQBF}$ is: Given a fully quantified boolean formula $\varphi$, is $\varphi$ True?
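To see that $\mathrm{TQBF}$ is at least in $\mathbf{PSPACE}$, note that a fully quantified formula can be evaluated by recursing on the quantifiers, trying both truth values for each variable and reusing the space between the two tries. The sketch below illustrates this; the tuple-based formula encoding is an assumption made for this example.

```python
# A sketch of a polynomial-space TQBF evaluator. Formulas are encoded as
# nested tuples: ('exists', var, sub), ('forall', var, sub),
# ('and', ...), ('or', ...), ('not', sub), or a bare variable name.

def evaluate(phi, assignment):
    if isinstance(phi, str):                       # a variable: look it up
        return assignment[phi]
    op = phi[0]
    if op == 'exists':
        _, x, sub = phi
        # True if some choice of x works; the two branches reuse space.
        return any(evaluate(sub, {**assignment, x: v}) for v in (False, True))
    if op == 'forall':
        _, x, sub = phi
        # True only if every choice of x works.
        return all(evaluate(sub, {**assignment, x: v}) for v in (False, True))
    if op == 'not':
        return not evaluate(phi[1], assignment)
    if op == 'and':
        return all(evaluate(sub, assignment) for sub in phi[1:])
    if op == 'or':
        return any(evaluate(sub, assignment) for sub in phi[1:])
    raise ValueError(f"unknown operator {op!r}")
```

The recursion depth is at most the number of quantifiers, and each level stores one variable assignment, so the evaluator runs in polynomial space even though it may take exponential time.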
Theorem. $\mathrm{TQBF}$ is $\mathbf{PSPACE}$-complete.
Before we show that $\mathrm{TQBF}$ is $\mathbf{PSPACE}$-complete, we'll explore what it means for a problem to be in $\mathbf{PSPACE}$. For instance, problems in $\mathbf P$ can be thought of as "problems that are efficiently solvable", while a problem in $\mathbf{NP}$ can be thought of as "a problem with solutions that can be efficiently verified".
For $\mathbf{PSPACE}$, the characterization of problems can be thought of as "a problem with an optimal winning strategy when viewed as a 2-player game". Here, we mean game in a loose sense, where we have two players trying to accomplish some goal according to some rules.
In fact, many classical games do fall under this definition, like chess or checkers. Of course, these aren't exactly the same, since such games are played on boards of a fixed size. Chess, for instance, has only finitely many states, since there are at most 32 pieces on an $8 \times 8$ board, so in principle we could build a lookup table storing the best move in response to every board state. Instead, theoreticians generalize these games so that they're played over an $n \times n$ space, and within that setting it's possible to study the complexity of the game as a function of $n$. There are numerous results on the complexity of such games, and this is what it means when you see a paper on arXiv that says that, say, Super Mario Bros. is $\mathbf{PSPACE}$-complete.
For illustrative purposes, we'll look at a much simpler and far less popular game: the formula game. Let $\varphi = \exists x_1 \forall x_2 \exists x_3 \cdots Q x_k[\psi]$, where $Q$ is either a $\forall$ or $\exists$. Two players $A$ and $E$ take turns selecting the values of variables $x_1, \dots, x_k$. $A$ chooses values for variables bound to $\forall$ and $E$ chooses values for variables bound to $\exists$. The two players take their turns in the order of the quantifiers of $\varphi$. When everyone is done, we strip off the quantifiers and check the value of $\psi$ on the assignment that was chosen during play. If $\psi$ is True, then $E$ wins and if $\psi$ is False, $A$ wins.
Consider the formula $$\varphi_1 = \exists x_1 \forall x_2 \exists x_3 [(x_1 \vee x_2) \wedge (x_2 \vee x_3) \wedge (\overline{x_2} \vee \overline{x_3})].$$ In this case, $E$ goes first, then $A$, then $E$. However, on closer inspection, we see that $E$ has a winning strategy: set $x_1 = 1$ and then set $x_3$ to the opposite of whatever $A$ chooses for $x_2$.
More formally, a player has a winning strategy if the player always wins when both players play optimally.
If we modify $\varphi_1$ to get $$\varphi_2 = \exists x_1 \forall x_2 \exists x_3 [(x_1 \vee x_2) \wedge (x_2 \vee x_3) \wedge (x_2 \vee \overline{x_3})],$$ then $A$ has a winning strategy for $\varphi_2$: by setting $x_2 = 0$, the clauses $(x_2 \vee x_3)$ and $(x_2 \vee \overline{x_3})$ cannot both be satisfied, no matter what $E$ chooses.
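The claims about $\varphi_1$ and $\varphi_2$ can be checked by exhaustive search over the game tree. The sketch below assumes the same prefix form as $\varphi_1$ and $\varphi_2$, with the quantifier prefix given as a list of turns and the quantifier-free part as a callable; both encodings are assumptions made for this example.

```python
# A sketch of solving the formula game by brute force. Each turn is a pair
# (player, var) with player 'E' (exists) or 'A' (forall); `matrix` maps a
# complete variable assignment to True or False.

def e_has_winning_strategy(prefix, matrix, assignment=None):
    """True if player E can force the matrix to come out True."""
    assignment = assignment or {}
    if not prefix:
        return matrix(assignment)
    (player, x), rest = prefix[0], prefix[1:]
    outcomes = (
        e_has_winning_strategy(rest, matrix, {**assignment, x: v})
        for v in (False, True)
    )
    # E needs just one winning choice; A wins unless every choice loses.
    return any(outcomes) if player == 'E' else all(outcomes)
```

This is, of course, exactly the recursion one would use to evaluate the quantified formula itself, which is the observation behind the next theorem.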
The problem $\text{FORMULA-GAME}$ is: Given a formula $\varphi$, does Player $E$ have a winning strategy for $\varphi$?
Theorem. $\text{FORMULA-GAME}$ is $\mathbf{PSPACE}$-complete.
Proof. We don't even need to show a reduction, because we will show that $\text{FORMULA-GAME}$ and $\mathrm{TQBF}$ are actually the same problem.
Consider what a formula $\varphi = \exists x_1 \forall x_2 \exists x_3 \cdots [\psi]$ means. It is True when there exists a value for $x_1$ such that for any setting of $x_2$, there exists a setting of $x_3$, and so on, such that $\psi$ is True under the resulting assignment. But this is just the same as the game: Player $E$ has a winning strategy if $E$ can make a choice in response to any choice that $A$ makes so that $\psi$ evaluates to True.
The same reasoning applies no matter what order the quantifiers appear in: $E$ has a winning strategy for the formula game on $\varphi$ exactly when $\varphi$ is True, which is precisely what $\mathrm{TQBF}$ asks. $$\tag*{$\Box$}$$