CMSC 28000 — Lecture 24

Here's another kind of question we can ask of Turing machines. Since they are so powerful, we may want to ask whether the language of a TM can be recognized by a less powerful model of computation, like a DFA. If we knew this, then maybe we could come up with a simpler or more efficient way to solve the same problem. The corresponding language is \[ \mathsf{REG}_{\mathsf{TM}} = \{ \llbracket M \rrbracket \mid \text{$M$ is a TM and $L(M)$ is a regular language} \}.\] As it turns out, this is not decidable either.

$\mathsf{REG}_{\mathsf{TM}}$ is undecidable.

Suppose that $R$ is a TM that decides $\mathsf{REG}_{\mathsf{TM}}$. We construct a TM $S$ that does the following:

  1. On input $\llbracket M,w \rrbracket$, where $M$ is a TM and $w$ is a string, construct the TM $M'$:
    1. On input $x$, if $x$ has the form $0^n 1^n$, accept.
    2. If $x$ doesn't have this form, run $M$ on $w$ and accept if $M$ accepts $w$.
  2. Run $R$ on $\llbracket M' \rrbracket$.
  3. If $R$ accepts, accept; if $R$ rejects, reject.

How does this work? The TM $M'$ accepts all strings in $\{0^n 1^n \mid n \geq 0\}$, which we know is context-free but not regular. If $M$ doesn't accept $w$, then $L(M') = \{0^n 1^n \mid n \geq 0\}$. If $M$ accepts $w$, then $M'$ also accepts every other string, and in this case $L(M') = \Sigma^*$, which is regular. In other words, the language of $M'$ is \[L(M') = \begin{cases} \Sigma^* & \text{if $M$ accepts $w$}, \\ \{0^n 1^n \mid n \geq 0 \} & \text{if $M$ does not accept $w$}. \end{cases}\] So if there were a way to decide whether $M'$ recognizes a regular language, then $S$ would decide $A_{\mathsf{TM}}$, which we know is impossible.
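To make the construction concrete, here is a sketch in Python, under the (big) assumption that we model a TM as a Python predicate; `make_M_prime` is a hypothetical name, and the call `M(w)` may loop forever, just like simulating a real TM.

```python
def make_M_prime(M, w):
    """Build the machine M' from the reduction.  A TM is modeled here
    as a Python predicate (an illustrative assumption); calling M(w)
    may loop forever, just like simulating a real TM on w."""
    def is_0n1n(x):
        # true exactly for strings of the form 0^n 1^n with n >= 0
        n = len(x) // 2
        return len(x) % 2 == 0 and x == "0" * n + "1" * n

    def M_prime(x):
        if is_0n1n(x):
            return True          # always accept the baseline set
        return M(w)              # otherwise accept x iff M accepts w
    return M_prime
```

If $M$ accepts $w$, then `M_prime` accepts every string, so its language is $\Sigma^*$; otherwise its language is exactly $\{0^n 1^n \mid n \geq 0\}$.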

Something that can be confusing is the choice to accept strings of the form $0^n 1^n$. What do these strings have to do with $M$ or $w$? The answer is that they have nothing to do with $M$ or $w$! The choice of strings of the form $0^n1^n$ is totally arbitrary. We could have chosen to accept all palindromes or strings of the form $0^n1^n0^n$.

The point of this construction is to create a situation where the language accepted by the machine $M'$ is one of two possibilities: either $M'$ accepts everything (when $M$ accepts $w$) or it accepts only some baseline set of strings (when $M$ doesn't accept $w$). Here, we have chosen that baseline set to be non-regular. This means that if we had a way of checking whether the language of $M'$ is regular, then we would have a way to solve the acceptance problem.

At this point, we begin to see a possible pattern emerge: if we want to ask about whether a Turing machine accepts this or that set of strings, we can come up with an easy way to test for membership. Simply construct a Turing machine that collapses to one language or another. Perhaps we can generalize this idea...

Rice's Theorem

So far, it seems like every question we ask about Turing machines is undecidable. As it turns out, this is not just an educated hunch, but a theorem. First, we'll try to formalize the notion of a "property".

A property $P$ of a Turing machine $M$ is a semantic property if it depends only on the language recognized by $M$ and not on the syntactic structure of $M$. In other words, if $L(M_1) = L(M_2)$, then $M_1$ and $M_2$ have the same semantic properties.

A property $P$ can be expressed as a language consisting of exactly the encodings $\llbracket M \rrbracket$, where $M$ has property $P$. A semantic property $P$ is said to be non-trivial if there exists a TM $M_1$ such that $\llbracket M_1 \rrbracket \in P$ and a TM $M_2$ such that $\llbracket M_2 \rrbracket \not\in P$.

Then we have the following theorem.

All non-trivial semantic properties of Turing machines are undecidable.

In other words, for any given Turing machine $M$, we can't decide any properties about $L(M)$ except for properties that are true for either exactly all or exactly none of the languages recognized by Turing machines. The proof is surprisingly simple, since it follows the basic template that we've been using so far.

To show this, we let $P$ be a property (that is, a language of encodings of Turing machines that have the property) and assume that it is decidable. Let $R_P$ be a Turing machine that decides $P$. We will reduce TM membership to property testing.

Let $T_\emptyset$ be a Turing machine that rejects every string. So $L(T_\emptyset) = \emptyset$. Without loss of generality, we assume that $\llbracket T_\emptyset \rrbracket \not\in P$. Otherwise, we can just use $\overline P$ instead. Since $P$ is non-trivial, there exists a Turing machine $T$ with $\llbracket T \rrbracket \in P$.

We construct a Turing machine that decides $A_{\mathsf{TM}}$ by being able to distinguish between $T_\emptyset$ and $T$.

  1. On input $\llbracket M,w \rrbracket$, construct the TM $M_w$:
    1. On input $x$, simulate $M$ on $w$. If it rejects, reject. If it accepts, go to the next step.
    2. Simulate $T$ on $x$. If it accepts, accept.
  2. Use $R_P$ to determine whether $\llbracket M_w \rrbracket \in P$. If $R_P$ accepts, accept. If $R_P$ rejects, reject.

So $M_w$ simulates $T$ if $M$ accepts $w$. This gives us \[L(M_w) = \begin{cases} L(T) & \text{if $M$ accepts $w$}, \\ \emptyset & \text{otherwise}. \end{cases}\] Thus, $\llbracket M_w \rrbracket \in P$ if and only if $M$ accepts $w$, so this machine decides $A_{\mathsf{TM}}$, which is impossible.
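As before, we can sketch this construction in Python, modeling TMs as predicates (an illustrative assumption; `make_M_w` is a hypothetical name, and the call `M(w)` may loop forever).

```python
def make_M_w(M, w, T):
    """Build the machine M_w from the proof of Rice's theorem.
    M and T are modeled as Python predicates (an assumption)."""
    def M_w(x):
        # First simulate M on w; if M rejects (or loops forever),
        # M_w accepts nothing, so L(M_w) is empty.
        if not M(w):
            return False
        # If M accepted w, behave exactly like T, so L(M_w) = L(T).
        return T(x)
    return M_w
```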

Rice's theorem was proved by Rice in 1951 (the result appeared in a journal later in 1953). It is interesting to note that a similar result was discussed by Turing in his original 1936 paper about undecidability.

In essence, what Rice's theorem says is that if we want to know something about the result of a program, there's no silver bullet to figure this out just by reading the source code.

Decision problems on formal languages

We've seen that we can ask questions about Turing machines, like membership, emptiness, and equality. We can also ask these same questions about less powerful models, like DFAs and CFGs and get corresponding decision problems for them. Let's consider such problems.

While we'll see that a lot of problems are decidable, there are some very surprising examples of problems which are undecidable. Many decidability properties that we'll be discussing were shown by Rabin and Scott in 1959 and Bar-Hillel, Perles, and Shamir in 1961.

Here are the problems we're interested in:

  1. Membership: Given a language device $A$ and a word $w$, is $w \in L(A)$?
  2. Emptiness: Given a device $A$, is $L(A) = \emptyset$?
  3. Containment: Given two devices $A$ and $B$, is $L(A) \subseteq L(B)$?
  4. Equality: Given two devices $A$ and $B$, is $L(A) = L(B)$?
  5. Universality: Given a device $A$, is $L(A) = \Sigma^*$?

Here, "language device" just means some representation of the language. Remember that a language is a mathematical object that exists abstractly—it's a set of strings. In order to compute something about a language, we need a representation. Such a representation needs to be finite, so unless our language is finite, we can't just list all the strings in it.

Representations of languages are the objects that we've been working with all quarter: automata, regular expressions, grammars, and so on. So for every device (deterministic finite automata, nondeterministic finite automata, regular expressions, context-free grammars, pushdown automata, Turing machines, and so on), we can ask each of the above questions.

And as we've seen in other computer science classes, how we get an answer for each question will depend on the representation, even if the two representations are "equivalent". This is why the questions are framed in terms of representation rather than language class—the thing we're doing computation on is the representation.

Membership

The simplest problem we can ask about a language device and a word is whether the word belongs to the language. Again, this is the language membership problem: Given a device $A$ and a word $w$, is $w \in L(A)$?

We define the following language for the DFA acceptance problem: $$ A_{\mathsf{DFA}} = \{\llbracket B, w \rrbracket \mid \text{$B$ is a DFA that accepts $w$}\}.$$

$A_{\mathsf{DFA}}$ is a decidable language.

We can do the same thing we did for $A_{\mathsf{TM}}$: simulate the DFA. Recall that such a machine sets up three tapes:

  1. One tape containing the description of the DFA $\llbracket B \rrbracket$.
  2. One tape containing the input string $w$.
  3. One tape containing the current state.

In fact, this is even easier than simulating a TM, since a DFA makes a single left-to-right pass over its input: we only need to keep track of how far we've read on the second tape.
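The simulation can be sketched in Python, where the three tapes collapse into a few variables; the dict-based DFA encoding (keys `start`, `accept`, `delta`) is an assumption for illustration.

```python
def dfa_accepts(dfa, w):
    """Decide A_DFA by direct simulation.  The encoding of the DFA as
    a dict with keys 'start', 'accept', and 'delta' is an assumption."""
    state = dfa["start"]             # tape 3: the current state
    for symbol in w:                 # tape 2: scan w left to right
        # tape 1: look up the transition in the description of B
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]
```

For example, with a DFA that accepts strings containing an even number of 1s, `dfa_accepts` halts after exactly `len(w)` transitions.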

We can ask the same question for NFAs: $$ A_{\mathsf{NFA}} = \{\llbracket B, w \rrbracket \mid \text{$B$ is an NFA that accepts $w$}\}.$$

$A_{\mathsf{NFA}}$ is a decidable language.

There are a few ways to approach this. One is to simulate the NFA directly by keeping track of the set of states the NFA could currently be in; it's not hard to come up with a scheme to do this. Alternatively, we can convert the NFA into a DFA via the subset construction and run the Turing machine for $A_{\mathsf{DFA}}$ on the result.
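The direct-simulation approach can be sketched as follows: we track the set of states the NFA could be in, which amounts to running the subset construction on the fly. The dict-based encoding is again an assumption, and $\varepsilon$-transitions are omitted to keep the sketch short.

```python
def nfa_accepts(nfa, w):
    """Decide A_NFA by tracking every state the NFA could be in.
    Epsilon transitions are omitted for simplicity (an assumption);
    'delta' maps (state, symbol) pairs to sets of states."""
    current = {nfa["start"]}
    for symbol in w:
        # take every transition available from every current state
        current = {q
                   for s in current
                   for q in nfa["delta"].get((s, symbol), set())}
    # accept if any possible computation ends in an accepting state
    return bool(current & nfa["accept"])
```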

Now, we'll turn our attention to context-free languages. Again, from our Turing machine simulation, we can very easily get a Turing machine for \[A_{\mathsf{PDA}} = \{\llbracket P, w \rrbracket \mid \text{$P$ is a PDA, $w \in L(P)$}\}.\]

$A_{\mathsf{PDA}}$ is a decidable language.

What about grammars? Consider the following language. $$ A_{\mathsf{CFG}} = \{\llbracket G, w \rrbracket \mid \text{$G$ is a CFG that generates $w$}\}.$$

$A_{\mathsf{CFG}}$ is a decidable language.

We even gave an efficient algorithm for this problem (CYK), after converting the grammar to Chomsky normal form. So we can pretend that we have a CYK Turing machine.
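For reference, here is a compact Python sketch of CYK. It assumes the grammar is already in Chomsky normal form and encoded as a list of rules `(head, body)`, where `body` is either a terminal or a pair of variables; this encoding is an assumption for illustration.

```python
def cyk(grammar, start, w):
    """Decide A_CFG by the CYK algorithm for a CNF grammar, encoded as
    a list of (head, body) rules where body is a terminal string or a
    pair of variables (an illustrative encoding)."""
    n = len(w)
    if n == 0:
        # in CNF, only an explicit start -> epsilon rule yields ""
        return (start, "") in grammar
    # table[i][j] holds the variables that derive w[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for head, body in grammar:
            if body == w[i]:
                table[i][0].add(head)
    for length in range(2, n + 1):           # substring length
        for i in range(n - length + 1):      # starting position
            for split in range(1, length):   # where to split the substring
                for head, body in grammar:
                    if isinstance(body, tuple):
                        B, C = body
                        if (B in table[i][split - 1]
                                and C in table[i + split][length - split - 1]):
                            table[i][length - 1].add(head)
    return start in table[0][n - 1]
```

As a usage example, a CNF grammar for $\{0^n 1^n \mid n \geq 1\}$ is $S \to AB \mid AC$, $C \to SB$, $A \to 0$, $B \to 1$.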

Emptiness

Another natural question is whether your language device actually describes any strings at all. This is the emptiness problem: Given a device $A$, is $L(A) = \emptyset$?

We will consider the following language $$ E_{\mathsf{DFA}} = \{ \llbracket A \rrbracket \mid \text{$A$ is a DFA and $L(A) = \emptyset$}\}. $$

$E_{\mathsf{DFA}}$ is a decidable language.

To solve this problem, we simply treat it like a graph problem. A DFA accepts some word if and only if an accepting state is reachable from the initial state. Then we can use a similar idea to the graph connectedness algorithm from last week for this problem. The TM looks like this:

  1. Mark the initial state of $A$.
  2. Repeat until no new states get marked:
    Mark any state that has a transition coming into it from any state that is already marked.
  3. If no accepting state is marked, accept. Otherwise, reject.
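The marking procedure above can be sketched as a fixed-point computation (the dict-based DFA encoding is an assumption):

```python
def dfa_is_empty(dfa):
    """Decide E_DFA: mark every state reachable from the start state,
    and accept iff no accepting state got marked."""
    marked = {dfa["start"]}
    changed = True
    while changed:                   # repeat until no new states get marked
        changed = False
        for (state, _symbol), target in dfa["delta"].items():
            if state in marked and target not in marked:
                marked.add(target)
                changed = True
    # the language is empty iff no reachable state is accepting
    return not (marked & dfa["accept"])
```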

We can also show that the emptiness problem for CFGs is decidable, though the algorithm will look a bit different from the DFA case. First, here's the language $$E_{\mathsf{CFG}} = \{\llbracket G \rrbracket \mid \text{$G$ is a CFG and $L(G) = \emptyset$} \}.$$

$E_{\mathsf{CFG}}$ is a decidable language.

In a way, we can apply the same kind of idea as in the DFA case. Of course, a grammar isn't a graph, so it's not exactly the same. In essence, we want to check whether some string of terminals can be derived from the start variable. To do this, we determine, for each variable, whether it can generate a string of terminals. We begin by marking all the terminals, and then repeatedly mark any variable that has a rule whose right-hand side consists entirely of marked symbols. Once the algorithm finishes, we simply check whether the start variable is marked.

  1. On input $\llbracket G \rrbracket$ where $G$ is a CFG, mark all terminal symbols in $G$.
  2. Repeat until no new variables get marked:
    Mark any variable $A$ where $G$ has a rule $A \to U_1 U_2 \cdots U_k$ and each symbol $U_1, \dots, U_k$ has already been marked.
  3. If the start variable is not marked, accept; otherwise reject.
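The same marking idea in Python, with rules encoded as `(head, tuple-of-body-symbols)`, an illustrative encoding; note that an $\varepsilon$-rule has an empty body and so marks its head immediately.

```python
def cfg_is_empty(grammar, start, terminals):
    """Decide E_CFG: mark every symbol that can derive a string of
    terminals; the language is empty iff the start variable is never
    marked.  Rule encoding is an illustrative assumption."""
    marked = set(terminals)          # every terminal derives itself
    changed = True
    while changed:                   # repeat until no new variables get marked
        changed = False
        for head, body in grammar:
            # an epsilon rule has an empty body, so all(...) is True
            if head not in marked and all(sym in marked for sym in body):
                marked.add(head)
                changed = True
    return start not in marked
```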

Containment

Another common problem arises when we have two language devices and we want to compare them somehow. The most basic variant is asking whether one language is contained in the other. This is the containment problem: Given two devices $A$ and $B$, is $L(A) \subseteq L(B)$?

Consider the following language, $$ C_{\mathsf{DFA}} = \{ \llbracket A,B \rrbracket \mid \text{$A$ and $B$ are DFAs and $L(A) \subseteq L(B)$} \}.$$ Of course, this is another decidable property, so we have the following theorem.

$C_{\mathsf{DFA}}$ is a decidable language.

Again, we make use of the emptiness machine that we just constructed. First, let's consider what it means for $L(A)$ to be contained in $L(B)$. We want to check whether every word in $L(A)$ is also in $L(B)$. Equivalently, we can check whether there is any word in $L(A)$ that is not in $L(B)$. Since regular languages are closed under complement and intersection, we can construct a DFA $C$ such that $$ L(C) = L(A) \cap \overline{L(B)}$$ and check whether or not $L(C) = \emptyset$. Luckily, we've just shown how to check the emptiness of a DFA.

  1. On input $\llbracket A,B \rrbracket$, construct the DFA $C$ as described above.
  2. Run the TM from Theorem 24.5 on the input $\llbracket C \rrbracket$ to check if $L(C) = \emptyset$.
  3. If the machine accepts, accept. Otherwise, reject.
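A sketch of this check in Python. Rather than materializing the DFA $C$, it explores the product automaton of $A$ and $B$ on the fly and looks for a reachable state pair that witnesses a word in $L(A)$ but not $L(B)$; this is the same computation as building $C$ and testing emptiness. The dict-based encoding is an assumption.

```python
def dfa_contained_in(A, B, alphabet):
    """Decide C_DFA by exploring the product automaton of A and B.
    A reachable pair (p, q) with p accepting in A and q rejecting in B
    witnesses a word in L(A) that is not in L(B)."""
    start = (A["start"], B["start"])
    frontier, seen = [start], {start}
    while frontier:
        p, q = frontier.pop()
        if p in A["accept"] and q not in B["accept"]:
            return False             # found a word in L(A) \ L(B)
        for a in alphabet:
            nxt = (A["delta"][(p, a)], B["delta"][(q, a)])
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True
```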

Solving containment lets us solve other similar problems. For instance, we can solve the equality problem: Given two devices $A$ and $B$, is $L(A) = L(B)$?

The language $$\mathsf{EQ}_{\mathsf{DFA}} = \{ \llbracket A,B \rrbracket \mid \text{$A$ and $B$ are DFAs and $L(A) = L(B)$} \}.$$ is a decidable language.

Of course, we can make use of the machine that we constructed in the previous theorem. Recall that for two sets $S$ and $T$, $S = T$ if and only if $S \subseteq T$ and $T \subseteq S$. This fact combined with the containment machine we constructed gives us a rather simple algorithm for checking equality.

  1. On input $\llbracket A,B \rrbracket$, run the TM from the previous theorem on the input $\llbracket A,B \rrbracket$ to check if $L(A) \subseteq L(B)$ and run it again on $\llbracket B,A \rrbracket$ to check if $L(B) \subseteq L(A)$.
  2. If both machines accept, accept. Otherwise, reject.
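A sketch of the equality check, running the containment test in both directions exactly as described; the containment test is inlined as a helper, and the dict-based DFA encoding is again an assumption.

```python
def dfa_equal(A, B, alphabet):
    """Decide EQ_DFA by checking containment in both directions."""
    def contained(X, Y):
        # search the product automaton for a pair of states witnessing
        # a word that X accepts but Y rejects
        start = (X["start"], Y["start"])
        frontier, seen = [start], {start}
        while frontier:
            p, q = frontier.pop()
            if p in X["accept"] and q not in Y["accept"]:
                return False
            for a in alphabet:
                nxt = (X["delta"][(p, a)], Y["delta"][(q, a)])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True
    # L(A) = L(B) iff each language contains the other
    return contained(A, B) and contained(B, A)
```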

Then this allows us to solve the universality problem: Given a device $A$, does $L(A) = \Sigma^*$?

The language \[U_{\mathsf{DFA}} = \{\llbracket A \rrbracket \mid \text{$A$ is a DFA and $L(A) = \Sigma^*$} \} \] is a decidable language.

Again, this is fairly straightforward based on the results we've already shown. One approach is to construct a simple DFA that recognizes $\Sigma^*$ and test whether $\Sigma^* \subseteq L(A)$. Another approach is to check whether $\overline{L(A)} = \emptyset$.
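The second approach can be sketched by reusing the reachability idea: complementing a DFA just flips accepting and rejecting states, so $L(A) = \Sigma^*$ if and only if every reachable state is accepting. This assumes the transition function is total, as it is for any DFA.

```python
def dfa_is_universal(dfa):
    """Decide U_DFA: L(A) = Sigma* iff the complement of L(A) is empty,
    i.e. iff every state reachable from the start state is accepting.
    Assumes a total transition function, as for any DFA."""
    marked = {dfa["start"]}
    changed = True
    while changed:                   # same marking loop as the emptiness test
        changed = False
        for (state, _symbol), target in dfa["delta"].items():
            if state in marked and target not in marked:
                marked.add(target)
                changed = True
    # a reachable rejecting state witnesses a word outside L(A)
    return all(q in dfa["accept"] for q in marked)
```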

But what about context-free languages? Suppose we have two context-free grammars $G_1$ and $G_2$. Then $L(G_1) \subseteq L(G_2)$ if and only if $L(G_1) \cap \overline{L(G_2)} = \emptyset$, just as above. But there's a problem with this: context-free languages aren't closed under intersection or complement, so we're not guaranteed to end up with a context-free language after this process.

Maybe we can try something more clever? Unfortunately, even if we tried harder, we wouldn't be able to solve this problem: it turns out to be our first encounter with an undecidable problem about context-free grammars. And equality and universality for CFGs will turn out to be undecidable as well.

Well, that's the claim at least, but can we prove it?