When we talk about Turing machines, it is helpful to describe the situation a machine is in at any point during its computation. This "state" of the machine is really a combination of three pieces of information, which together are referred to as a configuration. More specifically, a configuration is defined by the current state, the tape head position, and the contents of the tape.
For a state $q$ and two strings $u, v \in \Gamma^*$, we write $uqv$ for the configuration where the current state is $q$, the current tape contents is $uv$ and the tape head is located on the first symbol of $v$.
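To make the notation concrete, here is a small Python sketch (the names `show` and `step_right` and the hypothetical machine are illustrative, not from the notes) that represents a configuration $uqv$ as a triple and shows how one rightward move updates it:

```python
# Represent a configuration uqv as a triple (u, q, v): u is the tape to
# the left of the head, q the current state, and the head reads v[0].

def show(u, q, v):
    """Render a configuration in the uqv notation."""
    return f"{u} {q} {v}"

print(show("ab", "q3", "ab"))  # ab q3 ab  (head on the third tape cell)

# One move of a (hypothetical) machine that writes 'X' and moves right;
# '_' stands for the blank symbol when the head walks off the string.
def step_right(u, q_new, v, write):
    return (u + write, q_new, v[1:] if len(v) > 1 else "_")

print(show(*step_right("ab", "q4", "ab", "X")))  # abX q4 b
```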
First, we'll define a Turing machine for the language $L = \{a^n b^n \mid n > 0\}$. The operation of the machine can be described as follows:
A formal definition of this Turing machine was presented in class.
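The step-by-step description is not reproduced in these notes, but a common strategy for $L$ is to repeatedly mark the leftmost unmarked $a$, scan right to mark the leftmost unmarked $b$, and accept when both run out together. A Python sketch of that strategy, with illustrative function names and marking symbols:

```python
def decides_anbn(w):
    """Sketch: mark matching a's (with X) and b's (with Y) in turn."""
    tape = list(w)
    i = 0
    while True:
        # find the leftmost unmarked 'a'
        while i < len(tape) and tape[i] == 'X':
            i += 1
        if i == len(tape) or tape[i] != 'a':
            break  # no more a's to match
        tape[i] = 'X'
        # scan right, past a's and marked b's, for the leftmost unmarked 'b'
        j = i + 1
        while j < len(tape) and tape[j] in ('a', 'Y'):
            j += 1
        if j == len(tape) or tape[j] != 'b':
            return False  # an 'a' without a matching 'b'
        tape[j] = 'Y'
    # accept iff every symbol is marked and at least one pair was matched
    return len(tape) > 0 and all(c in ('X', 'Y') for c in tape)
```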
Next, we have the language of palindromes $L_{pal} = \{ x a x^r \mid x \in \Sigma^* , a \in \Sigma \cup \{\varepsilon\} \}$. We can describe the machine as follows:
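One standard strategy is to zig-zag across the tape, comparing and crossing off the outermost pair of unmarked symbols until at most one symbol remains. A Python sketch of this idea (the name `decides_pal` is illustrative):

```python
def decides_pal(w):
    """Zig-zag sketch: compare and cross off the outermost unmarked
    symbols; accept when at most one unmarked symbol remains."""
    tape = list(w)
    lo, hi = 0, len(tape) - 1
    while lo < hi:
        if tape[lo] != tape[hi]:
            return False  # mismatched pair: reject
        tape[lo] = tape[hi] = 'X'  # cross off the matched pair
        lo += 1
        hi -= 1
    return True  # zero or one symbols left unmarked
```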
Next, we'll move on to a language that, unlike our previous examples, cannot be recognized by a PDA. This example can be found as Example 3.12 in the textbook. The language that we are considering is $$L_e = \{\# x_1 \# x_2 \# \cdots \# x_\ell \mid x_i \in \{a,b\}^*, x_i \neq x_j \text{ for all } 1 \leq i < j \leq \ell \}.$$ A string in this language is a list of strings separated by $\#$ in which no string is repeated. The machine that decides this language works by comparing each word $x_i$ with every word $x_j$ for $j > i$. A description of the operation of the machine is as follows:
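The pairwise comparison can be sketched in Python as shown below; the parsing details, such as how a lone $\#$ is treated, are assumptions here, and the name `decides_Le` is illustrative:

```python
def decides_Le(w):
    """Sketch of a decider for L_e."""
    if not w.startswith('#'):
        return False  # input must begin with a '#'
    words = w[1:].split('#')
    # every word must be over the alphabet {a, b}
    if any(set(x) - {'a', 'b'} for x in words):
        return False
    # compare each x_i with every x_j for j > i, as the machine does
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            if words[i] == words[j]:
                return False
    return True
```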
We still haven't explicitly linked the notion of computation as we understand it today with the Turing machines that we have been studying so far. All of the problems we've studied until now appear to be formal language problems. That is, we've been constructing Turing machines in order to decide various languages. However, we can look at these languages from the computational point of view. For instance, we can consider a Turing machine that decides the language $L_{pal}$ to be an algorithm for determining whether a given word is a palindrome.
In fact, it turns out we can view many computational problems as language inclusion problems. We say that a problem that results in an answer of yes or no is a decision problem. A decision problem $P$ has an associated language $L_P$. Then an input $w$ corresponds to a yes for problem $P$ if and only if $w \in L_P$.
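As a small illustration of this correspondence, the decision problem "is $w$ a palindrome?" can be answered by testing membership in $L_{pal}$ (the function names below are illustrative):

```python
# The decision problem "is w a palindrome?" corresponds to the language
# L_pal: the answer for input w is yes if and only if w is in L_pal.

def in_L_pal(w):
    # membership test for L_pal (a palindrome reads the same reversed)
    return w == w[::-1]

def answer(w):
    # answer the decision problem via language membership
    return "yes" if in_L_pal(w) else "no"

print(answer("abba"))  # yes
print(answer("ab"))    # no
```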
Implicit in the correspondence between problems and languages is the assumption that all of our inputs are strings. We can always encode objects into string form. Suppose we have an object $O$. Then the string representation or encoding of $O$ is denoted by $\langle O \rangle$. If we have several objects $O_1, O_2, \dots, O_k$, then we denote their encoding into a single string by $\langle O_1, O_2, \dots, O_k \rangle$. We assume that our objects can be encoded in some reasonable way that a Turing machine can decode.
Suppose we wanted to know whether a given undirected graph is connected. An undirected graph is connected if every node can be reached from every other node. In other words, we want a Turing machine that decides the language $$ L = \{ \langle G \rangle \mid \text{$G$ is a connected undirected graph} \}. $$
Now, we need to consider a proper encoding for graphs. Recall that a graph is a pair $G = (V,E)$, where $V$ is a set of nodes or vertices and $E$ is a set of edges, which are pairs of vertices. Then we can encode a graph $G = (V,E)$ as follows: $$ \langle G \rangle = (v_1,v_2,\dots,v_n)((v_{e_1,1},v_{e_1,2}),(v_{e_2,1},v_{e_2,2}),\dots,(v_{e_m,1},v_{e_m,2})), $$ where each $v_i \in V$ is a vertex. This particular encoding is a list of all of the vertices in the graph followed by a list of all of the edges. Now, it is not too difficult to see how a Turing machine can decide $L$.
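A Python sketch of both pieces: producing the encoding above (assuming vertices are written as their labels) and checking connectivity by marking vertices, the same marking idea a Turing machine would use. Function names are illustrative:

```python
from collections import deque

def encode(vertices, edges):
    """Encode G = (V, E) in the string form given above."""
    vs = "(" + ",".join(vertices) + ")"
    es = "(" + ",".join(f"({u},{v})" for u, v in edges) + ")"
    return vs + es

def is_connected(vertices, edges):
    """Mark the first vertex, then repeatedly mark all unmarked
    neighbours of marked vertices; accept iff everything gets marked."""
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    marked = {vertices[0]}
    queue = deque([vertices[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u] - marked:
            marked.add(v)
            queue.append(v)
    return marked == set(vertices)
```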