Computability and Turing Machine

The World within the World, 1988

Way back in 1900, in the age of innocence before the revolution in mathematics and logic brought about by Kurt Gödel, David Hilbert delivered a famous address to the International Congress of Mathematicians when they met in Paris, in which he posed what he considered to be the twenty-three most challenging unsolved problems of mathematics. Several of the conundrums that he set during that address have influenced the development of whole areas of mathematics. Many of Hilbert’s problems remain unsolved today. Hilbert’s last problem was envisaged as a step towards the completion of the formalist programme for mathematics. It challenged mathematicians to find a systematic method to determine whether or not any mathematical statement is true or false. At that time Hilbert assumed that such a procedure must exist. It was only a question of finding it. One can see that if such a method were found, then it would also serve to define the scope and content of mathematics for the constructivists. However, Hilbert’s view that such an automatic method of decision does exist was not universally shared. Some years later, G. H. Hardy expressed the view that:

There is of course no such theorem, and this is very fortunate, since if there were we should have a mechanical set of rules for the solution of all mathematical problems, and our activities as mathematicians would come to an end.

However, Hardy did not seem to base his scepticism upon a belief that such an all-deciding theorem was unattainable in principle, but merely upon the belief that it would be beyond the wit of mortal man to find and use it. Hence, Gödel’s discovery that there can exist no method for deciding the truth of all statements was a great shock to mathematicians. Nevertheless, despite this irreducible undecidability at the heart of mathematics, there still remained a remnant of Hilbert’s original dream which might be salvaged. Even though some statements made in the language of an axiomatic system must be undecidable, there might still exist a systematic method for finding all the decidable statements and distinguishing the true from the false ones. If so, one might be able to determine the relative degree of incompleteness arising from different sets of axioms.

It was soon demonstrated by Alonzo Church and Emil Post at Princeton, and by Alan Turing at Cambridge, that even these more moderate goals are unattainable in principle. Church found an abstract method for determining whether a particular mathematical operation can be computed for every one of the numbers to which it is applied. He was then able to show that it is possible to construct mathematical formulae which cannot be evaluated or determined true or false in a finite number of steps. Any computer would keep running forever in a vain attempt to complete the calculation. For these examples there exists no method for determining whether or not they can be proved. There exist unsolvable problems.

Working alone in Cambridge, Alan Turing established the same conclusion as Church, but by a method that was to have a host of wider implications for the future invention and development of computers. Unlike Church, Turing developed his disproof of Hilbert’s conjecture around the conception of a hypothetical machine which would decide the truth of statements by a set of well-defined sequential operations. Subsequently, these idealized devices have become known as Turing machines. During World War II Turing made important contributions to Allied cryptography by pioneering the construction of real mechanical devices which would try vast numbers of combinatorial alternatives in the search for a correct decoding of intercepted German communications.

A Turing machine is the essence of any computer. It consists of a memory tape of indefinite length, and a processor which manifests the current state of the machine. That current state is determined by a combination of the previous state and the last instruction which told it how to change. A real computer will possess all manner of fancy accoutrements—monitors, graphics, software, keyboards, and so forth—but these play no role in the logical capability of the machine. They are aids to its use. No real computer possesses greater problem-solving capability than that of Turing’s idealized machine.
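The ingredients just described can be made concrete in a minimal modern sketch: a tape of indefinite length, a head, a current state, and a table giving the next state from the previous state and the symbol read. All the names below are illustrative, not drawn from the text.

```python
# A minimal sketch of a Turing machine: a tape, a head, a current state,
# and a transition table mapping (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(table, tape, state="start", steps=1000):
    tape = dict(enumerate(tape))   # tape of indefinite length; blank cells are "_"
    head = 0
    for _ in range(steps):         # a real machine may, of course, never halt
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: append a '1' to a unary string, i.e. compute N + 1 in unary.
table = {
    ("start", "1"): ("1", "R", "start"),  # scan rightwards over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1 at the end, then halt
}
print(run_turing_machine(table, "111"))   # -> "1111"
```

The fancy accoutrements of a real computer are absent, yet nothing of logical substance is missing: any program reduces to such a table of state transitions.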

All a Turing machine can do is take a list of natural numbers and transform it into another list of natural numbers. The operation of the machine just matches a number in the first list with one in the second list. If the operation of interest is multiplication by two, then all the numbers in the second list will be double those in the first. Unfortunately, there exist operations which are infinitely more complicated because there exist infinite sets which are infinitely bigger than the infinite set of natural numbers (for example all those numbers, called irrational numbers, like ‘pi’ and the square root of two, which cannot be expressed exactly as vulgar fractions—one natural number divided by another). They cannot be systematically paired with the natural numbers. Such infinite sets are called uncountably infinite.
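The list-matching picture in the paragraph above, with doubling as the operation of interest, amounts to nothing more than this:

```python
# The machine's operation pairs each number in the first list
# with its double in the second list.
first_list = [1, 2, 3, 4, 5]
second_list = [2 * n for n in first_list]
print(second_list)  # [2, 4, 6, 8, 10]
```

The uncountable sets described next are precisely those whose members cannot all be reached by any such pairing with the natural numbers.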

Turing concurred with Church by showing that there exist mathematical problems which cannot be decided in a finite number of logical computations by one of his idealized machines. Such unperformable operations are called non-computable functions, and their existence is a consequence of the existence of uncountably infinite sets. As an example, suppose the function G (N) is applied to the natural numbers N = 1, 2, 3, … and chosen either to be equal to the value of the Nth computable function evaluated at N, plus one, or to zero if the Nth computer program fails to stop with an answer in a finite number of steps after N is input. We see that G cannot be a computable function, because if the Nth input program did compute it then we would have the impossible result that

G(N) = G(N) + 1.
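The diagonal argument can be sketched with a toy, finite stand-in for the enumeration of computable functions; the table `nth_function` below is an illustrative assumption, since no real program can enumerate all computable functions.

```python
# A toy stand-in for "the Nth computable function", indexed by N.
# In reality this enumeration cannot itself be produced by any program.
nth_function = {1: lambda n: n + 1, 2: lambda n: 2 * n, 3: lambda n: n * n}

def G(N):
    """G(N) = (value of the Nth function at N) + 1, as in the text."""
    return nth_function[N](N) + 1

# If G itself occupied, say, position 2 in the enumeration, then evaluating
# G(2) would demand G(2) == G(2) + 1 -- a contradiction. So G, although
# perfectly well defined, can appear nowhere in the list: it is non-computable.
print(G(1), G(2), G(3))  # 3, 5, 10
```

The trick is the same diagonal manoeuvre that shows the reals to be uncountable: G is built to differ from the Nth function precisely at the argument N.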

Another family of non-computable examples is the so-called “Busy Beaver” functions B (N), defined as the largest number produced in the output of any program less than N bits in length. The Busy Beaver function grows in size faster than any function that could possibly be computed. Hence, it is not computable. In practice, it is the hardest mathematical problems that are most easily shown to be unsolvable, because they can most easily defeat the computer.

Although it cannot compute everything that is fed to it, most computer scientists believe that a Turing machine is capable of computing anything that can be solved in a finite time by any physical sequence of realizable operations: that is, a problem is solvable if it is solvable by a Turing machine. This is called the Church–Turing hypothesis. Until recently this hypothesis was regarded as telling us something about mathematics, to the effect that all the possible ways one could dream up of computing things are at root equivalent, because they can all be reduced to the capability of a Turing machine. Yet, it is clearly telling us something deeper about the structure of the physical world, and the fact that we find mathematics so beautifully adapted to its workings.

David Deutsch, an Oxford physicist, has recently argued that the defining feature of the computable functions which the Church–Turing hypothesis maintains can be computed by a Turing machine is that they can be realized in Nature. If we had two ‘black boxes’, one containing some real physical process and the other an idealized Turing machine, then identical outputs are possible from the two boxes for the same inputs. We could not tell from the result alone in which box it had been computed. This idea draws the link between mathematics and the laws of Nature very tight. The laws of Nature happen to allow us to build exact physical models which execute the arithmetical operations of addition, subtraction, division, and multiplication. There is no known reason why Nature should simulate the rules of arithmetic; no reason why the laws of Nature should be constrained by, or linked to, the computational ability of mathematical algorithms. It is fortunate that Nature does run in this simple fashion. Were it not so, then no physically realizable electronic device of silicon and metal could compute the operations of addition, subtraction, division, and multiplication.

These simple arithmetical operations would be non-computable functions which we would be unable to perform or employ in constructive proofs. All we could achieve by our mathematics would be non-constructive existence proofs that told us that they existed, in much the same manner that we can deduce the existence of undecidable propositions. For example, if our computers could do no more than carry out the geometrical constructions which we can achieve on paper by the use of a straight edge and a pair of compasses (that is, drawing straight lines and arcs of circles), then we would be unable to trisect an angle computationally, even though the concept of a trisected angle would be familiar to us and the existence of some line which did trisect an angle might be provable non-constructively. The other side of this apparent match between Nature and arithmetical computation is that it is unexpected and unproven, except by experience, and so we should not be surprised, although we certainly would be, if there were found a physical process in Nature which simulated the computation of a function that could not be computed by a Turing machine.

This correspondence between physical realizability and computability seems to require something like the quantum picture of reality to be true. Certainly, the classical world of Newtonian physics does not possess the Church–Turing property. Energy does not come in discrete countable quanta in the non-quantum view of the world. The continuum of states that exist for every classical physical system prevents the existence of a one-to-one correspondence between computability by a countably infinite sequence of operations and the laws of Nature. However, Deutsch has argued that it may be that all finite systems in Nature can be simulated by a “quantum computer”. The prescription for such a hypothetical device can be drawn up in general terms, and it appears to have the potential to carry out computations faster than any Turing machine, although it cannot compute any function that cannot be computed by a Turing machine. Most striking of all, quantum computation appears to require an objective quantum reality to exist. This is only possible in the ‘Many Worlds’ interpretation of the quantum theory, and offers the hope that there may one day exist an experimental resolution of the problem of quantum ontology.

The World within the World (Oxford and New York: Oxford University Press, 1988), pp. 262–265.