**Links to**: [[Church-Turing]], [[Lambda calculus]], [[Recursivity]], [[Alonzo Church]], [[Computer]], [[Computation]], [[Computability]], [[Mechanism]], [[Cellular automata]], [[Architecture]], [[Undecidability]], [[Entscheidungsproblem]], [[Decision]], [[The being of “mere” machines and “mere” propositions]], [[Iteration]], [[Translation]], [[Transduction]], [[Machine]].

In order to show that, for example, computational irreducibility is something we can identify and refer to, we need a computer. But what does that mean? Just as we need words to refer to things, adding an abstract layer to something very fuzzy and context-dependent, we need formal systems to add a layer of abstraction that lets us triangulate, or chunk, the elements we are interested in parsing. We need to formalize something about reality in order to reveal something about non-formal reality. This is the dialectic at work in an evolutionary sequence: having pebbles, having tally-counting practices, then formalizing those into symbols in order to count more pebbles and other objects, then moving the symbols onto paper and other surfaces, abstracting further, having groups of people called _computers_ do this stacked-up math, and then, almost magically, developing an architecture that imitates the whole process by cleaning up, coarse-graining, and chunking it into a mechanism. This is a Turing machine. Is it the formalization of _recursion_ (or [[Self-reference]])? It is certainly one of the first places where we can say, with certainty, that you can feed a program to a program, move it elsewhere, and it will behave in exactly the same way (mind bit flipping, bit rot, etc.). What about water and dust, and other things like this? These are extremely singular, individuated computers. What is the use of calling them computers? That we can understand them in the context of a line of developments.
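The point about feeding a program to a program and getting identical behavior can be sketched concretely: a Turing machine's "program" is just a transition table, i.e. data that can be copied, stored, and re-run deterministically. A minimal simulator (the encoding, the function names, and the bit-flipping example machine are all mine, for illustration only):

```python
# A minimal Turing machine simulator. The "program" is the transition table,
# a plain data structure, which is what makes it portable and re-runnable.
# The example machine flips every bit on the tape and halts at the first blank.

def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """table maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {-1, 0, +1}. Returns the final tape as a string."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).rstrip(blank)

flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm(flip_bits, "0110"))  # prints 1001
```

Because `flip_bits` is just a dictionary, it can be serialized, sent elsewhere, and re-run with the same result every time, which is the determinism the note gestures at (barring bit flips, bit rot, etc.).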
To say that everything computes is boring; to say that to compute is to chunk, to parse, to coarse-grain the aspects of reality we are interested in: that is a stack of differences that creates a novel, interesting notion of computation. If you don’t have the formal system, you cannot access the meta-level: you need to see the computation in the schematic sense, coarse, as a diagram, to see the whole thing anew.

%%
Look at Christopher Moore teaching course #Zf
Self-reference is the origin of recursion
LLMs are token-producing mechanisms, not Turing machines
Look at all machines, like also mills and tubes etc.
Sand
%%

>Turing introduced his abstract Turing machines in a famous article entitled “On Computable Numbers, with an Application to the _Entscheidungsproblem_” (published in 1936). Turing referred to his abstract machines simply as “computing machines” – the American logician Alonzo Church dubbed them “Turing machines” (Church 1937: 43). “On Computable Numbers” pioneered the theory of computation and is regarded as the founding publication of the modern science of computing. In addition, Turing charted areas of mathematics lying beyond the reach of the UTM. He showed that not all precisely-stated mathematical problems can be solved by a Turing machine. One of them is the _Entscheidungsproblem_ – “decision problem”.
>
>B. Jack Copeland, p. 4 in “Computation”. In: Floridi, Luciano, ed. _The Blackwell guide to the philosophy of computing and information_. John Wiley & Sons, 2008.

>**Misunderstandings of the Church–Turing Thesis: The Limits of Machines**
>
>A myth has arisen concerning Turing’s work, namely that he gave a treatment of the limits of mechanism, and established a fundamental result to the effect that the UTM can simulate the behavior of _any_ machine. The myth has passed into the philosophy of mind, theoretical psychology, cognitive science, Artificial Intelligence, and Artificial Life, generally to pernicious effect.
>For example, the _Oxford Companion to the Mind_ states: “Turing showed that his very simple machine . . . can specify the steps required for the solution of any problem that can be solved by instructions, explicitly stated rules, or procedures” (Gregory 1987: 784). Dennett maintains that “Turing had proven – and this is probably his greatest contribution – that his Universal Turing machine can compute any function that any computer, with any architecture, can compute” (1991: 215); also that every “task for which there is a clear recipe composed of simple steps can be performed by a very simple computer, a universal Turing machine, the universal recipe-follower” (1978: xviii). Paul and Patricia Churchland assert that Turing’s “results entail something remarkable, namely that a standard digital computer, given only the right program, a large enough memory and sufficient time, can compute any rule-governed input–output function. That is, it can display any systematic pattern of responses to the environment whatsoever” (1990: 26). Even Turing’s biographer, Hodges, has endorsed the myth:
>
>>Alan had . . . discovered something almost . . . miraculous, the idea of a universal machine that could take over the work of any machine. (Hodges 1992: 109)
>
>Turing did not show that his machines can solve any problem that can be solved “by instructions, explicitly stated rules, or procedures,” and nor did he prove that the UTM “can compute any function that any computer, with any architecture, can compute” or perform any “task for which there is a clear recipe composed of simple steps.” As previously explained, what he proved is that the UTM can carry out any task that any _Turing machine_ can carry out. Each of the claims just quoted says considerably more than this.
>
>If what the Churchlands assert were true, then the view that psychology must be capable of being expressed in standard computational terms would be secure (as would a number of other controversial claims).
>But Turing had no result entailing that “a standard digital computer . . . can compute _any_ rule-governed input–output function.” What he did have was a result entailing the exact opposite. The theorem that no Turing machine can decide the predicate calculus entails that there are rule-governed input–output functions that no Turing machine is able to compute – for example, the function whose output is 1 whenever the input is a statement that is provable in the predicate calculus, and is 0 for all other inputs. There are certainly possible patterns of responses to the environment, perfectly systematic patterns, that no Turing machine can display. One is the pattern of responses just described. The halting function is a mathematical characterization of another such pattern.
>
>B. Jack Copeland, pp. 10–11 in “Computation”. In: Floridi, Luciano, ed. _The Blackwell guide to the philosophy of computing and information_. John Wiley & Sons, 2008.
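The halting function Copeland mentions is the classic case of a perfectly well-defined, rule-governed function that no machine computes. The standard diagonal argument can be half-demonstrated in running code: given any total candidate decider, one can build a program the candidate misjudges. This is a sketch of the textbook argument, not anyone's quoted construction; all names are mine.

```python
# Diagonalization sketch: no total function `halts` correctly decides halting
# for every program. Given any candidate decider, construct a program
# `trouble` that does the opposite of whatever the candidate predicts for it.

def make_counterexample(halts):
    def trouble():
        if halts(trouble):
            while True:   # the candidate said we halt, so loop forever
                pass
        # the candidate said we loop, so halt immediately
    return trouble

def always_no(prog):
    """A (deliberately bad) candidate that answers 'never halts' for everything."""
    return False

t = make_counterexample(always_no)
t()                   # runs and returns at once, so `t` halts...
print(always_no(t))   # prints False: the candidate claimed it does not
```

Only the "candidate says it loops" branch can be exhibited by actually running the code; in the symmetric case the counterexample loops forever, and the candidate is wrong in the other direction. Either way, every total candidate fails on its own diagonal program.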