**Links to**: [[Stephen Wolfram]], [[Cellular automata]], [[Assembly and assemblage]], [[Solomonoff induction]], [[Process]], [[Computation]], [[10 Bias, or Falling into Place]], [[Entropy]], [[12 Negintelligibility]], [[Structure]], [[Noise]], [[Shannon]], [[Weaver]], [[Cybernetics]], [[Statistics]], [[John von Neumann]], [[Bit]], [[Binary]], [[Combinatorics]], [[Simondon]], [[Individuation]], [[Trace]], [[Legibility]], [[Gregory Bateson]], [[Difference]], [[Foundation]], [[Complexity]], [[Compression]], [[Gödel]], [[Kolmogorov complexity]], [[Map-territory]], [[Observer's effect]], [[Turing machine]] inside [[Turing machine]].

Together with Solomonoff induction, Occam's parsimony, Kolmogorov complexity, and other simplifying grips on information-processing, this idea sits at the core of any consideration of _pattern_ and its possible compression (i.e., of the observation of _anything_ and its possible predictability). Stephen Wolfram proposed it, building on insights from many years of research on cellular automata. Its consequence is that some things cannot be deduced from first principles: a veritable blow to any attempt at a principle of sufficient reason.

When we witness change and know that change can be formalized, i.e., that change is information as processed by _something_ ([[Computation]]), the observation is enough to give us the _sense_ that it can be understood to be _computing_, and is thus bound to rules. What computational irreducibility points out, however, is that for plenty of the processes we observe, we cannot know what will happen **x** steps down the line. In the domain of cellular automata, we observe this effect in having to run the program in order to see what happens, how it unfolds. This falls in line with the three-body problem, chaos theory, problems of priors and induction, and the demons of irreversibility in thermodynamic processes: we observe a certain organization of stuff (e.g., gas molecules dissipating), but we cannot _retrieve_ an earlier organization once the system has dissipated; it is intractable.

In our interest to compress processes, we have plenty of methods for reducing many computations to rules that guarantee their repetition. But the most interesting computations (e.g., the things we call living) seem to be open-ended: computationally irreducible. In some (but not all) cellular automata, the computation cannot be sped up by compressing rules; this is why it is _irreducible_. The principle of irreducibility says that the only way to determine the answer to a computationally irreducible question is to **perform, or simulate, the computation** (see the sketch below). Simple behavior is always reducible; complex behavior often is not.
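A minimal sketch of this "run it to see" situation, in Python (the name `run_eca` and the defaults are mine; the 8-bit rule encoding is Wolfram's standard one). Nothing about step **t** of rule 30 is known to be computable much faster than by producing all the steps before it:

```python
def run_eca(rule, steps, width=None):
    """Evolve an elementary cellular automaton from a single black cell.
    `rule` is Wolfram's 8-bit encoding: bit (left*4 + centre*2 + right)
    of `rule` is the next state of the centre cell."""
    width = width or (2 * steps + 3)   # wide enough that the edges stay quiet
    row = [0] * width
    row[width // 2] = 1                # a single black cell in the middle
    history = [row]
    for _ in range(steps):
        row = [(rule >> (row[i - 1] * 4 + row[i] * 2 + row[(i + 1) % width])) & 1
               for i in range(width)]
        history.append(row)
    return history

# The only way to learn row 12 is to compute rows 1 through 11 first.
for row in run_eca(30, 12):
    print("".join("#" if c else "." for c in row))
```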
%%
Continue investigating this by: looking at how Solomonoff induction, which is Bayesian and incomputable, connects to the no-free-lunch theorems (no total generalization possible, and questions of equivalence): https://en.wikipedia.org/wiki/No_free_lunch_theorem. Wolpert (Santa Fe Institute) wrote a 2023 paper delineating some additional considerations which circumvent incomputability through what they call "meta-induction": "The Implications of the No-Free-Lunch Theorems for Meta-induction", https://arxiv.org/abs/2103.11956

Notes from I don't remember where:

According to Wolfram (2002, p. 741), if the behavior of a system is obviously simple (say, either repetitive or nested), then it will always be computationally reducible. But it follows from the principle of computational equivalence that in practically all other cases it will be computationally irreducible. Here, "practically all" refers to cases that arise naturally or from a simple rule system, as opposed to cases that are constructed artificially, such as the examples given by Wolfram (2002, p. 747).

Israeli and Goldenfeld (2004) have shown that some computationally irreducible elementary cellular automata have properties that are predictable, and so these properties are computationally reducible. In particular, these cellular automata can emulate reducible cellular automata by coarse-graining (a brute-force sketch of this commutation test follows the note block). One of them is rule 110, which is a universal cellular automaton.

However, as noted by Wolfram (2002, p. 745), "when the underlying rules are simple there is often still some superficial computational reducibility.... For example, in the rule 30 pattern on the right one can tell whether a cell at a given position has any chance of not being white just by doing a short computation that tests whether that position lies outside the center triangular region of the pattern. And in a class 4 cellular automaton such as rule 110 one can readily shortcut the process of evolution at least for a limited number of steps in places where there happen to be only a few well-separated localized structures present." The triangle test is sketched below as well.
%%
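On the Israeli and Goldenfeld result noted above: their move is to find a projection under which an irreducible automaton emulates a reducible one. Below is a minimal brute-force sketch of the commutation test, under strong simplifying assumptions of mine: supercells of 2 cells, 2 fine time steps per coarse step, and rule 146 merely as an example input (their constructions can require larger blocks or more steps, so an empty result here contradicts nothing):

```python
from itertools import product

def step(cells, rule):
    """One step of an elementary CA on a finite segment; boundary cells are
    dropped because their neighbourhoods are unknown."""
    return [(rule >> (cells[i - 1] * 4 + cells[i] * 2 + cells[i + 1])) & 1
            for i in range(1, len(cells) - 1)]

def find_coarse_grainings(fine_rule):
    """Search every projection P: {0,1}^2 -> {0,1} and every coarse rule g for
    pairs where coarse-graining commutes with evolution: projecting the centre
    block after 2 fine steps equals one coarse step on the projected blocks."""
    found = []
    for proj in product((0, 1), repeat=4):          # P indexed by block value
        if len(set(proj)) < 2:
            continue                                # skip constant projections
        for g in range(256):
            ok = True
            for cells in product((0, 1), repeat=6): # three adjacent 2-blocks
                # two fine steps shrink the 6-cell segment to the centre block
                fine = step(step(list(cells), fine_rule), fine_rule)
                lhs = proj[fine[0] * 2 + fine[1]]
                blocks = [proj[cells[2 * i] * 2 + cells[2 * i + 1]] for i in range(3)]
                rhs = (g >> (blocks[0] * 4 + blocks[1] * 2 + blocks[2])) & 1
                if lhs != rhs:
                    ok = False
                    break
            if ok:
                found.append((proj, g))
    return found

pairs = find_coarse_grainings(146)
if pairs:
    for proj, g in pairs:
        print("projection", proj, "-> coarse rule", g)
else:
    print("no commuting coarse-graining at block size 2 / 2 steps")
```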
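Wolfram's rule 30 triangle test, the "superficial computational reducibility" in the quote, is a one-line computation (the function name is mine): rule 30 maps neighbourhood 000 to 0 and has radius 1, so from a single seed nothing can travel faster than one cell per step.

```python
def certainly_white(position, step):
    """Rule 30 grown from one black cell at position 0: a cell lying outside
    the centre triangle |position| <= step can never have been reached, so it
    is white, with no simulation at all. Inside the triangle, rule 30 offers
    no such shortcut and we are back to running the automaton."""
    return abs(position) > step

assert certainly_white(40, 12)      # far outside the triangle: white for sure
assert not certainly_white(3, 12)   # inside it, only simulation will tell
```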