**Links to**: [[The schein of negintelligibility]], [[Neguinteligibilidad]], [[Entropy]], [[Free energy principle]], [[Edging thermodynamic equilibrium]], [[Non-philosophy]], [[Non-linearity]], [[Virtual]], [[05 Prediction]], [[Possibility]], [[Out of phase]], [[Complexity]], [[Simplicity]], [[Pattern]], [[Patternicity]], [[Deleuze, the Actual-Virtual and Prediction-Error Minimization]] %% [[Negintelligibility notes]] %% # What is it that _drives_ intelligence? Negintelligibility: Transitions between simplicity and complexity &emsp; >If human thought, feelings, sensations, decisions, emotions, moods etc. are amenable to any kind of scientific analysis and explanation then it must be because we are made of appropriate kinds of (virtual) machines: Every intelligent ghost must contain a machine.^[N.B.: this last sentence opens this project’s introduction, too. It is here placed again as it points to the limits of knowledge.] Alternatively, there may be inexplicable, magical, phenomena in the universe. So far I have seen no evidence for this, only evidence for our currently limited ability to understand and explain, and evidence that **the frontiers are constantly being pushed forward**. > >Sloman 1990 p. 13, our emphasis. &emsp; <div class="page-break" style="page-break-before: always;"></div> # Negintelligibility **Abstract**: This chapter examines intelligence as the dynamic interplay between what we often call _complexity_ (as in: high in information content, or intractable, or excessive, etc.) and _simplification_ (as in: compressible; or high in predictive accuracy, as short-range or narrow domain application, etc.) in adaptive (cognitive) systems. This analysis is initially motivated by a desire to challenge contemporary AI discourse that dismisses “mere” pattern recognition as a subordinate cognitive function, as well as challenge proposals that simplification is the trademark of scientific explanation. 
We propose that intelligence can be understood as the systemic, organized capacity to adapt by _traversing_ pattern hierarchies: intelligence not only resolves complex patterns into simple ones (exploitation), but, crucially: also _explodes_ simplicity into complexity (exploration). We frame simplicity as predictive spatiotemporal compression (summarization, reduction, etc.), and complexity as (the designation of) pattern intractability and/or unpredictability. We define patterns as sensitivities to difference that constitute the fundamental substrate of perception. The intelligible domain is thus circumscribed by _patternability_—that which can be recursively, iteratively mapped—while that which ‘resists’ patterning remains outside the scope of the intelligible. To frame this effect, we introduce the concept of _negintelligibility_: a meta-pattern that can be speculatively understood as a vague ‘attractor’ (a set of unactuated states) driving ‘intelligent’ systems toward perpetual pattern-restructuring. Drawing an analogy with _negentropy_, we argue that negintelligibility operates _within_ negentropic systems, compelling intelligence to continually remodel established patterns against persistent epistemic barriers (again, where these can be understood as presentations of novelty, noise, excesses, negativities, absences, catastrophic limits, symmetry-breakings, noumenal limitations, and other related phenomena). This theoretical speculation suggests that intelligence fundamentally operates through bidirectional translations between simplicity and complexity, a metastable condition driven by what we term “negintelligible”: that which resists final pattern stabilization. 
Through various examples, expositions and comparisons to similar proposals (e.g.: irruption theory, entropic brain), we will demonstrate how this conceptualization can offer novel insights into the possible naturalization of intelligence, learning, and the limits of knowability, through the lens of active inference. <small>Keywords: Intelligence, Active Inference, Negintelligibility, Entropy, Complexity, Simplicity, Prediction, Accuracy, Unknowability.</small> <div class="page-break" style="page-break-before: always;"></div> ### Introduction &emsp; >We must emphasize that the Unknown is an epistemological category, not the mysterious ineffable thus named out of mere “laziness” or “irrationality.” If we put the divine, the Unknown, absolute contingency, incalculability, and even Dao into this category, **it is not simply a gesture to affirm the irreducibility of life to physico-chemical activities, or of spirituality to matter, but also to suggest that it is necessary to _rationalize_ the Unknown, which remains necessary for any system of knowledge** in order to reframe the question of technology, so that technology will have a finality that is not a finality of use but rather a finality **beyond** usage. > >Hui 2019, p. 195, our emphasis in bold.^[Hui continues a few lines ahead: “The difference is that in cybernetics the Unknown is ignored on the level of functioning, meaning that the Unknown is absent of function, while we want to take the Unknown as functional, which not only imposes constraints and limits on our comportment in the world, but also allows us to develop a nonexhaustive relation with the world and with technology. 
When I say that it is necessary to “rationalize” the Unknown, I don’t mean to suggest turning the Unknown into something ready-to-hand that we can grasp, like a glass of water in front of us, but rather constructing a plane of consistency that allows us to access the Unknown through the symbolic world that we have inherited and within which we live.” (ibid.). Throughout this thesis we have highlighted the necessary attention to function over form or identity; negintelligibility is, therefore, another proposal aimed at highlighting this.] &emsp; Intelligence is a highly controversial concept that plays a crucial role in discussions related to sentience, adaptation, consciousness, reasoning, abstraction, modeling, skill-acquisition, prediction, and many more phenomena. In the context of contemporary discussions on artificial intelligence, the concept is often contrasted with “mere pattern recognition” as a ‘mindless’ or ‘uncritical’ activity.^[In this thesis we often cite Chollet as a prominent representative of this vision, see Chollet 2019, 2024.] Challenging this dismissal of pattern-recognition, this chapter frames intelligence as the organized capacity to perform simple to complex (and vice versa) operations in the production of patterns. For our purposes, _simplicity_ is defined as **spatiotemporal brevity**, and _complexity_ is defined as the quality we ascribe to phenomena with an **intractable or spatiotemporally unforeseeable degree of patternicity**. Patterns are _repetitions of difference_ that structure the fabric of perception-action, and changes in patterns around an ‘intelligent’ locus dictate the adaptations of said locus. To this locus, that which is intelligible is patternable, and the unpatterned/unpatternable or inscrutable is beyond the scope of intelligibility, but it can also be understood as the meta-pattern which drives it, which we call _negintelligible_. We start with a brief overview of _simplicity_. 
One of the _simplest_ patterns “out there”^[The question of whether it is perceived or performed, and what the difference between these two is, is considered in [[All things mirrored]].] is symmetry, and it serves to explain what we mean by brevity. Symmetry, a phenomenon we are subject to everywhere, all day long, and arguably that which grants us our sense of orientation (Kant),^[See: [[Orientation]].] drastically reduces—phenomenal, physical, calculable, and more types of—uncertainties, by providing information we can be pretty radically certain about.^[One of the future interests of the author is to look into measures of human (and perhaps other) brain entropy as agents become exposed to different types of symmetries.] The conception of simplicity as _brevity_ (say, a symmetrical figure: • ) can also be understood in terms of _formal_ computational theories.^[Not just all-encompassing conceptually fluid chunking and parsing. Computation, understood formally, can be taken to be a simplification of the _possibilities_ of computation, by requiring, e.g., that we _stop_ at a certain time. More on stopping, death and sleep in later sections.] The measure of Kolmogorov complexity, for example, in which the complexity of an object of interest is the *length of the shortest program capable of generating it* (and its simplicity, accordingly, the brevity of that program), suggests that simplicity inherently involves spatiotemporal compression and limited axioms and parameters. For our purposes, this aligns with _parsimony_ as famously articulated in “Occam’s Razor,” i.e., privileging the type of explanation requiring the least **work** (time, effort, communication, etc.). A related formalism, Solomonoff induction, reinforces this point by formalizing how **shorter hypotheses** should be assigned **higher prior probabilities** (in inductive chunking), effectively defining _simplicity as brevity_ in our formal representations of (often, always complex) phenomena. 
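For the computationally inclined reader, this compression-based notion of simplicity can be made concrete in a few lines of code. Since Kolmogorov complexity is uncomputable, the sketch below (our own illustration, not part of the formal theory) uses an off-the-shelf compressor as a rough, computable stand-in; the strings and parameters are illustrative assumptions:

```python
import random
import zlib

def compression_ratio(s: str) -> float:
    """Compressed length over raw length: a rough, computable
    stand-in for (uncomputable) Kolmogorov complexity."""
    raw = s.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# A maximally 'brief' pattern: one short rule ("repeat 'ab'") generates it all.
symmetric = "ab" * 500

# A patternless-looking string over the same alphabet and length.
random.seed(0)
noisy = "".join(random.choice("ab") for _ in range(1000))

# Brevity shows up as compressibility: the repetitive string needs
# far fewer bytes to regenerate than the noisy one.
assert compression_ratio(symmetric) < compression_ratio(noisy)
```

The symmetrical string collapses to a tiny description, while the noisy one resists compression; this is the intuition behind assigning shorter hypotheses higher priors.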
Simplicity, therefore, can be understood as the term we ascribe to manifestations of inevitable open-endedness as they become phenomenally, calculably, etc., chunked within finite spatiotemporal restrictions, and therefore requiring _minimal_ resources to describe, generate, or predict. These principles also extend to experimentation with the open-endedness of computational models, where Wolfram’s (2002) explorations of cellular automata reveal how narrowly defined, elementary _rules_ can generate irreducible temporal complexity: a pattern which just keeps on going, and seems more _interesting_ than other patterns, to symmetry-sensitive beings such as ourselves.^[See: [[Computational irreducibility]], and also: [[Assembly theory]].] Complexity, on the other hand, seems to designate the _unpredictable_ in experience, what _appear_ as intractable or spatiotemporally unforeseeable qualities, (layered) degrees of patternicity. We may understand, through established axioms and programs, how expansive fractal geometries can be reduced to simple rules, but their visually (or tactilely, or auditorily) captivating nature reveals how infinitely recursive (even if potentially simple) self-similarities and symmetries generate patterns we cannot perceptually circumscribe as reducibly finite *in the perceptual sense*, despite knowing something about their deterministic origins.^[Or, perhaps, what we see is already a radical perceptual simplification, as the Ryota Kanai _healing grid_ illusion (2005), and many other illusions, can be understood as hinting at. See: [[Assembly and assemblage]] for a reproduction of the illusion. See also: [[All things mirrored]].] Evolutionary processes, too, demonstrate this type of ungraspable complexity, but with even less compressible reversibility than fractals: emergent properties and multi-generational dynamics cannot be traced “back”, spatiotemporally reduced, to initial conditions. 
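Wolfram’s observation can be illustrated with a minimal sketch of an elementary cellular automaton (rule 30, on a hypothetical ring of 41 cells; these parameters are our own choices for illustration, not Wolfram’s): a one-line rule produces a history that looks uncompressible.

```python
def step(cells: list[int], rule: int = 30) -> list[int]:
    """One update of an elementary cellular automaton (Wolfram numbering):
    each cell's next state depends only on its three-cell neighbourhood."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The simplest of initial conditions: a single 'on' cell.
row = [0] * 41
row[20] = 1
history = [row]
for _ in range(20):
    row = step(row)
    history.append(row)

# Print the first few generations: a triangle of structure unfolds
# that never settles into repetition within the window.
for r in history[:8]:
    print("".join("█" if c else "·" for c in r))
```

The rule fits in a single byte, yet for rule 30 the only general way to know generation *n* is to compute all the generations before it: simplicity of rule, irreducibility of unfolding.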
What we can analyze as ‘simple’ selection, coupled with interactions among environmental factors, produces outcomes that defy compressive prediction. E.g., multi-scale dynamic interactions at the microscopic level emerge as macroscopic top-down qualities^[Top-down causal effects can be understood as something quite negintelligible, too.] that, while potentially governed by circumscribable rules, manifest as patterns that are computationally _irreducible_ (we **cannot** finitely chunk them other than by using the chunking description we call “complexity”): no spatiotemporal shortcuts exist.^[A pattern such as the entire universe is one where compressibility limits are clearly reached. On this topic please visit [[Computational irreducibility]] and see: Wolfram 2002.] In light of the proposal that intelligence is that which is able to orient itself in (self-)dictated transitions between the complex and the simple (which we explore later), we introduce the concept of **negintelligibility** as that which **resists the patterning persistence of intelligence**. Thinking about the brevity of the following circularity as identity: A = A, we can understand the interpretation of a challenge such as A ≠ A as resulting in cognitive effort and therefore some degree of complexity.^[See also: [[Identity]] and [[Principle of indiscernibles]].] As we will explain, this observation is comparable to expositions of _irruption theory_ (Froese), and the _entropic brain hypothesis_ (Carhart-Harris). If the generative embedding of the intelligible is the not-yet-patterned, then we can call negintelligible that which drives the persisting, patterning logic of intelligence. Intelligent contractions between complexity and simplicity are thus dictated by the meta-pattern of negintelligibility, as an _attractor_^[An _attractor_, in the study of random dynamical systems (i.e., moving systems tempered by contingency, which may evolve or not), is a helpful conceptual tool to guide our orientations. 
In Parr, Pezzulo and Friston (2022, pp. 48-9) we are presented with a very schematic one that may set the tone for the rest of our incursions into attractors as active inference-driving structures: ![[parr pezzulo friston attractor AIF book Negintelligibility.png|500]] As the authors explain, the left-hand attractor can be understood as “analogous to a thermostat, which (in cybernetic parlance) has a single set-point [the plot on the right-hand side] and cannot learn or plan” because it only has a _single_, simple attracting point. Active Inference uses the idea of complex, layered attractor systems for understanding learning and adaptation systems. Non-equilibrium steady-state systems such as ourselves are guided by very, very, very complex attractors: intractable and not overseeable in terms of the simple “thermostat” here. For the authors “the difference between simplest and more complex systems can be reduced to the different shapes of their attractors—from fixed points to increasingly more complex and itinerant dynamics.” (ibid., p. 49). This is the trade-off or balance that living systems permanently deal with, the transitions between complexity and simplicity being “a compromise between excessive stability and excessive dispersion ... [where the mechanisms of AIF explain how such a] compromise is achieved.” (ibid). See also: [[Semantic attractor]]. ![[Negintelligible attractor.png|500]] Image of “negintelligible attractor” created by author, with the help of Claude Sonnet 3.0.] which drives intelligibility, drives compression which is met by ever more complex limits (if anything because we observe things as expanding, as having an arrow of time). 
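The contrast drawn in the attractor footnote above can be sketched numerically. The following is a minimal sketch of our own (not from the handbook), contrasting a single set-point attractor with the itinerant dynamics of the chaotic logistic map; the set-point, gain, and map parameter are illustrative assumptions:

```python
def thermostat(x0: float, setpoint: float = 21.0, gain: float = 0.2, steps: int = 100):
    """Single set-point attractor: every trajectory decays toward one
    fixed point; such a system 'cannot learn or plan'."""
    x, traj = x0, [x0]
    for _ in range(steps):
        x += gain * (setpoint - x)  # simple error-correcting update
        traj.append(x)
    return traj

def logistic(x0: float, r: float = 3.9, steps: int = 100):
    """Logistic map in its chaotic regime: itinerant dynamics that
    never settle on a single point, a far more complex attractor."""
    x, traj = x0, [x0]
    for _ in range(steps):
        x = r * x * (1 - x)
        traj.append(x)
    return traj

# All thermostat trajectories collapse onto the set-point...
assert all(abs(thermostat(x0)[-1] - 21.0) < 1e-6 for x0 in (0.0, 15.0, 40.0))
# ...while the chaotic trajectory keeps wandering.
tail = logistic(0.3)[-20:]
assert max(tail) - min(tail) > 0.01
```

Both systems are one line of dynamics each; the difference in the shape of their attractors (a point versus an itinerant set) is what the handbook uses to distinguish the simplest from the more complex systems.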
The analogy with negentropy is used to get a grip on this: just as negentropy, i.e., _counter_-dissipative life, is suggested to stabilize variable adaptive patterns “within” the constraint that is entropy, negintelligibility can be understood as that which persists _within_ negentropy’s seeking of (meta)stability, resulting in ever-newer pattern-restructuring which works against the grain. Quite literally, in complex adaptive systems which gain degrees of freedom through complexity: just as entropy increases, intelligibility as compression also increases. But what we also observe in these systems is that said compression remains challenged, interesting, as it encounters inevitable (possible) exceptions, intransparencies, differences, absences, catastrophes, distractions and noumenal challenges, permanently remodeling the established, and thus what is considered intelligible.^[Another schematic representation from the Active Inference handbook may illustrate this, again (Parr, Pezzulo and Friston 2022, p. 156): ![[Negintelligibility generative process and generative model aif handbook.png|500]]Here, we are shown a simple schematic of the generative model (GM, above) as a self-evidencing event, and the generative process (GP, below) as the world, that which provides data for the model. Position x (the black disk) in both schemas is the present position (of e.g., a limb), and the line its propensity towards a new position. The new position is assumed by the GM, but in the GP, the information which can contextualize the prediction is absent and inaccessible, until an action leads to its effectuation. 
Focusing only on the dotted line in both schemas, disregarding the rest, we can understand an aspect of the negintelligible as the missing v in the schema of the generative process; the generative model assumes something which it cannot know is there, until it finds evidence for it, through action, and that evidence leads it to learning in the service of yet _something else_; as the bounds of experience expand, unknowability does, too. For the technical details please refer to the book.] We propose, therefore, that what we call “intelligence” can be understood as the systemic, organized^[Or “organological” (Hui 2019, p. 218).] capacity to adapt by _traversing_ pattern hierarchies: intelligence not only resolves complex patterns into simple ones (exploitation), but, crucially: also _explodes_ simplicity into complexity (exploration).^[This is where we take the frictions ensuing from exchanges between ‘intelligent’ systems, such as technology and culture, as “the condition under which thinking is possible, and this condition always carries a negative dimension such as incompleteness, lack, or obstacle” (ibid., p. 2019).] It seems crucial to highlight this in an age of data-accumulation and exploration: we seek patterns through technologies such as LLMs and **want to learn** _more_, not less. We seek to cure disease because we are interested in the flourishing of (human) life,^[Problematically so, this must be noted, at the cost of other, even human, lives.] not its containment.^[Well, as argued elsewhere, eugenic proposals such as the one promoted by Stephen Hsu prove this statement wrong.] The systems we call science or language are made up of agents that do not know language or science before becoming embedded within them; therefore, what they learn to compress as they are engaged _by_ these systems, is the complex, contextual reality that creates novelty and relevance, incessantly so. 
The Einstein citation opening chapter four in the AIF textbook reflects something about what we try to underline: “Everything should be made as simple as possible, but not simpler.” (Einstein in Parr, Pezzulo & Friston 2022, p. 65). Parsimony, simplicity, compression: these should explain _something_, not obviate or obliterate it. That “something” seems, often, rather complex; irreducible, but its details are not incidental: they are precisely what makes compression interesting and possible at all. In phenomenal perception and “epistemic foraging”: this is the trade-off balance sought after by AIF agents as they seek **both** stability through that which rewards (homeostasis, satisfying energetic needs, etc.) and what can be accurately predicted (as in: resulting in low surprise), and the search for novelty: **learning** from surprise (Smith et al., 2022). Following the interpretation of life as performing open-ended _mortal computations_,^[A subject also treated in [[05 Prediction]] and [[Edging thermodynamic equilibrium]].] the diagram explaining the cognitive-philosophical grounding of the mortal computer, observed as a process which scales upwards in complexity: &emsp; ![[Negintelligibility, complexity, mortal computation diagram.png|500]] <small>Figure reproduced from Ororbia and Friston 2023, p. 13.</small> &emsp; can be slightly expanded or reinterpreted through the concept of negintelligibility, if presented as such: &emsp; ![[Negintelligibility ororbia and friston mortal computation adapted.png|400]] <small>Diagram section with added layer by author.</small> &emsp; What this layer introduces is the necessary open-ended limit of _absolute_ unknowability which complex metacognitive, mortal conditions _must_ take account of. 
If intelligence is the capacity of a 5E system to structurally, autopoietically couple by minimizing prediction errors and learning to update an internal generative model which reduces (possibly) deadly uncertainty, then perhaps our understanding of the *external* should be modulated: not only as the encounter with entropy which _shapes_ the generative model under 5E conditions, but as a larger attractor. This effect can perhaps be understood as an _actual_ mortal computation: a system has never died and lived to tell the tale, therefore we could say: a system avoids something as fundamental as death (dissipation, non-continuity), without knowing exactly what it is that it avoids. This is strange, and interesting. In thinking about structure-learning: >When considering our overall model reduction results, it is worth noting that a model’s accuracy need not always correspond to its adaptiveness …. **In some cases, making either coarser- or finer-grained distinctions could be more adaptive for an organism depending on available resources and environmental/behavioral demands**. It is also worth noting that, while we have used the terms “correct” and “incorrect” above to describe the model used to generate the data, we acknowledge that “all models are wrong” (Box et al., 2005), and that the important question is not whether we can recover the “true” process used to generate the data, but whether we can arrive at the **simplest but accurate explanation for these data**. The failures to recover the “true” model … may reflect that **a process other than that used to generate the data could have been used to do so in a simpler way**. Simpler here means we would have to diverge to a lesser degree from our prior beliefs in order to explain the data under a given model, relative to a more complex model. > >Smith et al. 2020, pp. 13-14, (own emphasis in bold). 
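The “simplest but accurate explanation” criterion in the quotation above can be illustrated with a toy model comparison of our own devising (not Smith et al.’s actual model reduction procedure). We use the Bayesian Information Criterion as a stand-in complexity penalty, and invented coin-flip data:

```python
import math

def bic(log_likelihood: float, n_params: int, n_data: int) -> float:
    """Bayesian Information Criterion: complexity (parameter count,
    scaled by data size) minus accuracy (log-likelihood). Lower is better."""
    return n_params * math.log(n_data) - 2 * log_likelihood

# Invented binary data: mildly uneven halves (5/8 vs 3/8 'hits').
data = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
n, heads = len(data), sum(data)

# Coarser model: a single Bernoulli rate for the whole sequence.
p1 = heads / n
ll1 = heads * math.log(p1) + (n - heads) * math.log(1 - p1)

# Finer-grained model: one rate per half. Better raw accuracy,
# at the price of an extra parameter.
ll2 = 0.0
for half in (data[: n // 2], data[n // 2 :]):
    p = sum(half) / len(half)
    ll2 += sum(math.log(p if bit else 1 - p) for bit in half)

assert ll2 > ll1                        # finer model fits the data better...
assert bic(ll1, 1, n) < bic(ll2, 2, n)  # ...but the coarser one explains it better
```

The finer-grained model is never worse in raw accuracy, yet the penalized score prefers the coarser one: exactly the sense in which a process other than the “true” one can explain the data in a simpler way, given limited evidence.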
Coarse-graining is context-sensitive and observer-dependent, and no model is accurate unless it’s dead, which means that adaptive learning settles itself through historical constraints. While following much of what is proposed by Smith, Friston and other AIF scholars here, the theoretical challenge (much of this can be said of AGI research, too) is that much more attention is needed to structure-learning in complex concepts, which highlights the negintelligible nature of thought as it navigates uncertainty. In complex concepts (e.g., “hegemony”), correlation and causation are very, very cyclically confused: we often start using concepts and exploring their actualization as we couple to dialogical networks and before we “know” what they mean with accuracy. And, as remarked throughout this work, controversial concepts are notoriously in-the-making, rather than already inductively sedimented. Structure learning processes involve both **perception** (updating beliefs to structurally match sensory data) and **action** (modifying the environment to match predictions), where perception(-cognition)-action (P(C)A) loops serve to maintain the coherence—i.e., _delayed_ dissipation: negentropy—of a system. The separation of perception from action serves a _modeling_ function in AIF, but does not reflect P(C)A in reality as it is, indeed, a continuum:^[“Motor actions have sensorial consequences, and sensorial actions have motor consequences. This reafference principle is of universal validity.” Varela 1984, p. 316.] 
sensing the texture of one’s own hands, for example, requires moving one hand against the other hand’s surface, while something _else_ witnesses this process and coordinates action, but where one process begins and the other “ends” is quite difficult to demarcate.^[This is neither subjective nor objective, “not one, not two” (Varela following Suzuki), which points to the “fundamental limits about what we can understand about ourselves and the world” (Varela 1984, p. 322). Perhaps interestingly, Varela doubles down on himself by proposing the duality of “ourselves and the world”. Escher’s famous reciprocally-drawn hands are the opening image of this text.] Intelligence, in AIF terms, is a distributed system undergoing **permanent restructuring**, as it navigates model expansion and reduction (see, e.g., structure learning as explained in Smith, Schwartenbeck, Parr & Friston (2020) “An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case” and Neacsu, Mirza, Adams & Friston (2022) “Structure learning enhances concept formation in synthetic Active Inference agents”). From the bacterium that senses where _not_ to go in terms of temperature or nutrient gradients; to the dog that returns to the same place to sleep; to complex, hierarchical predictions emerging from large ecological dynamics: (groups of) organisms navigate increasingly abstract and spatiotemporally extended domains of uncertainty, where any possibility of learning and adapting is premised on a _lack_ of control because of the inevitable expansion of possibilities as multiperspectival, multilayered systems interact. Because of this, many systems (such as cognitive ones) inevitably *simplify*. Bracketing “cognition” in PCA serves to think about how time and memory are an as-yet unresolved issue when thinking about, e.g., “simpler” organisms such as viruses, supposedly lacking the type of memory we ascribe to more complex systems. 
_Simplicity_ as predictive spatiotemporal compression (summarization, reduction, etc.), and complexity as (the designation of) pattern intractability and/or unpredictability, reflect much about what we do with time/memory through systems such as language and science. Many propose that science is purely in the business of parsimony, simplification, data compression. While aspects of this are true and pragmatically effective, one can also understand science as incredibly _expansive_, as the creation of complexity and ever-novel problems. In this sense, we could also say that when we “summarize” many events under one distinctive phenomenon—e.g., the planets and apples as all constrained by gravity—what we say we observe is effectively _variety_ explored-expressed under one constraint, instead of variety as _reduced_ to one constraint (or function). As mentioned, if we define patterns as sensitivities to difference which constitute the fundamental substrate of perception: patterns such as the very identification/discovery of constraints can be understood as themselves sustaining eternally-differentiating possibilities within them (e.g., witnessing gravity as a constraint—which we are far from ‘understanding’ in the traditional scientific sense—provides an epistemic vantage point for thinking _beyond_ gravity). The intelligible domain is thus circumscribed by patternability—that which can be recursively and/or iteratively mapped—while that which “resists” patterning remains outside the scope of the intelligible: the negintelligible.^[What Hui refers to as the _unknown_ which must be rationalized: “[W]hat I mean by the rationalization of the Unknown or the unknowable carries a different meaning. 
It doesn’t mean reinstalling a Godlike transcendence, but rather preserving the _instrumentality_ of technology and unifying it with the spirit, and at the same time going beyond technological instrumentality so that new forms of life and happiness can be perceived through new symbolic systems that allow the Unknown to be welcomed not only in the form of a sect, a religion, or New Age practices, but also manifest in scientific research and technological development, which no longer carry the name “modern science and technology.” Modern science and technology sees only the standing-reserve of the universe, and the possibility of exploring the secrets of the universe according to a materialist doctrine. This groundless ground, because of its virtuality, will be revealed, in one form or another, after a “hitting bottom” of the alcoholic moderns: Only a God can save us.” Hui 2019, p. 197. ] But why introduce this concept, when many other concepts already express this quality? As stated throughout, we follow an abstractive impetus of trying to see things hanging in the most general sense (Sellars), as well as radical concept creation (Deleuze). This impetus can be understood, again, as both reductive **and** expansive. We see the designation of something which should pool all these effects under one roof, as “negintelligibility,” as interesting because it allows for a new entry into these effects, ordering reality in a novel way, possibly leading to insights from different perspectives. We are here highlighting how the effects of uncertainty, noise, complexity, criticality, entropy, etc., can all be considered as _negintelligible_ affairs. Cognitively speaking, at the murky point at which the human thing we call memory begins, forget-recall cycles driven by homeostatic set-points and given circumstances, are ever-more challenged by the acquisition of novel input. 
Instead of making things easier, negintelligibility, as a concept, therefore also urges this uncomfortable encounter with surprise. It is in the negotiations of and frictions between precise definitions that we arrive at novel understandings. Consistency and completeness are only possible in the form of overlaps between systems (e.g., measuring blood sodium is completely unambiguous—of course, errors always occur—but the reason we are measuring is in service of something else: the functions of life). Memory, a generative model, adapts by modifying model parameters: but **the future seems to be more thermodynamically encompassing than the past**, and there are undeniable limits which constrain memory in various ways,^[E.g., the self/subjectivity as highly contradictory or intransparent (i.e., the unconscious), or simply the fact that brains reach an age at which plasticity becomes more rigid.] which trick our time-tracking: the future _seems_ possibly bigger but it is in fact getting smaller (possibilities entrench as we advance as mortal beings). As mortal computers, it is during the inevitable stage of reduction in brain plasticity that we seem to reproduce in order to avoid this: we have children in order to really be a _perpetuum mobile_, and it might be the reason we started offloading much of our cognition onto the environment (tool use, farming, extended cognitions of all sorts): because of the awareness of the limits of cognition, because of something stubbornly negintelligible such as death.^[As we observed in [[05 Prediction]], this is what possibly leads to the thing we call AI, the most negintelligible project of all, the extension of mind as something we _necessarily_ do not understand fully, in order to harness max entropy (Jaynes).] 
Following in line with mortal computation, we could speculate that limits like sleep (which reflects the larger landscape constraint of gravity, of our world turning), the compressive smoothening of ocular saccades (which reflects how physiology solves a perceptual problem of possibly infinite regress), and other “shortcuts” in organic experience act as different _copings_ or couplings with the same effect: interruptions (not unlike the idea of irruptions) are precisely what make the system capable of coupling: much like noisy grain is needed for photographic film to be capable of capturing other “higher order” patterns (Prado Casanova 2023). What we see as the elegance of AIF lies not in (evolutionary or energetic) reductionisms, or the simplification of life to “mechanistic” components,^[Where, we also argue, there are possible objections to the designation of ‘mechanistic’ as dismissive, see: [[B The being of “mere” machines and “mere” propositions]].] but in its embrace of complexity through the possibility of parsimonious unification, with attention to how what _interests_ unification is always contextual variability. We therefore temper the possible oversimplifications here with our concept of the negintelligible. While traditional scientific explanations can be understood as seeking to summarize phenomena to their simplest components, AIF offers a framework that _scales_ across levels of explanation without sacrificing richness.^[See: [[Free energy principle]].] It acknowledges that something like “intelligence” manifests itself across domains and substrates, in the dynamic interplay between prior beliefs (as initial conditions or highly abstract, inherited concepts), and the capacity (in complex systems) to engage in temporally _deep_ chunking. This approach, in our interpretation, can challenge both the devaluation of “mere” pattern recognition in current AI discourse^[As seen in [[10 Bias, or Falling into Place]].] 
as well as the notion that scientific progress must necessarily proceed through simplifying reduction. The interest in conceptual-engineering is not just my philosophical inclination, but can be compared to the Bayesian explanatory reversal that is at the core of AIF: instead of sticking with existing, static accounts and trying to match concepts to reality by asking “how does concept A (help) explain phenomenon B?”, we ask: _given_ that concepts exist—and are known to **transform over time reflecting ever-changing phenomena**—**how can concept-formation itself _yield_ (attention to) novel phenomena**? The original idea behind the proposal stemmed from the observation that across many traditions, like the ones I just mentioned, we observe a wide range of conceptual proposals that “do not settle” and always imply absences, ignorances, excesses, negations, etc.: the effects of uncertainty that actually transform (concepts and therefore) knowledge-production. This way, by providing a concept that encapsulates all these effects, we might become more observant of it as a larger attractor driving intelligent adaptations. Negintelligibility is thus intended as a “grand” but relaxed and nondeterministic conceptual category which comes to designate all that which _blocks_, _negates_, _escapes_, _frustrates_, _complexifies_ and otherwise _complicates_ (scientific, linguistic, technical, cognitive) representation. We invite the reader to look through our inevitably incomplete and definitely not exhaustive list in [[The schein of negintelligibility]], a “catalogue” which we continue to update, elucidating proposals we consider to pertain to the negintelligible. Two of these are _irruption theory_ and the _entropic brain_ hypothesis.
We treat them below, before moving to the more speculative and expansive arguments, tempered by AIF whenever possible, for our purposes: in order to reflect on the possibility of “a more relaxed kind of naturalism that allows phenomena pertaining to mind and sociality to also be counted as part of nature” (Froese 2023, p. 6). Attention to absential features, constraints, limits, negativities, etc., reflects the intelligent tendencies _towards_ the **productive in absences, the generative in negations, or the creative potential of what is missing, unknown-unknowns, or beyond intelligible representation**. Knowledge therefore emerges from gaps, tensions, and what lies at or beyond the boundaries of current understanding. This is inherent to AIF, but has no dedicated concept there (except as surprise-seeking or model plasticity, which are specific to local descriptions and cannot be spoken about “in general”). A complex model adapts by optimizing representations; but as it does so, it transforms to exceed its prior representational limits, in different ways: by reduction or expansion. Once sufficiently complex models come to house fundamental understandings of their own limitations, this creates a very complex attractor. This is why this proposal finds camaraderie with **irruption theory** (Froese) and the **REBUS model** of apparently increased model plasticity/(hyper)prior retuning under psychedelics (Carhart-Harris, Carhart-Harris & Friston),^[Please see: [[Edging thermodynamic equilibrium]] for an overview.] as both of these signal the relationship between (temporary) increases in (brain) entropy and ensuing adaptations, which can be understood as **adaptive** only retroactively, if the system succeeds in surviving *after* risk exposure (e.g., the risk of taking psychedelics).
The success parameters we normally ascribe to intelligence (high sensitivity to complexity, or the capacity to change as new evidence appears, for example) have a tendency to work *against* themselves, as they result in model-parameter change. &emsp; ### Irruption theory &emsp; >What memorizes or retains is not a capacity of the mind, not even accessibility to what occurs, but, in the event, the ungraspable and undeniable ‘presence’ of a something which is other than mind and which, ‘from time to time’, occurs... > >Lyotard, 1991 (1988), p. 75. > >But then again, who does know enough about _now_? > > Ibid., p. 90. &emsp; >Tearing down happens naturally. Creating coherence is an energetically costly process: it takes **a burst of entropy** for context-dependent constraints to irreversibly produce emergent coordination dynamics. > >Juarrero 2023, p. 236 (our emphasis in bold). &emsp; Froese frames cognitive _irruptions_ as “bursts of arbitrary changes in neural activity [which] can facilitate the self-organization of adaptivity” (2023, p. 1). Irruption theory seeks to make “intelligible how an agent’s motivations, as such, can make effective differences to their behavior, without requiring the agent to be able to directly control their body’s neurophysiological processes.” Hutto, traditionally anti-representationalist when it comes to the mind and how it couples, suggests: “can consciousness be made intelligible in terms of something else? I argue that it cannot — not even in principle” (Hutto 2006, p. 46). Froese argues that we need to understand how consciousness is physically efficacious. “Unruliness”, according to Froese, increases the more (attentional, motivated, resolving) effort an agent invests, as consciousness, into an experience/process/operation. More entropy should be observed in any system undergoing irruptions, because more _options_ are being considered.
Likewise, in AIF: “a policy that maximizes the entropy over final states is intrinsically rewarding because it **keeps ‘options open’.**” (Friston et al., 2014, p. 1, emphasis in bold). The outside is met by uncertainty from within: uncertainty as in _production_ of entropy. Agents individuate as they open degrees of freedom in witnessing more and more variants of possible uncertainty, a layering upon layering of complexities, which therefore signifies more effort, more computation. What we speculate, additionally, is that the limits of mortal computers can be understood as circumscribed by the things we understand to be the effects of boredom, fatigue, sleep, and other comparable phenomena which inevitably render mortal computers capable of, prone to, or _given to_ stopping (unlike some mechanical or electronic computers). Froese’s presentation of irruption theory is motivated by the need to resolve the tension between intentional agency having real, causal import, while at the same time being difficult to explicitly locate or measure. He presents this tension as observed if we take these axioms as starting points: “Axiom 1: Motivational efficacy. An agent’s motivations, as such, make a difference to the material basis of the agent’s behavior.” and “Axiom 2: Incomplete materiality. It is impossible to measure how motivations, as such, make a difference to the material basis of behavior” (2023, p. 9), both of which are, to him, “independently valid.” (ibid.).^[These two axioms are not unlike the tensions emerging from the demarcation between the _manifest_ and _scientific_ images, in Sellars.
Or, as noted by Hui, the process imperative of Whitehead, for whom the aim of constructing an organic philosophy is to “construct a system of ideas which brings the aesthetic, moral, and religious interests into relation with those concepts of the world which have their origin in natural science” (Whitehead, Process and Reality (New York: Free Press, 1978), xi; cited in Hui 2019, pp. 227, 239).] The idea is to move beyond the apparent divide between either ineffability or calculable epistemic access, a divide which inevitably leads to problems of modeling and physical measurement, as well as possible behavioral determinism. Froese suggests “Axiom 3: Underdetermined materiality. An agent’s behavior is underdetermined by its material basis.” (ibid., p. 10), to resolve the tension: agentic reasons and intentions operate outside the realm of physical calculation, and the experience of motivated agency and its effects^[He speaks of _libertarianism_ and _freedom_, we prefer to steer away from these terms, see: [[09 C is for Communism, and Constraint]].] requires that actions remain physically underdetermined, because we should accept “unpredictability as inherent to the motivated activity of life and mind, rather than explain it away as deriving from insufficient experimental control or inaccurate measurement.” (ibid.). As our incursions ([[03 Semantic noise]], [[10 Bias, or Falling into Place]], [[06 Principle of Sufficient Interest]]) have shown, we strongly follow this. What is more, what reveals itself as we accept unpredictability (entropy) as a condition is that the open-endedness of constraint-based systems such as language^[This includes the languages of science, mathematics and everyday natural language evolution.] lies in the inevitable **gaps between** situated perspectives, which have to rely on the dynamics of the *other* perspectives they are structurally coupled with in order to evolve the functions of the system(s) they are part of.
This interstice of uncertainty (e.g., as double contingency as presented by Luhmann, or the incompleteness of any concept which depends on its situated application) is what we, here, term _negintelligible_. If we take a constraints-creating-coherence approach (Juarrero 2023), which sets the focus on the observation of (control) functions as emergent from structural coupling between systems, we can also relax the tensions observed by Froese, in the calculation variables pertaining to phenomena such as intentionality. By taking a hierarchical constraints-based approach we can, and must necessarily, observe the emergence of things such as intentionality perhaps in a similar way we view the function of, e.g., _addition_: it is not something which can be measured; it is that which measures, calculates, itself; it is an axiom for an interested mathematics, not a variable (although its capacities and shapes can be subject to change, as we complexify our mathematics). This reconceptualization positions the import of something like “agency” as something which _can_ be a measurable phenomenon (as treated in [[11 Post-Control Script-Societies]] in terms of _power_) but can also be understood as the measuring framework itself—not an object of calculation but the very function through which its own calculation becomes possible: recursively _seeing as_. Just as patternicity enables counting, it is the subsequent pattern-creating, as a function, which can then be understood through the mathematical varieties of _counting_ itself. Like addition, which cannot be measured but rather _performs_ a function, agency or intentionality (as, e.g., lived volitional consciousness) constitutes the metacognitive conditions through which determinations become conceivable, perspectivally directed, which only _from another perspective_ can be measured/understood/modulated as a circumscribed, chunked variable within a system.
The gap(s) between these approaches/perspectives/systems is that which is negintelligible. Think about the classic dividing^[Dividing as in: between physicalism and phenomenalism.] case of the _hard problem of consciousness_ (Chalmers 1995): once natural language enables structural couplings such as questions, functions which allow distributed self-evidencing dynamics (Vasil et al., 2022), we become capable of witnessing and metacognizing, collectively, something as supposedly “self-evident”^[Mind the pun on _self-evidencing_.] as consciousness, the condition which enables questioning itself. Thereupon, to complexify the domain and inevitably evolve different functions, we create the negintelligible meta-idea of the philosophical ‘zombie’ as something which further complicates what was, before, apparently self-evident. Layers upon layers, agency cannot be said to be found anywhere, though it _can_ be measured depending on the perspective and its (interested) function (or _as_ a function) of metaperspectival interests at hand. These last observations can be interpreted as in line with the working hypothesis of irruption theory: “[t]he more an agent’s embodied activity is motivated, the less that activity is determined by its material basis” (Froese 2023, p. 10), with the added caveat that in this thesis we have found it difficult to isolate the “agent.”^[See: [[Xpectator]], [[Agent]], [[Agency]], and particularly the arguments in [[09 C is for Communism, and Constraint]].] If anything, because the “agents” we often speak of are embedded in structures such as language, through which they understand their own perspective,^[Rendering possible ideas/proposals such as the philosophical zombie.] thus rendering in/out distinctions rather difficult to delineate: like when hands touch.
Systemic complexity, in highly distributed systems such as language, emerges as a result of abstract couplings which theoretically dissolve the traditional idea of agent-environment distinction. The motivation behind the irruption hypothesis is “to operationalize and then quantify the increased material underdetermination that is associated with the increased involvement of motivations. ... Irruptions can ... be approximated by the extent to which there is an increase in how unpredictable an activity is based on its material basis alone. In practice, this could, for example, involve quantifying how surprising recordings from the brain and body are with respect to past recordings.” (ibid.). This shares much with Carhart-Harris’ entropic brain hypothesis, as we will see in the next section. Like Carhart-Harris, Froese is interested in how different measures of uncertainty, particularly in brain entropy measurements, can reveal better understandings of intentional action: he notes how states of consciousness associated with reduced awareness, such as dreamless sleep, exhibit less neural entropy, while states presented with more diversity of input, or self-generated uncertainty, reveal higher neural entropy (ibid., p. 12). In understanding the effects of something like “intelligence”, Froese states that “[i]ntelligent action does not depend on a central controller but can emerge from a network of habits, which enables appropriate behavior to be solicited by the situation.” (ibid.). But precisely this is what matters: the network of habits can be understood as within, but is often a distributed network _beyond_ the agent. This leaves us with an examination of functions as distributed and shared by things we reduce to units because of perceptual efficacy, but which deserve to be metacognitively understood as coupled in ways not directly, phenomenally accessible. This is where concepts, understood as distributed predictive functions, go a long way: &emsp; >Matter matters.
History matters. Social and economic policy matters. Most critically, however, because top-down causality as constraint makes room for meaning and value-informed activities, our choices and actions matter tremendously. In acting, we reveal the variables and the values that really matter to us, individually and to the culture in which we are embedded. We must pay attention to what we pay attention to; to which options we facilitate and promote and which we impede and discard. We must pay particular attention to what we do. > >Juarrero 2023, p. 237. &emsp; Perception (attention) and action literally _matter_. This citation, echoing much of what Froese treats in irruption theory, reveals what we have referred to as the generative, and generous, effects of _seeing as_: attention to attention generates a function which is able to modulate constraint-based dynamics in much of what we define as “intelligent” or “intentional”, which is highly dependent on the negintelligible: we **do not see what we do not see** (Maturana and Varela 1987, p. 19), which we must accept not only as the condition of unpredictability, but as the differential nature of perspective-based systems such as (human) communication. What Juarrero allows us to examine, through constraints and how these create coherence, is a coherence which organizes itself, and which, in the context of complex technolinguistic systems such as the ones we create and embed ourselves within, seems to be an effort driven by the negintelligible: that which resists final pattern stabilization, allowing for system (function) evolution. This effect is something that puts _irruption_ theory, for us, within the category of negintelligible phenomena.^[Moreover, both proposals—negintelligibility and irruption theory—can be understood as sharing the motivation to “open the conceptual space required for”, in the case of irruption theory “a motivation-involving account of motivated activity” (Froese 2023, p.
11), and in our case for relaxing and distributing the idea of agency, control, and epistemic access towards an account which humbles itself and slows down in the face of the vast unknown. Much good can come of this; it does not mean deterring “innovation” or “progress”, but changing our fundamental understanding of these terms: science compresses data in the interest of novelty, not of thermodynamic stability (i.e., death).] &emsp; ### Entropic brain The entropic or anarchic brain hypothesis (Carhart-Harris et al., 2014) explores how different experiences of consciousness can be characterized by different measurable degrees of entropy, where greater uncertainty, unpredictability, and information richness should correlate with _primary_ or unconstrained modes of consciousness (Carhart-Harris and Friston 2019). These high-entropy states, observable during psychedelic experiences, dreams or early psychosis, appear to contrast with the low-entropy, highly ordered cognitive functioning of standard, waking, secondary consciousness states. Under AIF constraints, these entropy fluctuations can be understood as a system’s dynamic management of model evidence: where, e.g., psychedelic compounds can temporarily disrupt the hierarchical predictive mechanisms that normally constrain perception and cognition to maintain adaptive control. And, importantly, we remark that this adaptive control is often what we observe as **coherent** when it maintains established social, communicative patterns (for better or for worse: in solidarity and under oppression). The relaxation of beliefs in primary states means, among other relaxations, (partly) decoupling, if only temporarily, from sociocommunicative frameworks, as well as from other habits of motivated agency, or perspectival perception. The intersection between active inference and primary experiences (induced by psychedelics) has been primarily explored by what is known as the REBUS model.
The *Relaxed Beliefs Under pSychedelics* model proposed by Carhart-Harris and Friston (2019) provides a framework for understanding how temporary psychedelic-altered phenomenal states result in an entropic or “anarchic” brain, which can lead to long-lasting effects in PCA. These experiences are not restricted to psychedelic-induced ones alone, but have been compared to other states of consciousness, as mentioned: mystical experiences (such as the feeling of becoming “enlightened”), psychosis (as perplexity and precision-overestimation), death-acceptance, and others (Carhart-Harris et al., 2014, Nayak et al., 2023, Jylkkä 2024). Various types of PCA changes across the board could thus be comparably assessed through measures of brain entropy, where *increased* brain entropy, as is the case with irruptions, signals openness to more degrees of freedom: disambiguation possibilities. This of course means that phenomenal (and homeostatic, and social, etc.) resolution of layered, complex ambiguities is rather costly, and primary states are therefore not sustainable long-term for systems such as mortal computers. In a forthcoming article,^[[[Edging thermodynamic equilibrium]].] we treat the entropic brain hypothesis more extensively, and we examine high entropy primary states as something which could be said to _verge on_ thermodynamic equilibrium: not-quite-death but an attunement/coherence with the environment that is perceptually perplexing or stupefying enough to relax the precision weighting of _overweighted_ priors (Carhart-Harris & Friston 2019). A style of cognition tending to the “supernatural” or magical can be understood as an effective response to an otherwise uninterpretable (i.e., complex) environment.
But the same can be said at the level of systemic delusions across the board, as mentioned throughout this project: simplifying-fetishizing abstract entities such as money, idealizing social organizations such as patriarchies, upholding antiscientific values in the name of science, etc. These “wishful beliefs” (Carhart-Harris & Friston 2019) are framed at the level of the individual as “quick-fixes that reduce uncertainty but via simplistic explanations that satisfy fancies or desires before careful reason.” (ibid.). These adaptive events, evincing the constraining of degrees of freedom through learning, can perhaps be understood as a problem of chunking, of compression: how does _learning_ effectuate itself from A to B (from past self to future self, from generation to generation, etc.)? This can only be done by reducing the complexity of a message, which specifies but inevitably entrenches possible behaviors. **Not** reducing complexity results in perplexity, a _mirror image_, perhaps, an entropic symmetry between self and world. It is thus fundamental to understand the larger societal framework of primary, higher entropy events instead of simply observing them as temporary aberrations at the individual level. Drawing an analogy from the individual to the collective: if, e.g., narrow-mindedness, rumination and self-defeating habits are characteristic of depressed states (Kessler 2016), we can imagine these effects at the systemic level as some of the maladaptive societal traits we currently witness, respectively: oppression, identitarian melancholy, and exhaustive and extractive practices. If plasticity is exhibited by primary states such as the ones examined under the REBUS model, this might allow us to contemplate learning, collaboration and generativity at the systemic level of society, moving the pathologizing paradigm away from the singular individual, and toward _actual_ site and setting.
But how much time and effort should be required, how much additional entropic risk-taking, to understand (i.e. map, correlate, track, simplify, reduce, etc.) something as obvious and as complex as the—mystical, delusional, rational, social—mind? The answers are not obvious. The concept of entropy, and its possible measures, can aid in understanding the phenomenal sense of self through AIF, but should always frame this self as a 5E self: vastly distributed and sociopolitically confused, always revealing negintelligible gaps: the “noise in noise” (Prado Casanova 2023). Fundamental questions pertaining to the ethics and systemic regulation of possible therapies, as well as definitions of “pathology” and “health” (Canguilhem 1904, Sterling 2020), should therefore always remain center-stage. Negintelligibility as an abstract category which designates a ‘space’ which we can observe whenever something contains a necessary excess, inevitable noise, intractability, or epistemic *inaccess*, can be brought in again here, as we need to refer to these inherent physical/material/energetic limitations, as well as to how the very system that is extended consciousness is itself making an attempt at its own comprehension-compression. From this perspective, the entropic brain hypothesis illuminates how certain conscious states render ineffabilities, or represent effects which are resistant to representation or conceptualization—high-entropy brain experiences are literally and metaphorically _expansive_; these states seem to _relax_ the established, habitual compressive, sense-making capacities of narrative/self-centering/linear consciousness. The free-energy principle suggests that consciousness itself may have evolved specifically to manage uncertainty through predictive processing, and as Carhart-Harris (et al.) show(s), ‘profound’ primary experiences seem to involve temporarily relinquishing certain compression constraints, allowing novel patterns to emerge through entropy increases.
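The notion of “brain entropy” invoked by these hypotheses can be made concrete with a toy illustration. The sketch below is our own hypothetical example, not the actual analysis pipeline of Carhart-Harris et al. (which applies measures such as Lempel-Ziv complexity to neuroimaging data): it computes the Shannon entropy of a sequence of discretized “states”, showing how a repertoire of many near-equiprobable states scores higher than a rigid, ordered one.

```python
import math
import random
from collections import Counter

def shannon_entropy(states):
    """Shannon entropy (in bits) of the distribution of discrete states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical "secondary" (ordered) state: a strict two-state alternation.
ordered = [0, 1] * 500

# Hypothetical "primary" (entropic) state: eight states visited near-uniformly.
random.seed(0)
disordered = [random.randrange(8) for _ in range(1000)]

print(shannon_entropy(ordered))     # 1.0 bit: only two states ever occur
print(shannon_entropy(disordered))  # approaches 3 bits (log2 of 8 states)
```

Note that this distributional measure ignores temporal order: the strict alternation is perfectly predictable yet still scores one bit, which is precisely why empirical work supplements it with sequence-sensitive complexity measures.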
Systems such as language, distributed over other systems such as bodies and technology, are _blurry-goal-seeking systems_, which, understood under AIF, are guided by layers of hierarchical information processing in the effort of metastable prediction-error minimization. Altered states, ensuing from higher entropy or irruptions, can be understood as situations which enable learning: keeping complex systems *plastic* in the face of increasing complexity. Below we move on to examine how we speculatively understand transitions between predictive compression (simplification) and complexification (plastic-adaptive learning). &emsp; ### Intelligence: the spatiotemporal scaling of patterns, complexity to simplicity and vice versa &emsp; >To mediate the limits of experience does not only mean actualizing new possible experiences, but also modifying the transcendental frame that demarcates the possible experiences from the “impossible” ones. > > Catren, 2016, n. pag. &emsp; >[James Joyce] consciously demolished the stability of the sign symbol and began playing with the bits and pieces of a fragmented society. He explored the gaps created by the breakdown of continuities and investigated the increased communicative potential which they would generate. > >Theall & Theall 1989, p. 51. &emsp; Albeit a highly contested notion, as we have seen throughout this thesis, _intelligence_ continues to play a prominent regulative role—i.e., chunking: ordering and legitimizing—in discussions pertaining to agency and sentience, (commonsense) reasoning, abstracting, modeling, predicting, improvising, etc.
The term, possessing a rich history interlacing philosophy, biology, psychology and technoscience, has been preferred in this thesis over other terms (e.g., rationality, reason) as it is currently embedded within the voracities/velocities of AI research, which frame this chapter’s interest in the negintelligible as that which creates the conditions for the moving of its infamous mileposts.^[A.K.A. the _AI effect_, or what we call the ‘negative theology’ of AI, proposed by Pamela McCorduck as the moving of goalposts in _Machines Who Think_, 2004.] To ground ‘intelligence’ in our negintelligible interest, that is, in understanding intelligence as a flexible activity which contracts and expands between simplifying and complexifying, the following can serve to continue our orientations. In the context of AI *and* AIF, as we have seen throughout the thesis, intelligence globally refers to the ability **cognitive** systems possess when they are able to abstract, model, represent, interpret, categorize, etc., towards the satisfaction of these as estimations of possible future states—as perceived patterns and/or performed patterns. An intelligent locus persists by spatiotemporally tracking patterns in structures _beyond_ itself, which target (vague or specific, immediate or more long-term) desires/goals via different means. These politiconceptual^[Politiconceptual because the concepts are necessarily supported by a background politics, and vice versa, leading to many of the “bias”, etc., problems we witness in AI development today.] sentiments can be found in the work of AIF thinkers such as Andy Clark, Karl Friston, and Michael Levin. Or AI thinkers such as Hector Levesque and Ronald Brachman, who define AI as “the study of how intelligent behavior can be produced through computational means.” (2022, p. 3).
The latter present a spectrum of extremes, such as, on the one hand: “sophisticated behaviors like playing chess, interpreting poetry, and classifying tumors; [and] at the other extreme, ... commonplace activities like babysitting a toddler, preparing a meal, and driving a car.” (ibid.). In other words, to us: activities which all require rigorous spatiotemporal chunking as different levels of abstraction and in different domains. Patterns such as counterfactual reasoning and improvisation in the face of uncertainty deal with exceptions through the _realization-emergence_ of novel patterns. This is precisely what makes pattern _making-following_ or _perception-action_ quite difficult to disentangle from each other: where does an action _begin_, precisely? Where does (e.g., artistic) “creativity” happen: in the _accident_ or in its (social) observation and possibly ensuing exploitation? Or, elsewhere, entirely?^[See: [[Madness]].] AIF and AI thinkers tend to prefer mathematical approximations to the functioning of cognition, and this is not surprising as we could understand mathematics as the system which tracks difference at the ever diverging border between simplicity and complexity.^[Some have claimed mathematics is, in essence, the study of _identity_ itself, a concept which can often be understood as both simple and complex, depending on how one interprets it. Following Deleuze I would prefer to say mathematics is the eternal attempt at the formalization of _difference_, rather. And of course, complexity and simplicity are key mathematical terms. In terms of compressibility: 1 simplifies eternity into a predictable unit, likewise: it can also be exploded into eternity by dividing it, positioning it on a number line, turning it into 1.000.000.000.000, thus complexifying it to different degrees, depending on the intended outcome. 1, 0.3333, addition, factorization, differential equations, etc. 
are not ‘representations’ but types; solutions to predictive tractability, where this tractability depends on the target-desire, its context, and the means to it (its chunking or tokenization). To us, this way, mathematics crosses all domains and creates a *lingua franca* for talking about difference.] In our abstractive, panoramic argument,^[Radically distant and oversimplifying, yet necessary to allow for the interpenetration of complex fields and concepts.] intelligence can be more simply understood as the _organized_ capacity—following AIF: not distinguishing between cells, selves, groups, computers and beyond^[ The pooling of all negentropy into an intelligent whole follows in line with much of what Michael Levin has presented around the concept of _polycomputing_ (more on this in later sections).]—**to perform dynamic simplicity to complexity (and vice versa) operations**, in the **perceiving** and **performing** of patterns, where patterns, again, are repetitions of difference which structure the fabric of perception-action (more on this in the following section).  Following Deleuze (1968), _difference_—as that which enables patternicity—is presented here as a *phenoumenodelic* fact,^[This neologism, by Gabriel Catren (2016), calls into question the distinction between phenomenon and noumenon, between epistemology and ontology. In general, this neologism also takes issue with various ‘realism’ problems: reality is not something ‘out there’; _this_ is reality. What the concept of ‘reality’ traditionally presents, as a problem, is what we have treated in this thesis with regard to abstraction and model-building: we (sometimes) know we ignore that which is behind and beyond us, but we know it remains there while we do not see it (we ignore it, precisely, in order to _see_: perception summarizes and compresses, all the time). 
Catren on immanental phenoumenodelics: “On the one hand, the term immanental encodes the thesis according to which the subject of the transcendental constitution of subjective experience is itself a product of an immanental institution taking place within an impersonal experiential field. On the other hand, the term phenoumenodelics results from the amalgamation of the term phenoumenon (which is itself a hybridization of the Kantian notions of phenomenon and noumenon denoting the programmatic absolution of philosophy with respect to any form of transcendental limitation of experience) and the suffix -delics (which takes the place of the suffix in Husserl’s phenomenology in order to stress that the logos-oriented theoretical mode of exploration should not have—I maintain—any privilege whatsoever in the philosophical activity). To mediate the limits of experience does not only mean actualizing new possible experiences, but also modifying the transcendental frame that demarcates the possible experiences from the “impossible” ones. … The thesis according to which the different abstract modes of exploration of the field (art, science, politics, etc.) do construct vectors of speculative transcendence means that they do not only allow us to perceive, to feel, to understand, and to produce new phenomena, but can also force transcendental variations of the a priori conditions of perceptibility, affectability, conceptuality, sociability, and production. The hybrid neologism phenoumenon (which traverses the Kantian distinction between phenomenon and noumenon) is intended to stress that the “intentional” pole of a “speculative” experience—i.e., of an experience enveloping a shift of the subject’s transcendental structure—is not an objective phenomenon constituted by the subject, and thus placed in a transcendental-dependent Umwelt. Rather, the pole of a “speculative” experience is a trans-umweltic configuration of the experiential field—i.e. 
a phenoumenon—that appears in each Umwelt under the form of a particular objective phenomenon.” Catren, “The ‘Trans-Umweltic Express’” (2016).] that is: as a basic sensitivity to a limit or boundary, the ensuing repetitions of which constitute the ability to create and/or sense evolving patterns: resulting in what we call chunking and parsing. Or, in the sense Chris Fields—inheriting from information theory and from Wheeler—has presented it: if physics can be defined by the basic function of _transfer_ of information, and can therefore be framed around what we often call _communication_, we need to contend with how communication is observer-dependent and inevitably reductive: because an observer _encodes_, in our words _chunks_, “sufficient prior information to identify the system being observed and recognize its acceptable states” (Fields 2018, p. 1). Fields also notes that “[w]hile observers appear as nominal recipients of information in all interpretative approaches to quantum theory, the physical structure of an observer is rarely addressed.” (ibid., p. 2) To deal with the _structure_ or _shape_ or _chunk_ of the universe that we call an observer, in AIF we find the most basic, “almost tautological” (Friston et al., 2022) description of the _boundary_ states of a system—conceptualized in terms of a Markov blanket^[See: [[Markov blanket]].]—as that which essentially tracks difference: a particularly preferred observer state is maintained against a stochastic outside to which it is inevitably coupled.^[This is a _simple_ definition, for more _complexity_ see: Friston et al., 2022.] 
A Markov blanket is the inevitably statistical modeling boundary—we cannot expect to encounter a specific state; it is always a probability—separating internal observer states from external environment states; each “reads” (and writes on) the other, maintaining conditional independence relationships that enable both systems to monitor differences and to exchange information, inscribing each other across this Markov membrane.^[More on this in: [[Holographic principle]].] If sensitivity to difference can be defined as a common denominator for the possibility of _any_ observer (a particle or a society), and therefore any intelligence, then we can perhaps define the intelligent sensibility to complexity (high differentiation; entropy, richness of information) as the representation-experience of an irreducible multiplicity, which the intelligent locus witnesses as differences between inevitably interconnected elements, chunks, which ensue from the perspectival vantage point that is the observer itself. Simply put, an observing perspective cannot but chunk; observing _in terms of chunks_ means being a _chunk_, too.^[To a hammer: everything looks like hammers.] This is how *all* perception is already highly abstract: in order for spatiotemporal difference-tracking to occur, an observer comes equipped with being a differentiated thing already. The emergent interactions between agent and context result in patterns that, in this way, lead to more patterns, each impossible to make ‘fully’ intelligible because they are, precisely, observer-dependent, and the differential condition of already being an observer inhibits this; else deadly dissolution ensues. Or, as we suggested in the context of primary states, we can perhaps frame union or ego-death experiences as resulting from high entropy because they express a tendency _towards_ the dissipation (higher disorder) that surrounds and composes a local, simplifying perspective.^[This is treated in [[Edging thermodynamic equilibrium]].] 
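The conditional-independence claim above can be sketched numerically. The following is a minimal toy model, with distributions invented purely for illustration: a joint distribution over internal, blanket and external states, constructed so that, given the blanket, the internal states carry no further information about the external ones.

```python
import itertools

# Hypothetical toy joint distribution p(internal, blanket, external),
# constructed so that internal and external states are conditionally
# independent given the blanket: p(i, b, e) = p(b) * p(i | b) * p(e | b).
p_b = {0: 0.6, 1: 0.4}
p_i_given_b = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
p_e_given_b = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}}

joint = {
    (i, b, e): p_b[b] * p_i_given_b[b][i] * p_e_given_b[b][e]
    for i, b, e in itertools.product([0, 1], repeat=3)
}

def cond(i, b, e):
    """p(internal=i | blanket=b, external=e)."""
    return joint[(i, b, e)] / sum(joint[(j, b, e)] for j in [0, 1])

def cond_blanket_only(i, b):
    """p(internal=i | blanket=b): the external state is marginalized out."""
    num = sum(joint[(i, b, e)] for e in [0, 1])
    den = sum(joint[(j, b, e)] for j, e in itertools.product([0, 1], repeat=2))
    return num / den

# Screening-off: once the blanket state is known, conditioning further
# on the external state changes nothing about the internal state.
for i, b, e in itertools.product([0, 1], repeat=3):
    assert abs(cond(i, b, e) - cond_blanket_only(i, b)) < 1e-12
```

The final loop verifies the “screening-off” property: internal and external states only “read” each other through the blanket.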
Complex patterns that are not present “in” any of the individual differences emerge from the interactions between those differences (Mitchell 2009), whether this is in the visual experience of a fractal, in the long-drawn generative effects of reinterpreting a historical event, or in the systemic effects of an ant colony on itself and its context. ### Further expositions of simplicity and complexity Without stepping into the realm of modern mathematical decidability, compression or computability in detail,^[Something we hope to do more in the future.] we can already see the Leibniz-inspired^[Again, see: [[Principle of indiscernibles]].] Kantian appeal to simplicity-complexity transitions in his _Critique of Judgment_ (1790), where he opens “§62. Of the objective purposiveness which is merely formal as distinguished from that which is material”,^[A different translation of this essay is also included in _Emergence, Complexity, and Self-Organization: Precursors and Prototypes_ by Juarrero and Rubino, 2008, vol. 4 of a longer series devoted to the topic of complexity.] with: &emsp; >All geometrical figures drawn on a principle display a manifold, oft admired, objective purposiveness; i.e. in reference to their usefulness for **the solution of several problems by a single principle, or of the same problem in an infinite variety of ways.** The purposiveness is here obviously objective and intellectual, not merely subjective and aesthetical. For it expresses **the suitability of the figure for the production of many intended figures**, ... this purposiveness does not make the concept of the object itself possible, i.e. it is not regarded as possible merely with reference to this use. > >In so **simple a figure as the circle lies the key to the solution of a multitude of problems**, each of which would demand various appliances; whereas the solution results of itself, as it were, as one of the infinite number of elegant properties of this figure. 
> >(our emphasis in bold). &emsp; The above passage suggests not only that “apparent” simplicity harbors complex infinities, but within it is also a pre-computational suggestion of the limitless purposes for combinatorial formalisms, as mentioned earlier and elsewhere in this thesis.^[See: [[Polycomputation]], [[Computational irreducibility]] and [[Complexity]].] The ability to produce complexity is the symmetric mirror-image of the capacity to produce simplicity: to see how vast spatiotemporal realms can be compressed or potentially actualized entails performing (counterfactual, imaginative, etc.) operations which reveal a pattern-expansion and pattern-compression function.^[Something akin to this, the passages from the universal to the particular, Hui notes was also what Lyotard read as _reflective judgment_, through his interpretation of Kant’s _Critique of Judgment_, where for Lyotard: “reflection pushes the determination of the categories aside without completely negating them” (Hui 2019, pp. 206-7). Hui himself sees the reflective judgment in Kant as a preliminary model of recursivity: “which comes back to itself in order to know itself, while in every moment of reaching out it encounters contingencies” (ibid.).] A circle being a solution to many problems necessarily implies we observe a high degree of complexity in a perceptually “simple” object.^[Which does, for us, merge the “merely subjective and aesthetical” with the “obviously objective and intellectual”. An additional note on the subjective/objective, and on the simplicity/complexity of a circle: [I]t is not a question of saying what few think and knowing what it means. On the contrary, it is a question of someone – if only one – with the necessary modesty **not managing to know what everybody knows**, and modestly **denying what everybody is supposed to recognize**. Someone who neither allows himself to be represented **nor wishes to represent anything**. 
Not an individual endowed with a good will and a natural capacity for thought, but an individual full of ill will who **does not manage to think**, either naturally or conceptually. **Only such an individual is without presuppositions** […] At the risk of playing the idiot, do so in the Russian manner: that of an underground man who recognizes himself no more in the subjective presuppositions of a natural capacity for thought than in the objective presuppositions of a culture of the times, and **lacks the compass with which to make a circle**. ((1968) 1994, pp. 165-166, our emphasis in bold). Deleuze challenges the “conventional”—pardon the circularity—presentations of both supposedly _common_ knowledge and representational thinking. We ought to pay more attention to the different ways in which refusals dynamically unfold, how and where things are cognitively-socially accepted, who “everyone” is and what “they” take for granted. The negintelligible, figured here as someone who stands outside accepted cultural knowledge, can be understood, under the right circumstances, as possibilistically powerful: precisely because they lack orientation, compasses, tools. Refusal to participate in systems of representation and recognition is an entry into new frameworks, rejecting, in this case: individual genius and cultural conformity.] When we say we observe “complexity” in a phenomenon, what we say, in essence—in simplified terms, if you will—is that our perception is subject to inscrutability, to instances of indiscernible noise (e.g., we track an individual bird in a flock, but not all birds simultaneously; yet we can also track the emergent flock, albeit with what we often consider to be “less” accuracy than when tracking the individual bird). It is therefore no surprise that the concepts of complexity/simplicity are plastered over discussions pertaining to large-scale phenomena, emergence, radical abstraction, perceptual excess, etc. 
Important for the concept of negintelligibility, as we will see, is that what is often remarked about complexity is **not** that it is a property of things out there, but a perceptual way of organizing an unattainable or inscrutable whole. Since complexity is definitionally complementary to simplicity, we can simply define simplicity as the quality of something _not being complex_: something limiting its own negintelligible excesses. Depending on the realm, simplicity is often associated with notions such as reduction, compression, parsimony, containment, common sense, usefulness, elegance, clarity, minimalism, high abstraction, foundationalism, etc. Simplicity is the paradigmatic poster child of “scientific explanation” (Occam’s razor, mechanistic logic, explanatory closures of all sorts), and it can be seen as the fundamental drive or quality present when isolated elements are _reduced_ in order to contain them within a definition, a model, a representation, etc., in other words: to sharpen a differential boundary. The word “dog” can be witnessed as *simple* when it is understood as a three-letter categorical designation for a loosely perceived furry phenomenon, but rather complex when attempting to establish _dog_ with high differential precision.^[An impossible feat.] 
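The association of simplicity with compressibility can be given a small sketch (the strings and parameters here are arbitrary, chosen only for illustration): a regular pattern such as a repeated “dog” compresses to a fraction of its length, while pseudo-random bytes of the same length barely compress at all.

```python
import random
import zlib

# A highly regular pattern: its "description" collapses to a short code.
regular = b"dog" * 1000          # 3000 bytes of sheer repetition

# Pseudo-random bytes of the same length: they resist compression.
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(3000))

len_regular = len(zlib.compress(regular, 9))
len_noisy = len(zlib.compress(noisy, 9))

# Compressibility as a proxy for simplicity: the regular pattern needs
# far fewer bytes than the noisy one, though both are 3000 bytes long.
assert len_regular < len_noisy
```

The same asymmetry is what Kolmogorov-style accounts of complexity formalize: the “simple” is that which admits a short description.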
The apparent paradox here is that simplicity demands short-term implementations, that is: spatiotemporally-compressed, multiply-realizable (identity) chunks (which results in the capacity to see a dog in a picture, in a word, on the street, in a dream), but these, we know, merely “hide” spatiotemporal complexity, as they are in essence representative of vast, evolutionarily and conceptually unbounded phenomena (all “dogs” that ever existed and will ever exist, all the way to the beginning of the universe: can that even be said to have any meaning?).^[However, as will be noted later: this expands itself into a generative paradox towards both complexifying and simplifying realms when we consider that the desirability for some sort of simplifications (or complexifications) always also implies their opposite. More on this in later sections.] It would seem that the **longer** it takes—the more spacetime we consider—to make something intelligible, that is: rendered in a way so that its chunk-pattern can be perceived, repeated, modeled, applied, etc.: the more complex we also consider it to be.^[Cf.: [[Kolmogorov complexity]], [[Assembly theory]] and [[Computational irreducibility]].] This drives the predictive impetus to compress, or: _simplify_ it. That which is intelligible, therefore, is that which is patterned into complex expansions or simple closures, and the unpatterned or inscrutable is generally considered beyond the scope of intelligibility; to us, this is the phenoumenodelic negintelligible which drives all patterning. While the _known unknown_ that is excess, friction or noise^[Noise, in its traditional opposition to information, is what is purposefully set aside and ignored as irrelevant when deciding upon a formalization (as pattern, meaning, representation, etc.). Noise can be imagined as located at the border-transition between complexity-simplicity contractions, or between the modeled and the unmodeled (Denizhan 2023). 
For detailed incursions into noise, please see: Prado Casanova 2023, Wilkins 2023, Malaspina 2018. Noise is a notoriously fuzzy term (which denotes, precisely, a state of informatic fuzziness), and it is brought into the discussion here to further elucidate our argument, as that which makes apparent the transits between the determinations of “complex” and “simple” phenomena. See also: [[03 Semantic noise]].] exhibits itself in any attempt to formalize a perceived pattern, our argument about the negintelligible tries to push this one step further (towards presenting the problems which circumscribe the _unknowable unknown_), as will be further explained in the sections that follow. A very relevant example of a complex-to-simple (and vice versa) transition in the conceptual realm, as it relates to the topics at hand, is the term/idea/phenomenon of entropy. The concept of _entropy_ was introduced by Rudolf Julius Emmanuel Clausius to conceptually isolate something which he did not know he was defining (Howard 2001, p. 505). It was, essentially, just a new name for _change_: _transformation-content_ (_Verwandlungsinhalt_), for which Clausius later coined _entropy_. Clausius had an algebraic analysis of what was implied by an effect before he had a name for it, and the concept received different formulations and conceptual orientations as it evolved. Slowly but surely, the term saw plenty of transformations, leading to many different types of entropy, now converging, perhaps, on one kind: entropy as observer-relative informational content, a measure of difference ensuing from the communication between entities (see, e.g., Fields 2018). It is also interesting to note how Shannon, too, asked something of the concept of entropy: what to call the **effect** of uncertainty? “Missing information”? In the widely circulated legend, von Neumann advised him: “Why don’t you call it *entropy*? ... 
a mathematical development very much like yours already exists in Boltzmann’s statistical mechanics, and in the second place, no one understands entropy very well, so in any discussion you will be in a position of advantage.” (supposedly ca. 1940, personal communication between the two thinkers). &emsp; ### Active inference and artificial intelligence^[Very briefly presented, the author will develop a more substantial examination in the future.] While this chapter follows the AIF proposal that the minimization of entropy as _surprise_^[Here we take _surprise_ both in the information-theoretic sense of _surprisal_ and in the sense of phenomenal/psychological surprise.] is a fundamental element in the orientation of our autopoietic self-evidencing (Parr, Pezzulo & Friston 2022), it also suggests, following Froese’s criticism of AIF in the same vein, and the observations by Carhart-Harris about the plasticity ensuing from the high entropy of psychedelic states, that if perception-action loops were as clear-cut as proposed, we could not account for the combinatorial explosions (revolutions; paradigm-shifts; large-scale learning) distributed ‘intelligent’ systems seem to display. Relatedly, we could not account for the novelty which emerges precisely from the active _ignorance_^[See: [[08 Active ignorance]].] of the structures active inference tracks. Ignorance is the “shadow” of inference: whatever limits we can define, be it theorems or physical constraints, or ideas about mortality, these circumscribe the possibilistic. AIF presents this widely but tacitly; it remains much too understated. That which is ignored (forgotten, absent, etc.), beyond even the intelligible scope of the domain which is under scrutiny, is precisely the contextualizing relevance which orients the unfolding of complex-to-simple transitions in cognizing systems. 
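The information-theoretic sense of _surprisal_ invoked above has a compact standard formulation: the surprise of an outcome with probability p is −log₂ p bits, and Shannon entropy is the expected surprisal over a distribution. A minimal sketch:

```python
from math import log2

def surprisal(p):
    """Information-theoretic surprise of an outcome with probability p, in bits."""
    return -log2(p)

def entropy(dist):
    """Shannon entropy: the expected surprisal over a distribution."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# Improbable outcomes are more surprising: a fair-coin outcome carries
# one bit, an outcome with probability 1/4 carries two.
assert abs(surprisal(0.5) - 1.0) < 1e-12
assert abs(surprisal(0.25) - 2.0) < 1e-12

# Certainty has zero entropy; the uniform distribution maximizes it.
assert entropy([1.0]) == 0.0
assert abs(entropy([0.25] * 4) - 2.0) < 1e-12
```

Entropy, in this sense, is the “missing information” of the Shannon-von Neumann anecdote: how much, on average, an observer stands to be surprised.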
Intelligence is guided by elusive encounters with the negintelligible: something we can speculatively metaphorize as an _intractable attractor_ guiding the various presentations of surprise (i.e.: guiding intelligent shifts between the complex and the simple), carving not at the joints but at the _edges_ of the patterned in order to permanently redefine surprise.^[Perhaps a new definition of _surprise_ is thus needed, beyond evidence-lower-bound. The negintelligible might be that which defines the bounds of surprise.] Dealing with surprisal/surprise is survival, in organic terms: maintaining a metastable coupling with the environment within specific non-catastrophic (i.e., non life-terminating) ranges by tracking spacetime in patternable but ever-renewing ways. But, the very idea of immortality (or infinity), even though we only know mortal limits, allows for questions about life and its limits. PCA systems navigate variability as a _sensorium_ by modelling “causal” structures that seem to give rise to sensory data, where the generative model has no “direct” access to “reality”. If reality were either completely random or composed entirely of predictable regularities then cognitive organisms would not have evolved: in the first case there would be no use for prediction and in the second there would be no need for prediction (Wilkins 2023, but also Dennett, various). In a completely regular world living systems could just mechanically exploit invariants without spending energy on modelling the causal dynamics of the environment. Again: whether it be by creating difference/noise through mere designation, or by ignoring/excluding it, restructuring it into patterns, etc., intelligibility is permanently structured by that which exceeds it in a fundamentally inscrutable way. Finding “solutions” to this only opens new “pockets”/expands the margins of negintelligibility. 
Negintelligibility is, again, an attempt at giving a name to the basic idea that the unknown is the pulling attractor for the known. We can define the “outer edges” of pattern-detection and formation as that which is negintelligible, as a meta-pattern within the patterned panorama of intelligibility. This project is obviously a vast simplification of a hugely complex landscape. But that is the point. Whether patterns are _real_ remains the larger negintelligible, ontological question. The explore-exploit trade-off—where exploration involves seeking out new information that could update prior expectations, while exploitation involves using existing knowledge to generate accurate predictions—is relevant to ‘adaptive’ (optimizing towards _something_) evolutionary behavior, but the optimal balance between them depends on the uncertainty of the environment, and on the frictions between perspectives which observationally invest themselves by inscribing their existence as they go: generating patterns (and chunks, and meta-chunks, etc.). In terms of the explanation of behavior, the AIF framework might need more detailed explanations of how model-updating happens. Considering a framework where an agent “intelligibilizes” towards a negintelligible attractor places the locus of attention not on generation, but on embedded conditions (features of the _global_ system). AIF proposes that predictions guide perception, and prediction errors update models, but the question arises as to how accurate predictions are generated in the first place. This regressive or circular effect, which we encounter in many other places,^[See, e.g., the discussion on infinite regresses in [[06 Principle of Sufficient Interest]].] 
may not fully “explain” the origins of accurate predictions, but negintelligibility might point us in the shadow direction of this: if we start from the assumption of _error_ or _absence_, and _forgetting_, top-down processing can be seen not just as effectuating estimations or sensing prediction errors, but as the **active search** for these as _edges_ of model expectations, thereby setting more focus on epistemic foraging (Friston). More attention to the inseparability of perception and action inevitably leads to considering something like negintelligibility, because **action perceives** by acquiring new patterns; it is guided precisely by uncertain estimation. Negintelligibility therefore offers a small conceptual addition to language for discussing phenomena that exist at the boundaries of comprehension/compression. “All models are incomplete”, says Carhart-Harris of the entropic brain hypothesis, echoing Box. What negintelligibility designates is precisely this effect: a new model allows for a step _beyond_ the model. Remembering this sets the focus on incompleteness, rather than on transparency and explanation. An AIF definition of intelligence as that which puts its “**best** models to the test” (Pezzulo et al., 2023, our emphasis),^[The authors acknowledge that “Generative AI and active inference are based on generative models, but they acquire and use them in fundamentally different ways”, and their suggestion is that “Future Generative AIs might follow the same (biomimetic) approach—and learn the affordances implicit in embodied engagement with the world before—or instead of—being trained passively.” (p. 2).] as that which **robustly** deals with surprise towards permanence, i.e., survival, can be helpful when comparing the intelligent possibilities between organic and machine-based intelligences, as explored in the cited article. But our language needs to contort towards the edges of “best” practices and “optimization”. 
A key difference the authors highlight is that AIF agents are essentially (embodied) action-oriented, and these actions should have ripple effects in whatever spatiotemporal context the agent is in (ibid., p. 4): “[for] Generative AI, a prompt is the input for which there is a desired output. Conversely, in biological exchanges with the world, inputs depend upon action; i.e., how the world is sampled.” (ibid., p. 5). What is crucial to remark is, first of all, the 5E context, which is always negintelligible; and that _testing_ (as in: putting generative models to the test) here means being flexibly open to surprise, where “the input for which there is a _desired_ output”, both in LLMs and human language users, is always open-ended and dependent on the directionality of questions/queries/prompts, as we briefly expand on below. Reward optimization—or: testing your models to their _teleological_ best—being the driving attractor where ‘artificial’ and ‘natural’ intelligence discussions converge, brings to light the many politiconceptual questions ensuing from: optimizing towards _what_? This is always the question of interests, situatedness, of _observer perspectives_. Importantly: reward-optimization alone severely limits the possible solutions that can be found to problems (Silver and Sutton, forthcoming): if we optimize based on what we already know, we exclude a whole range of alternatives (the recent classic example being unsupervised learning in AlphaGo). Even before considering this, it can also be argued that the drive towards ideas about ‘best’ models or optimization, in general, will always depend on the edge cases, ignorances, forgottenness, biases, negativities, etc., presumed by these models: the negintelligible, all of the possibilities to which an agent (or group) is not currently, directly coupled through observation(s). 
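The point that reward-optimization alone limits the solution space, and the explore-exploit trade-off raised earlier, can be sketched with a minimal epsilon-greedy bandit. Everything here is hypothetical (the arm payoffs, the parameters), and payoffs are kept deterministic for clarity: pure exploitation locks onto the first arm that pays anything at all, while a small exploration rate eventually samples the better arm and switches.

```python
import random

def bandit_run(epsilon, payoffs, steps=5000, seed=7):
    """Epsilon-greedy on a toy bandit with deterministic arm payoffs:
    with probability epsilon explore a random arm, otherwise exploit
    the arm with the best payoff estimate so far."""
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    estimates = [0.0] * len(payoffs)
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(payoffs))          # explore
        else:
            arm = estimates.index(max(estimates))      # exploit
        reward = payoffs[arm]                          # deterministic, for clarity
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

payoffs = [0.1, 0.5, 0.9]   # hypothetical arm payoffs; arm 2 is best

# Pure exploitation locks onto the first arm that pays anything at all.
pure_exploit = bandit_run(0.0, payoffs)

# A small exploration rate eventually finds the better arm.
mixed = bandit_run(0.1, payoffs)

assert abs(pure_exploit - 0.1) < 1e-9
assert mixed > pure_exploit
```

The sketch is only a caricature of the trade-off, but it shows the structural point: optimizing on what is already known forecloses the alternatives that exploration alone can reach.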
Or, as Yağmur Denizhan proposes: intelligence is that which exists between the modeled and the unmodeled; models are the _byproducts_ of intelligence, not intelligence itself (2023). The model can be understood as a temporary memory of an observation, which is instantiated at the system-level rather than encoded at the ‘individual’ level (a difficult notion, considering distributed Markov blankets and/or decomposing -dividuals). Having to treat the “optimal” as an end-state is why it is so difficult to define intelligence: it is an open-ended search-process, a spatiotemporally-capturing activity, not a spatiotemporal _capture_ (representation, model, result, etc.). It is the result of moving through seeing _as_, at ever-higher abstractions. Complexity-to-simplicity (and vice versa) “intelligent” operations, in both AIF and AI, are transitions in and out of noise: reduction; compression; overfitting; overgeneralizing; in essence, whenever the universal/type and particular/token are confronted with the (parsing) _possibility_ of resolution, always encountering a novel problem. A crucial negintelligible phenomenon is that of **questioning**, especially in the domain of GenAI: prompts are one of the main _functions_ we use to access information from LLMs. But what are _questions?_^[See also: [[Question]].] An interesting conceptual route to consider is that the fundamental structure of questions performs a things-reading/writing-on-things, complex-to-simple operation: a question presumes an answer could exist, but is not currently known. That is, it reads a particular state of affairs and, by seeking an answer, actualizes a writing. A question is, like an observer, a partially-observing perspective. A question simplifies a spatiotemporal domain of investigation by allocating the prediction of complexity towards a particular speculative orientation. 
This could speculatively, interrogatively, suggest that we reconceptualize questions as having a generative model—_thoughts are thinkers_: the dictum often cited by Levin as an insight from William James—rather than being ‘passive’ structures. Questions perform a specific complexity-to-simplicity function: they narrow a vast spatiotemporal domain into a more manageable/chunked focus by channeling predictive attention toward specific areas of uncertainty, while remaining, in essence, something which channels *that which is uncertain*, or negintelligible. Currently, in light of AI being presented as a (metaphorical) god, mirror, genie, or oracle, etc., this way of framing questions seems particularly pertinent: the question-function connects complex technological and linguistic systems, evolving as it navigates between notions of order and disorder, contracting into simple answers and expanding into complexity through the generation of more questions. A question is a bidirectional translation (between chunks and chunks), a process which maintains the productive tension between pattern-seeking and pattern-disruption that characterizes what we call intelligent systems. So is the _negative_, which, we contend, questions house as a fundamental function. As seen in [[04 Concepts as pre-dictions]]: negation creates a binary for funneling attentional direction. Whenever something is negated, that which is negated (“**not** the dog”) becomes a locus of attention while everything else that exists in negation is put on hold as (temporary) noise or irrelevance. Multistep reasoning in LLMs is notoriously difficult because at each step, plenty of possibles unfold. Questions create the negative cuts necessary to constrain information. 
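The idea that questions constrain information through negative cuts can be given a toy illustration (the hypothesis space and target below are invented for the purpose): each yes/no question halves a candidate space, so a space of 128 candidates is resolved in log₂(128) = 7 cuts.

```python
from math import ceil, log2

# Hypothetical hypothesis space: the candidate answers a questioner entertains.
candidates = list(range(128))
target = 93   # the hidden answer the questions are foraging for

space = candidates
questions = 0
while len(space) > 1:
    # A yes/no question is a negative cut: one half of the space becomes
    # temporary noise, the other half the new locus of attention.
    mid = space[len(space) // 2]
    if target >= mid:
        space = [c for c in space if c >= mid]
    else:
        space = [c for c in space if c < mid]
    questions += 1

assert space == [target]
# Each cut halves the space: ceil(log2(128)) = 7 questions suffice.
assert questions == ceil(log2(len(candidates)))
```

Each answer writes back onto the questioner's model, which is the reading/writing reciprocity described above in its most skeletal form.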
A question as an _agent_ reveals that, through observing all the particulars we call “questions”, there is nothing universal in them, but rather an adaptive, spatiotemporally transforming event: a question-function, that which serves to connect complex systems that can read and write on each other (the question-asker seeks from the context an answer which, usually, comes. In sycophantic LLMs: always). A question creates attentional gravity^[As seen in [[11 Post-Control Script-Societies]].] that pulls cognitive resources toward it. The question-function does not just reflect “intelligent” curiosity and stability-seeking, but *actively participates* in constructing it. The question-function also exhibits the paradox of being doubly complex and simple at the same time, depending on our perceptual strategy and interest. If novelty is sought, it is likely we will encounter complexity in the expanding of a negation—or a question; the contrary is true when negation means simplification as _cancellation_. What questions appear to do is determine how space *could* unfold in various ways: answers are effects; questions keep on coming. Time (what matters in complex-simple computations) can therefore be interpreted as the invested, motivated effort (i.e., irruptions, entropic brain) determining the unfolding of events. In “traditional” computers: we often want them to stop, or we want them to render compressed visions that chunk in ways amenable to mortal computers. The difficulty in pronouncing any and all of these things is that they are eternally confused between the complex and the simple: the balancing act lies in tempering, in chunking just enough to get the granular texture this article aims for. That said, formalizing a definition of intelligence as that which orients itself in transitions in and out of entropy towards witnessing complexity and simplicity seems particularly to betray the non-reductive impetus of the chapter. 
But, in the interest of spatiotemporal compression oriented towards the simplification of complexity in the form of this text, it will have to do, for now.^[There are also many more things I would have liked to frame better, reference better, explain better. All I can say is I am incredibly pressed for time.] &emsp; ### Conclusion: The negintelligible changing of change, patterns, chunks, textures, granularities, models and modes &emsp; >The sublime is like the incomputable of the Kantian machine: When the recursive algorithm is no longer able to arrive at a halting state, it instead triggers a violent reaction. > >Hui 2019, p. 208.^[“The sublime is, for Lyotard [reading Kant], that which is not representable—the unrepresentable, or, in machine language, the incomputable. But again, it is not the ineffable, since there is an interest in the sublime in Kant, as Lyotard discovered. Kant says that the sublime “indicates in general nothing purposive in nature itself, but merely in that possible use [Gebrauch] of our intuitions of it by which there is produced in us a feeling of a purposiveness quite independent of nature.” (ibid.) This is what we have been referring to as the refinement of **function**: when recursive phenomena reiterate and evolve perspective upon perspective, they evolve the very function of perspective. Walking in 2025 is not the same as it was in 850, however: what remains is displacement, as a function of (apparently) spatiotemporally-bound entities which somehow seek not to be where they are/were.] 
&emsp; What makes pooling these effects into one concept interesting is that we can speak of “negintelligible” phenomena spanning philosophy (e.g., contingency), cognitive science (e.g., salience and ignorance), art (e.g., the avant-garde), psychoanalysis (e.g., “lack” or the unconscious), mathematics (e.g., invention versus discovery), biology (e.g., exaptation), linguistics (e.g., reality/language (mis)representation), and even religion (e.g., negative theology), suggesting a soft, general concept for a principle of maximum uncertainty (as indifference biased towards indifference) which sets more explicit focus on what I see as a tacit, fundamental pattern driving the process we call “intelligence”. There are no easy benchmarks for intelligence testing because intelligence, like any other adaptive byproduct of evolution, is guided by how it manages to survive contingently-imposed or self-incurred risk-taking (under AIF both are the same thing: self-evidencing under multilayered uncertainties). This creates a ‘strange loop’ (Hofstadter), or _metacognitive_ seeing-as condition, in systems such as science, where no static representation can do justice to transformation, precisely because (notoriously, by definition) it evades representational capture; this is why we deal in probabilities: it is in their capacity to misrepresent that we get possibilities for action and disambiguation. The intelligible is all that is presently understood, but it tracks the negintelligible, i.e., that which is _not_. 
Complex systems such as individual human beings have attracting sets they return to, but when we talk about the phenomenon of intelligence as a large-scale, adaptive, distributed system, we can frame it as successful exploration (as a lucky effect in evolution): it is something attracted towards the **familiarity of the unfamiliar**, as intelligence results from risk; otherwise we would not witness the types of adaptive changes we associate with intelligent phenomena. We can think of social groups becoming highly advanced technological systems, for example, or the fact that mathematics is taught at school because something _else_ can be done with it (whether this is in engineering, or in advancing the field of mathematics itself: novelty/change is expected). It is always in the service of something ‘risky’, adding **and** removing model parameters, that we ascribe the quality of “intelligence” (that which renders things intelligible). Entropy, as unforeseeable changes in patterns around an intelligent locus—noise that creeps in; the irrepresentability of the event; catastrophes; criticality points; crises; paradigm shifts; etc.—dictates the ensuing adaptations of said intelligence. The proverbial frog that gets cooked in a hot bath is a (wrong) simplification of a complex phenomenon: the frog does not, in fact, stay in the bath.^[Gibbons (2007) states: “I have heard the anecdote many times, including in a sermon {where} the big bullfrog in a bucket of water that was being heated was a metaphor for how gradual habituation to a devilish situation leads to acceptance of an even worse one. {...} I personally have boiled no frogs, so I have no empirical evidence as to a frog’s response to gradually heated water. But {...} Dr. Victor Hutchison at the University of Oklahoma {says:} “The legend is entirely incorrect! The ‘critical thermal maxima’ of many species of frogs have been determined by several investigators. 
In this procedure, the water in which a frog is submerged is heated gradually at about 2 degrees Fahrenheit per minute. As the temperature of the water is gradually increased, the frog will eventually become more and more active in attempts to escape the heated water. If the container size and opening allow the frog to jump out, it will do so.” Naturally, if the frog were not allowed to escape it would eventually begin to show signs of heat stress, muscular spasms, heat rigor, and then death.” Besides attending to the frog, another relevant observation here is the mention of the devil as a simplification of complex social phenomena. In other contexts this article has been presented with the title of “Satan’s schematism”.] The apologue is, moreover, a simplification of a complex social phenomenon pooling all manner of things together, like contradictory desires (a kind of undecidability between simplicity and complexity), negligence (the act of neglect, induced or experienced) and conformism (accepting a sociocultural stabilization of complexity without interest or desire to predict it otherwise). These are, in turn, ‘simple’ words that designate further complex phenomena. These phenomena are only intelligible as dictated patterns: we are told what something is or what to do with it, and sayings-doings, as we know, just like perception and action, are difficult to disentangle: yet the concepts continue to develop. Negligence, “in general”, as a conceptual gesture, for example, means we should imagine what ‘neglect’ is, as we have possibly experienced several facets of this curious phenomenon (e.g., in being neglected or perhaps neglecting something/one). 
The degree to which we normally tend to say we ‘understand’ the concept, the degree to which it is intelligible, depends on our chunking of its granularity: a fine or a coarse informational texture (i.e.: a complex or a simple pattern we linguisticognitively, predictively latch onto—or which latches onto us, from the agentic perspective of linguistic functions). That which becomes intelligible is thus patterned to a low or high degree of granularity (again, to remind: our definition of intelligence includes ‘mere’ sense perception, as our common denominator is that of sensibility to difference), and the unpatterned or inscrutable is generally considered beyond the scope of intelligibility, when the pattern is ‘mere’ noise or changes on the basis of being coupled/subject to stochasticity. We have now understood the transitions between complexity and simplicity as always costing something: perspective ensues from reduction **and** entropic investment. Noise is the “evidence” of negintelligibility, presenting itself as phase-transition, alterity, intractability, apparent disarrangement, etc. A focal locus implies that no Borgesian Funes nor Aleph is possible. The concept of negintelligibility is employed here as an unpredictable meta-pattern, as that which resists the patterning persistence of intelligence (when intelligence is considered as that which synthesizes patterns on the basis of phenomenal processing). That which is intelligible is that which commits to a pattern in otherwise unstructured noise, but the transitions between complexity and simplicity which intelligence permanently undergoes are dictated by the meta-pattern of negintelligibility, as that which drives the intelligibility of world-processing.  
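The claim that simplicity amounts to compressibility, and that noise resists patterning, can be given a minimal operational illustration (a sketch, not part of this argument’s formal apparatus; the strings and the choice of a general-purpose compressor are assumptions of the toy): a highly patterned string shrinks dramatically under compression, while injected randomness barely compresses at all.

```python
# A toy proxy for pattern simplicity (illustrative assumption: zlib as the
# "pattern-detector"): patterned data compresses well, noise does not.
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size (lower = more patterned)."""
    return len(zlib.compress(data)) / len(data)

patterned = b"abc" * 10_000  # a simple, repeating pattern
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(30_000))  # unpatterned noise

print(f"patterned: {compression_ratio(patterned):.3f}")
print(f"noise:     {compression_ratio(noise):.3f}")
```

On this toy reading, the incompressible remainder is the compressor’s own “negintelligible”: what resists its patterning simply passes through it unreduced.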
To say semantic “nonsense” which is nevertheless syntactically correct, as in the infamous “colorless green ideas sleep furiously” (Chomsky 1957), is to negatively designate what meaning is supposed to be (“not this”), thereby not only generating a complex new realm for meaning to emerge from such a sentence and its proposal as nonsense, but also revealing how differential closure towards intelligibility trips on its own trick. If nonsense were truly possible, wĕ woudlnt be able to reád whhen aa signnĩfcnt amonùt øf niosè ör rendündcndañcy ìss injeçthed 1nto a sÿtstæm. Everything is made _into_ sense, i.e., legible, intelligible, but its dynamic transformation is pulled by the negintelligible, which permanently slips away as sense is made out of experience. Because of this definition of intelligence, which permanently deals with noise (designates it, creates it, ignores it, avoids it, restructures it, etc.) in its complexity/simplicity transitions, we can define the ‘outer edges’ of pattern-detection and dictation as that which is negintelligible. That is, if intelligibility is the comprehension of (meta)stable patterns, then it is precisely in the ever-returning “pockets” or margins of inscrutability that we find the perseverance of a larger—and perhaps truly fundamentally unintelligible,^[A note on why the prefix neg- is used instead of the already existing un-: not only for analogy-making purposes (negentropy), but also because something being unintelligible (to me) implies that an aspect of a domain is (temporarily) inaccessible to intelligence, but might somewhen become actualized as intelligible. The negintelligible, on the other hand, is a pullback attractor towards which the intelligible moves, but, like a polar opposite, the two can never meet: the magnet keeps moving away.] without any possibility of a PSR—meta-pattern or great attractor. 
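The redundancy point above admits a small toy demonstration (illustrative only; the vocabulary and the hand-corrupted tokens below are assumptions of the sketch): because natural language is far from maximally compressed, noise-injected words tend to snap back to their nearest established pattern.

```python
# A toy sketch of linguistic redundancy: corrupted tokens are recovered by
# nearest-pattern matching. Vocabulary and corruptions are illustrative.
import difflib

vocabulary = ["colorless", "green", "ideas", "sleep", "furiously"]
noisy = ["colrless", "grren", "ideaas", "slep", "furiosly"]  # hand-corrupted

recovered = [
    difflib.get_close_matches(word, vocabulary, n=1, cutoff=0.6)[0]
    for word in noisy
]
print(recovered)  # each noisy token snaps back to its nearest pattern
```

The recovery works precisely because the signal is redundant: everything is made _into_ sense, even under injected noise.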
&emsp; >[If] it is a question of rediscovering at the end what was there in the beginning, if it is a question of recognising, of bringing to light or into the conceptual or the explicit, what was simply known implicitly without concepts – whatever the complexity of this process, whatever the differences between the procedures of this or that author – the fact remains that all this is still too simple, and that this circle is truly not tortuous enough. > >Deleuze in _Difference and Repetition_ (1968/1994, chapter: “The Image of Thought”). &emsp; A few side details—the philological and etymological relevance of “negintelligibility”, the term itself—are interesting to mention. To start, it rings as _counter_ **legibility**, and also as **negligibility**. Also, we find it humorously opportune that it is a bit difficult to pronounce. In presentation contexts other than this one, negintelligibility has also been framed under the title of “Satan’s schematism”, to refer to the impossibility of getting at a schematic take on our perception-conception. The devil, a trickster, is always in the (informatic) details, as seen through all the demonic references in science and physics: _poltergeists_. Presenting the image of Satan also immediately sparks a curious search for the negintelligible: like the concept of the ghost, it is announced as something mythic we could take to be present, yet which is never _really_ there. Intelligibility, in a scientific context, refers to the quality of **X** being understandable or comprehensible by the intellect: committing to a certain flavor of the Principle of Sufficient Reason (PSR), that is: deciding on closure or reduction (on the simplification of complexity) even before it is demonstrably possible. 
According to Peter Dear, the roots of the concept can be traced back to associations implying mechanization, the ‘clear and distinct’ picturing of ideas, order through categorical distinctions, and the possibly distributed social integration of these (2006, pp. 24-26, p. 179). Intelligibility is thus often associated with taxonomy and organization, (logical) structure, and communicable coherence, all of which are meant to facilitate the acquisition, application, agility, and aspect of already established knowledge (i.e.: chunked, formalized patterns). According to Henk de Regt, in _Understanding Scientific Understanding_ (2017), the integration of reasons _and_ predictions into scientific functions renders intelligible that which is sectioned out by a hypothesis (or question), but importantly: its appearance very much depends on the abilities and established patterns of its witness(es). Intelligibility is thus under permanent dialogical reconstruction, rendering a pragmati(ci)st science, and what drives it is not necessarily the rigid re-re-re-reformulation of established patterns, but exactly all the places at which these patterns break down: errors, falsifiability, paradigm shifts, discovery, invention, etc. Negintelligibility can be considered an attractor driving construction through deconstruction, which—since we cannot help but represent or formalize it somehow—represents the unknown-unknowns that creep up on patterned knowledge and resist formalization (at least for moments: once they are formalized, or patterned, they are no longer negintelligible). It is in the hope of “constructing a plane of consistency that allows us to access the Unknown through the symbolic world that we have inherited and within which we live” (Hui 2019, p. 195), that we propose this. 
With this definition of negintelligibility in mind, intelligibility can be considered an abstract analogue to entropy (entropy being a measure of _disarrangement_ in a system, but, crucially, one we can predict with near-absolute certainty that we will encounter, at least in terms of probabilities). If the ability to latch onto patterns is driven by predictability, then what drives change? In this analogy, negentropy would be the process of reducing disorder and increasing predictability (by means of making things intelligible), while negintelligibility would be the border-process (Denizhan 2023) of challenging or disrupting predictability. If negentropy refers to a system’s ability to spontaneously increase its inner organization and decrease its entropy, then negintelligibility could refer to that which spontaneously increases complexity and thus **a low or high degree of unpredictability**, rather than following an orderly and predictable pattern of increasing, organized intelligibility. Intelligibility is that which provides a common language, a communicative context: the ability to modulate the behavior of systems based on their assumed underlying patterns, involving anything from questions to conversations to mathematical models. Negintelligibility, on the other hand, can be proposed as an outside attractor to these predictions, in the form of the challenges/absences/disruptions that arise in any of these processes, as is evidenced by the revolutions in the arts, in science, in philosophy, in language at large, etc.: things change, and almost always unforeseeably so, **always becoming _retrospectively_ engulfed by intelligibility**. The concept of negintelligibility suggests, or rather paradoxically predicts, that intelligibility is not always optimal, desirable, or, least of all, possible. 
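The entropy side of this analogy can be made concrete with a short, standard computation (a sketch in the information-theoretic register, not this chapter’s own formalism; the distributions below are illustrative assumptions): Shannon entropy quantifies unpredictability over a distribution of outcomes, with indifference (the uniform distribution) as maximum uncertainty.

```python
# Shannon entropy as a measure of unpredictability. The two distributions
# are illustrative: one near-certain outcome versus pure indifference.
import math

def shannon_entropy(probs):
    """Entropy in bits; 0 = fully predictable, log2(n) = maximally uncertain."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

predictable = [0.97, 0.01, 0.01, 0.01]  # one outcome dominates
uniform     = [0.25, 0.25, 0.25, 0.25]  # indifference: maximum uncertainty

print(f"predictable: {shannon_entropy(predictable):.3f} bits")
print(f"uniform:     {shannon_entropy(uniform):.3f} bits")  # exactly 2.000 bits
```

In the analogy’s terms: negentropic work pushes a system towards the low-entropy, predictable profile; the negintelligible is whatever pulls it back towards indifference.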
In philosophies of negation, forgetting and absence, the forgotten, ignored, irrelevant, insignificant or seemingly unimportant is championed in comparison to other, more socially ‘prominent’ pattern-proposals, established ways of chunking. Because they are shared, they are coarse: they ought to reduce uncertainty for extremely different types of perspectives, but the new is always singular. However, even though this chapter is guilty of the same charge, of chunking something unchunkable, it is important to mention that the designation of the negintelligible fails to do it any justice. If negintelligibility can be related to concepts such as absence or the negative, it is because the negative or the absent can be seen as attracting forces which perpetually mock the established patterns of intelligibility, as true illusions. Illusions are here^[And elsewhere/everywhere in this thesis, see: [[Illusion]], [[04 Concepts as pre-dictions]].] not employed pejoratively but, again, as meta-functions representing moments in which, e.g., perception can perceive itself perceiving. The **contra**-diction of paradoxical patterns creates ‘pockets’ of negintelligibility that continually stretch the edges of the intelligible. Negintelligibility is a concept that designates, as one pooling attractor, the driving force of the absent, the forgotten, the ignored, the negative: the framing of that which generates the intelligible as something which mocks.^[Or derides. See?] _The negintelligible poltergeist in the intelligible machine._ It is still a vague proposal and “merely” a conceptual speculation. I am negintelligibly guilty and sorry for this. Negintelligibility is introduced out of the frustration and love of being a filter stuck with inherited chunks, and having to parse them. 
If the systems of knowledge we have (often) advance their function in a revolutionary fashion, always-already with an incessantly intelligible focus on their target phenomena as that which can be patterned, as things which can be (probabilistically) expected, is it not more interesting to designate an absence, negativity, incoherence, or incompleteness which drives them as the brilliantly missed meta-pattern? Again, of course, this proposal is nothing new.^[It is the diagnosis of our condition as contingently tempered, because of things being, essentially, unpredictable. More is always possible because the future is open, from our vantage point.] Here we simply want to create a meta-chunk. By following this route, thinking about time and its phenomenocognitive exteriorization, we may become capable of thinking-working from the future towards the past: instead of “coming from the past,” a complex-to-simple operation with its prime focus on prediction, we fall into the past from the future, given that our attractor lies there.^[This is not unlike the perspectivism change—from a god-given earth where humans appear, to a human-appearing and the world coming _later_—suggested by Viveiros de Castro and Danowski in _The Ends of the World_ (2016), cited in Hui (2019, p. 238).] This might lead thought not only to radical perspectival change, as we actively search for and attempt to _inhabit_ this pattern, but also to doing things with different types of care and attention, varying speeds of research and, perhaps, less arrogance and conviction that we have a model that works. It might be that we can actually reverse (even if in ‘mere’ hallucination) the synthesis of time as we attend to this impossible-to-attend-to attractor, because our intuited orientation is toward something other than the already-seen. 
Rather than assuming that we know where we might be going because we’ve already chunked and patterned, a novel attention to spatiotemporal ignorance can give us ever more important evidence of the negintelligible. _Whatever that is._ The conceptual proposal is also to consider, again, an analogy between negintelligibility and negentropy. Negentropy is also not really something, yet it is a useful concept. Why say “negentropy” and not just life, after all? What negintelligibility does as a conceptual proposal is to ascribe agency to something beyond the agent and the environment, something which couples them, and it might be just another name for _entropy_, or another _kind_ of entropy, after all. While entropy signifies the advancement of the second law of thermodynamics as an unavoidable pattern in our universe (which, being the apparently infallible rule, can be considered one of the most _predictable_ patterns we know), and negentropy creates—Markov-blanketed—“pockets” of novel organization against it, negintelligibility would be that which creates pockets of novelty within the advancing pattern of intelligibility, existing within (or as a challenge to) negentropic systems. If intelligibility refers to the ability to understand or make sense of something (to pattern towards simple or complex spatiotemporal captures), and, in the context of negentropic systems, intelligibility is arrived at by the _intelligent_ quality something possesses which allows for spatiotemporal chunking, then negintelligibility would be the process which induces novelty within this organized structure, potentially leading to the development of novel patterns. 
In our argument, the explicatory analogy at play is the following: in the cognitive/self-evidencing/autopoietic landscape, negintelligibility (meta-pattern-restructuring) is to intelligibility (pattern formation) what negentropy (the self-organized, autopoietic, cognitive, etc. itself) is to entropy. If entropy is what we witness as the one pervasively predictable pattern, and negentropy/free energy-minimization is the stabilization of variable patterns within entropy, then it is in the drive towards intelligibility within the negentropic that we necessarily find the negintelligible, as that which persists in ever-newer pattern-restructuring, working against the grain, literally, remodeling the established and thus intelligible. Negintelligibility, perhaps as the function which changes functions, might help bridge theoretical gaps, offering a vague yet actionable language to point at phenomena that exist at the boundaries of comprehension, particularly where traditional meaning-making seems to break down. As we saw in previous chapters, grounding aspects of it in AIF helps explain aspects of paradoxical experience. If high-entropy states create experiences that resist habitual perceptual apprehension/representation because they temporarily suspend the compressive, narrative-organizing functions of normal consciousness (the entropic brain), then negintelligibility signals this dimension. If (interested, invested, involved) consciousness evolved as a prediction-error minimization system, negintelligibility designates the qualities of learning, plasticity, and novelty that facilitate adaptation. The generative value of errors, borders, irruptions and problems cannot be overstated, given our limited comprehension of learning and adaptation: we do not know what we are, nor where we go. If we say we do, we can easily slip into plenty of destructive fascisms, and into reductions of loving diversity. 
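The prediction-error-minimization gesture above can be caricatured in a few lines (a minimal delta-rule sketch, emphatically not an AIF model; the signal values, learning rate, and the ‘world shift’ are assumptions of the toy): error shrinks as a pattern stabilizes, and novelty re-inflates it, forcing the pattern to restructure.

```python
# A minimal delta-rule sketch (an assumption-laden toy, not an AIF model):
# an estimate is updated by its prediction error; error decays as the
# pattern stabilizes, and a sudden 'world shift' re-inflates it.
def minimize_error(signal, estimate=0.0, learning_rate=0.1):
    """Update an estimate from each observation via its prediction error."""
    errors = []
    for observation in signal:
        error = observation - estimate       # prediction error
        estimate += learning_rate * error    # error-driven update
        errors.append(abs(error))
    return estimate, errors

# A stationary 'world' emitting 1.0: the error decays towards zero.
estimate, errors = minimize_error([1.0] * 50)
print(f"estimate ~ {estimate:.3f}, final error ~ {errors[-1]:.4f}")

# The world shifts to 5.0: novelty re-inflates the error before it decays again.
estimate2, errors2 = minimize_error([5.0] * 50, estimate=estimate)
print(f"first error after shift ~ {errors2[0]:.3f}, final ~ {errors2[-1]:.4f}")
```

On this toy reading, the spike in the first post-shift error is the ‘negintelligible’ moment: the established pattern is mocked by the new signal, and restructuring follows.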
As Hui suggests, “new forms of thinking must first render modern technology contingent before elevating it to necessity.” (Hui 2019, p. 233). Beyond the technologies of AIF, and of AI, and considering language our priming, prime, primal technology, we have suggested all of this as potentially interesting modulations which might lead to approaches to said rendering. &emsp; >The fundamental question is the regrounding of technology. We have to emphasize that this is not to add an ethics to AI or robotics, since we won’t be able to change the technological tendency by just adding more values. Instead we have to provide new frameworks for future technological developments so that a new geopolitics can emerge that is not based on an apocalyptic singularity but technodiversity; this is also the reason cosmotechnics is a political concept. > >Hui 2019, p. 233. &emsp; <div class="page-break" style="page-break-before: always;"></div> ### Footnotes