**Links to**: [[12 Negintelligibility]], [[Cognition]], [[Perception]], [[Knowledge]], [[Persistence]], [[Evolution]], [[Predictive processing]], [[Time]], [[Control]], [[Determinism]], [[Noise]], [[Chance]], [[Necessity]], [[Chaos]], etc.

# What is the future? Prophecy, providence, preference, pattern, planning, perfection, persistence, procreation: prediction.

&emsp;

>Any real change implies the breakup of the world as one has always known it, the loss of all that gave one an identity, the end of safety. And at such a moment, unable to see and not daring to imagine what the future will now bring forth, one clings to what one knew, or thought one knew; to what one possessed or dreamed that one possessed. Yet, it is only when a [person] is able, without bitterness or self-pity, to surrender a dream [they have] long cherished or a privilege [they had] long possessed that [they are] set free—[they have set themselves free]—for higher dreams, for greater privileges. All [...] have gone through this, go through it, each according to [their] degree, throughout their lives. [...] There is never a time in the future in which we will work out our salvation. The challenge is in the moment, the time is always now.
>
>J. Baldwin, “Faulkner and Desegregation,” _Nobody Knows my Name: More Notes of a Native Son_, 1961. (Our adaptations).

&emsp;

<div class="page-break" style="page-break-before: always;"></div>

# What is the future? Prophecy, providence, preference, pattern, planning, perfection, persistence, procreation: prediction.

**Summary: the function of _prediction_ can frame almost anything, and this is useful:** The concept of prediction has a long (i.e., _salient_ and _complex_) short-history.^[This is a joke about long-short-term memory.] That history runs from the concept’s belligerent cybernetic origins in the realms of encryption and control, through the evolving paradigms of experiment-replication, translation and modeling across all sciences, and the challenging criticisms of the _planning_ zeitgeist (e.g., in Agre, or as bounded rationality in Simon), all the way to current developments in cognitive theories of predictive coding and philosophical proposals of predictive processing. In recent decades _prediction_ has certainly become even more prominent at the hands of so-called artificially intelligent technologies, and of the impending crises (sociopolitical, technoclimatological, etc.) which demand predictive risk assessment. The interesting limit-point we seem to have reached in the past few years is that, on the one hand, we have amassed complex data landscapes which are predictively probed for known-unknowns (how to get a robot to spatiotemporally direct itself, how to predict the next word in a sentence, how to produce an image based on instructions, etc.), and on the other: both the scientific and the commercial audiences seem to demand unknown-unknowns from AI technologies (how to produce an ‘unexpected’/‘surprising’ cultural product, how to interpret and produce ‘human-like’/‘creative’ texts, how to reveal aspects of protein folding we do not yet understand, etc.). In this article the proposal is made that it is at this limit-point that the concept of _prediction_, by becoming deconstructed and reconstructed again in a new light, might reveal something about the interesting desire for the _unpredictable_ that seems to be inherent to the project of “AI,” and more widely: the historical project of cognition in general.
If prediction is understood as a basic, unavoidable ‘schematic’ operation of _perception-cognition-action_, it might become easier to reveal some of the challenges facing AI and our relationship to its development. As a conclusion we reach an untempered explore-exploit: the future as perceived through AI is as cornucopial^[Generative, vast.] as a beacon of abstraction, and as concrete as mindless copulous^[Coupling, as in [[Structural coupling]], and procreative.] procreation.

<small>Keywords: Prediction, Scientific models, Causal Inference, Knowledge, Predictive processing, Uncertainty.</small>

&emsp;

### [[Postulate]]: Prediction: once you see it, you can’t unsee it. The challenge, really, is to answer the question: what is _not_ prediction?

&emsp;

>“Voordat ik een fout maak, maak ik die fout niet.”^[“Before I make a mistake, I do not make that mistake.” Own translation.] J. Cruijff.

&emsp;

>“Overconfident predictions about AI are as old as the field itself.” M. Mitchell, 2021, p. 2.

&emsp;

And so we embark on an overconfident ride which mistakenly predicts things about both.

<div class="page-break" style="page-break-before: always;"></div>

### Introduction

&emsp;

>But the anguish is that of a mind haunted by a familiar and unknown guest which is agitating it [i.e., the poltergeist], sending it delirious but also making it think--if one claims to exclude it, if one doesn’t give an outlet, one aggravates it. Discontent grows with this civilization, foreclosure along with information.
>
>Lyotard, 1991 (1988), p. 2.

&emsp;

If one were to summarize the impetus driving contemporary machine learning (ML) and 20th/21st century science under one heading, it would be difficult to argue against this being “prediction.” Vernacular conceptualizations of prediction have a long history: as premonition, providence, prophecy, divination, fate, destiny, etc. In comparison, its scientific conceptualization is more recent. From early cryptologist al-Kindi (801–873), through Pascal and Fermat, to today: measuring/tracking phenomena in terms of their predictability indices (by way of statistical inference and probability) is now a prominent conceptual tool spanning all scientific-modeling. In recent decades, the concept has become even more prominent at the hands of so-called artificially-intelligent technologies, many of them potentially useful for augmenting humanity’s capacity to risk-assess impending (sociopolitical, technoclimatological, etc.) crises. Given the increasing implementation of ML-techniques across scientific-modeling (Buijsman 2023, Le Page 2024, 2025), it will be argued here that a technical-philosophical reconceptualization of “prediction” in the ‘broadest’ sense (Sellars 1962) is needed for clarifying certain driving factors behind the construction of the future. However, this is where we reach an impasse. The predictive limit-point that modeling seems to have reached is that, on the one hand: we have amassed complex data landscapes which can be probed for _known-unknowns_, and on the other: many of the complex problems (probabilistic) modeling would provide meaningful answers to are intractable (van Rooij et al., 2023) and/or unexplainable (Buijsman 2023). At the same time, we seem to demand the prediction-production of _unknown-unknowns_ from ML technologies. Scientific modeling-crises reflect the impasse: from Ioannidis’ exposure (2005) that the majority of behavioral models cannot be replicated; to the translation between non-human/human animal models in pharmacology; to the metaphysical implications of modeling fluidity with the assistance of software.
We thus demand predictability when we know for a _fact_ that we are subject to modeling irreducible aspects of uncertainty: this makes ML subject to high suspicion in any of its claims to model the future. The line between suggestive/approximate modeling and self-fulfilling prophecy becomes blurred. This problem is nothing new. In philosophy, these frictions have been brought to our attention by D. Hume (induction), Hegel (who thinks abstractly?), W. James (vicious abstractionism), Nishida (subject/object), A. N. Whitehead (misplaced concreteness), G. Box (models), A. Korzybski (map-territory), Bateson (tautology or generalization)^[“The “laws” of probability cannot be stated so as to be understood and not be believed, but it is not easy to decide whether they are empirical or tautological; and this is also true of Shannon's theorems in Information Theory.” (Bateson 1971, p. 4).] and many more (Borges, Wittgenstein, McLuhan, Ashby, the list goes on). Essentially, for our frame, the gripe is a problem of the instantiation(s) of predictive legitimacy: the smoothing or generalizing tendency inherent in abstractions is a problem _vis-à-vis_ the evolving universe: try as we might, no event occurs twice (Heraclitus), and we are stuck with motion (Nail 2024).^[Which necessitates an encounter with the modulation of modalities: “For in itself, a mode is nothing but what will be; it is pure ‘advent’. ... the mode of making something is both less and infinitely more than the thing made ... Passing between being and becoming, modality is the potential (_quod_) by which a thing (_quid_) becomes what it is, by which it is differentiated from abstract flux, and by which it continues to communicate with other things in a metastable state ... But modes are not categories, that is, concepts applicable to all possible things” (van Tuinen 2019, pp. 11-2). To van Tuinen, modes are representative of the inseparability between actual and potential, and point precisely to the generative in the potential. We frame this as prediction and gravitation (see [[10 Bias, or Falling into Place]]): “nothing is ever fully concrete in itself. Things are always affected by a certain attraction, a tending to whatever tends their way. They are part of a series in which there are limit states, but no unique and superior ends.” (ibid., p. 13). The way modes are framed is not unlike how we frame _function_.] Thinking about the current paradigm, where ML-assisted predictions imply the production of the future, not just its armchair interpretation, this is where we ought to start thinking about the abstracting, predictive possibilities afforded by ML. Entertaining high speculation: what would it mean to expect of “AI” that it ‘understands’ this negative philosophical fallacy and/or predictive meta-concept of world-model frictions? That it really learns to say “no”? In this piece the proposal is made that it is at this limit-point that the concept of prediction, by becoming deconstructed and reconstructed again in a new light, might reveal something about the interesting desire for the _unpredictable_ that seems to be inherent to the project of “AI,” and more widely: the social, historical project of cognition in general. It is, as mentioned, not just known-unknowns we want (“how will this storm system unfold?”) but particularly the virtual, the creative, the artistic, the radically unexpected: the unattainable generative friction between the model and the unmodeled (Denizhan 2023).
Following active inference as it ensues in predictive processing (Parr, Pezzulo & Friston 2022, Clark 2023), the concept of prediction will thus be treated in the amplest sense, encompassing all manner of function-concepts such as _(meta)learning_: as in heuristics for problem-solving under uncertainty; _reliability_: as in striving for certainty; _causal inference_: as in (inventing) predictably tractable models; _robustness_: as in predictable foundations; _explanation_: as in dialogically verifying models; _trust_ and _transparency_: as in ensuring collaborative predictability and tractability; and _understanding_ (see: _explanation_). If, roughly put, “[a]n explanation **adds information** to an agent’s knowledge” (Halpern & Pearl 2008, our emphasis), and knowledge (as a _generative model_, Parr et al., 2022) and explanation (as social, _dialogical verification_, Dutilh Novaes 2022) are inseparable from understanding (as the process capable of **revealing** the _pragmatic intelligibility_ of scientific theory, de Regt 2019), then grounding these concepts under the heading of _**prediction**_ seems to provide a parsimonious vantage point from which to assess the implications of a lot of what we seem to desire from ML in (scientific-)modeling. This is because we can distill, from each of these understandings, a _function_ which seeks the transfer of a pattern, contained as a chunk, into something projectible yet impossible to realize,^[_Virtual_, if we wish (Deleuze 1968). See: [[Function]].] since its expression is precisely unfolding as competing probability mappings:^[This is given we have chunks, probability textures to begin from at all. Which is why AIF is only tractable in modeling when it simplifies the past into particularly circumscribed probability conditions: contractions of the past. See, e.g., Smith et al. 2022, and Andrews 2021.] a strangely-looping, retroactive, self-evidencing history. To elucidate, this is because “adding information” means changing the past (and expanding the future), which is precisely how a generative model _generates_, how the dialogical unfolding of a socially-embedded reason advances and how scientific understanding creates pragmatic traction. This chapter presents an image of prediction which is open-ended, countering the charges the concept has received as possibilistic reduction or predetermination.^[Exemplified here by Lyotard: “[I]f one wants to control a process, the best way of doing so is to subordinate the present to what is (still) called the ‘future’, since in these conditions the ‘future’ will be completely predetermined and the present itself will cease opening onto an uncertain and contingent ‘afterwards’.” (Lyotard 1991 (1988), p. 65).]

The concept of prediction has a long short-history. Long in the sense that much has transpired since its semi-recent origins in the literal and metaphorical trenches of WWII. It is always worthwhile repeating how cybernetics, the science of communication and control, as well as encryption, the practice of (re/de)coding, have been driven almost exclusively by the desire to develop enemy-outsmarting and obliterating war strategies (Farocki 1989, Galison 1994, Halpern 2015, Bucher 2018). The very concept of a “black-box”, i.e., an object enjoying a high degree of encoded unpredictability, literally refers to “a physical black box that contained war machinery and radar equipment during World War II.” (Bucher 2018, p.
42).^[Bucher quotes von Hilgers (2011), who “describes how the black box initially referred to a “black” box that had been sent from the British to the [United States (original says “Americans”)] as part of the so-called Tizard Mission, ... This black box, which was sent to the radiation lab at MIT, contained another black box, the Magnetron. During wartime, crucial technologies had to be made opaque in case they fell into enemy hands. Conversely, if confronted with an enemy’s black box, one would have to assume that the box might contain a self-destruct device, making it dangerous to open. As a consequence, what emerged was a culture of secrecy or what Galison (1994) has termed “radar philosophy,” a model of thought that paved the way for the emergence of cybernetics and the analysis and design of complex “man-machine” systems. The black box readily became a metaphor for the secret, hidden, and unknown.”] In our current AI-obsessed context the metaphor of the black box is pervasive and it, too, _camouflages_ its much-indebted war origins. War, abstracted as two or more entities strategically predicting their control over resources, is a well-known novelty-producing imitation game. Similarly long (complex) and short (very recent), (crises of) replication and modeling across all sciences currently reflect a drive towards tempering problems of prediction: from the radical shift brought about by Ioannidis’ exposure (2005) that a majority of behavioral science cannot be predictively replicated (about 60% of all psychology studies, for example, according to Nosek et al., 2015); to the conundrum of how to accurately translate between non-human animal and human-animal models in, e.g., pharmacology (bringing into question the _relevance_ and _legitimacy_ of these studies); all the way to the metaphysical implications of modeling physical phenomena such as fluidity with the assistance of software: here, the reliability of the resemblance between the modeled and the model will necessarily be out of phase, because silicon-based computational models are useful insofar as they _approximate_, but can currently not _imitate_, emergent phenomena such as fluidity.^[The question is also: what would imitation entail? Complete copy/reproduction? See: [[Identity]].] In thinking about the current paradigm, where AI prediction implies the _production_ of reality, not just its interpretation, this is where we might start thinking about the abstracting, predictive possibilities afforded by AI. What would it mean to expect of AI, if that is what we are after, that it doesn’t _replicate_^[Mind the pun.] this common fallacy? Or that it understands the meta-concept of world-model frictions? And, more importantly, is it possible to think _at all_ without committing (to) it? To think is to forget (to _abstract_: Lao Tse, Kant, Nietzsche, Borges) and this is something so-called AI does pretty well, albeit in a manner different from that of “humans”, which most certainly alienates us. The predictive capacities of AI radically differ from those of “humans”.^[Although “what humans can do” is far from having been understood, subject of [[03 Semantic noise]], published as “Semantic Noise and Conceptual Stagnation in Natural Language Processing.” _Angelaki_ 28.3 (2023): 111-132.]
This is not “bad”—counter the usual knee-jerk reaction—this is precisely one of the main reasons we are interested in predictive AI outputs: because the amount of data that can be parsed, the ways in which it can be compressed and reinterpreted, is of scales which are not even close to conceivable by us mere mortals. The logic of this extended-mind scenario goes back to logarithm tables and (mechanical) calculators of all kinds, revealing the extended allocation of memory and planning and/or predictive capacity to objects of a character _other_ than human. In the context of cybernetics, W. R. Ashby’s law of requisite variety^[Ashby is also the source of the dictum “The whole function of the brain is summed up in: error correction.” (cited in Clark 2013).] was intended to formulate an approach to the metrics of this capacity to deal with world-model frictions. In order for a system to achieve the predictability necessary for its own persistence, it should match or surpass the contingency or variety afforded by its target environment, whatever it is that it is afforded _within_.^[Persistence being “stability over time (Pascal and Pross 2015) ... constraint regimes endure longer than their moment to moment realizations ... This persistence holds even if the possibility landscapes that contain those interdependencies turn increasingly rugged over the entity’s existence or lifespan. ... What persists in individual realizations is not the material substrate of concrete particulars but the stored information embodied in constraint regimes.” (Juarrero 2023, p. 129).] (A compact statement of the law is sketched below.) More recently, in the context of late-early AI, Agre’s criticisms of planning (e.g., “What are plans for?” with D. Chapman, 1990) argued against the installed AI-research drive which viewed planning as distinctly formulating a script of future actions in advance. Plans, instead, should be able to afford more if they are conceived of as _possibilities_ for behavior which can be modified as a situation changes. In a completely different vein, _bounded rationality_ (Simon) is a (still much-too-rationalistic) acknowledgement of the impossibility of any _absolute_ prediction. Absolute prediction is essentially (heat) death (or at the very least: a very dark room), as we will see when discussing predictive processing below.^[And as presented in [[10 Bias, or Falling into Place]].] While all of this can be taken to signal a dominating representational and teleological bias in Western technoscience, it also signals, for the purposes of our argument, an encounter with what seems to be an insightful approach to cognition. We are dialogical creatures constrained by spacetime, and a very salient aspect of our embedded, embodied, extended, etc., nature can be pooled under the concept of prediction—carnal sentience as well as distanced, armchair thought—because we _expect to cohere_ (i.e., predict each other’s actions: reading, talking, organizing, etc., always tempered by possible surprise). The ability to engage with cultural affordances such as concepts and narratives “(compared with the inability, or reduced ability to do so) enhances [our predictive] capacit[ies:] real or imagined events (or relationships), and the ensuing ability to represent and predict events (and their contingencies), instead of only decontextualized elements (e.g., objects, people, places), enriches the repertoire of predictions in a way that serves human adaptation and everyday functioning.” (Bouizegarene et al., 2024).
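Returning briefly to the requisite variety mentioned above: a common information-theoretic rendering of Ashby’s law (a sketch in standard notation, following the formulation usually attributed to Ashby 1956 and Conant & Ashby 1970, not a quotation from the sources cited here) is:

$$
H(E) \;\ge\; H(D) - H(R) - K
$$

where $H(D)$ is the variety (entropy) of the disturbances the environment can throw at the system, $H(R)$ the variety of the regulator’s (the system’s) responses, $K$ the capacity of the channel through which disturbances inform those responses, and $H(E)$ the residual variety left in the essential variables the system must keep within viable bounds. Keeping outcomes predictable enough to persist (keeping $H(E)$ small) therefore requires a repertoire of responses at least as varied as the contingencies faced: only variety can absorb variety.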
Is _prediction_, however, a sociopolitically problematic concept because it confronts us with the permanent catastrophe that is the rehearsal of past problems (biases, assumptions, ignorances, legitimations, etc.)? Yes. Coming to terms with it might be better than hiding our heads in the sand (a.k.a., the actual _dark room_, see: [[Dark room]]). Thinking of ML as a cultural affordance, Marshall McLuhan can frame our thoughts with the following (1964, p. 8):

&emsp;

>Examination of the origin and development of the individual extensions of man should be preceded by a look at some general aspects of the media, or extensions of man, beginning with the never-explained numbness that each extension brings about in the individual and society.

&emsp;

This numbness is inevitable, precisely because we are delegating predictive capacities to sensors beyond our bodily sensing. Two years earlier, in _The Gutenberg Galaxy_, he writes: “This externalization of our senses creates what de Chardin calls the “noosphere” or a technological brain for the world. Instead of tending towards a vast Alexandrian library the world has become a computer, an electronic brain, exactly as in an infantile piece of science fiction.” (p. 32). The only point of disagreement with us would be that these pieces of fiction are, actually, rather mature: they are the forward-projection of an insatiable will to persevere that extends the mind beyond its own borders, beyond what the mind itself even conceives as possible for the mind.^[See also: [[12 Negintelligibility]].]

&emsp;

### Predictive processing

In recent decades, the field of predictive processing (led, citation-wise, by K. Friston, A. Clark & J. Hohwy) has made significant advances in signaling the predictive dimensions of cognition, and the various takes on the different dimensions of active inference (AIF) as organismic persistence-prediction continue to advance,^[Primarily as Bayesian _active inference_ ensuing from the _Free Energy Principle_, FEP. See: [[Free energy principle]] and the references on active inference in the bibliography.] to the extent that these approaches have all been termed variants of what could be called _predictivism_ (Bruineberg 2017). AIF has constructed some important abstractions that allow for an understanding of the computing of predictions _across_ multiple agents (Friston 2013, Friston et al., 2020, Clark 2023, Constant et al., 2019, Vasil et al., 2020, Ramstead et al., 2021; Parr et al., 2022, Bouizegarene et al., 2024). Crucially, prediction is not implied to mean _deterministic planning_ in the sense Agre criticized: consciously organizing, transparently judging and linearly-causally executing. Predictions begin at the given morphologies (and thus preferences; biases) of an agent, and its structural contextual coupling. Because of this, learning/adaptation is framed in terms of the parsing of changes, of _difference_—perhaps too hastily termed “errors” in PP, because of its inheritances from ML and predictive coding (though error-minimization can also be understood through the term _surprise_,^[Again, see [[Free energy principle]], but to clarify the connection between thermodynamic dissipation and its possibly isomorphic relation to information-theoretic surprise, Clark: “Thermodynamic free energy is a measure of the energy available to do useful work. Transposed to the cognitive/informational domain, it emerges as the difference between the way the world is represented as being, and the way it actually is.
The better the fit, the lower the information-theoretic free energy (this is intuitive, since more of the system's resources are being put to “effective work” in representing the world). Prediction error reports this information-theoretic free energy, which is mathematically constructed so as always to be greater than “surprisal” (where this names the sub-personally computed implausibility of some sensory state given a model of the world...” (Clark 2013, p. 186).] which is terminologically more felicitous. In general: confusion around these terms is what leads to a lot of misunderstandings in AIF when interpreted as a reductive framework).

&emsp;

>The strategy of prediction error minimization can lead to a major economy in information processing because sensations that were accurately predicted need not be further processed. In addition, the prediction errors are used to constantly improve the generative model in an iterative process. Of course, the perfect model can never be obtained, both because we live in dynamic, ever-changing environments and because we never have complete access to our environment through our senses. Thus, the current best model is not the “true” representation of the environment, but the one that yields the least prediction error relative to one’s adaptive goals or necessities.
>
>Bouizegarene et al., 2024.

&emsp;

Which, to us, means: the _modulation of possibilities._ For our purposes, we would thus say that in thinking about _error_, we are talking about a measure of _difference_.^[Which this project deals with in the sense of chasing after the Deleuze-inspired question: _is difference “reducible” to something more foundational than itself?_ The answer would be that it depends on how we formalize it, and AIF has some pretty convincing takes on how organisms deal with self/environment contrasts.] If an agent expects (predicts) to be homeostatic in terms of x, y or z, it will seek to actualize its preferences by advancing towards x, y, z (thereby effectuating changes in its context: actively inferring towards them) or changing what it houses within itself as x, y, z (its model) on the basis of a stubborn environment which won’t comply. Under AIF, all facets of perception/cognition/action (PCA) inevitably follow the imperative of minimizing the surprise, what is sensed as an error, in sensory observations. “Surprise has to be interpreted in a technical sense: it measures how much an agent’s current sensory observations **differ** from its preferred sensory observations—that is, those that preserve its integrity (e.g., for a fish, being in the water). Importantly, minimizing surprise is not something that can be done by passively ... agents must adaptively control their action-perception loops **to solicit desired sensory observations**.” (Pezzulo, Parr, Friston 2022, p. 6, our emphasis in bold). The reframing of PCA under active inference means the turning of the tables: senses do not make experience (the traditional view, however recursively or transcendentally understood); experience is what seeks to _make sense_ by producing evidence **of** and **for** itself. The main question of AIF, the process by which predictive processing ensues, is: “How do living organisms persist while engaging in adaptive exchanges with their environment?” (Parr, Pezzulo, Friston, 2022, p. 3).
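The technical sense of “surprise” invoked above, and the free-energy bound mentioned in the footnoted Clark quote, can be written compactly (a sketch in the standard notation of the active inference literature, not a quotation from the cited texts):

$$
\mathfrak{S}(o) = -\ln p(o \mid m), \qquad
F = \underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o, m)\,\big]}_{\ge\, 0} \;-\; \ln p(o \mid m) \;\ge\; \mathfrak{S}(o)
$$

Here $o$ are sensory observations, $s$ the hidden states posited by the generative model $m$, and $q(s)$ the agent’s current approximate posterior. Because the divergence term is never negative, the variational free energy $F$ (reported by prediction error) always sits above the surprisal, so minimizing $F$ minimizes surprise without the agent ever having to evaluate the intractable $-\ln p(o \mid m)$ directly.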
This question leads to the [[Free energy principle]], a dialectical tautology according to Friston (2011, 2023), very comparable to this one:

&emsp;

>A existência já está sempre existindo; assim, não pode nunca começar a existir.^[“Existence is always-already existing; therefore, it can never start existing.” Own translation.]
>
>Marcia Sá Cavalcante Schuback, _Atrás do pensamento: a filosofia de Clarice Lispector_, p. 13, 2022.

&emsp;

In other words, as Friston formulates it: if an organism exists, what must it do (to continue to exist, persist)? “[I]f I am a model of my environment and my environment includes me, then I model myself as existing. But I will only exist iff I am a veridical model of my environment. Put even more simply; “I think therefore I am, iff I am what I think”. This tautology is at the heart of the free-energy principle and celebrates the circular causality that underpins much of embodied cognition.” (Friston 2011, p. 90). Much like the phenomenon known as reward-hacking in AI, or the proposal that evolution has no business in representing reality (Hoffman 2019), AIF also clearly stresses that an agent has no reason to “accurately” match up with the contingencies it encounters (in terms of survival: if it did, this would not prove to be a very flexible, plastic system).^[“The animal’s regulatory system (for instance the nervous system) does not have access to the viable states of the agent-environment system. Instead it needs to estimate them.” (Bruineberg 2017, p. 3).] An organism’s PCA cycles—where motor commands can ensue from prediction errors, blurring the line between perception/action—minimize sensory prediction errors by “sampling, and actively sculpting, the stimulus array” (Clark 2013, p. 186). What organic evolution amply shows is that what is important is that an organism manages to survive (and produce more of itself): this can include selection for all sorts of traits, which are randomly determined (depending on how we wish to read this effect: genetic mutation, environmental pressures, etc.). Following this witnessing of continued organic _existence_ as the result of the search-space of evolution, i.e., all organisms we know as having persisted, what we witness is the survival of that which has been successful in terms of _reproductive_, replicative persistence.^[See: [[Function]].] The evolved “generative model” of any existing ‘living’ thing has, we must accept, persisted because it made it out the other end as a new creature.^[The tacit drive towards immortality inherent in the AI project (which is also, before “AI”, the very business of culture, of language) signals, among other things, the desire for more-than-organic, vastly outlasting progeny. Interesting to note: providing an _absolute explicitation_ of this, which applies across the board, would imply formalizing the generative model of any and all predictive agents, and all of them as a conglomerate, which would thus solve many of the things science/humanity at large seems to desire: absolute _providence_. More on this later on.] In order to adapt to the future, to be able to at least minimally *pre*-sense what the patterns in a projected _countercurrent_ (i.e., not currently the case) will feel like, what next (and next, and next) situation we will fall into, a generative model is possessed by a projective agent.^[Or: [[Xpectator]].]
It is the “filter” (Bergson)^[In relation to future lightcones (Levin 2019), we can also relate Bergson’s take on projective memory and its tangent with the real, in the image of his famous memory cone, which is an inverted version of the projection of a future lightcone.] which “processes signals that track _divergences_ [i.e., the difference] between expected and actual sensory data.” (Bouizegarene et al., 2024, p. 3, our emphasis). Any entity (with a generative model) must “remain measurable/identifiable as distinct, persistent entities over some macroscopic time-scale.” (Ororbia and Friston 2023, p. 14). Its model is **generative** because it *expects* and thus *produces* a specific reality, actualizing itself forward. It is a model and therefore defines what an agent is, to itself, while at the same time this very processing of divergence tracks everything that it is **not**. In AIF, this active co-creation of reality between agent and environment is dictated by the minimization of free energy, or surprising states.^[See: [[Free energy principle]].] In order to exist (i.e., persist against dissipation), a system “explains” its states to itself, on the basis of what it senses as its non-self, the outside.^[This, in our preferred framing, brings attention to the possibility of proposing a radically distributed agent (over groups, environments, systems, etc.) and not an image of lone-agent voluntarism against the background of an inert world.] Simplifying things to the level of a unicellular organism, it could hardly be denied that a minimal capacity to move towards and away from states is what can be defined as a common denominator among all systems which persist in the face of contingency. Up the complexity a notch and you get counterfactuals, the intricacies of future planning, including the simulation of an agent: _what if we did this? What happens when…?_ (Friston et al. 2013). What is particularly important to note, in understanding the 5E dynamics of AIF, is that the “horizon of action possibilities ... [the] field of relevant affordances ... coincides with a coherent self as a bodily power for action” (Bruineberg 2017, p. 14); the _coherent self_, from the subjective perspective, is the function that the organism needs to maintain in order to project something (i.e., itself) forward in time. This is more complex in distributed, extended mind phenomena (families, ideologies), but the same principle applies: something continues to repeat, elicit itself, as evidence for itself.

>I model myself as embodied in my environment and harvest sensory evidence for that model. If I am what I model, then confirmatory evidence will be available. If I am not, then I will experience things that are incompatible with my (hypothetical) existence. And, after a short period, will cease to exist in my present form.
>
>Friston 2011, p. 117.

This is why AIF frames this phenomenon as a _generative_ model.^[See: [[Generative model]].] Senses, given by whatever it is that a system is (a bacterium, a bat, a society), effectuate the possibility to _prefer_, and given these preferences an evolving system (across generations or within just one) then effectuates its ever-changing generative model. By merely existing, the model has preferred states, such as finding itself in a warm, dry environment (the case for a lot of mammals). Being outside said state would signify surprise, something interpreted by the generative model as low probability of being itself, in that state; and therefore, e.g., seeking shelter.
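To caricature the preceding paragraph in a few lines of code (a minimal sketch with assumed numbers and names, not a model drawn from the cited literature): an agent whose generative model strongly prefers “sheltered” observations will register “exposed” as surprising, and will pick whichever action is expected to solicit its preferred observations.

```python
import math

# Toy "generative model": prior preferences over observed states.
# The agent expects (prefers) to find itself warm and dry.
preferences = {"sheltered": 0.95, "exposed": 0.05}

def surprisal(obs: str) -> float:
    """Surprise in the technical sense: -ln p(o) under the model's preferences."""
    return -math.log(preferences[obs])

# How each available action is assumed to redistribute future observations
# (illustrative numbers only).
action_models = {
    "seek_shelter": {"sheltered": 0.90, "exposed": 0.10},
    "stay_put": {"sheltered": 0.20, "exposed": 0.80},
}

def expected_surprisal(action: str) -> float:
    """Average surprisal of the observations an action is expected to produce."""
    return sum(p * surprisal(obs) for obs, p in action_models[action].items())

current_observation = "exposed"
print(f"surprisal of '{current_observation}': {surprisal(current_observation):.2f} nats")

# Active inference, caricatured: choose the action whose expected observations
# best match the preferred ones, i.e. minimize expected surprise.
best_action = min(action_models, key=expected_surprisal)
print(f"selected action: {best_action}")
```

Everything interesting is of course missing here (hidden states, posteriors, learning, hierarchies), but the sketch shows the sense in which preferences double as predictions, and in which action exists to make them come true.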
Crucially, this model cannot but be probabilistic, given our current methods of analysis and what we understand as physical limits: there is never 100% certainty against a contingent reality, and whatever is sensed needs to be contemplated and retroactively explained (modeled). The causes of things may indeed be rather creative or misguided inventions, but they frame what the (group of) agent(s) prefer(s) as _coherence_ and learn(s) to witness as relevance. Please note, again, that nobody in AIF/PP claims that there is anything “correct” or “accurate” being predicted here (Parr et al. 2022, p. 22). _Accurate prediction_ is simply what happens in specific cases where the tempering of the densities, or _shapes_, of probability distributions seems advantageous from a specific perspective.^[For a technical insight into this, which exceeds me, see: Parr, Thomas, Lancelot Da Costa, and Karl Friston: “Markov blankets, information geometry and stochastic thermodynamics.” _Philosophical Transactions of the Royal Society A_ 378.2164 (2020): 20190159.] This is often framed in terms of agent-environment couplings, where we (humans) tend to prefer to see the agent as having legitimate conceptual primacy over the environment in terms of persistence.^[Though proposals against this bias are also being advanced in the AIF community, see: Fields’ and Levin’s work on intelligence variability (2022, Levin 2024).] Cognition (“the brain”, says Clark) is “adept at finding efficient embodied solutions that make the most of body and world” (Clark 2016, p. 300). What “the most” consists of is of course the question. In a comparable approach, according to Sterling and Laughlin (2023), the brain’s purpose can be reduced to that of an organ which evolved for “regulating the internal milieu—efficiently—and helping the organism to survive and reproduce.” (p. 1751). They suggest that all complex behavior, including work, play, music, art and politics “are but strategies to accomplish these [regulating] functions.” (ibid.). While this may be true when examined from a specific perspective, it seems something might be missing, considering the brain’s capacity to become embedded in larger structures such as culture and technology. Once it couples with, and internalizes projective capacities of, said structures (e.g., I know that technologies, such as AI, will most likely continue to exist and evolve once I am gone), it becomes difficult to say that work, play and politics are but mere side-effects of organic regulation. According to the authors, “although ... features [such as love and art] arouse intense curiosity, they are merely baroque decorations on the brain’s fundamental purpose and should not be mistaken for the purpose itself.” (ibid.). This framing fails to account for how complex niche-formation results in cultural and technological feedback loops, which transform the capacities of individual cognitive systems. Large-scale projective projects such as love, or AI, transcend mere biological regulation; they are difficult to imagine from an evolutionary origins perspective alone (unless, perhaps, we frame them as merely _replicative_, more on this below).
A more comprehensive view would recognize that while regulatory functions may have been the initial evolutionary driver, the brain has evolved capacities that now serve goals beyond individual survival—participating in processes that have become rather complex ends in themselves, and may even signify the end of organic life (due to planetary exhaustion, singularitarian visions, and everything in between).

&emsp;

### Enactive inference

Countering brain- (and machine-)centric accounts of PP cognition, in “The Anticipating Brain is not a Scientist,” Bruineberg et al. (2016) suggest a theoretical reframing of the free-energy principle away from its treatment as operating within loci such as singled-out brains or agents. Friston’s FEP proposes biological self-organization in information-theoretic terms and physical limitations of measure, specifically as hierarchical layers of Bayesian filtering. Importantly: “[T]he free-energy principle is consistent with many other accounts of self-organizing (living) systems such as synergetics (Tschacher and Haken 2007), global brain dynamics (Freeman 1987, 2000), metastability (Kelso 2012; Rabinovich et al. 2008), autopoiesis (Maturana and Varela 1980; but see Kirchhoff (2016)) and ecological psychology (Gibson 1979).” (Bruineberg et al., 2016). And through this the “organization and dynamics of living systems prefigures the organization and dynamics of cognitive systems ... [maintaining organic persistence is] the basis for [more abstract] complex capacities such as social cognition, cognitive control and language use” (ibid.). Bruineberg et al. argue that active inference can be understood as incompatible with (Helmholtzian) unconscious inference, if the latter is understood as analogous to scientific hypothesis-testing. The FEP is, in fact, better understood in ecological and enactive terms. If “the free-energy principle applies to a biological system in terms of the dynamic coupling of the organism with its environment”, they find Friston overstretches the concept of _inference_, which, “within the Helmholtzian theory, ... is standardly understood as a probabilistic relation between prior beliefs, current evidence and posterior beliefs” (p. 2420). If the FEP constrains Bayesian active inference, it needs to account for the hierarchical stacking of models upon models: agents such as parents seek to minimize surprise beyond their own homeostasis. Presented “in general”, without much of the inevitably dynamic contextual (cultural, social, etc.) information, the minimization of free energy can seem overly simplistic and fail to account for its enactive elements. Sometimes agents maximize uncertainty (e.g., _overheat_) in order to achieve a, b, or c. This process is always at play in the context of agents that can plan far ahead: looking into the future might mean overwhelming energetic investments _now_, towards surprise-reduction in the distant future. We can also link this idea with the proposal that the reason GPT-generativity is perhaps generatively uninteresting is that it is, despite vast amounts of noise-injection, actually in the business of _reducing_ entropy, and cannot produce great novelty, as it has been designed to bottom out at highly expected linguistic structures.^[Again, this is the reason we take a lot of poetic license in this project, thank you for understanding.]
However, moving on from the criticism, as shown in Maxwell et al., 2021, the FEP and AIF are, in fact, enactive accounts of distributed structures, and the literature exploring this continues to grow (Constant et al., 2019, Vasil et al., 2020, Parr et al., 2022, Albarracin et al., 2022, Clark 2023, Bouizegarene et al., 2024, and more). Indeed, as Ororbia and Friston point out more recently in a paper exploring the limits of biological computation (as mortal): “Examining mortal computation through the lens of 5E theory licenses a characterization in terms of increasing complexity according to cognitive functionality.” (2023, p. 14). Expectation (the generative model) meets reality somewhere between the _sensed-and-produced_ features of an input signal; whatever is preferred will lead to the suppression or salience of certain aspects.^[“The relationship between bottom-up (input) and top-down (prediction) processes is entirely mutually dependent, and the comparison between them is essential to the system, since a variety of environmental causes can theoretically result in similar sensory input (e.g., a cat vs. an image of a cat). ... In this way, the causes of sensations are not solely backtracked from the sensory input, but also inferred and anticipated based on contextual cues and previous sensations. Thus, perception is a process that is mutually manifested between the perceiver and the environment, reflecting the bottom-up/top-down reciprocity...” Vuust & Witek 2014, section 3.] What “clashes” with expectations can be understood to change the model or lead to things like confirmation bias: _seeing what you want to see_ (which is much of what we do, all the time, or we couldn’t cope). The focus on the mechanism of error-minimization is one of the things that, again, often distances critics of PP from its claims (as well as the use of words such as “optimization”, “agent”, “system”, etc.). Any bounded agent/system maintains _homeorhesis_. Homeorhesis, introduced by Waddington, describes the tendency of a developing system to follow inclination trajectories *despite* perturbations—this is the classic landscape/descent vision of developmental canalization. Related notions include homeostasis, allostasis, resilience, etc. All of these, understood through different contexts, are proposals of how system metastability is achieved through _change_ rather than constancy, but all differ in their temporal scope: homeorhesis primarily concerns developmental trajectories and their robustness, while homeostasis (returning to set-point attractors) is often of shorter temporal scope, and allostasis (for complex, distributed systems, see: Schulkin and Sterling 2019) addresses (sometimes) longer, dynamic, moment-to-moment adaptations to environmental challenges (the modulation of attractors). “What counts as minimizing surprisal is thus relative to the current state and situation of the animal and its lived perspective on the many affordances offered by its environment.” (Bruineberg et al., 2016, p. 2427). The scenario we contemplate today is one in which these affordances may imply outliving ourselves in ways difficult to frame in the temporal dynamics of individual, evolutionary agents. The multi-temporal framing of adaptive responses (allostasis, homeorhesis, etc.) can be understood as emerging from the need to analyze the complexities of nested constraints: “immediate allostatic adjustments are shaped by homeorhetic trajectories established during development. ...
From the computational point-of-view, homeorhesis shapes and directs the ‘calculations’ conducted by the organism as it interacts with its environment; specifically, its ability to infer, learn, and evolve.” (Ororbia and Friston 2023, p. 1). The concept of mortal computation incorporates embodiment as a fundamental principle, where morphology is essential to the computational processes sustaining a system. In a mortal computer, computation cannot be abstracted away from its physical implementation (ibid., p. 4). Therefore, perhaps the ultimate “error” is that the system has a mortal limit, which it therefore tends to try to overcome. By expecting of itself that it can continue to exist (this is the continuity ensuing from the FEP), what needs to be computed by a system to itself are the prediction _errors_. The limit of death as a possible error should hint at the fact that this recursive process—where a system’s recognition of its own mortality becomes integrated into its predictive mechanisms and therefore also distributed in its niche—seems to be the effort to resist entropy through increasingly sophisticated forms of anticipatory modeling. Niche-formation itself can then be understood as an emergent property of systems that have evolved to model not just their environments, but their own finality or possible continuation, in some form, within those environments. Again, to note something about the language: reframing the perhaps malemployed term “error”^[Harriet and Friston 2010 provides an ample glossary of key active inference and probability terms which can give the reader orientation.] towards signaling something computationally functional,^[E.g., Pasquinelli 2019, or as ‘noise’ in Malaspina 2018, Wilkins 2023.] what we actually should understand PP/AIF to be saying is not “_optimization_!” (towards “truth”) but rather: degrees of openness to contingency (i.e., _learning_), through something capable of arriving at end-states, in as many different ways as there are organisms on the planet. This offers a highly perspectivist position, and our preferred version follows radical ideas of distributed cognition in the style of enactivism, too (Bruineberg et al., 2016, Ramstead et al., 2019). The frictions ensuing from the predictive “accuracy” that an organism seeks can be understood in parallel with the history of world-versus-model criticisms presented earlier (Hegel, James, Whitehead, Korzybski, Box, Agre, etc.). Complex adaptive systems (thought thinking itself, distributed) reveal a dynamic openness to radical contingency. This might be because we, humans, tend to produce our own stress (Sapolsky), in the current landscape: by entertaining complex niche-formation such as AI. Entropy, and what we understand as the complexity it results in, inevitably lead to adaptive learning. Ambiguities of all sorts _increase_ information gain, as we know; the problem is: when to stop gaining information? This seems to be a matter of tempering scales and speeds. “Machine intelligence research may need to focus on the processes that realize its nonlinear self-organization and efficient adaptation; such a move is towards the development of survival-oriented processing that embodies computational notions of life/mortality” (Ororbia and Friston 2023, p. 2). In terms of scales and speeds: we should note that we can go as concrete (i.e., immediate) or as metaphysical (as vastly into the future) as we like, with prediction as a catch-all term.
In this piece we are surfing the generalist surface (_sorry_): embodied, embedded, etc., of prediction as _the capacity to persist, by latching onto the repetition-creation of patterns_. This means, for our purposes, that there is a common denominator underlying the observation that hydrogen remains hydrogen every time we ‘observe’ it, in different ways, but also, perhaps: hydrogen itself as a predictive capacity of the universe.^[In the Whiteheadian sense.] If something has the capacity to become a universal habit/general constraint: then it is a memory (at least as we perceive it!), with a noticeable _tendency_ towards the future. Prediction is not determination, but creative approximation (to _what_, again, is the open question). It is the taking of a risk (exploring), joining patterns established in the past towards possible patterns in the future (exploiting). Exploit, from the Latin _explicitum_, is “a thing settled, ended, or displayed,” past participle of _explicare_ “unfold, unroll, disentangle,” from _ex_ “out” (see _ex_-) + _plicare_ “to fold.”^[Etymonline.com, accessed December 2023.] Exploitation-exploration quite literally _unfolds_. This is further explored in [[10 Bias, or Falling into Place]], but we present it here to steer the reader away from the negative connotations of _exploitation_ (which, in biology and AI, do not hold) and highlight the predictive aspects in its _unfolding_ history, as related to (self-evidencing) _explanation_: a modeling impetus. As you walk you set the next foot forward and incline yourself in the direction you estimate as your current gravitational model(s). As you listen to music you enjoy the (un)predictability provided by the extension of yourself towards a (group of) musicking other(s), you externalize into uncountable wavelengths extended into the environment. All of these phenomena can be framed as distributed and fundamentally predictive: without prediction there is no coupling to the environment. Importantly: complex social agents can only distribute themselves outward and communally if they are capable of mortally latching onto the evolving universe’s memory (else they die or get paradoxically stuck in an actually nonpredictive “dark room”).^[This is explained in [[10 Bias, or Falling into Place]].] In what follows we will continue to trace how this ties in with the drive for AI to predict the unpredictable (or the desire for the _unknown-unknown_).

&emsp;

### Exploring and exploiting

An interesting tempering aspect of predictive praxis is that when something is _too predictable_, we might not like it, and the same of course goes for whatever we consider too _unpredictable_. This _Goldilocks dialectic_ occurs at all scales and in all processes, and there seems to be no standard pattern as to when and where we will like unpredictability and predictability, unless we gather data over specific cases, such as in the case of slot-machines and other perhaps more intimately familiar _interact-to-refresh_ technologies, such as your email interface or apps on your mobile phone (a minimal slot-machine sketch of this tension follows below). In the latter case, large-scale predictive systems, such as the nexus between venture capital, visionary Silicon Valley tech bros and an increasingly consumerist society, all collide in a _perpetuum mobile_ which predicts itself towards the situation we now find ourselves in.
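Since slot machines just came up: the explore-exploit tension this section circles around is routinely formalized as a multi-armed bandit. The following epsilon-greedy sketch (payout probabilities and parameter values are assumed for illustration, not taken from the research cited in this section) makes the Goldilocks problem concrete: no exploration locks onto an early, possibly mediocre pattern, while constant exploration never exploits what has been learned.

```python
import random

random.seed(1)

# Three "slot machines" with payout probabilities unknown to the agent.
true_payout = [0.25, 0.55, 0.40]

def pull(arm: int) -> float:
    """Sample a reward from the chosen machine."""
    return 1.0 if random.random() < true_payout[arm] else 0.0

def run(epsilon: float, steps: int = 5000) -> float:
    """Epsilon-greedy play: explore with probability epsilon, otherwise exploit
    the machine whose estimated payout is currently highest."""
    estimates = [0.0] * len(true_payout)
    counts = [0] * len(true_payout)
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(true_payout))  # explore: try something at random
        else:
            arm = max(range(len(true_payout)), key=lambda a: estimates[a])  # exploit
        reward = pull(arm)
        counts[arm] += 1
        # Error-driven update: move the estimate by a fraction of the prediction error.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

for eps in (0.0, 0.1, 0.5):
    print(f"epsilon={eps}: average reward ~ {run(eps):.3f}")
```

The running-average update is the same error-correction logic discussed above for predictive processing: the estimate moves by a fraction of the difference between what was expected and what was observed.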
Research in the mechanics of exploration-exploitation shows that we—organic evolvers with a certain brain structure^[More on the brain and self-regulation later.]—are highly attuned to reward (finding what you expected) and punishment (error signals, learning) cues (Mehlhorn et al., 2015, Cogliati Dezza et al. 2017). Exploration is the (evolutive, biological, predictive) strategy which seeks novelty; exploitation is the strategy which latches onto expected-produced patterns. These “strategies” are of course not mutually exclusive, and always function together, on different model-hierarchical scales. But, inevitably, we arrive again at the question of tempering explore-exploit balances: making “the most” of things. In the recent (2023) book _Technological Accidents, Accidental Technologies_, Sjoerd van Tuinen poses the question: “… [are] the events that [qualify as “technological accidents”] rather forms of functionality, which may be undesired, but not “dysfunctional” in the way that the derailment of a train appears dysfunctional?” What this question essentially asks (at its core a question of **form and/or/as function**, but reflected on its political skin) is: should risk or risk-avoidance be a driving target? The confusion between these two litters the entire biopsychosociopolitical landscape in the form of explore-exploit relationships between beings and their worlds: risk both drives and hinders speculative research, _no pain no gain_ logics of the everyday, dark room considerations in PP, etc. Tempering prediction is a collective project, but collective _alignment_ is _anything_ but possible, nor is it desirable. Consensus—as a very general concept which can include agreement, mutual understanding, etc.—can be said to be assumed to exist across our species because of a certain symmetrical _and_ imitative pattern we are able to intuit:^[Symmetry as in: we look similar, and imitative as in: anthropomorphizing is also a major aspect of how we engage with the world. Optimistically: this is empathy with all that is _other_, pessimistically: this is making the world conform to our own image. More on symmetry in [[All things mirrored]].] you look like me and therefore _probably_ think like me.^[We should always be wary of the problems of other minds and Kripkensteins, though.] But, as life teaches us, no two things could be more different than two (human) perspectives, however superficially similar. Currently in the guise of “AI,” predictive technologies have been around forever (as mentioned earlier). Insisting on our point: all technology can be said to be driven by prediction, as technology not only extends predictive human capacities, but is built to make things persist: fire to keep the homeostasis of warmth (as does shelter); a chair, the persistence of a certain posture; a book: memory, speech; a factory: repetitive gestures and products, etc. These ‘algorithms’ have grown into abstractions in the current age, as software, which has exploded exponentially in implementations with ever-more computational power. From the early days of cybernetics and planning to generative LLMs, we observe the integration of organic predictive capacities into the persistence-project of creating systems that can operate “independently,” or “autonomously,” like we believe organic beings do,^[In [[09 C is for Communism, and Constraint|09 C is for Communism, and Constraint]] we reveal how, according to our logic, they don’t.] and are able to couple to their environment by predicting it.
To estimate, to guide behavior and action, we require something on which our predictions can be based: “_constraints_ on the set of prior probabilities” (Swanson 2026, p. 5). Without them, narrowing down things to action becomes impossible. Thinking back on Ashby’s requisite variety: if, informally, the capacity or otherwise _complexity_ of a system can be considered in terms of a metric that measures the amount of processing needed to generate the system itself,^[We can think of the consequences of modeling here, leading to issues of tractability and compression. See also: [[Computational irreducibility]].] then, as creatures who often seek to outlive ourselves (in progeny, in books, in AI), we are bound to find systems such as ourselves inclined towards interactions with the incommensurable: the weird, the alien, etc. Much of what we produce as unfolding emergent events—socioeconomic crises, large corporate projects, dominating historical narratives—can be considered systemic results of these explore-exploit tendencies looped forward, beyond our capacity to tame or contemplate their unfolding. If this sounds similar to the complex experience that ensues from producing progeny, that is because it is: reproduction and replication sit at the core of what we deem _organic_.

&emsp;

### Alignment, imitation and progeny

&emsp;

>Machine intelligence research may need to focus on the processes that realize its nonlinear self-organization and efficient adaptation; such a move is towards the development of survival-oriented processing that embodies computational notions of life/mortality, a sort of naturalistic machine intelligence.
>
>Ororbia and Friston, “Mortal Computation: A Foundation for Biomimetic Intelligence,” 2023.

&emsp;

The paradigmatic 20th century example/case of _complex social prediction_ as contemplated through the lens of something _other_ than humans is the Turing test (1950). The idea behind the test is that _we_, humans, should (often) be able to predict whether we are speaking with a man or a woman (the premise of the so-called _imitation game_ Turing was inspired by). What is in fact being tested is the human interactor’s _response_ to a situation, not the “imitative” capacities of the possible human-simulation. This reflects, among many other things, the implication that one of the most compelling effects of language is that it provides a predictive framework which establishes the _possibility_ of (symmetrical, imitative) _recognition_ between those capable of using it, even when they are not physically present (in Derrida’s terminology: language is _iterable_). Of course, it also allows for the predictive exploration/exploitation of possible asymmetries, in the case of this game: ensuing from classifying concepts such as _gender_.^[One possible undercurrent that may be read into this is, precisely, the tension presented by focusing on aspects of sexual reproduction which result in _progeny_ (the perpetuation of the _thing_ in question).] This self/other existential paradox at the heart of the Turing test reveals how, in attempts to recognize another, our own self is what remains fundamentally opaque to us. In seeking to determine whether a conversational partner is man or woman, human or machine, we confront the limitations of self-knowledge that make such distinctions tenuous at best. Most importantly: because concepts are distributed negotiations, never static representations.^[As presented in [[03 Semantic noise]] and [[04 Concepts as pre-dictions]].]
Additionally, without complete information or stabilized understanding, we readily make ontological judgments about entities beyond ourselves, forever modeling not just others, but ourselves, through predictive acts whose “accuracy” we can never fully verify. Again, open-ended mortal machines seek to explore themselves beyond themselves by self-evidencing through risky games.^[I was very pleased and surprised to find comparable claims made by Cavia: “We can begin with the observation that at stake in the Turing test is the human capacity for self-recognition, our ability to judge the human. This tribunal of the human is not a test that machines should be seen to pass so much as a test that humans are set up to inevitably fail, an inexorable humiliation or Turing trauma, which concerns a failure of self-identification. By framing intelligence as a mode of “passing” in a given gender role, I read Turing as making explicit the project of AI as a project of transition, which is to say a traumatic rupture in the category of the human.” 2024, p. 17.] In _Intelligence and Spirit_ (2018), R. Negarestani treats the subject of AI as progeny quite extensively, not only through the dramaturgy of Kanzi (a child automaton, or _automaton spirituale_, p. 277), but also through the extensive employment of metaphors such as non-human beings becoming capable of graduation “into” the intelligent domain, raising the AGI child (p. 275), or a “global pedagogy” for AGI (p. 277), or the “child-machine,” etc. (e.g., p. 292, but the book contains many, many instances). Negarestani goes as far as to say that: “Kanzi the automaton is not born into the full-blooded status of general intelligence. It can only come to occupy that position as a child, one whose formal autonomy must be recognized and cultivated.” (p. 217). We find this quite endearing but certainly problematic from the point of view of self/other dialectics with possible non-biological others. Other biological metaphors are not that helpful, either. In “Can thought go without a body?” (the introductory subchapter to the prolific and impactful _The Inhuman: Reflections on Time_, 1991 (1988)), J.-F. Lyotard muses on mortal computations, on the possibility of the endurance of _thought_, should the Sun cease to exist,^[An event assumed to be taking place in the future: where those speaking it might not be there to witness it. Hui treats this text extensively near the end of _Recursivity and Contingency_ (2019). Many of the things covered there will not be covered here; we do not follow some of the questions and proposals. One of them being “Does it also mean that there will be no longer any thinking, and no longer anything contingent?” But what if we could relax our humanisms, rather than reinforce them? In Hui’s conclusions it seems we need to end up believing in some form of humanism, after all. I do not know. I suppose I am ready to be met with transhumanist charges, but I wish to reply to them with: I do not know. What seems unnecessarily problematic, to me, is how, whether it is Lyotard thinking the end of the Sun or Meillassoux desiring an anticorrelationist vision which renders existence possible without a (human) perspective, both approaches seem to me to desire to effectuate themselves, as _self-evidencing_, in the sense that they each lay claim to perspectives _beyond_ where they are there and then. This position may be interpreted as correlationist, or as the very opposite.]
and questions the traditionally humanist, ‘positivist’ or affirmative image of thought by focusing instead on the labor^[Labor here can already hint at procreation and gender, as we will see.] of thought as something which defines it: “The unthought hurts ... we’re comfortable in what’s already thought. And thinking, which is accepting this discomfort ... [means] the unthought would have to make your machines uncomfortable ... to make their memory suffer. ... Otherwise why would they ever _start_ thinking?” (pp. 82-82). Perhaps Ororbia and Friston would absolutely agree with Lyotard here, if we frame things in terms of active inference (or self-evidencing): complex prediction is hard (high-exploration is energetically costly), thinking (learning) is exactly this, and if we are to think machines that think, they might unavoidably need preferred states in order to navigate-predict-effectuate their possible persistence against dissipation. Additionally, mortality as the ultimate limit, the _actually_ fatal error, could be—and has extensively been—framed as the source of all suffering. Lyotard goes on to suggest that this _suffering-driving-desire_ bottoms out at sexual difference in human beings, and that machines which outlive the Sun should therefore be gendered, somehow: “Your thinking machines will have to be nourished not just on radiation but on irremediable gender difference.” (ibid., p. 86). As with the imitation game, we are presented with the problem of other minds at the level of supposedly irreconcilable difference: gender. So much for the end of _grand narratives_.^[“One may reproach the inhuman as a humanist concept, since Lyotard still want to get hold of the phenomenological body, but as we have seen that it is not the case and this kind of accusation offers nothing productive, since it is only a posthumanist identity fetish while ignoring the organological struggle in Lyotard’s proposal.” (Hui 2019, p. 228). We follow, but to focus on the reproductive capacity of human organs seems a rather limited conception of the complex possibilities on offer for engagement with a future cosmotechnics. We still cannot assume the death of the Sun.] This is not a dissimilar intuition to the one behind Turing’s other conclusion in “Computing Machinery and Intelligence” (1950): that something like a human child would actually be what we’re talking about when we want something that learns and grows, as we speculatively demand to witness in a possible thinking machine. It seems something about the possible recapitulation, rebirth, of the organism sits at the core of what drives our search for intelligence. Perhaps because that which seems intelligent—i.e., capable of complex projection—is inherently aware of its finality, knowing that all its planning is bound to a mortal limit (Ororbia and Friston, 2023), and in seeking escape from that it offloads prediction onto other structures (be they messages, creatures, or even speculations about whether replication is at all possible). Gregory Chaitin has also said that mathematics only advances creatively by changing its foundations and thereby proposing new concepts, and he compares biological with mathematical change, underlining how in both the inherent (possible) creativity emerges through recombination and birth (2006, p. 40).
To remind, or point, the reader: Lyotard opens _The Inhuman_ with, among other things, the presentation of “the child” as an anarchic element, foundationally (in)human, as it is that which becomes organized by the community of adults around it (p. 4). Can we be so crass as to speculate that speculative _procreation_ (repetitive evolution at different scales, resulting in evolving criticality points) lurks behind every attempt at self-evidencing prediction?^[Please note that by focusing on procreation I am sidelining the question of sexual pleasure and desire, which can be part of the phenomenon of progeny, but need not be.] That libidinal cosmic desire grounds thought? Yes. No. Maybe. Does prediction reveal procreation or vice versa? The current trends in generative AI do reflect this (not only in the realm of porn).^[This is meant to be provocative, but it seems AI-generated porn is allowing humans to explore all sorts of modes, manners and styles of libido. Porn, in general, seems to be something which short-circuits many of the aspects of social reproduction (in many senses of the term) that organic libidinal systems seem to be driven by.] But if the supposedly singular agent, whatever it is we are talking about, is already a cacophony of confused differentiations within itself, then why gender (for Lyotard, for Turing)? This question seems timely as we live in a moment during which gender has become a highly questioned functional classifier, one with a very boring _organizing_ logic. It seems an interesting category to question if what we aim to predict is the _form and/or/as functionality_ of future beings, by how they might combine their distributed capacity to _differ_ from each other. Although, as noted, we have also been moving towards ideas of swarm intelligence, slime intelligence, etc. (Levin 2023), hopefully bringing attention to the emergent, top-down causality of complex systems over that of top-down control by a categorical mover (or resulting in problems of proletarian dimensions). The singular human agent, necessarily a simulation of itself,^[As presented in [[10 Bias, or Falling into Place]] and [[07 Phenomenology of Sick Spirit]].] is always-already plural, and always within a language-‘game’ or _family_ resemblance.^[It seems we can add Wittgenstein, too, to the list.] Perception, being a self-evidencing affair, _abstracts_ an apparently simple and linear self-narrative from billions of inputs (just imagine having _permanent_ conscious access to all the states of your cells, including gut flora, etc.: we need a perspective in order to differentiate).^[Hui charges this kind of vision as too dataist. We dissent in favor of perspectives.] The phenomenon, of course, occurs within agents just as much as among groups of agents: the predictive, simplifying “linear” narratives in the latter being predictive habits such as gender, history, ideology, etc. Melanie Mitchell (2021, p. 7) notes how Stuart Russell converges with Nick Bostrom on the idea of the possible orthogonality of intelligence and goals.
She cites him on the thought-experiment of the paperclip maximizer (the idea of an abstract goal which leads to human despair: a machine with a narrow goal such as turning everything into paperclips spells disaster for _supposedly_ diversity-loving humans) and on the question of whether a “superintelligent climate control system” which is “given the job of restoring carbon dioxide concentrations to preindustrial levels” might end up annihilating humanity, because this is the only “logical” and intelligent (i.e., realizable) way to accomplish it. Again, the confrontation is with something incompatible: something with **different** _procreation_/_prediction_ goals than those of ‘humans.’^[Although, the question about the (in)human remains: if these creations are ‘our’ ‘extensions,’ then who is who and what is what?] The thought experiments proposed by both Bostrom and Russell, says Mitchell, seem to assume AI could be “superintelligent” without any resemblance to the type of intelligence we cherish. Mitchell notes that clues lie in notions of _speed_ and _precision_, as characteristic of the machine, and we may add: these are often specifically presented as repetitive, “soulless”, industrial-type machinic intelligence: supercapable reproduction? Mitchell also notes that embodiment and emotions are obvious challenges to these dry visions of superintelligence. Again, we can extract from this argument the idea of suffering and mortal limits as framing that which defines human (mortal) significance. To return to the perpetuation of whatever it is we do/make/think/create: perhaps we never knew replication other than through the cycles of things such as sleep and procreation, until language enabled cross-generational communication which made other chunks durable across time. Language is something _other_ than flesh, and it is certainly a foundational experience of alienation, as we noted at the beginning through model-reality frictions: it never does what we (flesh-based creatures) want. Perhaps this is, therefore, also what is reflected in the linguistically-based logic of a system such as AI. Perhaps the _creabilia bias_ of creativity _as_ fertility, as procreation, also has the unconscious pitfall of thinking of the alien-creative in AI as either a branching off (a new species), or a species so specialized that, like language, it cannot procreate on its own (or will only render paperclips, or endlessly-splitting brooms).^[This is a _Fantasia_ (Disney 1940) reference, which I am sure Bostrom must have watched.] If the _genus_ is what would give the _general_ to AGI, and species procreatively specialize by differentiation, perhaps what we are ultimately scared of is that misaligned AI will be a (super)_mule_. This _Frankenstein syndrome_-like (Falk 2021) misalignment fear is certainly the repressed fantasy of doomsayers: **“it” will be _different_ from us** (so much so that we can’t even have sex with it). This may appear laughable on the surface (at least it does to me), but it’s a common internet trope in the realms of Reddit. Reddit was and is, of course, a gigantic chunk of most commercially available LLMs’ training data. “WebText2,” owned by OpenAI, was “built on every webpage linked to Reddit in all posts that received at least 3 “Karma” votes.” (Stringhi 2023).^[Reddit, 4chan and similar venues are, in many ways, the open and transparent subconscious of the US media empire, which is our current cultural substrate in much of the “West”.]
Relatedly, at least in terms of language as one of the prominent AI foci, if not as that which entirely composes it:^[From its basis being programming, all the way to speculative fictions of AI.] thought effectuated through language coheres with other thought effectuated through language because there’s procreative compatibility, fertility _in_ language: iterability. This is the reason we (seem to) share symbols, habits, thinkers, etc., as dynamically changing but temporarily grounding points of reference; chunks. When something falls _beyond_ the scope of linguistic intelligibility,^[Wittgensteinian _nonsense_.] while it may be incredibly interesting—legible, useful, valid, etc.—it is often a mule (e.g., _Finnegans Wake_: a system unto itself).^[However, despite its possible mule status (i.e., we do not speak Joycean, far from it, and “uninitiated” readers often find it rather illegible), it must be noted that: “[w]ithout our being conscious of it, Joyce has been a major contributor in shaping the ways we speak and think about communication.” (Theall & Theall 1989, p. 58) M. McLuhan thought of Joyce as the writer of “the most luminous analogical order for the unique experience of that age.” McLuhan notes, on _Finnegans Wake_, that this is the type of work he considered his own books as relevant to, in terms of providing a creative learning window. McLuhan’s own books should “enable a man to read and enjoy Finnegans Wake.” (Cited in Chrystall 2008, p. 103). McLuhan, prolific media predictor, desired generative persistence for the wonderfully contrived, productive uniqueness of the Joycean mule.] LLMs can be seen as enjoying this closed-universe specialization too: they are not only hermetic but also difficult to interact with (however convincing the chatbot interface).^[Humans also seem to ascribe “hallucinatory” mistakes to GenAI systems when they reveal things which do not match with current, given human understandings of reality. More on this in [[Wolfram irreducibility and interconcept space]].] However, we should not forget that, as we are increasingly having (linguistic) sex with robots and enjoying AI-generated porn, much in the same way that you will know a word by the company it keeps,^[An often cited trope in NLP research (Firth 1957, but see also: Derrida 1978 in Salmon 2020). A trope which led to most of the organizational logic inherent to vector-based representations of language, see: Masterman 2005. See also: [[03 Semantic noise]].] we may already be mules, or may become mules slowly but surely. As argued throughout this project: conceptual creativity—emerging from metaphor: the combination of two previously uncombined things—owes its novelty to the mixing of the genetic pool, not to its stagnation.^[I.e., remaining in like-minded sameness, in assumed “good company.”] In formalizing predictions about human mortality—focused, among other things, on differences between genders—researchers also put the linguistic structures of life-description events—as word-embeddings, contrasting vectors—to work, in order to predict things such as life-expectancy (Savcisens et al., 2023). One of the researchers involved in this project, Tina Eliassi-Rad, has also proposed a connection between AI and the evolution of the human genome through the very commonplace fact of recommender and organizing systems behind dating apps, based on matching algorithms we do not know are matching us.^[Mindscape interview with Sean Carroll, January 25 2025.]
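The “company it keeps” trope in the footnote above can be made concrete with a toy, count-based sketch of how vector representations of words arise (an illustration of the general idea only; it is not the method used by Savcisens et al.): each word is represented by the words that co-occur around it, and words that keep similar company end up with similar vectors.

```python
from collections import Counter, defaultdict
from math import sqrt

def cooccurrence_vectors(corpus, window=2):
    """Represent each word by the company it keeps: counts of the words
    appearing within a small window around it."""
    vectors = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if i != j:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))  # similar company, similar vectors
print(cosine(vecs["cat"], vecs["rug"]))
```

Modern embedding models replace the raw counts with learned, dense vectors, but the organizing logic the footnote points to (similarity as shared context) remains the same.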
We may be reaching specificities so unique we cannot make _heads or tails_ of them.^[Pun intended.] It is perhaps this that Lyotard feared about genderlessness and dead suns, and others fear today about paperclips. Evolving our metacognitive niche as information-processing optimized for repeating what came before (e.g., through statistical analysis in LLMs), yet producing something entirely different and arguably illegible (because it is inaccessible at the level of interpretability through the type of predictive cause-effect logics we humans like to parse reality through). Cognition therefore becomes increasingly adapted to interfaces designed for machine-readability, creating a loop where humans refine behavior to maximize legibility _elsewhere_, while simultaneously believing we make machines more human-like. This can be understood as the blue line of intensifying “behavioral contamination” in the diagram below: &emsp; ![[computation nru.png|500]] <small>NRU and workshop members: The evolution of computation, Porto 2020, original sketch by S. de Jager with workshop participants, digital design by Diede van Ommen. At the bottom right corner: the emergence of organic life on Earth, chronologically represented as an upward moving line, is placed alongside a dotted line representing the materiotechnological. The explicit differentiation yet convergence of these lines signifies the proposal that from the perspective of a very general definition of computation, computation could be said to be the basic activity of thought, made explicit in the technological encounter of logic and matter. Matter undergoes computation bound against a background of noise, which can be conceptualized as the conflict between different realms/types of computation (therefore spawning change/transformation; think the matter-organism-beyond continuum as computational “transmogrification” through noise). Organic systems are here initially shown to differ from ‘pure’ matter— which eventually becomes (intertwined with) technology—as they engage in the computationally-elemental process of signal extraction (i.e. sameness creation/contemplation via differentiation), complexifying and/or transforming computation as an activity of rule-following and rule-making or breaking. The organic line gives way to new computationally enabled or enabling activities: musicking; language; numbers; typography, etc.: presented as interlaced with the ‘technological/material’ line: tool-making and trapping; design, etc. — these things are not said to be computationally different from each other (except in levels of complexity) but by separating them we can appreciate the many approaches/variations to computation. Beyond deep time and the emergence of human life, we distinguish the computational era “proper”—that is, the conceptual awareness of computation as a pervasive phenomenon— which engages in signal processing. This is the moment spanning everything from the extraction of labor from mechanical automation (e.g., mills), to the invention of early computing devices (e.g. the step-reckoner) to the conceptualization of modern artificial intelligence, all depicted here as ‘synthetic systems;’ computation upon computation. A trivial distinction, in a way, if we are to conceive of everything from the physical to the organic to the technological as bound by rule-making and rule-following, but again: this is meant to underline the development of the concept of computation more than it being a proposal of what computation actually is. 
The organic computation and synthetic computation lines start converging as they engage in a process of behavioural contamination. This process is shown to be the contemporary condition and thus takes precedence on this diagram, but this same ‘behavioural contamination’ line could be drawn between any of the other steps mentioned earlier (i.e. between music and language, between trapping and design, etc.). For example: once a human understands what it might be like to be a machine, the human becomes or has the capacity to become more machine-like, and the other way around. Many other examples can be given of this: from songbird to factory workers to digital phenotyping. Crucially, for our contemporary contemplation: the moment computation becomes possible at a complex and fast enough level, it starts having an effect on the conception and possible capitalization of the past and future.</small>

Cognition, distributed across systems, continues to challenge its own limits. The limits of the brain as the seat of intelligence, the limits of language, the limits of the social (reproductive group), the limits of machinic extension, the limit of the liminal itself.^[See: [[12 Negintelligibility]].] The selective pressures currently constraining brain-linguistic-machinic cohorts; groups, seem—from contemplations of existential risk all the way to birth-rate declines worldwide—to signal a questioning of the limits of reproductive existence and predictive relevance. Even though, as Lyotard notes, we do not know what it is we are dealing with to begin with. If “a system’s directive is to ... incorporate its “ends” (i.e., goals) into the “means” (i.e., mechanisms), to ensure that attainment of its goals is (almost) inevitable [and where] the parts give rise to the ‘whole’ (upward causation), and constrain it ... while the whole constrains the constituent parts ... so as to conform to the laws of the higher levels defined by the system (downward causation)” (Ororbia and Friston 2023, pp. 9-10), then we advance as an inevitable _perpetuum mobile_ predatively seeking its own unpredictability, ultimately. That is, if our imperative is evolving cybernetic meta-variety and its observation, following the mortal computation lead.

&emsp;

>God is dead, and yet metaphysics continues.
>
>Salmon 2020, p. 111.

&emsp;

### Limit/aporia/catastrophe point

&emsp;

>What would be the best question for us to ask, and what is the answer to that question?^[The context of this question in the paper by Markosian is the paradoxes created by trying to find the—one and only—“winning” question to ask an angel/genie/deity/omnipotent oracle. Special thanks to Ties van Gemert for pointing me to this paper/question.]
>
>Markosian, N. “The paradox of the question.” _Analysis_ 57.2: 95-97, 1997.

&emsp;

We often desire novelty as radical unpredictability. Always, of course, tempered by prior preferences which are way beyond our access: the universal, evolutionary, etc., the habits which led ‘us’ here. We can think of different types of unpredictable spatiotemporal circumstances: from unpredictability in times of foraging for food, all the way to the reward mechanisms of slot-machines and mobile phones, to the computational contemplation of protein-folding and back.
The interesting limit or catastrophe point we seem to have reached in the past few years/decades is that, on the one hand: we have amassed complex data landscapes which are effectively predictively probed for _known-unknowns_ (we tend to call this _science_), but on the other: both the scientific and the commercial domains seem to demand _unknown-unknowns_ from GenAI technologies. We are epistemic-foraging at the very limits of epistemology. It seems, somehow, that scaling up the speed and size of things has created a double-edged _metareward function_: we desire a pattern we cannot predict, while simultaneously demanding transparent explanations of how these unpredictable patterns were produced (because we are used to things being, well, predictable). The situation—comparable to the gradual sedimentation of large-scale systems such as institutions, states, etc. in the past—is one where methods for collective prediction become expanded and distributed, but as they unfold they thereby create conditions which alter their target, moving _beyond_ previous methods of predictability. We continue to trip on our own shoelaces; evolving functions with snowball effects. The long history of knowledge-mediating structures unfolds with each new layer increasing the complexity of **what we _thought_ was epistemic access**; but, because of what we know as the arrow of time, the increase in vantage points (methods for accessing reality, or actual perspectives such as new species, new technologies, new knowledge) also increases their unavoidable implication and consideration, and therefore their network-effects. The effect of this idea, just as a speculation, is not just to make “knowledge” difficult to access but to radically transform what counts _as_ knowledge, because the predictive work this knowledge is put to drastically differs in chunk and in scope from what came before. With so-called AI: dramatically so. AI adds yet another sedimentary layer to this already-rough epistemological terrain. When patterns chunked by electronic computers become parsed by human collectives, we might be compounding interests we do not know we have: “It would appear that human reasoning has bootstrapped its way into forms of expressivity which would altogether elude the human mind in all practical terms, without an artifactual elaboration of what is not merely a subset of natural language, but an autonomous hierarchy of generative grammars, bound to their own interactive sites of formation.” (Cavia 2022, p. 100). One of the challenges which the disunified field of AI is faced with today is the fact that tracing a possible logic through complex, multidimensional data landscapes is _precisely what we need automated prediction for_. But, so long as it remains epistemically opaque or inaccessible from a _prior_ vantage point, it will remain both fascinating _and_ unconvincing, somehow (a trade-off which seems to drive human surprise-seeking). Reza Negarestani distinguishes between prediction and explanation in order to frame whether automata^[He means a specific kind of non-human agent he explores in _Intelligence and Spirit_ (2018).] could be understood as possessing intelligence:

&emsp;

>Prediction is not the same as explanation. ... What is required for explanation is not better prediction or the compression-regularity duality, **but the ability to selectively compress data or to single out one regularity over another.** ...
>precisely what language affords agents is the ability to **selectively compress data**, not merely to picture pattern-governed regularities but to describe and explain them in context. ... For our automata to count as belonging to the order of general intelligence, they must be able to perform material inferences, **to have the practical know-how or competences to use concepts**.
>
>Negarestani 2018, p. 314, our emphasis.

&emsp;

To us, however: selection implies pattern-preference; chunking. Having practical competence is the ability to foresee, to parse in a particular manner. Many others also charge simple versions of prediction with running counter to explanation because correlation, famously, does not imply causation. However, in our understanding, patterns govern all sensing and acting: synthesis by compression, and therefore advancement towards a new state—and the possible halting problems this implies: in both humans and possible automata—is a matter of persisting, of survival, hence: of what we frame as _prediction_ **writ-large**.^[In the “most general sense” (Sellars), as well as inevitably _creatively_ so (Deleuze): abstraction _always_ calls to concreteness. See: [[04 Concepts as pre-dictions]] for notes on this.] Different versions of what “selecting” (ibid.) entails are what lead to confusions regarding agency, intelligence and other related concepts. A forest or a network of mycelia can certainly be thought of as intelligent. So what does the concept do? Intelligence seems to track that which has dominating scope; chunking control, that which can effectuate itself further by predicting ever-novel states, under chaotic thermodynamic duress. We seem to confuse the drive to find patterns of ourselves with the drive to enter into a new dialogics with the not-so-inert environment. These and other problems of prediction in AI—if we may amass them as such, e.g., from the imitation game all the way to (un)supervised learning, human-in-the-loop, _orthogonality_, explainability and transparency, mechanistic interpretability, etc.—are due to their functionality precisely having been designed for parsing reality _differently_, unlike ‘us’: as multidimensional spaces human brains can do _other_ things with (such as conceptually probe to imagine what AI could be). As well as, of course, due to the contemporary matter-of-fact that “[a]lgorithms are made to be opaque—not just to protect businesses but due to the technical necessity of handling the complexity of the system.” (Bucher 2018, p. 41).^[The problem of the “black-box” in general (which Bucher tackles in her 2018 book) is, fundamentally, a problem of prediction: “The concept of the black box has become a catch-all for all the things we (seemingly) cannot know.” (ibid. p. 42).] This is a somewhat familiar trope in critical (new) media studies, and while “transparency is not just an end in itself” (Pasquale 2015, p. 8), transparency is still seen as a necessary condition for greater intelligibility (Bucher 2018, p. 41). As Cavia (2024, p. 4) defines it: intelligibility means the “property of a mental state that permits its presentation to thought as an object of cognition,” a definition which perhaps also contrabands a certain image of self-understanding transparency within it, or more generatively: a definition which allows us to understand _limits_.^[See footnote on Wittgenstein in [[02 Introduction to the Poltergeist]].]
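To make the compression-regularity duality invoked above a little more tangible, here is a toy predictive coder (our illustration, not Negarestani’s): it “predicts” each sample from the previous one and keeps only the residual errors, so that a pattern-governed regularity collapses into a highly compressible stream while an irregular signal does not; choosing a different predictor is, in this minimal sense, selecting which regularity gets singled out.

```python
def predictive_encode(signal):
    """Toy predictive coding: predict each sample as the previous one and
    keep only the residual (the prediction error)."""
    prediction, residuals = 0, []
    for sample in signal:
        residuals.append(sample - prediction)  # what the model failed to predict
        prediction = sample                    # naive model: "next = current"
    return residuals

def predictive_decode(residuals):
    """The original signal is fully recoverable from its errors."""
    signal, value = [], 0
    for r in residuals:
        value += r
        signal.append(value)
    return signal

regular = list(range(0, 20, 2))    # a pattern-governed regularity
print(predictive_encode(regular))  # near-constant residuals: easy to compress
assert predictive_decode(predictive_encode(regular)) == regular
```

The sketch does not settle the dispute between prediction and explanation; it only shows that compression and prediction can be read as two descriptions of the same operation, which is the sense in which we read “selection” above.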
Transparency and intelligibility are two sides of the same predictive coin: transparency implies that an intelligible—(re)presentational; reproducible—logic can be traced, where _tracing_ implies prediction (if we know _how_ x, y or z happen we will know how they can(not) happen again). Importantly, with computation as the “diagnosis of contingency” (Cavia 2024, p. 10),^[Cavia also states that: “The project of computation under univalence can thus be viewed as an attempt at outlining the minimal axiomatic commitments required to maximise inferential freedom, geared towards autonomous agency over rule-following automatism.” (ibid.). We are not sure here, because: what is the difference between the two? This is explored in [[10 Bias, or Falling into Place]] and in [[09 C is for Communism, and Constraint]].] as the lens through which we may understand processes of individuation, essential differentiation, and mortal computation, as the project which also seeks to make flexible certain formal constraints and focus on the system dynamics of organic computers, we cannot but assess the directives of the _local_ framing which ensues from presenting these perspectives: who speaks and from where? Do these pronouncements problematically assume their readers as progeny? Am I overstepping my boundaries?^[The joke should explain itself.] And we should pay mind to how, in the blinding light of AI, the aporia or limit in the moving-goalposts or AI-effect-style logic is there **because we want both opaqueness and transparency** (exploration _and_ exploitation): once we ‘understand’ something, it is predictable and thus no longer interesting for explorative generativity. The problem of prediction as determinate prescription is obvious, especially in the case where we are unable to trace where and how a certain result emerged: in weather-modeling we may be able to observe the consequences of a failed prediction, but in legal cases where AI-assistance is already implemented (Ashley 2019) to answer questions such as “will this person commit an offense again?”, the prediction-prescription line is incredibly blurred.

&emsp;

### Conclusion: _perfection_

Clearly, these are problems which exist across the board and before/beyond AI, but the AI-lens we now seem to evaluate everything through allows for some scrutiny with regard to our mortal proclivities. Bias, unavoidable as it is, presents us with the challenge that all decision-making is interested and invested in particular realms. Dreams of absolute prediction are therefore certainly dreams of absolute control. The concept of prediction, though, if thoroughly accepted as a matter-of-fact in all perception-cognition-action (this trinity not presented in any particular order), can improve our self-understanding as embedded within myriads of predictive, distributed schemas: from self-identity all the way to political alliances.
If I know I want to predict how things will behave, and I also know I enjoy a certain degree of unpredictability, these things can then be discussed generatively.^[As a banal example, the conversation that goes: A: “why did you betray my trust?” B: “I did not, I thought you were OK with what I did!” becomes elucidated in a different light when we understand what underlies it, in terms of prediction: A: “your behavior has had the consequence that I have to drastically adjust my model of our relationship/you, i.e.: my way of being able to predict your behavior.” B: “my behavior functioned within the parameters of what I had as a predictive model of our relationship/you.” What changes here? Well, for starters we sound like robots. Surprise, surprise. But: we are able to discuss each other as distributed (amongst each other, collectively: _with_ and _for_ each other) collections of models (assumptions, etc.) rather than reduce each other to errors (you are at **fault** for x, y or z). We can frame negation and generativity _as_ experiences, detaching ourselves from the experience of confrontation/conflict. These methods are tried-and-proven in the context of psychology: ‘frame discussions around what it is that _you_ perceived as bothersome, rather than blame the other party,’ etc. In the context of AI: ask what the model wants of you, rather than what you want of the model. Additionally, asking what the model wants of you should be paramount (_we should know better_) now that we exist in a world of “free” services which source us for their predictive (capital) gains. Moreover, in the context of a transparency and bias-auditing paradigm, where all efforts seem geared towards the allocation of blame, responsibility, etc., we might be better off challenging said notions through their predictive impetus.] A final note on the idea of _perfection_—which through our incursions into AIF we link to most other metaphysical _tendencies_ of mind: transcendence, stasis, infinity, etc. “Mechanical objectivity” (Daston and Galison 2007), the revelation of reality through technical means offering a “lesser-partisan,” or at least lesser-human-physiology-mediated, perspective on phenomena, marked an important change in scientific representation. Arthur Worthington’s water droplets, presented as an exemplary case in Daston and Galison’s _Objectivity_ (2007), to us reveal the physiological desire-imperative to simplify reality—to save energy on perceptual-cognitive-active resources—by way of, e.g., the prediction of geometric regularity, the prediction-production of _symmetry_.^[See also: [[All things mirrored]].] This is what Worthington drew, through “naked eye” (assisted by intense lighting) observation, as splashing water droplets _before_ photographing the splashes:

&emsp;

![[Worthington water droplets daston galison objectivity1.png|400]]
<small>Worthington, “The Splash of a Drop”, 1895, p. 44.</small>

&emsp;

This is what, through photography, was later revealed to Worthington:

![[Worthington water droplets daston galison objectivity2.png|400]]
<small>Worthington, “The Splash of a Drop”, 1895, p. 71.</small>

This is his _fantastic_^[See Friston et al., 2014 on the brain as a “phantastic” organ.] observation:

&emsp;

>I find records of many irregular or unsymmetrical figures, yet in compiling the history it has been inevitable that these should be rejected, if only because identical irregularities never recur.
>Thus the mind of the observer is filled with an ideal splash—an “Auto-Splash”—whose perfection may never be actually realized. ... My experience is that most persons pronounce what they have seen to be a regular and symmetrical star-shaped figure, and they are surprised when they come to examine it by detail in continuous light to find how far this is from the truth. ... I believe that the observer, usually finding himself unable to attend to more than a portion of the rays in the system, is liable instinctively to pick out for attention a part of the circumference where they are regularly spaced, and to fill up the rest in imagination, and that where a ray may be really absent he prefers to consider that it has been imperfectly viewed. This opinion is confirmed by the fact that in several cases, I have been able to observe with the naked eye a splash that was also simultaneously photographed, and have made the memorandum “quite regular,” though the photograph subsequently showed irregularity.
>
>Worthington, 1895, _The Splash of a Drop_, pp. 75-6.

&emsp;

As we have reviewed in this and our previous chapter, AIF posits that all persevering processes can be understood as manifestations of _prediction_, through a balancing act which aims to minimize the possible surprise that the guiding model from which predictions ensue might cease to exist. While this perspective elegantly accommodates diverse phenomena—from meditative states as recalibrations of hyperpriors to aesthetic-abstract appreciation as epistemic “foraging” in perceptual spaces—it might warrant some critical examination. Mathematical reasoning which seems to enter a realm dissociated from everything that can be “perceptually” attained, or very specific altered states of consciousness, and even random unguided exploration, can indeed be reconceptualized as falling _out_ of the predictive domain. While we do argue that this still constructively operates as the tempering of varying temporal scales through levels of abstraction, there is much research to be done. Some of these aspects will be treated in [[12 Negintelligibility]]. At least, let us settle on the idea that **predictive speculation, i.e., abstraction, is reproductively social**: it is the moving _and_ pooling of perspectives through linguistic modulation, perhaps stemming from the proto-indexical capacity to simply point at something (and therefore merge two or more perspectives). In this vast space we _necessarily_ cannot know but continue to navigate through, it seems we continue echoing Turing, who said our best understanding of AI should come from observing children. Thinkers such as Lyotard, Negarestani, or Chollet^[See: [[Negation]].] and Bach,^[See: [[DigitalFUTURES 2024 Doctoral Consortium on AGI 1, Joscha Bach, Is consciousness a missing link to AGI?]].] all seem to express variants of this. Creating more of _what we think we know_ seems to be all we know: taking a huge risk, allowing for the unfolding of a highly unpredictable structure. As above, so below, for better or for worse. Additional theoretical limitations emerge in AIF when considering the grounding of supposed _homophily_ across humans, which, when accepted as tacit, tends to ignore the complexity of social cognition, stemming as it does from a sometimes loving and sometimes (un)consciously cruel negative dialectics. What determines which predictions an organism-system-paradigm prioritizes over others?
Intersubjective experiences and collective intentionality introduce dimensions which exceed agent-focused accounts. We treat aspects of this in [[08 Active ignorance]]. The historical contingency of our conceptual frameworks, including active inference itself, creates a potentially self-fulfilling explanatory structure^[Perhaps, what Hui denounces when he challenges the possible “totalizing power of [technodiversity’s] mechanism, whether mechanical or organicist.” (2019, p. 235).] (which would _make_ **sense**, in every sense of _sense_ in the expression made here). These challenges delineate approaches which might provide creative insights into how we effectuate ourselves forward, through systems such as AI, through frameworks such as AIF. What is most important in metacognizing abstraction—in thinking _how_ chunking and parsing take place at all—is that when we recognize how our concepts _pre_-dict—operating through and shared among different partial perspectives—we witness how they track and influence various spatiotemporal depths, allowing our dialogics to become generative and generous; seeing _as_, all the way down (perhaps so reaching final immobility or non-action). This can mean that conceptual disagreements need not be resolved through either pure relativism or claims to absolute truth. Conceptuality can be made flexible and expanded by examining how _different_ modes of thought handle uncertainty in _distinct_ ways, revealing both possibly overlapping chunks as well as a vast diversity in spatiotemporal parsings. This is fundamentally a question of control, both sociopolitically and philosophically: concepts and their definitions actively shape attention and behavior. Prediction does not equal restriction, reduction or simplistic negation; it _contains_ these characteristics as _functional_ features.^[This is treated in [[04 Concepts as pre-dictions]].] Prediction involves a complex dance with surprise-appraisal, affordance exploitation-exploration, hence “bias” (as the paradigmatic virtue-signaling mechanism of our era) is not a negative, undesirable condition,^[Kahneman-style: where noise is a “fundamental flaw in thought.” (2021).] but a _given_,^[More on “givenness” through Sellars in [[06 Principle of Sufficient Interest]].] _filtering_ condition. A concrete condition which is accessed through predictive abstractions: this is simply the reason people are interesting, and conversations combinatorially novel, procreative. Prediction is only negatively reductive/oppressive when it is oppressive/negatively reductive; becoming aware of our incessant drive to predict (from mindreading to weather-modeling) actually probes our unavoidably predictive biases and reveals the possibility of a more social, perspectival landscape where consensus, dialogue, understanding and collectivity can be _explored_ **rather than _assumed_ to exist.** The latter is a coercive predictive trap sustaining many of our cultural-niches, installed by aggressive monocultural, flattening visions.

<div class="page-break" style="page-break-before: always;"></div>

### Footnotes