# Conclusion
### Ways forward, following others
In “What Makes a Good Theory, and How Do We Make a Theory Good?” (2024), computational cognitive scientist Olivia Guest proposes a “metatheoretical calculus” in an effort to constrain, or at least challenge, some of the ways in which much of our research is (currently) conducted. She defines the _calculus_ as comprising the following elements:
>(a) _metaphysical commitment_, the need to highlight what parts of theory are not under investigation, but are assumed, asserted, or essential;
>
>(b) _discursive survival_, the ability to be understood by interested non-bad actors, to withstand scrutiny within the intended (sub)field(s), and to negotiate the dialectical landscape thereof;
>
>(c) _empirical interface_, the potential to explicate the relationship between theory and observation, i.e., how observations relate to, and affect, theory and vice versa;
>
>(d) _minimising harm_, the reckoning with how theory is forged in a fire of historical, if not ongoing, abuses—from past crimes against humanity, to current exploitation, turbocharged or hyped by machine learning, to historical and present internal academic marginalisation. (Guest 2024, p. 510).^[Guest also informs us, very importantly, that: “Metatheoretical calculi do not require one single framework nor formalism, but constitute a proposal that one or more such formal systems might provide useful ways of navigating our metatheoretical ideas. An important consideration that must be addressed before moving to the definitions and formalisms below is the following: It is not the intent of the author to propose that a metatheoretical calculus is a single beast, it can be composed of figures, it can be verbal descriptions, it can be set theoretic, etc. It is the idea that formalising can set us free, allow us to think critically about our own thoughts, in this case about theory, and should not be used to lock us in to a certain way of thinking. Much like formal or computational modelling of phenomena generally, metatheoretical calculi, can be seen as consumable scientific products on the way to deeper insight.” (ibid.).]
Encountering this paper in 2024 proved, in retrospect, very insightful: from the beginning of this research, the central commitment of my early proposal was to distinguish the different conceptual (i.e., metaphysical) assumptions at work in _AI-as-engineering_ (van Rooij et al., 2023, more on this below) from those in AI as philosophy. One of the dangers of AI as a “floating signifier” (Suchman 2023, citing Lévi-Strauss 1987) is that “[w]hile interpretive flexibility is a feature of any technology, the thingness of AI works through a strategic vagueness that serves the interests of its promoters, as those who are uncertain about its referents (popular media commentators, policy makers and publics) are left to assume that others know what it is.” (Suchman 2023, n. pag.). Therefore, beginning anew, this work treats this metatheoretical calculus both as a future commitment and as a retroactive starting point, leading to the following observations:
**(a)**: As stated throughout the introduction, the main drive behind the current thesis is to put metaphysical commitments, particularly in language-modelling, under a _perspectivally predictive_ microscope. With respect to the metaphysical commitments of this project itself: it is a pity that it has unfolded in English only, but this will have to do; English is our contemporary _lingua franca_. Equally unfortunate is the individualist nature of the PhD: even though everything which exudes from this is a collaborative effort, it must all be framed as if resulting from a single individual. What is not under investigation here is how this ought to be received. As stated earlier, the combination of many fields results in a combination of audiences that is difficult to delineate.
With respect to **(b)**, in terms of (discursive) survival: we take this quite seriously, as it is what drives our guiding questions. How will future language-modulation result in the survival of x, y or z? How do we chunk and parse reality according to different logics or modes of survival? How do different concepts modulate the future, and how do they make different logics/modes of survival possible? As we have seen in the different chapters, this is always a 5E, and therefore political, question, since we are considering what gets to survive and what constraints guarantee its survival.
**(c)**: is a point where I relied mostly on the experimental work of others, plus a few experiments with language models, and “experiments on the page” through the use and abuse of optical illusions. However, an important observation needs to be made with regard to the future possibility that hypothesis-testing will become amenable to a high degree of inspection by systems such as (highly advanced versions of current) LLMs. Most likely, it will soon be possible to put many of our concepts to the test by querying a dynamic database such as a specialized language model. The speculative proposals advanced in this thesis therefore carry this possible promise as a quality. I will have to wait and see. What seems to hold promise, given this research, is the idea that concepts _abstract_ by intending futurity (rendering computations “above all [as] an apparatus that encodes” (Cavia 2024, p. 14)). Higher levels of abstraction mean the encompassing of orders that transcend more specialized, localized, actualized ones, which again, in our view, signals the predictive traction of concepts: a metaphysical commitment which leads to the effectuation of different power structures. The more abstract a concept, the stronger its future-oriented aspiration: _this here should apply to all cases_, i.e., the future is determined in these and those ways by the _shape_ of this concept (by the ways in which it segments reality).
**(d)**: is paramount, and as also mentioned in the introduction: this is a work in good faith. However, considering our speculations about **(c)**, it is difficult to say where things may or may not fall on the “wrong side of history”.^[E.g., I have used language models, of all sorts, extensively. Have I contributed to their training in ways that benefit the companies that I see as exploitative and problematic? Yes, most likely. How to engage, though? Often, I have wanted to stop working on the concept of “AI” altogether, to stop fueling the hype. However, not engaging often seems like a worse option. Wherever one finds oneself, my only helpful heuristic has been: do what you can, at your pace.]
There are overlaps here worth highlighting with the work Guest also conducted with van Rooij et al. (2023), where the authors diagnose, as we saw with Mitchell (2019, 2021; see also: Bender et al. 2021, Suchman 2023), some possibly misguided notions behind the terms “artificial”, “intelligence” and “A(G)I”. In “Reclaiming AI as a theoretical tool for cognitive science” (2023), the authors present reasons for why, in good faith, and removed from commercially-driven “AI-as-engineering”—the target of criticism in the article—the project of AI should rather be seen as a “computational toolbox” offering models and approaches that enhance theoretical cognitive science work (which connects/overlaps with philosophy and other humanistic fields). Many, however (especially those scientifically _and_ commercially invested in the project of AGI), have overextended this modeling relationship, which the authors trace from (early) theoretical ideas that human thinking might be understood through computational models all the way to the hyped claim that we can and soon will build machines with cognitive abilities matching human-level performance. Van Rooij et al.’s formal analysis demonstrates that this goal faces fundamental computational barriers, the paramount reason being that human cognition is mathematically intractable. From the perspective of our work, we are in full agreement, except with their narrow definition of computation, in which case we remain undecidedly pancomputationalist: signal-processing, as an effect spanning all life, can be understood through methods which _track changes_, all of which we can parse through traditional computational methods, where intractability is a fact which proves processes are open-ended. Perhaps the future will determine how a certain image of computation falls short of processes we find interesting to analyze, but until then: it is one of the most reliable lenses we can put on to formalize and therefore examine processes.
We will explain more about this later on, but first, consider van Rooij et al.’s list of AI diagnoses, which elucidates their criticisms (which, besides the point just mentioned, we endorse):
![[van rooij et al ai types hypes.png]]
<small>From: van Rooij et al., 2023, p. 2.</small>
The authors note how AI was, in fact, always about the question of the relationship between what we can computationally effectuate and what we view as “thinking.” They cite, e.g., Herbert Simon as representing the beginnings of this quest. This thesis diagnoses _AI-as-engineering_—as do the authors, following Crawford, Gebru, Birhane and others—as the chief problem. This problem was also identified earlier, by Philip Agre, as resulting from the context of early AI research, where psychologists were funded by instrumentally-oriented military research interests (1997, p. 16). Van Rooij et al. note how AI as (early) information-processing psychology rested on the conditional that human cognition _could_ be understood as a form of computation. It seems the interest in drawing boundaries between what defines humanity and all that is _not_ it always goes hand in hand with the fear that inspires destroying that which _resembles_ humanity but in essence is _not_, as the war machine continues to prove. We treated aspects of this in [[05 Prediction]], but will not dwell on it further here beyond saying that there are people who destroy people; this is a problem I do not know how to solve.
This much-too-war-indebted view of what counts as thinking and what does not has slowly become that which is known as “(minimal) computationalism”, currently comprising authors such as Chalmers (2011), Dietrich (1994) and Miłkowski (2013), all of whom van Rooij et al. cite as associated with the inclination that cognitive processes involve computational operations.^[Extending the invitation, in the list we could also include others closer to pancomputationalism, such as John Wheeler or, more recently, Max Tegmark.] But to us, the conceptual leverage is precisely here: to say that cognition is a kind of computation while being crucially aware that we actually cannot define either of these activities, but must make do with fuzzy, blurry visions which combine both, is to employ a metaphor which travels; which _relates_ the map to the territory instead of confusing one for the other.^[The authors refer to this map/territory conundrum on page four; on this subject, see also: Andrews 2021.] We would argue that, unavoidably, in **any** modeling we think about paths or mappings between inputs and outputs (broadly construed): whether we call them causes and effects, phenomena and noumena, impressions and facts, etc. It is a matter of how we chunk and parse: how we decide that reality ought to be tracked, given that analytical possibilities are veritably _inexhaustible_ (Felin and Kauffman 2019). What we define as input and as output is quite literally up for grabs.^[As Varela noted, and as we know from how the body synthesizes all sensations into an apparent continuum, it is rather difficult to count or otherwise arrange perceptual phenomena as a series of ordered inputs (Varela cited in Agre 1997, p. 57). However, counting and ordering is a highly useful abstraction which provides many future-modulating affordances, as is computation.]
>The current AI arms race is more symptomatic of the problems of late capitalism than promising of solutions to address them.
>
>Suchman 2023, n. pag.
To reclaim AI as a **tool**—do we want, specifically, _that_ concept?^[Suchman also cites how “the [Center on Privacy &] Technology at Georgetown Law issued an announcement that began: Words matter. Starting today, the Privacy Center will stop using the terms ‘artificial intelligence’, ‘AI’, and ‘machine learning’ in our work to expose and mitigate the harms of digital technologies in the lives of individuals and communities ([Tucker, 2022](https://journals.sagepub.com/doi/10.1177/20539517231206794#bibr21-20539517231206794)).” (Suchman 2023).] We’ve sought to question notions such as “tool” or “computation”, to at least temper (or tamper with) the speeds being pushed onto us by the commercially-driven AI paradigm. Computationalism theoretically implies that cognition can be understood as a form of computation, and we agree with van Rooij et al. that this does not imply that it is possible _in practice_ “to computationally (re)make cognition” (p. 15). However, it is possible to tamper with the concept of computation itself: we followed Agre in thinking the computational in terms of machinery (something we may formalize and possibly construct) and dynamics (leading to emergent properties and with intractable qualities), whether analog or digital. Above all, computation should be _interactionist_: making the elements of interest explicit (as also suggested by Guest 2024), and veering away from generalist claims (Agre 1997, pp. 57-63). “Reinventing computation is a considerable task.” (Agre 1997, p. 57). If we can identify-invent input and output in the analysis of a process, we can call it a computer. The ghostly idea of the computer itself is what has transformed our understanding of chunking and parsing _functions_: first as calculations: stones, then as people, then as (paper) machines, and back to silicon. Functional adaptations—mere organic _movement_ through possible bifurcations, for example—persist across diverse physical instantiations throughout biological history: selection preserves function, substrates are unreliable and environments are contingent.
Perhaps an odd example, but this is the type of example we go after throughout the project:^[E.g., in [[06 Principle of Sufficient Interest]].] the “computer” that is the medical field can be understood to be chunking reality, largely, into health and unhealth, while at the same time supposedly parsing it through the Hippocratic oath: a chunk which also witnesses how “harm” and “wellbeing” are societally, culturally, historically, etc., parsed. Our interest in framing activities as computational lies in the analysis on offer by doing so, which allows us to move away from paradigms which distinguish the cultural from the biological, the natural from the synthetic, etc. All these phenomena are overlapping, self-referential and theoretically convoluted.
Inputs/outputs of any kind can be defined by an infinity of programs, leading to interested ideas about compressibility, or possibly expansive complexity,^[See also: [[Quantum computing since Democritus]], Scott Aaronson, 2013.] all depending on the chunks we are _given_ and therefore _willing_ to parse (see the sketch after this paragraph). “The trajectory of AI research can be shaped by the limitations of the physical world—the speed of light, the three dimensions of space, cosmic rays disrupt memory chips—and it can also be shaped by the limitations of the discursive world—the available stock of vocabulary, metaphors, and narrative conventions.” (Agre 1997, p. 15). This is why we ought to engage both, and this is why this project advances an unorthodox position on computation as the modeling of transitions, of change, in different forms. Fundamentally, this project does not cognize cognition as something bound to the brain, or computation as something bound to the abstract or silicon realm, but both as a continuous and self-(dis)integrating process unfolding across substrates and timescales. It is, for example, impossible to say that cosmic rays won’t interfere with digital bits in any ongoing silicon-based computation. It is also difficult to say that tractability, or the time it takes with our current tools to “solve” a complex operation, is a measure for biological mimicry, simulation. Crucially: “Abstraction and implementation are defined reciprocally: an abstraction is abstracted *from* particular implementations and an implementation is an implementation *of* a particular abstraction.” (ibid., p. 66).
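A minimal sketch of the first point, in Python (the functions and data are our own, invented purely for illustration): one and the same input/output relation is realized by arbitrarily many distinct programs, and compressibility, here roughly proxied by zlib, is one “interested” way of measuring how much regularity a given chunking exposes.

```python
import hashlib
import zlib

# Three distinct programs, one and the same input/output relation.
def double_a(xs): return [x * 2 for x in xs]
def double_b(xs): return [x + x for x in xs]
def double_c(xs):
    out = []
    for x in xs:
        out.append(sum((x, x)))
    return out

data = list(range(10))
assert double_a(data) == double_b(data) == double_c(data)

# Compressibility as an "interested" measure of exposed regularity:
patterned = bytes(range(256)) * 4  # 1024 highly regular bytes
noisy = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))  # 1024 hash bytes
print(len(zlib.compress(patterned)), len(zlib.compress(noisy)))
# The patterned bytes shrink considerably; the hash output barely compresses.
```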
In this way, following Agre’s reciprocal definition, computers are also to be understood as “_language machines_” (Edwards 1996, p. 28, cited in ibid.), and language _as a computer_: a process which compresses and becomes an (ad)vantage from which to witness transitions through inputs and outputs. Following the “critical technical practice” proposed by Agre, what we seek is “an intervention within the field [of AI] that contests many of its basic ideas while remaining fundamentally sympathetic to computational modeling as a way of knowing.” (1997, p. xiv).^[Which is what van Rooij et al. also want to retain: the (pragmatic and investigative) usefulness in _AI as modeling_.] A _way of knowing_, in the same way that language is a way of knowing, and in the way that we can say that _everything is language_, in order to perhaps run into trouble where some things seem _not_ to be linguistic. Much of science used to say other animals do not feel, or think, or plan. We change our ideas about this. Agre examines concepts such as bits, gates, wires, etc., as basic components for computation, and in the same way we could de/reconstruct concepts such as symbol, word, concept, only to realize—as did Derrida and Wittgenstein—how recursive, unfounded and eternal they all are. This is an intractable _fact._ However, as Agre says: “[t]he only way out of a technical impasse is through it.” (ibid., p. xv), and “[c]omputational principles are, of course, always open to debate and reformulation.” (ibid., p. 19). This is the productive confusion in comparing and contrasting (through the forgetfulness of metaphors). Challenging the restrictions imposed by, e.g., David Marr, Agre notes that computational inquiry into human activity “requires a broader conception of computation itself” (ibid., p. 20).
Current AI scholarship proposes terminological adaptations, too. Shah and Bender (2024) propose to analyze language models as Information Access (IA) technologies, in order to allow us to focus _specifically_ on their effects, and to treat aspects of what we might (not) want from them.^[This reframing follows Bender’s call to stop calling things “AI”—a term she considers dangerous and misleading (Bender 2023).] This is particularly important in light of LLMs residing somewhere between oracle, search engine and generative combinatorial machine, in terms of functionality. The authors refer to the general term of _Information Behavior_ to indicate how users interact with information, and how this occurs in a given context, involving “not only the information that people need or seek but also what they encounter accidentally and serendipitously” (Shah and Bender, section 2). Indeed, we can also think about the complex consequences of information behavior referred to in the introduction, as the explorative drive into the black box: we want to be surprised and challenged, and learn things we did not know.
However, here we encounter not only the issue that these systems are highly limited in terms of training and what they can provide as surprising results, but also the issue of the black box itself: we cannot be too sure about how results are accomplished, nor how we are (intentional) accomplices in them. _Information Seeking_ is the term Shah and Bender employ to refer to the interactions during which “intentionality matters”: “IS refers to an intentional action by a person and does not entail successful [information access]: just because [we] are seeking information that does not mean that [we] find it or that the information even exists.” (ibid.). They also present the concept of _Information Retrieval_ as a subset of information seeking: cases where the information being sought actually “exists”.^[The authors acknowledge the gradations and blurriness possibly implied here.] Then there is the crucial _Information Filtering_: how a (e.g., recommender) system _(re)presents_ relevance to a user “without the explicit or expressed need, such as a query or a question.” (ibid.). Finally, _Information Access_: the “focused interaction” between a user and the information where relevance is sought. _Information access_ (IA) is the vocabulary antidote to their LLM critique: they suggest reframing the landscape of LLMs towards an understanding of _types of access to information._ We endorse this idea.
By scanning the existing literature, Shah and Bender also diagnose the main aspects of what users seem to desire from information systems: _relevance_, _novelty_ and _diversity_; _speed_ and _personalization_/_contextualization_; _interactivity_ and _transparency_. Framing these complex concepts through the paradoxes of the “common” and the “specific” we just saw can leave one quite disoriented. As many “generalist” projects tend to reveal, designing for *all things* is rather difficult.^[The _No Free Lunch theorem_ (treated for example by Milan Stürmer, forthcoming) is an exemplary case of this effect in evaluating optimization algorithms.] In contemporary information systems, particularly generative AI, this problematic image of “commonly desired features” is reflected in technical architectures through their statistical generalizations of vast and diverse, but always specifically ideologically situated, existing human knowledge. These systems clearly normalize patterns of what, e.g., visibility, speech and thought ought to be, rendering dominant perspectives as universal truths. What Shah and Bender note is that in more “traditional” information retrieval, the boundaries between user agency and algorithmic mediation were somewhat clearer, preserving space for users to question results against alternative frameworks.
We see this supposed earlier transparency as still highly questionable,^[All _established_ “knowledge” has always been a question of encryption, control and power.] as we treated in [[11 Post-Control Script-Societies]]. It is certain, however, that generative IA systems blur distinctions between chunking and parsing by, for example, synthesizing text that appears as “new information”: are we finding a new pattern or simply being fed an existing one in new attire? This is what the authors highlight as the decreased transparency and user agency in IA interactions, when compared to, e.g., search engines. The main difference is that this new generative enframing makes it increasingly difficult to _identify_ and _challenge_ the chunks embedded within it. For this reason, this thesis sees here a philosophical opportunity which invites us to explore concepts in the ways proposed. _Through_ the impasse of the given technological enframing, it might be possible to gather some frictions and problems that allow new analytical entries into questions of _conceptual possibilism_ within the probabilistic frameworks that increasingly canalize attention.
Melanie Mitchell points out that in 1892, William James said (of psychology at the time): “This is no science; it is only the hope of a science”. She finds this a perfect characterization of today’s AI,^[She goes on: “Indeed, several researchers have made analogies between AI and the medieval practice of alchemy. In 1977, AI researcher Terry Winograd wrote, “In some ways [AI] is akin to medieval alchemy. We are at the stage of pouring together different combinations of substances and seeing what happens, not yet having developed satisfactory theories...but...it was the practical experience and curiosity of the alchemists which provided the wealth of data from which a scientific theory of chemistry could be developed”. Four decades later, Eric Horvitz, director of Microsoft Research, concurred: “Right now, what we are doing is not a science but a kind of alchemy” [86]. In order to understand the nature of true progress in AI, and in particular, why it is harder than we think, we need to move from alchemy to developing a scientific understanding of intelligence.” Mitchell 2021, p. 8.] though we would say it is not even the hope of a science, but should be reclaimed for scientific purposes, as van Rooij et al. (2023) suggest. Computation, as an unfolding field, should be understood as the dialectics between implementation and abstraction (Agre 1997, p. 71). Both these activities are accessed through the sedimentation of metaphors into concepts, the semantic distribution of sociality, the spectral dimension of metaphysical speculation, etc. What is more, following Agre, the reciprocal influence between _machines_ and _ghosts_ (well, he says “ideas”)^[Little etymological difference here: they are both apparitions; apprehensions.] should be at the center of attention in a context where we find ourselves in the face of “complex devices whose spirit often seems alien” (ibid., p. 315). Once we _point_ to and concretize them in novel ways, they are no longer inevitable, and we can perhaps modulate them. One way to slow things down is by complicating metaphors that might lead to new concepts—as they have unarguably historically done—and therefore to insist that we compute in silico and in the flesh; chunk and parse.
### Answering or postponing questions
 
>[Science produces] hitherto unthought-of analogies. It is always assumed, moreover, that there is, and can be, no way of ever computing analogy. The discovery of a new analogy is an ‘intuitive leap’, a ‘lucky guess’; and there can be no philosophy, so it is said, of lucky guesses...
>
>Masterman 2005 (original publication date unknown), p. 80.
 
>As there are good and bad infinities, there are also good and bad contingencies: luck or catastrophe.
>
>Hui 2019, p. 166.
 
If anything can be gathered from whatever this thesis is, it could be the message that artificial intelligence is a contingent _business_, but that it _could be_ the (philosophy of) science of the contingent.^[Following the observations of van Rooij et al., 2023: it is not.] One massive, lucky metaphor. Defining AI, intelligence or artificiality is not dissimilar, essentially, from getting an answer to: **who, who, where, when, why are we and what do we want**? Screaming into the abyss and hoping for an echo, i.e., an analogy, a self-evidencing reflection of what we (think we) should be. Something always *negintelligible*. Our perspectivism, as explored throughout this thesis, is not a “human(ist)” one: every rendering of reality is an expansion of perspective itself, as a function. And perspective is anything which is written upon^[Chunked.] by that which reads it,^[Parsed.] and vice versa. When something is observed, it is not only disturbed but inevitably, automatically, has an effect on whatever is observing; interacting with it. Hui’s citation above continues with “[t]he organization of a machine should be valued by its capacity to deal with these different notions of contingency and their classification, instead of mere automatism.” (ibid.). The criticisms of “mere” automatisms are often of the nature that “human nature does not conform to the wishes of classifiers.” (Masterman 2005, p. 64). Throughout this thesis we have questioned what “mere” might entail.^[To remind: [[B The being of “mere” machines and “mere” propositions]].] What are we if not mere, automatic classifiers?
According to Hui, who follows Simondon on this, a machine that is _sensitive_ is able to distinguish pattern from noise; that is: a machine is something with an interested, _filtering perspective_ that renders some things relevant, salient, lucky, catastrophic, and others not. As we saw in [[05 Prediction]], this is Negarestani’s position, too (2018, p. 314). The contemplation of these issues, in light of the condition of AI (i.e., the creation of something “like us” but radically alien, too), therefore opens up a massive can of regenerative metaphysical worms. Questions of body and mind, of the parsing of spatiotemporalities, of identity/equivalence and difference, of self-reference, etc.^[As Varela puts it: “we cannot trace a given experience to its origins in a unique fashion. In fact, whenever we do try and find the source of, say, a perception or an idea, we find ourselves in an ever-receding fractal, and wherever we choose to delve we find it equally full of details and interdependencies. It is always the perception of a perception of a perception…. Or the description of a description of a description... There is nowhere we can drop anchor and say, “This is where this perception started; this is how it was done.” (1984, p. 318). This is why our take is, following Hui, to follow the recursive doubling down of function upon function.] Basic, irreducible and inevitably paradoxical questions haunt our search. Distinguishing salience from non-salience is itself completely contingent; what we agree on today we might disagree on tomorrow. Who _we_ is, is completely up for grabs. Saying otherwise is installing a vision of how things ought to be, and that is something I walk away from.^[See: Ursula Le Guin’s short story “The ones who walk away from Omelas” (1973).]
This strange effect is what we have been calling the _poltergeist_ in the machine. As we have argued, the mind-or-AI-or-philosophy is a massively distributed, sedimenting process, when understood as an unfolding historical project which passes _through_ agents, none of them in full possession of it. Language, one of the most eloquent filters of mind, inarticulately arrests^[Note that we point to eloquence to highlight how it is through language that expression is possible, but since it always defers (Derrida) and sediments contextually (Wittgenstein), it is rather inarticulate: articulation is that which chunks; the totality of language is the eternal changing of chunks.] functional aspects through pointings, gestures, words, concepts, and is therefore a system capable of rendering meta-perspectives. Saying _that_, pointing at _it_, makes perspectives overlap: regardless of which future they each intuit to be evidencing. Saying _that sentence just now_ doubles down on this, enabling further meta-cognitions on it. This seems to be what fascinates-frustrates language-using creatures; it has haunted metaperspectives^[Self-evidencing, as seen in [[11 Post-Control Script-Societies]], renders perspective upon perspective. Effectuating a model which tracks what it inevitably incompletely believes is “out there” is perspective refining a perspectival function. By this very pronouncement, this metaperspective comes to mean another twist of the turn, simply because we are saying it’s possible to create a new metaperspective. What we do by contracting language into or through language models seems to be an aspect of this, too: no human being can read all the text that exists as a language model’s training data, yet we look for interesting, deeper structures within these structures, rendering a metaperspective on language’s existing perspectives.] from their very inception: how come calling something **_something_** both localizes *and* dislodges it from its “mere” material substrate? This strange function travels through meat and generations of meat.^[“Even the transformer that our central nervous system is, highly sophisticated in the order of living creatures, can only transcribe and inscribe [parse and chunk, in our terms] according to its own rhythm the excitations which come to it from the milieu in which it lives. ... [Technologies such as computers, a]s material extensions of our capacity to memorize, ... [through] the role played in them by symbolic language as supreme ‘condenser’ of all information ... show in their own way that there is no break between matter and mind, at least in its reactive functions, which we call performance functions.” (Lyotard 1988, p. 43). Lyotard notes this leads to questions he does not take up there and then, but we take them up in our analysis of function.]
This question quite literally presses itself up against meat whenever we try to make sense of the mammalian brain. If the brain supposedly houses important, even defining, aspects of the mind, what do we make of the rest of the body? According to Michael Levin, our conventional scientific ontology seems to hold that organisms are the “sculpted products of genetics and environment”,^[An ontology challenged by Simondon, too, as noted by Hui in the chapter cited above.] with the complex organ that is the brain as “the unique seat of intelligence” (2025, p. 2). His research on bioelectricity and beyond challenges this framework, matching some of the observations we have been outlining in this research, too. Evolution seems to conserve abstract patterns such as **functions**, _sometimes_ form,^[E.g., anatomical details, which can be thought of as subservient to function.] but certainly neither _thing_ nor identity. This conservation of functionality, to Levin, “indicates fundamental symmetries between the self-construction of bodies and of minds, revealing a much broader view of diverse intelligence across the agential material of life beyond neural substrates.” (ibid.). While highly skeptical of the concept of _agency_,^[“Reason is the special embodiment in us of the disciplined counter-agency which saves the world.” (Whitehead 1929, p. 37).] we follow. In our context, taking such process- and system-oriented views has meant that we take concepts (as Masterman has suggested, see: 2005, pp. 79-80) as providing evidence of how (re)classification of chunks creates a malleable fabric of communication, which, once so understood, can be effectively woven, parsed _differently_, opening metaperspectives to ever-more generative, possibilistic affairs (perhaps prying open _interconcept_ space (Wolfram 2023)). “In the stabilized life there is no room for Reason” (Whitehead 1929, p. 23). This does not give anything any _agency_; it rather decomposes the concept.
If information-processing (as communication: effectuated _difference_) is a fundamental feature of what we understand to be a _system_, then how do we begin organizing system parts into hierarchies so as to understand _what processes what_? We have regurgitated this ancient question in the form of: does language have linguistic beings, or vice versa? “[T]ransindividuality is constituted by the two poles of interiority and exteriority, which consist of a recursive movement: the interiorization of the exterior and the exteriorization of the interior.” (Hui 2019, p. 171). We can only understand how this functions by witnessing the *function* under *concrete* experimentation, and this always bites back. These questions are as banal as they are deep, but asking them renders a new vantage point, settles a new sediment. These questions allow philosophy to _make sense_ of itself as (relevant to) AI; a new stratum of silica. _Philosophy must self-evidence as AI_: that is, philosophy needs to actively infer itself into the technical substrate that currently sustains what we call “AI”.
Lucy Suchman: “How is it that AI has come to be figured uncontroversially as a thing, however many controversies “it” may engender?” (2023, n. pag.). It is, in our take, also therefore necessary that we understand _all_ (its) conceptual processes as existing on a vast continuum, one which outlives any thinking being, and therefore one which proves itself to be a physical-metaphysical continuum. Suchman: “To let the term [of AI] pass is to miss the opportunity to trace its sources of power and to demystify its referents.” (ibid.). This is what part of the naturalizing proposed here has entailed, along with thinkers in AIF, a framework which provides ample ground for further analysis through recursive differentiations. This was one of the starting points of this project, and it is as banal as it is difficult: to recognize how we parse and chunk the shapes of that which we do not know. AI is representative, in a way, of “everything”, since the AI project basically says “give me generalized generalization”, where everyone should be counted as an expert^[Complicating the idea of the possibility of communicating about concepts one lacks knowledge of, where the groundedness of these concepts should be a result of social externalism; deference to experts (e.g., Burge, 1979; Putnam, 1975, as cited in Butlin 2021, p. 3088).] because, essentially, the desire is always “we want ‘it’ to behave the way humans do.”
This condition stems precisely from the communicative function of concepts, which is to temper complexity towards predictable sameness: concepts create equivalences where there are none.^[“The lowest form of mental experience is blind urge towards a form of experience, that is to say, an urge towards a form for realization. These forms of definiteness are the Platonic forms, the Platonic ideas, the medieval universals.” (Whitehead 1929, p. 35).] This is how we have framed _metaphysics_ throughout this work: abstraction at the limits of abstraction is predictive, self-evidencing desire. Being linguistic transindividuals, the generative problems continue to emerge. If the (extended, enactive, etc.) body and its context is just as important as the brain for whatever it is we call “mind,” then the already-sedimented pragmatics-automatics will always render a very specific politics.^[Whitehead notes this much in the struggles of speculative reason in its encounter with experience (1929, pp. 49-51), or as Brassier puts it: “It falls to conceptual rationality to forge the explanatory bridge from thought to being.” (2011, p. 47, our emphasis).] These issues recursively bite their own philosophical tail, as an unavoidable aspect of communicating about this (i.e., ‘delineating’ the ‘elements’) is the reliance on—and creation of new—abstractions. It is perhaps in this way that (non)philosophy might be defined as the activity that goes on just to avoid having to keep silent (Laruelle (1998) 2016, p. 57).^[Or: “Reason is the organ of emphasis upon novelty.” (Whitehead 1929, p. 23). “‘Fatigue’ is the antithesis of Reason.” (ibid., p. 26). “Provided that we admit the category of final causation, we can consistently define the primary function of Reason. This function is to constitute, emphasize, and criticize the final causes and strength of aims directed towards them.” (ibid., p. 29). Or: “[R]ationality does not deserve its name if it denies its part in the open passibility and uncontrolled creativity there is in most languages, including the cognitive. To the extent that it really does comprise such a denial, technical, scientific and economic rationality would deserve the name of ‘ideology’.” (Lyotard 1991 (1988), p. 73).] These _salience-nonsilence_ abstractive chunks—also understood by Masterman (2005) as guided by the _breath_ of creatures like us, again hinting at the _poltergeist animus_—are understood by this project not only as everyday terms (“cat,” “tool”) and philosophical concepts (“linearity,” “soul”), and as scientific and mathematical models (weather systems; the concept of _zero_ in all its formulations), but, crucially, as self-evidencing perception at large: without abstraction, i.e., basic continuity “paradoxically” sustained by intermittent contingency, there can be no perception.
The main research question this work has been after is/was: **if philosophy is the metacognitive perspective which aims at abstraction and concept-creation—both of which can be seen as different processes, because of this negintelligible ‘continuity vs. contingency’ condition—what kinds of abstractions can this project help engineer, in order to predict the further development of abstraction _in or of AI_, as mind “outside” or “beyond” the organism?** This deserves particular attention, given the dialectic that AI has inherited its “basic concepts” from philosophy: whatever unfolds _as_ AI might come to determine what becomes of “philosophy” in the (near) future. The abstractions we have speculatively proposed are: *chunk* and *parse* operations; _semantic noise_ (not our concept, but our analysis of it) ensuing in *language-modulation*; *functions* of perseverance/persistence (in a new, neither derogatory nor reductive light); *active ignorance*; the *negintelligible*; *autosemeiosis*; *vantagepointillism*; the *xpectator*; anti-control (constraint-acknowledging *non-agency*); and a few more. In chapters such as [[05 Prediction]], [[11 Post-Control Script-Societies]] and [[06 Principle of Sufficient Interest]] we explored how what AI points at is a function of pure abstraction itself: undefined, permanently under development, and supposedly representing the one and only thing that cannot represent itself: the _poltergeist_. We also asked: what philosophical concepts has AI inherited, and how do these shape its development? We explored this question in quite some detail in [[10 Bias, or Falling into Place]] as well as in [[03 Semantic noise]]. The answer to the question ‘what new dialectical relationships emerge when AI systems built on philosophical foundations begin generating their own abstractions?’^[This question is particularly important, given that form and function transform each other, through metaphors, as technologies evolve. Note also Lyotard on this dialectic: “[That which escapes, evades, etc.] is a resistance to clever programmes and fat telegrams. The whole question is this: is the passage possible, will it be possible with, or allowed by, the new mode of inscription and memoration that characterizes the new technologies? Do they not impose syntheses conceived still more intimately in the soul than any earlier technology has done? But by that very fact, do they not also help refine our anamnesic resistance? I'll stop in the vague hope, which is too dialectical to take seriously. All this remains to be thought out, tried out.” (Lyotard 1991 (1988), p. 57). An attempt was made, at least, even if too dialectical to take seriously, by this thesis.] is one of _interconceptual_ (Wolfram 2023) dimensions, and only time can tell. It is interesting to note that the most devoted of AI/LLM researchers are asking themselves similar questions as we type this.^[During the panel discussion on the ARC Prize 2024 (Tufa Labs, Zurich, January 25, 2025), panelists Michael Hersche (Research Associate at IBM Research), Daniel Franzen and Jan Disselhoff (winners of the 2024 ARC-AGI prize) and Tim Scarfe (MLST) ask “what will happen when, in the future, advanced models are capable of passing any reasoning benchmark we can come up with, yet we still do not have the generality that we are after?” (my paraphrasing). See: https://www.youtube.com/watch?v=mt3Im4j5iaQ, starting at minute 28.]
Future research (of the author and of the field) is already orienting itself towards this.^[Again, see for example Wolfram on interconcept space (2023), but also Cavia 2022.] Based on our own abstracting condition, inherently based on self-evidencing prediction, how might we predict the evolution of abstraction capabilities in future systems? This, as we have outlined, is particularly interesting if we look at the concept of _function_.^[Following Stiegler on technology and logos, Lyotard writes: “[The self-referential capacity of (language) technology] is exercised by remembering its own presuppositions and implications as its limitations ... [which] opens up the world of what has been excluded by its very constitution, by the structures of its functioning, at all levels.” (Lyotard 1991 (1988), p. 53). Aspects of this are what we have treated under the guise of _negintelligibility_.]
Why do we search for _meaning_? According to Brassier, meaning is what allows for index-extraction from an otherwise hostile and disinterested reality. “_Meaning is a function of conception_ and conception involves representation—though this is _not_ to say that conceptual representation can be construed in terms of word-world mappings. It falls to conceptual rationality to forge the explanatory bridge from thought to being.” (2011, p. 47, our emphasis). This is because the meaning-making of conceptual rationality, at least according to our interpretation of Sellars, and our AIF observations throughout this thesis, can be understood as an interested _extraction of function_, in service of its evolution. This is if we accept that we _give-or-take_^[This is meant as a generative pun on reason(s). Reasons operate within margins of error, ambiguity. (Sellarsian) dialogical reasoning involves negotiation, adjustments where we “give” in(to) some areas, desires, phenomena, and “take” away/from/part in others. Rational structures are fundamentally approximate, as they are always (probabilistic-possibilistic) *inferences*, and they function despite-because of this flexibility; background noise, contingency, etc. The wordplay hints at this slippage, at the necessary tolerance for ambiguity (apologies for this intolerant irrationality). We can only extract functional meaning from reality by accepting that our conceptual frameworks are always _give or take_. Functional extraction operates through these mediations.] reasons as structures which allow for the functional passing of messages, resulting in the construction of systems such as science, which can offer a retroactive meta-perspective on previous structures that led to their creation. It is this recursive unfolding which has led to a heightened awareness of the postmodern rendering, of our post-truth condition, our “crisis of meaning”, leading to attention to issues of “bias” and all manner of calls to transparency, explainability, legibility, etc. This is a fertile paradigm for a new turn, a new metaperspective. This is for future thought. It is relevant to mention that the overstretching of concepts we have exposed throughout this work can be subject to the charge Brassier, and many others, direct at Latour’s reduction of _everything_ to an undifferentiated whole (2011, p. 52).^[Rendering a “flat” ontology.] However, what we can say is that the aim here has been the creation of productive tensions, whether lucid or confused. This is not _cognophobia_ (ibid.) or antirepresentationalist irrationalism: this is saying these things (scientific objects, philosophical concepts, everyday terms) are here and we can redefine, explain and therefore modulate them. But what allows for their construction, modulation, consideration, etc., is their unstable character as evolving, unfolding structures. We do not “surrender” to ineffability nor epistemic pessimism, but wish to designate novel areas of investigation, such as the _negintelligible_.
An abandonist, nihilist crisis of meaning might happen when function gets _stuck_ in a paradigm, spinning, incapable of passing through. When it cannot *actualize*, where actualization means coupling with the larger whole. Function, as the function of reason, is anarchic movement (Whitehead 1929). Function (e.g., displacement) gets at process (e.g., forced migration; war), and can help identify constraints, which in human cases give us clues about _unsettled_ politics (e.g., which abstract functions—e.g., predictive, self-evidencing identity-desires—lead processes where people forcefully, violently displace others?). Continuing with _**unsettling**_ abstractions, we can think of the “no self view”—from the Dao to Nietzsche to Nishitani to Metzinger—as a hint to _function_ being that which passes through. The function of AI as a marketing strategy is to razzle-dazzle (Suchman 2023, Bender et al., 2021). The function of AI as philosophy is to generate-explore concepts-abstractions. The function of AI as cognitive science is to better understand cognition and its (im)possible computation (van Rooij et al. 2023). The function of AI as _floating signifier_ (Levi-Strauss in Suchman 2023) is to create conceptual possibilities (Suchman sees many problems here). But the function of AI as AI is AI. The last example is evidence of how conceptual differentiation is needed, but the confusion between all concepts is evidence of this difference sometimes being effaced by functionality passing through them: it would almost seem as if it is more interesting that we _suspect_ there can be definitions, rather than any specific definition itself. This may be because the function of thought so far has been _not to stop._
As it is with most conclusions, this should only lead us to more questions. Language is an incredibly chaotic system. Take Wolfram’s famous Rule 30 cellular automaton (Wolfram 2002, p. 871) as an example of a simple language.^[See exposition at: https://mathworld.wolfram.com/Rule30.html.] If we had a formula for its ensuing chaos, we could know the answer to “what happens at point x after n steps” and jump ahead in time, predictively, provided we had a tried and reliable formula-function which would give us the shape of Rule 30. This would mean getting to the answer of “what happens next?” with a lot less computational effort than it takes to run Rule 30 and see what happens at our point of interest. Wolfram shows that computational irreducibility implies that, with chaotic systems, this is not possible: some systems are irreducible in the sense that the only formula we have is running them and seeing what happens at point x. Do simple functions, such as pointing at something, mean simple computations? No, because simple rules can give us high complexity: pointing at something is already incredibly abstract and links vast histories of perspectives, arguably all of them. What is interesting about understanding function through processes like cellular automata, which rely on the possibility of universal computation, is that we understand function _differently_: once we have fixed substrates, chips, which we can make behave in repeatable ways by implementing defined functions (in cellular automata, this is what the specification of their unfolding is), we create a metaperspective on the interest at hand.
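To make the irreducibility point concrete, here is a minimal sketch in Python (the function names are ours; only the update rule itself is Wolfram’s) of what “running it and seeing” amounts to: the rule is trivial to state, yet the only general route we know to the state of a cell after n steps is to compute all n steps.

```python
def rule30_step(cells):
    """One synchronous update of Rule 30: new cell = left XOR (center OR right)."""
    padded = [0, 0] + cells + [0, 0]  # zero-pad so the pattern can grow each step
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def run(steps):
    cells = [1]  # start from a single "on" cell
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# "What happens at the center cell after 100 steps?" No known shortcut:
state = run(100)
print(state[len(state) // 2])
```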
This work explored, learned, failed and restarted many times, trying to get around these questions. In [[03 Semantic noise]] we explored how NLP praxis, unable to deal with the fact that a truly objective, unbiased, all-viewing god’s-eye perspective is impossible, deals with its shortcomings by approaching intelligibility/intelligence in various self-defeating ways. This renders the refinement of both concepts—of intelligence and of artificiality—by way of the negintelligible excess created at their interstices, which we explored in [[05 Prediction]] and [[04 Concepts as pre-dictions]]. All action, all influence, all [[Vantagepointillism]] require perspectival movement (in the crudest of terms: a vector), which we treated in [[Xpectator]], [[Agency]], [[Language-modulating]] and other entries. Observing varieties of spatiotemporal inclination by exploring the [[Free Energy Principle]] (the topology of something blanketing itself off), Hegel (the tendencies of reason), gradient descent (the tendencies of AI) and gravity (the tendency of it all) was the task of [[10 Bias, or Falling into Place]]. In [[06 Principle of Sufficient Interest]] we dealt with the way knowledge specializes socially by (trans)individuating itself through interested agents. In [[11 Post-Control Script-Societies]] we treated the need to come to terms with larger-than-life systems and how we are subject to them. In [[07 Phenomenology of Sick Spirit]] we explored the dialectics of the HPA-axis through a speculative neuroendocrinology, trying to explore how hormones tune cognition, among other things. In [[09 C is for Communism, and Constraint]] a few other things happened: we went for full poetic license to explore our constraints as language-using creatures, who (sometimes) attempt to cooperate through linguistic praxis. Finally, we concluded the thesis by showing what was common to the arguments in all the preceding chapters: the pull of the negintelligible.^[An idea was to add a note in the form of a ‘•’ to be placed wherever we—I—feel the negintelligible can be introduced in order to intuit something beyond what has been typed. We might pursue this idea in the living book, online version of the thesis.]
Negintelligibility deals, among other things, with the fact that one can’t choose to have an insight: an insight _happens to one._ Otherwise we would work very hard at producing permanent insights all the time, and in many ways we can say this is what systems like philosophy and science actually do. They are filters we create to temper our differences (and this effectuates itself through institutions, habits, etc.). The way we seem to do this is by relying on intelligible strategies that track the future through different degrees of predictability. What challenges these tractions, and allows for function-evolution, is negintelligibility: something outside of ourselves, relinquishing us from the burden of localizable agency. The (social) externalization of thought is not a possibility but a fact. The movements of the extended mind, such as this text, these very letters, suffice to provide evidence for the phenomenon that I am not here saying them to you; you conjure a spectral voice which is neither mine, nor yours, nor anybody else’s. The challenge lies not in finding a homuncular agency behind, e.g., this text. The interest of this thesis is to explore a space in which possibly stagnant concepts such as freedom and agency, much too present in AI, in politics, only point to unresolved and possibly unsolvable domination dynamics in the realm of differently-evolving creatures exploring possibility spaces. The hope has been that movement can be made towards non-individualistic thought outside the head; metasociality. This predicament is simple but understated, and ubiquitous. Being meat-based phenomena with the capacity to externalize language-outputs, which are unavoidably dialogical and materially distributed, offers the possibility for this meat to be modeled, molten, measured and massacred by the language that envelops it. Luck or catastrophe. We see promise in the idea of _language-modulation_, as explored in [[09 C is for Communism, and Constraint]], where the current images of “choice” and “control” can become adjusted to/amended with a hard determinist clause, which strangely enables a new kind of modular versatility (perhaps another version of “control” itself). Not “freedom” through “unfreedom”. Something else.
 
### Distributed cognition, semantics and computation
 
>[T]here is no one single technology, but rather multiple cosmotechnics.
>
>Hui 2019, p. 226.
 
If “computers” were once humans, which became tapes, which became chips, then some kind of _function_ must have passed through all of these. The task is to modulate the semantic space of functions, where modulation implies both attending to the different modes afforded by a specific modality and exploring their space of articulation: the variations and edges of possible syntheses. To make things less abstract: computation is not only an unfinished and definitely contingent vision of the possible spaces of logical articulations, but also dogs, dust, meat, abuse and UFOs (please note: this is not a flattening of these categories but an enumeration of diverse modes in a space of thought). Not only because these are unit-encodings, countable projectibles, collective semantic “decisions” about how reality is chunked, but also because these, too, are logically embraceable if we construct conceptual structures according to which they are seen to **function** in specific ways. Conversing with these links is modulating them: the labor of engagement with terms *is* the function of these terms *is* their effectuation *is* their absolute power over the material dynamics that ensue. This work tries to come to terms with the implications of a computational-semantic-aesthetic engagement, with itself as computation, semantics and aesthetics (as equivalences which can be made to **differ**). Computation differs and equivalences, semantics differs and equivalences, aesthetics differs and equivalences, politics differs and equivalences: this reveals these as the functions underlying the processes we chunk and parse reality by. The drive is to present an image of abstractive, elaborated cognition (that is: labored through) which becomes an interlocutor between the deeply isolationist philosophical image of cognition, and therefore of AI, and the ghostly, blurred and impossible image of AI presented in the popular realm, emerging out of Silicon Valley.
Looking into the possible origins of what we have diagnosed (e.g., what would have to be the case for concepts to function as semantic attractors), this project turned to the various ways in which volition and inclination can be interpreted in the context of an anarchic semantics. In the context of the practical, electronic computation of what we call _semantics_ (i.e., what matters to humans at a particular moment in time), Hinton and Shallice (1991) made a significant leap through the development of connectionist networks trained with backpropagation—essentially a process of recursive filtering towards a return, where initial iteration and its filtering through recursion result in the refinement of teleological data structures: semantic attraction. As Juarrero (2023) explains, in the context of semantic attractors: these initialize/reset conditions and determine the different conditional probabilities which result in the extraction-production of features. Feedback loops, such as those of backpropagation, or those in human learning cycles, are instances of coherent, constrained dynamics (Juarrero ibid., p. 101) which result in the mereological, emergent properties that *are* semantic attractors, learned behaviors; habits or effects.
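To make the attractor image concrete, consider a minimal sketch (ours, not the Hinton–Shallice model itself, and with made-up toy data): a tiny two-layer network trained with backpropagation to map one-hot “word” inputs onto distributed semantic feature vectors. After training, a corrupted input is still pulled toward the learned feature pattern, which is the attractor-like refinement gestured at above.
```python
import numpy as np

# Minimal sketch (not the Hinton–Shallice model itself; toy data):
# a two-layer network trained with backpropagation to map one-hot
# "word" inputs onto distributed "semantic" feature vectors.
rng = np.random.default_rng(0)

X = np.eye(4)                     # four one-hot "words"
Y = np.array([[1, 1, 0, 0, 0],    # "cat":  animate, furry
              [1, 1, 0, 0, 1],    # "dog":  animate, furry, barks
              [0, 0, 1, 1, 0],    # "cup":  artifact, graspable
              [0, 0, 1, 0, 0]])   # "wall": artifact

W1 = rng.normal(0, 0.5, (4, 8))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (8, 5))   # hidden -> semantic weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    H = sigmoid(X @ W1)                 # forward pass
    O = sigmoid(H @ W2)
    dO = (O - Y) * O * (1 - O)          # backprop: error through sigmoid
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dO                 # gradient descent updates
    W1 -= lr * X.T @ dH

# A noisy "dog" input is still drawn near dog's learned features:
noisy = np.array([0.1, 0.8, 0.2, 0.0])
print(np.round(sigmoid(sigmoid(noisy @ W1) @ W2), 2))
```
In Juarrero’s vocabulary, the trained weights act as constraints: they condition which output states remain probable, so perturbed inputs relax back toward the learned semantic regions.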
If we know the interests at hand, these can be understood as the refinement of specific _functions_. This work feels a strong sense of camaraderie with the proposals of coherence through constraint presented by Juarrero, though an important divergence is marked by this work’s tendency towards a proposal that _trans-verses_ ideas of autonomy and self-determination, in favor of contextual communal embedding. If computation is, due to possible demonstrations of its different incompletenesses, “an abstractive procedure of determination that always confronts indeterminacy” (Fazi 2018, p. 5), or the diagnosis of contingency itself (Cavia 2022), then it might make sense to confront this indeterminacy not **only** as dissociated from its context (as criticized by Kittler, 1995, 1997) but precisely as coupled to contrastive procedures with what is supposedly non-computation. All these share, after all, the same material substrate, and exist under the same constraints: information-processing. AI is no longer a “mere” computing machine but an _idea_ about what thought can possibly be. Therefore, our attempt has been to *ghostbust* something which haunts, something ungraspable and possibly even inexistent, as, perhaps, a way to get at the limit. Assuming there’s a limit (the ghost), where there is no limit (the ghost).
It has been difficult to narrow the interpretation to a single field, a single philosophy, an author, given the richness of the topic and its many possible openings. I have done _justice_ to nothing and also do not wish to wave any _flag_. We have traveled avenues as correlated and parallel as concepts and semantics, or as apparently—only superficially—disparate as gravity and neuroendocrinology. Closing down the possibilities offered by these speculative openings would mean doing injustice to the annoying dissatisfactions of this project: how can we project one _specific_ approach to the image of the functions of language, as presented by current technological innovations, if we propose that it is indefinitely inherited, interpretably variable and in constant evolutionary flux? No system contains itself, not even in identity. The complexities ensuing from this and other paradoxes are publicly recognizable, yet in public it seems we tend to glide over them, _avoiding_ the abyss of eternal regress, in order for language to do something _else_, beyond itself. The ensuing (almost inevitably) utilitarian image of language, of language as something serving another purpose, has therefore been placed under particular scrutiny, as it intertwines (techno)utilitarianism, teleological accounts of life, cybernetic limits, and functional/purposeful/formal schemes of reason. This “something else” that language tends towards is what we have explored in terms of linguistic functions: of concepts as semantic attractors, and in terms of conceptual interests, ultimately in terms of a tautological functionalism. Conceptuality as something forever unfinished is nothing new (Wittgenstein, Derrida), but we have reinterpreted aspects of these perspectives towards a different understanding that aims to connect language-modelling and speculative philosophies of language towards a new domain: _language-modulation_.
From the perspective of this research, it can be quite unambiguously stated that philosophy needs to reflect on the technoscientific imaginary, and vice versa. _Imaginary_ here is not meant to invoke an undertone of “social construction,” “fictitious myth” or “naïve objectivity”: all perception is imagination, because, in AIF, all perception is hallucination; self-evidencing enaction. That which is imaginative pertains to all knowledge-seeking endeavors; it is the access _to_ and retrieval _from_ the combinatorial space so often presented as characterizing what is termed _intelligence_. The access to/retrieval from a space of possibilities which is _separate_ and experientially different from the inevitable state experienced through the senses implies, paradoxically, an unavoidable lack of access to what appears as given and contingent (Kant). The ability to know that one doesn’t know could be said to be one of the most basic requirements for knowledge-seeking (as presented in [[12 Negintelligibility]]), and the intentionality or volition that drives this process is undergirded by a multiplicity of intractable (if accessible at all) factors, including everything from the predictive drive behind abstract representation in scientific models to hormonal homeostasis in keeping an organism together. The challenge lies in understanding the relations between these by observing them as unfolding through ever-changing categories, which depend on constraints, to us, again: revealing the evolution of functions. Throughout the work, therefore, endless piñatas of philosophical perspectives have been bashed and brought to bear on particular scientific questions regarding intelligence: intentional behavior, the boundaries of the *mental*, the definitions of *explanation* or *understanding*, and even the exploration of the origins of different types of motivational states, based on a personal account of an experience with corticosteroids.
If there is anything like a summary to this work, it follows very much in line with Hui’s statements that “cybernetics is fundamentally a metaphysical project” (2019, p. 211) and that as “the accomplishment of metaphysics, [it] is the force unifying “humanity” through globalization and neocolonization.” (ibid., p. 215). This is how we have treated AIF, as our contemporary cybernetic entry: where we see its prowess but must be attentive to its problematic (political) neutralities (which can obliterate).^[Much of this was treated in [[08 Active ignorance]] and [[11 Post-Control Script-Societies]].] Through this, we have observed that concepts are pre-dicates: predictively functional utterances. Predictive, again, in a very, very, very open interpretation: as the statistical assessment of well-founded or trivial probabilities and as explorative propositions, arguments. Arguments, which house concepts, enable further, more refined predictions, estimations based on conscious expectations (and below: (hyper)priors). Under _certain_ circumstances (e.g., [[Winograd schemas]]), these predictions fail to account for alternate, evolving modes of existence, ones that do not even fit the intentional bill of their makers. This project has obsessed over a few simple details, which merge mechanical, cybernetic objectivity with reasoning schemes in the flesh. The pursuit of AI, particularly in the realm of natural language processing, forgets that language is non-individual, dialogical. Everything an agent understands as a *reason* is _always_ social and _always_ retroactive, and this pensive, narrative persistence owes everything to gravity. The thinking condition thinking the thinking condition can also be understood as a(n un)fortunate glitch in the simulation that is self-evidencing perception.
 
### Function and failure and closing an open-ended system
Our position, as has been remarked in various places, risks “imprecise” thinking, flatness, plenty of problems all around. This is a failure, and an opening for learning more in the future. The discussion has also been unavoidably selective, given the vastness of the available literature on the topic of intelligence, ranging from molecular biology to political science. What was studied here was aimed at finding points of agreement and divergence, particularly by observing the continuum between certain philosophical claims pertaining to *meaning* and *reason*, and certain (scientific) models and recent (AI) findings pertaining to *prediction* and *intelligence*.
 
>... consider that it makes no sense to attribute a failure to a river if it happened to run out of water; yet it would make sense to attribute failure to an animal’s actions if it was unable to satisfy its need of water. Arguably, in the latter case, it is even a case of intrinsic rather than just externally attributed failure; that there is something seriously going wrong would be manifested by impaired functioning of the body, and this in turn would be given to the animal’s own concerned perspective on the situation.
>
>Froese 2023, pp. 1-2.
 
We agree, insofar as stating there is a _concerned perspective_ means we can identify and distill _interests_, which all things—not just organisms—have, if they persist. Within our functional analysis, the distinction between attributing failure to an animal versus a river represents not an ontological difference but rather a perspectival-functional one, based on our experience as motivated, interested agents. The concept of “failure” itself can be applied to any system we see(k) function within: a river that no longer carries water could be characterized as “failing” in its nutrient-carrying or flowing function, just as an organism unable to hydrate itself “fails” in various of its homeostatic functions. For the river, the absence of water represents precisely the non-fulfillment of its defining characteristic—which we observe from an interested perspective, one with strong perseverance tendencies, extending far into the future (as we saw in [[05 Prediction]]).^[See also: [[Structural coupling]].] To remark on failure and function again: at first, tools such as word processors, grammar correctors, and currently LLMs, were meant to clean up our failures in language. Nowadays there exist “humanizing” tools which add typos and other errors to text in order to pass the work off as human-typed. Where is “failure” to be located or understood here? This dissertation failed at plenty of things, but perhaps with some optimism we can distill our interests from understanding the failures: where else should I have gone? What is _wrong_ with the functionality of this work? Its awkwardness represents mistakes, errors, and possibly new ways around functionality.
The motivation that launched this project was the observation that even though the definition of a concept or specific word (such as _artificial intelligence_) may be highly contested and even fully absent, this does not deter publics and experts from employing it extensively.^[See Suchman 2023, or Mitchell 2019.] We repeat what others say; _not_ doing so seems very difficult: we’d have no language at all without feigning and ignorance. When employing terms, we make attempts at delineating the contours of one aspect or another that pertains to the topic at hand, but the essence of the term, its situatedness in the landscape of semantic possibilities, remains mostly “untouched” by any one perspective. This is one of the things we have referred to as remarkably _ghostly_. At the same time, modulating terms is all (linguistic) perspectives ever do. Regardless of all the charges of “failure” against perspectival relativisms, I still find it difficult to deny how it seems to be precisely in the fertile eventfulness of an unstable _semantic attractor_ that meaning resides; in the functional _promise_ of meaning. Some exceptionally realist words are—_need to be_—highly specific: “tumor” determines where to cut. But tumors evolving in the wild are precisely those things which verge between benign and cancerous; morphological luck or catastrophe. We can determine local aspects of phenomena such as black holes, electrons, or simply: bricks. But the mouths that talk about these things also eat and laugh: where are the realist borders of these phenomena? Realism and anti-realism are not opposed; they simply fail to understand how the other predicts itself forward.
If we currently witness the predictive cybernetic domination of thought, through an “artificial selection enforced by the politics of transhumanism (e.g., human enhancement, genetic engineering)”, and if we “take seriously Wiener’s argument that the opposition between mechanism and vitalism is dissolved in cybernetics, [as] the completion of metaphysics begun in Hegel’s philosophy” then, Hui asks, how can philosophy still think? (2019, pp. 155-6). If philosophy is the _highest recursive form of thinking_ (ibid.) and, according to Hui, we must identify a new condition under which a transformation is possible^[“[...] in order to escape the enclosure of feedback loops.” Hui 2019, p. 156.], then why not look at the failing organism? Hui’s answer lies in looking closer at the _organizing inorganic_—a new layer presented to the organic, which comes to organize it—of which he notes:
>We are more than ever living in an epoch of cybernetics, since the apparatus and environment are becoming organismic. The environment actively engages with our everyday activities, and the advent of planetary smartification means precisely that recursivity will constitute the major mode of computation and operation of our future environment. The recursivity of algorithms equipped with big data will penetrate into every facet of human organs and social organs. The mode of participation of technology is fundamentally environmental while at the same time transforming the environment.
>
>(ibid., p. 163).
Participation begins in the act of multiperspectival witnessing (the creation of social organs), at perspective and its transindividuating articulation, as we saw in [[11 Post-Control Script-Societies]]. Attentional power dynamics shape the probability landscapes of future possibilities; whatever has our attention _already has us_. This is semantic bidirectional evolution between complex dynamical processes which might selectively become symbiotic or dominate one another, as we speculated in [[05 Prediction]]. Therefore, paying attention to the project of AI is, on the one hand, trying to modulate it, and on the other: unfortunately fueling the fire. This has vast consequences for power dynamics, especially considering our increasingly computationally-aware condition. The transition from apparently material (i.e., labor and discipline) to modulatable informational forms of power (i.e., societies of control) presents plenty of difficult questions and opportunities for the mechanics of perspectival attention. As constraints—the limits of sociality, of the environment, of modelling anything—become more palpable, to us, they continue to reveal how functions pass through them, and where they fail. Of course, we know very well from evolution that failure _really is_ (and here we are full realists) observer-relative.

### Infinities and mortalities
Rieder suggests that a way to think about changing conceptions is to follow Deleuze (1988) following Foucault’s _The Order of Things_. Deleuze reads the _épistémè_ situated in the seventeenth and eighteenth centuries “through the notion of ‘unfolding’ and couples it with what he refers to as the ‘forces that **raise things to infinity**’” (Rieder 2020, p. 31, citing Deleuze 1988, p. 128, our emphasis in bold). The encyclopedic _épistémè_ “[e]pitomized by Linnaeus’s _Systema Naturae_ (published in twelve editions between 1735 and 1767), divided in the kingdoms of animals, plants, and minerals, ... is organized around categorization into a **timeless system**. [In this] representation, [entities] are merely positioned on it through the attribution of identity and difference with other entities, **in infinite variation**.” (Rieder, p. 30, our emphasis in bold). Here, in Rieder/Deleuze, we find a key for thinking about what the function that passes through the living is, and how it results in _ordering_ scripts, or in the organizing inorganic (Hui). The metacognitive tendency to infinity might be the semantic attractor, the meta-realization, the self-evidencing script by which something finite (an organ(ism)) is able to continue to _function_.
Mortal, limited things need to keep going, motivated by _something_. If that something is limited, the system effectively ends. Enter: infinity (or *nothingness*, for that matter, a possible other side of the metaphysical coin). The exertion of perspectival domination (or charity) _tends_ to infinity, or it wouldn’t **function**. The uncharitable problem today is, as Suchman notes, that the process of algorithmic intensification is not a solution but “a contributing constituent of growing planetary problems—the climate crisis, food insecurity, forced migration, conflict and war, and inequality” (2023, n. pag.)—concerns which become marginalized if our attention is taken up with AI framed as a supposed existential threat (ibid.). To proceed more charitably we might need to, as claimed throughout this work, reconsider the infinite tendencies of thought and modulate them by making them banal, funny (e.g., by naturalizing them), or by reframing traditional understandings of the infinite (again, if only to bring heaven down to Earth).
Rieder notes, still following Deleuze on Foucault, that this new “variational” _épistémè_, much like other _becomings_ of society, initially went _from_ the infinite: from God-given, to produced and organized by the human “processes of life, work, and language” (ibid., p. 31), to a _fini-unlimited_ understanding of (creative) order:^[A limited number yielding almost unlimited combinations: “a finite number of components yields a practically unlimited diversity of combinations” (Deleuze, 1988, p. 131, translation amended), quoted in Rieder, p. 31. The translation is amended from “unlimited infinity”, which Rieder finds less elegant, in light of the original “fini-illimité.”] a “permutative proclivity [where] guiding interests [...] drive how data are made meaningful” (ibid., pp. 31-32). This might sound like our state-space becomes reduced in the face of infinite beginnings. But as complexity can result from “simple” combinatorial permutation (e.g., in LLMs and similar systems, which are able to encompass more and more bits than any human brain ever could), perhaps the opposite is true. In exploring (linguistic, semantic, scientific) variations on a theme (i.e., the living), we rendered many types of infinities (quite literally so in mathematics). Even when god is dead (or finite), realizing the varieties of mortal limits changes self-evidencing at the system level. A human life actualizes by self-evidencing, a language-system actualizes by self-evidencing through humans, a technological system self-evidences by combining previous systems. The processes of work, life and language that lead to new processes are engaged by things other than themselves. “[T]he definition of a mortal computer does not commit to implementation in any particular substrate/niche.” (Ororbia and Friston 2023, p. 22).
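As a back-of-the-envelope illustration of the _fini-unlimited_ (the numbers below are ours, chosen only for scale): a small finite inventory of components already yields combinatorial spaces that no brain or archive could exhaust.
```python
# Toy arithmetic for the fini-unlimited: k component types in
# sequences of length n yield k**n possible combinations,
# e.g. 26 letters in 10- or 40-letter strings, or a 50,000-word
# vocabulary in 100-word texts.
for k, n in [(26, 10), (26, 40), (50_000, 100)]:
    digits = len(str(k**n)) - 1
    print(f"{k} components, length {n}: ~10^{digits} combinations")
```
Finite components, practically unlimited diversity: the growth is exponential in the length of the combination, which is why “more bits than any human brain” arrives so quickly.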
The development of the anatomy of the eye did not come about because a creature **wanted** to _see_ and accomplished this; rather, it emerged gradually as light-sensitivity became distilled through the bodies of different creatures. Now we have telescopes. This effect of function transfer is perhaps clearest of all in the pursuit of the abstract, blurry telos that is artificial intelligence, where most of the technical 20th-century work leading up to it based itself on linguistic intuitions (symbolic and procedural thought, limited conceptions of common sense, etc.), none of which turned out to be “AI”. The function tries to find other substrates, other means. The novelty of the evolutionary challenge now lies in _how_ it doubles down on itself as a function: instead of appearing to have (formal) limits, it is now more infinitely combinatorial than ever, and “it” invents itself (hence the ambitious designation of “machine learning”). This doubling-down effect is also apparent in phenomena such as live digital maps, which have become the reality of the map _and_ the territory all in one: we start from a concrete location, abstract, and end up at concreteness again.
We even have AI-aspiring tech companies calling themselves _Meta_. Control imposed, control presumed and modulated, control so transparent and “ethereal” that it has become an altogether different concept (Deleuze). Once an infrastructure of control becomes so pervasive that it is apparently indispensable and inevitable (many people “cannot imagine how they did things in the past” without *x* or *y* app or device), then that is _true_ Deleuzian control. But it is also actively ignorant treason. Maladaptivity. If I still buy from Amazon: am I just an unforgivable traitor? Knowing what it implies, why am I interested in what it has to offer? If destructive corporations, hoarding techlords, crooked institutions, etc., currently pool power by grabbing (attentional) control as opposed to disciplining, this should bring our attention to the disciplining of attention: can we disattend to these possibly maladaptive structures? Ignoring them might be the only way they will be rendered _evitable_.
If formal subsumption (being managed, knowing one is managed) becomes real subsumption (managing oneself), and the management becomes invisible, self-performed, then, indeed: where is control to be found? The concept of control assumes there exists certitude, or at least confidence, in guiding a course of action, in sustaining an ideology, however undefinable or opaque. Control, most minimally defined as the exertion of a constraint on the dynamics of a system, needs the limit, _is_ the constraint, to begin with: reducing possibilities, creating degrees of freedom. If we cannot determine real agency at the level of a “simple” machine (is the tape moved and evolved by the machine head, or vice versa?), changing the course of action might imply a new kind of _stasis_. We can “give up”, give in, and relinquish the idea of control, because we don’t have it, never had it, especially if it is the case that we sleep with the enemy night after night (which we do: how else are these words being typed and read? The systems enabling this are part of the destructive problem we are addressing). It is possible to proceed towards a new state where, if systems have us rather than us having them, things can become more critically **immobile**, enabling contemplation of the functions we are the unfolding substrate of. Radically different things are also possible.
What is also worth noting is that when a distinction is made between a “given” reality which can be built upon, and the “choices” of actors within it, we create not only a strange exceptionalism in favor of the actors versus the “natural” and inert background, but also an opposition between certain actors as deserving of more agency than others. We refuse that narrative: _Ni Mètre, Ni Dieu_ (H. P. Duerr, 1974, p. 49). I am a traitor when buying from Amazon just as much as Jeff Bezos is in being its tyrant ruler. We both have our desires, -dividuated, myopic and introspectively contradictory as they are. The gesture here is towards the thinking of -dividualism as always having been the case: nothing has been decomposed; it was never composed to begin with. The cult of the individual and the cult of freedom can die and give way to the dethroning of massively pooled powers, as argued in [[09 C is for Communism, and Constraint]].
Bear in mind the suggestion is not that this is a particularly felicitous state. This situation continues to be, without a doubt, “the progressive and dispersed installation of a new system of domination” (Deleuze, 1990, p. 7)—only there’s nothing new about it, or worse: what is new is that it has indeed become so *transparently* obvious: “you paid for this” (Bezos’ own words, thanking the audience for facilitating his trip to the outer edges of the atmosphere). Social power has never been anything other than power: a symmetry-breaking modulation of (social) constraints. We do not fight against a big other; we live in eternal dissatisfaction with our own mutually-dependent, decomposed condition. This may sound pessimistic but it is meant to be relieving. There can be other ways. A truly *proportional* communism (“to each according to…”), where perspectives explore surviving within and beyond constraints together, is one which takes rationality, that is: **ratio**, the chunking and parsing matter of fact of any persevering entity, _seriously_.
The main function of this project has been to elaborate an elucidation of the necessarily failing ungroundedness of the concept of (artificial) intelligence—and/or/as different variants of philosophical reason—borrowing from *and* questioning computer science, psychology, biology, philosophy; language-use altogether. The employment of these various bodies of knowledge is not only meant to provide a large enough context for the themes pertaining to intelligence, but also to allow these fields to converse with each other. Discussions can benefit from _not_ talking past each other, but from structural coupling revealing problems, frictions and incongruences, from the metaperspective of _functionality_. Experimental philosophy approaches such as this one might render productive tensions when parsed through LLM vector-space representations, possibly leading to novel conceptual insights, impasses or incursions into the methods and structures of either (LLMs or philosophy). Such reconceptualizations are already exposing how meat-based discourse depends on very particular ways to chunk and parse (all systems are biased: _need_ bias), just as LLMs depend on their own, **revealing hallucinations all the way down for both**. Future exchanges between meat and non-meat-mediated philosophy could present opportunities for conceptual reconstruction, where neither trivial legacies inherited from the past, nor approaches traditionally understood as “computational”, hold evolutive primacy.
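As a gesture at what “parsing through LLM vector-space representations” could minimally look like, here is a hypothetical sketch: the `embed` function below is a deterministic stand-in producing fake vectors, for which any real embedding model would be substituted; the point is only that relations between concepts become measurable geometry.
```python
import numpy as np

# Hypothetical sketch: compare "concepts" by cosine similarity in an
# embedding space. embed() is a deterministic stand-in producing fake
# vectors; a real LLM embedding model would replace it.
def embed(term: str, dim: int = 64) -> np.ndarray:
    seed = sum(ord(c) for c in term)          # toy, reproducible seed
    return np.random.default_rng(seed).normal(size=dim)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

terms = ["intelligence", "prediction", "reason", "ratio"]
for i, t in enumerate(terms):
    for u in terms[i + 1:]:
        print(f"{t} ~ {u}: {cosine(embed(t), embed(u)):+.3f}")
```
With real embeddings, divergences between how a philosophical corpus and a model cluster the same terms would be exactly the kind of productive tension mentioned above.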
The development of artificial intelligence—in its current statisticopredictive^[In the pejorative sense here, not the expansive sense explored in [[05 Prediction]].] guise—necessitates that anecdotal, marginal, failed, insufficient, etc., *intelligence parameters* become publicly discussed and parametrized^[As Levin notes: “Intelligence [can be understood as] the degree of competency of navigating any space (not just the familiar 3D space of motility), including morphospace, transcriptional space, physiological space, etc., toward desirable regions, while avoiding being trapped in local minima. Estimates of intelligence of any system are observer-dependent, and say as much about the observer and their limitations as they do about the system itself.” (2022, p. 2). If this bidirectional, dialectical, perspectival condition is what we have to work with, then it ought to be as publicly shared/dialogized as possible, in order to promote the search for possible functions, not their limitation. How public _public_ really is, is another debate, one we touched upon in the context of power in [[11 Post-Control Script-Societies]].] in order for _actual_ AI as (a tool for a social, slow cognitive) science to begin.^[Van Rooij et al., 2023.] If we accept—which in this project we have done—a provisional definition of cognition as “the functional computations that take place between perception and action, which allow the agent to span a wider range of time ... than its immediate _now_ [and] which enable it to generalize and infer patterns from instances of stimuli—precursors to more advanced forms of recombining concepts, language, and logic.” (Levin 2022, p. 2), then even if the desire for the floating signifier that is generalized intelligence (i.e., _Artificial General Intelligence_) is ignored, “simpler” operations that pertain to (sustaining) the enunciating; modeling; translating and future-projecting of information structures are needed in the context of the (limited, conservative) performance of _any_ assessment.
The definition of parameters in the “general” scheme of “intelligence” runs into at least two basic problems, namely that: **a**) intelligence itself remains undefined yet paradoxically tacitly accepted as an investigative context, and **b**) the objectivist development, application and evolution of its parameters come to constrain the possible anecdotes, accidents or imaginaries that _could_ pertain to intelligence. That is, if _intelligence_ is to be considered a possible and/or useful concept at all. Diagnostically thinking in broad strokes, we observe two symptoms in this situation. On the one hand, the technocapitalist AI-conglomeration of life proceeds by way of operations which ignore **a** but *thrive* on the noise produced by **a** being the case, while pursuing **b**. On the other hand, a heterogeneously noise-composed refusal of the constraining effects of **b** draws attention to the impossibility of a general solution to **a**, given that this would unavoidably result in imaginaries unsuitable for all that which cannot become parametrized. The reason our concepts are to remain necessarily contested grounds is **b**: none of this is news; this is the diagnosis from which this project was born.
Conceptual, linguistic modulation has been at the core of this project. Rather than concepts explaining reality, we have focused on: how does the evolution of concepts itself narrate new realities? How might conceptual transformation itself serve as the generative mechanism for marginal or currently inexistent phenomena? Concepts, in this context, are dynamic evolutionary entities in their own right, rather than static tools; it is therefore relevant to try and observe what it is that their movement produces. Instead of asking how concepts map onto intelligible phenomena, we can explore how a concept such as the “negintelligible” might reveal new territories. If concepts are not explanatory devices but agents that bring forth their own domains, this thesis (and many others) is, perhaps, evidence to that effect. Instead of deriving concepts from phenomena, we have sought to derive phenomena from concepts (often resulting in observations about functions and their ratios).
_Ratio_, chunking and parsing, is an effect of sensitivity to difference. Never before, perhaps, have we lived in an age where our object of study escapes and evades our dissection and understanding because of its speed and scale. What we call “AI” evolves faster than our capacity to chunk and parse it. This is, paradoxically, what led to its emergence in the first place: trying to formalize and reveal aspects of this very basic sensitivity to difference that characterizes adaptive cognition. Without ratio, we have nothing, but when our expanding cognitive undertakings reveal that ratios can be apprehended in so many different ways, the disorientation that ensues calls for the emergence of ever-novel meta-ratios (_concepts_), for vantage points to coordinate and unfold new metrics.
**Does philosophy, as the realm of pure abstraction, set the boundaries for what AI can achieve? Or could AI, as the mind abstracted, eventually limit what philosophy can explore?** Considering how our capacity for abstraction, for thought, rests on linguistic foundations, the ratios of which depend on the inevitable rhythms of our breath, on our surrendering to sleep, our (in)capacity for politics, and our mortal limits: we have reached an impasse where concepts, as parsed by systems such as large language models, confuse traditional conceptual functions. No meat born today will know a world in which non-meat things don’t talk back.
There is no other way to conclude than to say that each chapter argued for something completely different. And to hope that the reader refers to the living book, because this project will continue to evolve.
And to apologize for what may have been quite inarticulate, and overly speculative, at times.
<div class="page-break" style="page-break-before: always;"></div>
### Footnotes