# The Poltergeist in the Machine
### Sedimented, semantic, social and spectral perspectives on prediction in the artificial mind
So-called **artificial intelligence** (AI) can be said to have as its aim the understanding (and possible imitation) of certain information-processing aspects of the **brain**, primarily those we relate to the concept of **mind**. Without disregarding the evidence that the brain is foundational to what we call mind or cognition, this brain-centric view is easily challenged once we observe the mind as a **pattern** distributed across various systems and spatiotemporal scales. An unavoidable aspect of communicating about patterns is the _predictive_ reliance on the capacity for **abstraction** (i.e., the delineation of transmittable, parsable chunks).
All perception _abstracts_ for experience to be possible at all, and one of the main arguments of this work is that the function of abstraction is the effectuation of preferences in favor of the function of **persistence**. Any perceptual persistence is enactive, perspectivally partial, ecologically embedded, and spatiotemporally situated: abstractions can only exist as information processes that are subject to (cases and combinations of) _in_-definition, intractability, and incalculability. Because of these limits, the effects of abstraction are always the result of **speculatively predictive pattern recombinations**. This proposal renders a highly relativistic reality which can only be analyzed by comparing observer-dependent perspectives. To provide an entry into this perspectivism, this project follows an approach that aims at a scale-free analysis of the basic tenets of information processing observable across all manner of abstracting entities (from bacteria to distributed semantic memory): **Active Inference** (AIF), a novel framework which presents an enactive, generative image of self-organizing processes that persist.
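To give the notion of self-evidencing a formal anchor, here is a minimal sketch in the standard AIF formulation (the notation is assumed here, not taken from this text): a persisting system maintains a generative model $p(o, s)$ over observations $o$ and hidden states $s$, together with an approximate posterior belief $q(s)$, and perceives and acts so as to minimize the variational free energy

$$
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] - \ln p(o) \;\geq\; -\ln p(o).
$$

Since the KL term is non-negative, minimizing $F$ tightens a lower bound on the log evidence $\ln p(o)$ for the system's own model: persistence amounts to gathering evidence for oneself, which is what "self-evidencing" names. Abstraction appears here as the compressive choice of $q(s)$, a partial, situated summary of the world rather than the world itself.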
The main research questions driving this work are: is philosophy, conceived as vast predictive abstraction (understood under AIF as a process of projective self-evidencing), the **limit** of AI? Or, alternatively, could AI become the limit of philosophy? Secondly, what kinds of abstractions can this project help engineer in order to predict the _further development of abstractions_ in/of AI as mind _outside_ or _beyond_ the brain, particularly given the dialectic that AI has inherited its “basic” concepts from philosophy? And finally: considering that philosophy is part and parcel of the technocolonial-capitalist complex (particularly so in the context of AI), how should we assess its functional implications therein? The proposal made by this project is that, by presenting a series of novel concepts which reorient traditional (sometimes unnecessarily entrenched and monodimensional) ideas about the functions of mind, we can envision novel (sociosemantic) possibilities and thereby challenge current (AI) imaginaries. The technical domain this project explores is that of processes of (learning) **dialogue**, broadly construed, and how conceptual engineering occurs therein. The main conclusion of this project is that if _thought_ is what _matters_ (and its evolving existence is subject to inevitable modulations), and if thought transfers between systems through languages, then a **functional** analysis of _how_ thought forms and transfers should play a more prominent role in “AI” considerations.
These questions will be extensively explored by tracing the implications of the observations above, and by proposing a series of concepts which enable new perspectives on the mind as an artificial, open-ended process. These perspectives are **sedimented** because they are assumed to emerge from a material history; **semantic** because they contemplate meaning(s) and possible teleologies; **social** because they are dialogical and distributed; and **spectral** because they are transcendentally, existentially haunting: they imply futures past and beyond our own.
%%
[[Abstract notes]]