**Links to**: [[03 Semantic noise]], [[Meta-language]], [[Language-modulating]], [[Modulations]], [[Script]], [[02 Introduction to the Poltergeist]], [[Speech]], [[Language]], [[Power]], [[Word]], [[Hegel]], [[Kripkenstein]], [[06 Principle of Sufficient Interest]], [[E Pointing]], [[Language models]], [[The Last Question]], [[Entropy]], [[Universe]], [[Teleology]], [[Question]], [[What is a question]], [[Principle of Sufficient Reason]], [[Semantic attractor]], [[Concept]], [[Attractor]], [[Score]], [[String]], [[Narrative]], [[Basin of attraction]], [[Cannalization]], [[Representation]], [[Autosemeiosis]], [[NLP]], [[Semantics]], [[Syntax]], [[Linguistics]], [[NLU]], [[Noise]], [[Probability]], [[Vector]], [[Vector space model]], [[Calculation]], [[Neural nets]], [[Evolution]], [[Degeneracy]], [[Redundancy]], [[05 Prediction]].

### [[Postulate]]: Natural language processing has become natural language programming. Hence our interest in language-modulation.

>In _The Human Condition_, published in 1958, Hannah Arendt is concerned with a world in which “speech has lost its power” to a “language of mathematical symbols which . . . in no way can be translated back into speech.”
>
>Halpern, 2015, p. 18.

Currently, the emergence of natural language-sensitive systems which are prompt-initiated (so-called _Generative AI_) is sedimenting as a new layer between existing language textures, transforming our relationship to expression and creation. The power of the _word_ has been a salient human preoccupation, and now we finally have magically generative acts: after much ado, _abracadabra_: from the way search engines guide our access to information to the extraordinary capacities of _text-to-anything_ systems which turn prompts into worlds.
This begs the questions, more than ever, of whether we are augmenting or giving up fundamental aspects of brain-based cognition: from the double-edged *pharmakon* (Plato through Derrida, Stiegler), to “Is Google making us **stoopid**?” (Carr 2008, my stupid adaptation in bold), to literal or metaphorical brain-rot. Another question is how to understand thought across generations, when whatever it is we call “thinking” trickles through different frameworks as technology changes. What becomes increasingly clear is that the challenge lies in (re)conceptualizing our meta-linguistic frameworks: understanding fundamental features of how language operates, circulates, and generates meaning _in general_. This meta-linguistic shift demands crucial philosophical attention as we enter an era where directive, inquisitive and generative articulations are no longer just human-interpreted but are prying open a new dimension, executed through increasingly sophisticated systems. Language has arguably always been the prying lever of reality, if we cast a wide net over the concept: language as _pattern_—the basic texture, morphology, legibility of things—and/or _code_—the transductive capacities of said textures when combined.^[The distinction between the two is hard to draw, if we want to understand all being as becoming, but perception has a predictive predilection for imagining things as possibly static (from object permanence all the way to these letters remaining legible right here right now). Patterns would then be the static things which are actually always code, but perception stabilizes them as patterns _for the time being_, to be able to say something about them: to be able to _turn them into code_.]
In his influential report “Augmenting Human Intellect: A Conceptual Framework” (1962), Douglas Engelbart framed human culture as the evolved augmentation of already-existing capacities to organize means for the comprehension of complexity, in an effort to problem-solve across domains. He suggested a framework not dissimilar to what we have just presented above: reality-modulating capabilities are defined in terms of four basic classes: artifacts (objects which manipulate objects or symbols); language (conceptual structuring, manipulation through concepts); methodology (procedures in problem-solving); and training (transference across human beings of all of the above).^[The original, for additional detail: “1. Artifacts--physical objects designed to provide for human comfort, for the manipulation of things or materials, and for the manipulation of symbols. 2. Language--the way in which the individual parcels out the picture of his world into the concepts that his mind uses to model that world, and the symbols that he attaches to those concepts and uses in consciously manipulating the concepts ("thinking"). 3. Methodology--the methods, procedures, strategies, etc., with which an individual organizes his goal-centered (problem-solving) activity. 4. Training--the conditioning needed by the human being to bring his skills in using Means 1, 2, and 3 to the point where they are operationally effective. (Engelbart, 1962, p. 
8)] Scaling this to our preferred level of abstraction, where the basic common denominator “language” is the prime mover: if all is (symbolic) manipulation (i.e., artifacts and language) by way of procedure (i.e., methodology and training), then understanding the influence of systems on other systems depends on our understanding of how dynamic phenomena are perceptually stabilized in order to metabolize into different phenomena (e.g., how an object in the distance can become food, how a thought can become a book, how a hormone can lead to another human being, how a prompt can become a _metaprompt_, etc.). This is not dissimilar to saying that deep learning reveals deep structure (LeCun, Bengio & Hinton, 2015).^[“Deep neural networks exploit the property that many natural signals are compositional hierarchies, in which higher-level features are obtained by composing lower-level ones. In images, local combinations of edges form motifs, motifs assemble into parts, and parts form objects. Similar hierarchies exist in speech and text from sounds to phones, phonemes, syllables, words and sentences. The pooling allows representations to vary very little when elements in the previous layer vary in position and appearance.” (p. 439).] The labels we assign to phenomena exist in an effort to modulate the processes behind them; this is rather obvious in the context of prompting and meta-prompting: current trends reveal how preference tends towards generalizability and domain-unspecificity, rather than diversifying complexity. One prompt to rule them all. The risks of hegemonic homogenization are important to note. The drive guiding much current research is an attempt to get at (some of the basic) patterns guiding language models: how (if at all) is generalization happening within them?
In the context of this project, the desire is to speculate (to witness and share patterns with vast future-casting possibilities, for now), but we can also point to empirical cases where the impetus is comparable and the efforts are in experimental execution: one of the most famous being François Chollet’s _Abstraction and Reasoning Challenge_ (ARC) (Chollet, 2019; Chollet et al., 2024). Or claims such as: “Systematic generalization characterizes human language and thought, but it remains a challenge for modern AI systems. ... Advances in machine systematic generalization could facilitate improvements in learning efficiency, robustness, and human-computer interaction” (Ruis et al., 2020, p. 9). This impetus follows from meaning-agnostic measures of information, famously proposed by Shannon and Weaver, or as presented in Engelbart: “When a man writes prose text (a reasonably high-order process), he makes use of many processes as sub-processes that are common to other high-order processes.” (Engelbart 1962, p. 9). Casting all of this as a wide abstracting net also allows one to frame molecular assemblages (see: [[Assembly and assemblage]]), neuroendocrinological feedback loops (see: [[07 Phenomenology of Sick Spirit]]) and complex mathematical reasoning (see: [[Negation]], [[04 Concepts as pre-dictions]]) as operating under the same principles: patterns which are or become *code*: functions. That is: textures we are able to witness, textures which are metabolized into new textures. What is “useful” or “interesting” about this is that we are able to become attuned to the tuning processes of language(s) across domains: how an everyday conversation has complex politics, deep metaphysics, and vast effects on the future (see: [[Modulations]], [[Language-modulating]], [[E Pointing]], [[Pronoun]]).
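The meaning-agnostic measure of information invoked above can be stated explicitly. In Shannon's formulation (a standard statement, not drawn from this note), the entropy of a source depends only on the probability distribution of its symbols, never on what they mean:

$$H(X) = -\sum_{x \in \mathcal{X}} p(x)\,\log_2 p(x)$$

Two strings with the same symbol statistics carry the same information in this sense, whatever their semantics; it is precisely this indifference to meaning that licenses treating patterns as code across domains (see: [[Entropy]], [[Probability]]).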
In systems of human language recombination, one can imagine the passage from mere differential gesture (Derrida), through shared attention and cultural coordination (Tomasello), to layered, complex negotiations of reasons (Sellars, Dennett). How patterns inevitably yield novel patterns is a question of recombination and creativity; of invention and/or discovery. The transformation of natural language processing into natural language programming marks a shift in the relationship between expression and reality-formation. Language, commonly understood as a medium of communication and representation, is now truly revealed in all its performative glory. This transformation recodes/repatterns the traditional gap between signifier and signified into a dynamic relationship between desire and its execution. Or, as Murray Shanahan puts it: “prompt engineering ... will remain relevant until we have better models of the relationship between what we say and what we want” (Shanahan, 2023, “Talking about large language models”, p. 4). This recasting can be understood as functionally paradoxical: as we increasingly articulate ourselves through these generative systems, we seem to become increasingly perplexed at our lack of understanding of what it was we thought we were doing before (otherwise known as the moving goalposts; McCorduck, 2004). This recursive relationship reveals that as we adapt our modes of expression to be more effectively processed by GenAI systems, and as these systems simultaneously shape how we think and communicate, we cannot but grasp at straws in a situation where patterns couple to patterns without clear orientation. For this project, this brings us back to analyzing the sociocultural and political dimensions of desire (see: [[09 C is for Communism, and Constraint|09 C is for Communism, and Constraint]] and [[06 Principle of Sufficient Interest]]).
As the future of thought and its expression becomes increasingly constrained by how it thinks itself via thought as technically implemented, AI is not just a speculative device (a mirror; Vallor, 2024) but also carries the risk of creating a layer of fog impeding future-projection: “...as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.” (Carr, 2008, p. 56). Not sure.

**See also**: [[Pronoun]], [[Music as permanent revolution]].

%%
https://arxiv.org/pdf/2310.04444 What’s the Magic Word? A Control Theory of LLM Prompting

____________________

Dennett in The Self as a Narrative Center of Gravity: "First of all, I want to imagine something some of you may think incredible: a novel-writing machine. We can suppose it is a product of artificial intelligence research, a computer that has been designed or programmed to write novels. But it has not been designed to write any particular novel. We can suppose (if it helps) that it has been given a great stock of whatever information it might need, and some partially random and hence unpredictable ways of starting the seed of a story going, and building upon it. Now imagine that the designers are sitting back, wondering what kind of novel their creation is going to write. They turn the thing on and after a while the high-speed printer begins to go clickety-clack and out comes the first sentence. "Call me Gilbert," it says. What follows is the apparent autobiography of some fictional Gilbert. Now Gilbert is a fictional, created self but its creator is no self. Of course there were human designers who designed the machine, but they didn't design Gilbert. Gilbert is a product of a design or invention process in which there aren't any selves at all. That is, I am stipulating that this is not a conscious machine, not a "thinker." It is a dumb machine, but it does have the power to write a passable novel.
(IF you think this is strictly impossible I can only challenge you to show why you think this must be so, and invite you read on; in the end you may not have an interest in defending such a precarious impossibility-claim.)" (1986, pp. 6-7). The point he's making is not about prompt-engineering, but about (fictional) selves. What is interesting is that, while current language models _can_ spew out random stuff without inputs, what we are actually most interested in _are_ inputs. We want delineation, constraints, because we are oriented towards a specific outcome. Unfortunately, only certain thinkers want truly 'random' stuff. Mostly artists, I'm afraid.