**Links to**: [[Language models]], [[The Last Question]], [[Entropy]], [[Universe]], [[Teleology]], [[000 Question]], [[What is a question]], [[Principle of Sufficient Reason]], [[Semantic attractor]], [[Concept]], [[Attractor]], [[Score]], [[Script]], [[String]], [[Narrative]], [[Basin of attraction]], [[Cannalization]], [[Representation]], [[Autosemeiosis]], [[NLP]], [[Semantics]], [[Syntax]], [[Linguistics]], [[NLU]], [[Noise]], [[Probability]], [[Vector]], [[Vector space model]], [[Calculation]], [[Neural nets]], [[Evolution]], [[Degeneracy]], [[Redundancy]], [[Kripkenstein]], [[002.1 Modulations]].

# 𝘗𝘳𝘰𝘮𝘱𝘵 𝘦𝘯𝘨𝘪𝘯𝘦𝘦𝘳𝘪𝘯𝘨

As mentioned in 2020, the future will be guided by prompts (strings, scores, etc.). See: [[002 Semantic noise]].

>“Dialogue is just one application of LLMs that can be facilitated by the judicious use of prompt prefixes. In a similar way, LLMs can be adapted to perform numerous tasks without further training (Brown et al., 2020). This has led to a whole new category of AI research, namely prompt engineering, which will remain relevant until we have better models of the relationship between what we say and what we want.”
>
>Murray Shanahan, “Talking about large language models”, p. 4, 2023.

%% ____________________

Dennett, in “The Self as a Center of Narrative Gravity”:

"First of all, I want to imagine something some of you may think incredible: a novel-writing machine. We can suppose it is a product of artificial intelligence research, a computer that has been designed or programmed to write novels. But it has not been designed to write any particular novel. We can suppose (if it helps) that it has been given a great stock of whatever information it might need, and some partially random and hence unpredictable ways of starting the seed of a story going, and building upon it. Now imagine that the designers are sitting back, wondering what kind of novel their creation is going to write.
They turn the thing on and after a while the high-speed printer begins to go clickety-clack and out comes the first sentence. "Call me Gilbert," it says. What follows is the apparent autobiography of some fictional Gilbert. Now Gilbert is a fictional, created self but its creator is no self. Of course there were human designers who designed the machine, but they didn't design Gilbert. Gilbert is a product of a design or invention process in which there aren't any selves at all. That is, I am stipulating that this is not a conscious machine, not a "thinker." It is a dumb machine, but it does have the power to write a passable novel. (If you think this is strictly impossible I can only challenge you to show why you think this must be so, and invite you to read on; in the end you may not have an interest in defending such a precarious impossibility-claim.)" (1986, pp. 6-7).

The point he is making is not about prompt engineering but about (fictional) selves. What is interesting is that, while current language models _can_ spew out random stuff without inputs, what we are actually most interested in _are_ the inputs. We want delineation, constraints, because we are aiming at a specific outcome. Unfortunately, only certain thinkers want truly 'random' stuff. Mostly artists, I'm afraid.
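Shanahan's point about "prompt prefixes" can be made concrete: adapting an LLM to a task "without further training" often amounts to nothing more than string construction, prepending an instruction and a few worked examples (few-shot prompting, Brown et al., 2020) to constrain what the model is likely to continue with. A minimal sketch; the `make_prompt` helper and the translation task are my own illustration, not Shanahan's:

```python
# Prompt engineering as string construction: an instruction plus a few
# input/output examples form a prefix that delineates the desired outcome.

def make_prompt(instruction, examples, query):
    """Build a few-shot prompt prefix for a completion-style LLM."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")  # blank line between examples
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model is expected to continue from here
    return "\n".join(lines)

prompt = make_prompt(
    instruction="Translate English to French.",
    examples=[("cheese", "fromage"), ("dog", "chien")],
    query="cat",
)
print(prompt)
```

The prefix is the delineation: it narrows the space of plausible continuations toward the one we want, which is precisely the "relationship between what we say and what we want" that Shanahan says we lack good models of.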