**Links to**: [[Choice]], [[Problem]], [[Capitalism]], [[Evolution]].

What? _How_ different? “Just do it.” I guess people understand that these are slogans. This is capitalism, and this is why we can’t have _good slogans_.

> “As suggested by Nietzsche’s imperative of ‘thinking otherwise’ – meaning thinking not just against one’s own time, its suffering and needs, but also by means of and through the other – a system inevitably includes other, often mutually exclusive positions, most of them also opposed to his own.”^[The citation continues: “‘We have no right to stand out _individually_ (_irgendworin einzeln zu sein_).’ Deleuze and Guattari have suggested that Nietzsche’s main concepts are inseparable from a theatre filled with various ‘conceptual personae’. These are not historical characters but the genealogically condensed ‘intercessors’ of our discourse, the real thinking subjects of enunciation or ‘thought-events’ by which the concepts come alive and become oriented. Conceptual personae are the powers of imagination that function as a compass in the determination of the undetermined concepts. For if the will to power together with the eternal return of difference is Nietzsche’s plane of immanence (and the critique of the will to truth is his image of thought), this plane is populated not only with repulsive concepts such as ressentiment and bad conscience, but also with the self-sufficient pretensions of all those who understand the will to power only from the point of view of nihilism.” S. van Tuinen 2023, p. 13.]

%% #todo Add footnotes missing from this doc: https://docs.google.com/document/d/1KO_qW0svX-BMOVwnorzoucADMZTegDUg9SlgPsrGozE/edit %%

# The Geist in the Machine: A Possibly Homeostatic Dialectics

S. J.

Forgetting is to thinking what perspective is to vision:

![](https://lh7-us.googleusercontent.com/8cPynB_lgxj0Au1UtXN_5OTetOug6xaxJq0aRxrCeVL65C-66IMWOtQTmtybG0UrV40cq41-I6tgnH8b7bAUx_Cy0n1RAz3aIio05hRIcS6ryDgfx2pTqydYorlIxxbz5oub1Mo2KTH0-Udf4iRS8g)

Fig. 1. D. Purves and R. Beau Lotto, 2002. The brown (top side) and orange (shaded side) squares on the cube are, for your computer (screen), the same color.
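The caption’s claim is checkable in code. Below is a minimal sketch assuming a local copy of the figure saved as `purves_lotto_cube.png`; the filename and the two pixel coordinates are hypothetical placeholders, to be replaced with points actually sampled from the two tiles.

```python
# Toy check of the Fig. 1 claim: on screen, the "brown" and "orange"
# tiles are literally the same pixel value. Filename and coordinates
# below are placeholders (assumptions), not values from the figure.
from PIL import Image

img = Image.open("purves_lotto_cube.png").convert("RGB")

brown_xy = (120, 80)    # hypothetical point on the "brown" top tile
orange_xy = (140, 210)  # hypothetical point on the "orange" shaded tile

print(img.getpixel(brown_xy))   # an (R, G, B) triple
print(img.getpixel(orange_xy))  # the same triple, if the claim holds
print(img.getpixel(brown_xy) == img.getpixel(orange_xy))
```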
> [...] ideas which seem at first glance to be obvious and simple, and which ought therefore to be universally credible once they have been articulated, are sometimes buoys marking out the stormy channels in deep intellectual seas.
>
> Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, 1976

> A picture is both a surface in its own right and a display of information about something else. The viewer cannot help but see both, yet this is a paradox, for the two kinds of awareness are discrepant.
>
> J. J. Gibson, The Ecological Approach to Visual Perception, 1979

> As our machines increasingly read and write without us, as our machines become more and more unreadable, so that seeing no longer guarantees knowing (if it ever did), we the so-called users are offered more to see, more to read. The computer—that most nonvisual and nontransparent device—has paradoxically fostered “visual culture” and “transparency.”
>
> Wendy Chun, On Software, or the Persistence of Visual Knowledge, 2005

> A computer, then, does not simply have an instrumental use in a given site of practice; the computer is frequently about that site in its very design. In this sense computing has been constituted as a kind of imperialism; it aims to reinvent virtually every other site of practice in its own image.
>
> Philip E. Agre, Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI, 2006

> It is always power that is dissimulated behind objectivity or rationality when the latter becomes the argument of authority.
>
> Isabelle Stengers, The Invention of Modern Science, 1993

> Who knows where Marx got this inheritance of the hold, from Aristotle denying his slave world or Kant talking to sailors or Hegel’s weird auto-eroticism or just being ugly and dark and fugitive. Like Zimmy says, precious angel, you know both our forefathers were slaves, which is not something to be ironic about. This feel is the hold that lets go (let’s go) again and again to dispossess us of ability, fill us with need, give us ability to fill need, this feel. We hear the godfather and the old mole calling us to become, in whatever years we have, philosophers of the feel. Love, S/F
>
> F. Moten & S. Harney, The Undercommons, 2013

### SPIRITUAL VERTIGO

The following text concerns itself with a number of tracings. These are accounts of concepts as they travel through different approaches to knowledge: objectivity, rationality, common sense. Their compilation in one sentence may seem broad and untenable, each concept deserving of its own history, but for the purposes of this text they all converge towards the same end: these concepts are usually employed to sustain discourses of legitimacy, particularly within modern technical practices. This text will explore how these concepts have traced certain discourses, in an absolutely subjective and limited manner, sustained primarily by a desire to understand the effects that these historical developments have had on the rise of contemporary “artificial intelligence,” and what the current situation might mean for the future. To deliver an entertaining read, a few thought experiments will be explored, and a mixed bag of authors will be called upon to examine conceptual movements.

The reasons behind the title of this piece, “the Geist in the Machine,” are multifaceted:

1. In German, Geist refers not only to what in English could be termed “spirit” but also to “mind,” to intellectuality. In English, as “ghost,” it can be traced as far back as the 14th century, referring to the transcendent realms of entities existing beyond the human;
2. Some of the work by Hegel has inspired many of the concepts developed hereunder. Principally, what this text inherits from Hegel is dialectical thinking, an understanding of how self-consciousness comes to be, the idea of a “rational community” possessed, so to speak, by a collective Geist (or spirit), and other details. Mostly, this text wants to point to Hegel as the driving force behind media philosophy. More on this later;
3. The “ghost in the machine” was originally coined by Gilbert Ryle to defy Cartesian mind-body dualism; Ryle’s own position fed into logical behaviorism, and Arthur Koestler later popularized the phrase in a book-length critique of the behaviorism of Skinner et al. Behaviorism was superseded by cybernetics, which, driven by wartime technologies (signal detection, motion tracking, self-regulation), opened up the space for conceiving of systems in terms of intentionality and prediction, which paved the way for AI;
4. In popular science fiction, the ghost in the machine is often employed to allude to anything from the idea that robots may (one day) harbor souls or human-level intelligence or consciousness, to the threat of such a development, to the idea that human beings are flesh-and-blood robots (mind-body dualism);
5. A ghost is not only something which haunts us but also something to which we tend (perhaps almost in the sense of the Freudian death drive, or a strange fascination for the uncanny). In this sense, existential mystery is the ghost that drives the pursuit of knowledge, the establishment of ‘facts’ one may grasp at, like straws. The ghost runs the machine, both literally and figuratively;
6. Finally, the title hints at the fact that this abstract unknown—and perhaps unknowable—lurks in all of our technological proceedings—from mathematics to the application of band-aids—and that it is something transparent, amorphous and essential, which cannot be punctured by a spear, put in a box or trapped by a net.

For an overarching framework, this paper follows critical technician/technical critic Philip E. Agre more than it follows any other thinker. In his paper “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI” Agre explains:

> Computers are representational artifacts, and the people who design them often start by constructing representations of the activities that are found in the sites where they will be used. This is the purpose of systems analysis, for example, and of the systematic mapping of conceptual entities and relationships in the early stages of database design. A computer, then, does not simply have an instrumental use in a given site of practice; the computer is frequently about that site in its very design. In this sense computing has been constituted as a kind of imperialism; it aims to reinvent virtually every other site of practice in its own image.

For Agre, critics need to become technicians, but more than that: technicians need to become critics. Back in 1997, in “Computing and Human Experience,” Agre developed an extended account of the implementation of automated systems in the late 20th century, and, by following the work of philosophers such as Martin Heidegger, but also Michel Foucault and Jacques Derrida, he delivers a powerful blast at the seemingly uncons(cient)cious behavior in the technical field of AI.

With respect to its methodology, this text also follows Agre in that “[t]he very notion of methodology [...] supposes that the investigator started out with a clear critical consciousness and purpose,” whereas, actually, the development of “consciousness and purpose” usually takes form “through a slow, painful, institutionally located, and historically specific process.” Hence the earlier reference to how highly subjective and limited this account will be. I believe philosophy should not become schematized in such a way that it reflects the methods of practically oriented approaches, because that does not leave much room for freedom of thought, let alone expression. The desire for a tabular-form philosophy is precisely what Agre would have argued against, and it is unfortunately this desire that can be observed across a multitude of practices which speak out to philosophy today: job interviews, grant applications, etc. (no surprise: schemes tend to appear wherever money is involved). A critique of schemes follows towards the end of the text, where they are understood within the larger history of “nets” (which cannot be cast over ghosts).

To introduce my topic, as I am guided by Agre, I will simply place his observations on rationality and mechanical models here below:
> Within academia, the early AI pioneers were largely engaged in a revolt against behaviorism. Behaviorism, in its day, had organized itself largely in reaction against the vague and unreproducible nature of introspectionist psychology. The metaphors provided by new technologies provided a means of placing mentalist psychology on a scientific basis, and a functionalist epistemology emerged to explain what it meant to offer mental mechanisms as explanations for experimental data. Although the participants in this intellectual movement experienced themselves as revolutionaries, mechanistic explanations of mind already had a long history, going back at least as far as Hobbes. Yet, in a curious twist, these mentalists acknowledged little inspiration from this tradition. Instead they reached back three hundred years to identify themselves with the philosophy of Descartes. Although Descartes’ defense of a specifically mental realm in a mechanistic universe was a useful symbol for mentalists engaged in polemics against behaviorism, the principal appeal of Descartes’ theory was not its ontological dualism, but rather the explanatory freedom that Descartes’ dualism afforded. The theorizing of later mechanists, from Hobbes to Locke to the associationists and reflex-arc theorists, was severely constrained by the limitations of their mechanical models. Descartes, on the other hand, could prescribe elaborate systems for the conduct of the mind without worrying how these systems might be realized as physical mechanisms. He intended his rules as normative prescriptions for rational thought; they were explanatory theories in the sense that they would accurately describe the thinking of anybody who was reasoning rationally. Although nobody has mechanized Descartes’ specific theory, the stored-program digital computer, along with the theoretical basis of formal language theory and problem-solving search and the philosophical basis of functionalism (Fodor 1968), provided the pioneers of AI with a vocabulary through which rule-based accounts of cognitive rationality could be rendered mechanical while also being meaningfully treated as mental phenomena, as opposed to physical ones.

Historically, human beings have tended to envision narrow, flat, incomplete, system-like universes in order to achieve specific goals. In order to gain a bird’s-eye-view understanding (=system) of specific developments in so-called Artificial Intelligence, and especially to better understand how to move forward (=goal), an engagement with philosophy is unavoidable. This is signaled not only by the fact that advancements in AI are reaching the point at which many ethical considerations need attention—from the employment of domestic robots to the creation of actually conscious and/or intelligent life—but also by events like the hiring of sci-fi writers by the French Agence de Recherche Nationale for the purpose of scientific speculation, or the hiring of ethicists by the Pentagon, IARPA, DARPA and comparable institutions.

Expanding the philosophical framework of questions concerning technology is not only a worthy intellectual endeavor; it is also a means of slowing things down. Following philosopher of science Isabelle Stengers, this text could not be more aligned with the commitment to slowing science (and technology) down. Things are breaking much faster than they can be fixed. In her 2018 book “Another Science is Possible: A Manifesto for Slow Science,” Stengers follows Thomas Kuhn in asserting that the true strength of any particular scientific paradigm lies in its invisibility.
Stengers stresses that such invisibility also results in “sleepwalking researchers” who, impeded by the logic of the current imperative—which is all about “gaining time, competition and speed”—decipher the world around them “in terms of opportunities.” While the qualifier “sleepwalking” might seem harsh and pejorative, one can observe this sleepy logic at work in many systems that guide our current engagement with the world, not only science. Free-market competition imagines simple actors and delivers counterproductive effects; democratic elections motivate politicians to adopt narratives that do not necessarily reflect their stances but are mere ‘tricks’ implemented for the sake of getting re-elected; even democracy at large falters: Brexit voters did not even know what they were voting for, but vote they did! These machines are clearly overheating.

Earlier, in “The Invention of Modern Science,” Stengers argues:

> If technoscience celebrates the terrible dynamic that makes the rational communicate with the irrational, the imperative to control and calculate with the establishment of an autonomous system, uncontrollable from the inside, which makes power and the absence of meaning coincide, then scientists, technicians, and experts are not subject to questioning, because, like everyone else, they are waiting for limits to the power of expansion of a dynamic that defines them beyond their intentions and their myths.

The conclusion of this text will be that the risk in question, what is to be analyzed, is the possibility of losing a specific type of relationship with the passage of time. The machine’s “immediate” will become faster than the human’s, and because of this the human will not even realize it, like the proverbial frog in a slowly warming bath. Thus the problematic landscape sketched here does not attempt to produce a system for capturing, but a diagnosis for guiding.

Taking Hegel as an ancestral thinker of systems, media and transformations, one may engage with the promise of artificial intelligence in a way that veers off from the current technocratic paradigm—which imagines technologies as mere aids and extensions. Hegel not only inspired many of the thinkers who shaped the 20th century’s understanding of technology (from Marx to Heidegger to Adorno and Horkheimer), but he could also be considered the binary-opposites thinker par excellence. Hegel’s pictures of consciousness and self-consciousness can lead one down unexpected paths of recognitive self-discovery. An overview of contemporary thinkers who employ Hegel or Hegelian motifs and variants will also ensue.

Considering what projects such as Google’s AutoML are trying to accomplish—simply feeding data to a system which then informs the user of the best possible function for processing that data—one could gather that the history of coding has been/will be a short-lived one in comparison to the history of non-digital representation. This means—and it is not just the doomsday researchers who express this, but also the tech-biased researchers themselves, from Lex Fridman at MIT to Yann LeCun at Facebook—that the question concerning AI will more and more become, indeed, a question of asking the right questions. This is where philosophy enters the stage.
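To make that pattern concrete, here is a minimal sketch of automated model selection using scikit-learn. It illustrates the general idea the text attributes to AutoML-style tools, not Google’s actual system; the dataset and the candidate models are arbitrary stand-ins, and real systems search far larger spaces of architectures and hyperparameters.

```python
# A toy version of the "feed in data, get back the best function"
# pattern: try several candidate models and keep whichever scores
# best under cross-validation. Candidates and dataset are arbitrary.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

# "The best possible function for processing the data," selected by
# nothing more mysterious than mean cross-validated accuracy.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("selected:", best)
```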
Contemporary machine learning cannot read the illusion which introduces this text, and if it keeps going down the same path, it probably never will.

In the realm of technical artifacts and their handbooks, one (used to) often encounter the very metaphoric “MANUAL OVERRIDE” section, which dealt with instructions for manually overriding a system once it malfunctioned. For dealing with “mere machinery,” it seems, we have always been aware of the need for a big, red STOP button: one designed into systems before running into a problem, before needing to suffer it and patch it (a minimal sketch of such a mechanism closes this section). But when it comes to technological developments that have multiple applications, that are so-called disruptive, cross-platform—for the sake of a distinction within just this sentence, let us say techniques instead of finite machines—and that are not easily delineable, our cognitive efforts fail to imagine the future. This complication does not stop enthusiasts from building it, however. We have always already been stuck with an inability to predict events with vast ramifications.

As Heidegger said: “If we come upon three apples on the table, we recognize that there are three of them. But the number three, threeness, we already know.” What must also not be taken for granted is the fact that numerical intuition gives out beyond a certain degree of numerosity. For Hegel this numerosity is in fact an emergent quality which leads us to an experience other than that of quantity. This is a very basic fact about human engagement with the world that mechanistic understandings seem to leave behind. A grain of sand is not sandy. The collectively intuitive realization of this is, perhaps, what has led to the craze over “big data.” Computer scientist Pedro Domingos says it best: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

Proceeding with caution, as an agent that gives reasons for causes and effects, one must limit engagements with the world by way of descriptions, borders, manuals and guidelines. However, once more, it is funny but true that guidelines for the ‘appropriate’ use of technologies such as robotics or so-called artificial intelligence are not yet in place. There is no stop button; there is not even a manual. The few proposals that do exist—such as the one by the Foundation for Responsible Robotics, which claims it wants to do for robotics what Fairtrade did for coffee, unaware, perhaps, of the many failures of the Fairtrade certification, or the charter “strategy” of OpenAI, which, among other very dubious statements, declares that “if a value-aligned, safety-conscious project comes close to building AGI [artificial general intelligence] before we do, we commit to stop competing with and start assisting this project”—appeal to notions like safety and harm without reflecting much on the ambiguity of these terms. They do, indeed, seem somnolent.

Kentaro Toyama’s ‘slogan’ that “technology only augments what we already have” seems to fit the idea that humanity has been somnolent and belligerent from the start. Perhaps instead of trying to “fairtrade” AI, designers of technology could follow scholars like Cynthia Dwork, or Batya Friedman and Helen Nissenbaum, who propose radical new frameworks for value-sensitive design. An ethical reconnaissance of AI is needed because when the fan hits the shit it will be too late. Some may argue it already is.
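As promised above, here is a minimal sketch of the kind of “big red STOP button” the text finds missing: an autonomous loop wrapped in a human-operable halt. The file-on-disk convention (`MANUAL_OVERRIDE`) is an assumption made purely for illustration; the architectural point is that the override exists before the system runs, not after it misbehaves.

```python
# A watchdog wrapper around an automated process. A human halts the
# loop by creating a file named MANUAL_OVERRIDE in the working
# directory (an arbitrary, hypothetical convention for this sketch).
import os
import time

STOP_FILE = "MANUAL_OVERRIDE"  # touch this file to engage the override

def automated_step() -> None:
    """Stand-in for whatever the system does autonomously."""
    print("system acting...")

def run_with_override() -> None:
    while True:
        if os.path.exists(STOP_FILE):
            print("manual override engaged; halting.")
            break
        automated_step()
        time.sleep(1.0)

if __name__ == "__main__":
    run_with_override()
```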
In her 2019 book “Morphing Intelligence,” philosopher Catherine Malabou argues that the steady implementation of machine learning in human affairs does offer a chance for thinking towards a new democratic engagement with the world, in which both machine and human actors reap the benefits of their enmeshment as a “collective intelligence.” For Malabou “the burning question today is humanity’s possible loss of control to machines,” which one will easily agree with, as it is a rather unremarkable insight offered in what is otherwise a historical overview of the different attempts to systematically grasp the concept of intelligence. We should worry about the concept and phenomenon of “IQ,” as she does, since respected and influential scientists like physicist Stephen Hsu claim to have found the genetic marker for IQ and propose that we start selectively breeding for ‘smarter’ babies. At the 2020 EU Data Summit in Estonia, after his public talk, I asked him whether he did not consider this a very dangerous proposal to be making publicly, given that inequality is already a huge problem and would only be exacerbated by such an initiative. He told me: “first you get your facts, and then you can do your ethics.” If this does not raise some eyebrows, I don’t know what will.

### Footnotes