**Links to**: [[Structural coupling]], [[Agent]], [[Model]], [[Abstraction]], [[Self]], [[Subject]], [[Behavior]], [[Freedom]], [[Control]], [[Volition]], [[Decision]], [[Free will]], [[Orientation]], [[10 Bias, or Falling into Place]].

Please see notes in [[Agent]] or [[Xpectator]] for different accounts of agency. We consider _agency_ a problematic conceptual proposal: most of the time it seems interchangeable with, or comes in to play the role of, [[Free will]]^[Or homunculus, or a prime (and sometimes unmoved) mover.] in technoscientific times, but *pretends* not to. Causal efficacy is a thing, for sure. But if *agents* have *agency*, then we ought to expand the category of _agent_ to include much more than it usually does.

In the simplest of presentations, this project would like to describe agency as: **the term we use to assign *attentional* priority to an observation made, where the priority is to establish causal explanations and/or explanatory links in the interactions between two systems**. The system to which we allocate attention is the one with “agency.” In this way: one ball pushes another, the one pushing has agency. The human exerting an effect on their environment has agency. And so on. But, obviously, telling these systems apart is complex, and things are not necessarily causal or linear: where agency is to be endowed depends very much on the goals of the observing agents. All these proposals are hypotheses about how things are coupled and influence each other. An observed Higgs boson has tremendous agency.

The definition of agency proposed by AI researchers at DeepMind, Oxford and Imperial College London does not address the coupling and influence dynamics which emerge from the attention issues highlighted above, but at least they propose the (according to the authors) first formal causal definition of agents: **[a]gents are systems that _would_ adapt their policy if their actions influenced the world in a different way** (Kenton et al., 2022, our emphasis in italics). Citing Dennett’s intentional stance (1987), they highlight how agentic systems “are moved by reasons ... the reason that an agent chooses a particular action is that it ‘expects it’ to precipitate a certain outcome which the agent finds desirable” (p. 1). Their description is experimentally very useful: they are able to produce and evaluate graph representations of ideal “agents” progressing through causal events. Since we cannot access other minds, we cannot know how, or whether, the things we interact with will adapt their policies. We chunk, or anthropomorphize, our interest to scales parsable by human agents.

The authors cite earlier characterizations of agents:

- In the intentional stance: “an agent’s behaviour can be usefully understood as trying to optimise an objective” (Dennett, 1987).
- In cybernetics: “an agent’s behaviour adapts to achieve an objective” (e.g. Ashby, 1956; Wiener, 1961).
- In decision theory (game theory and economics): “an agent selects a policy to optimise an objective” (here they cite several authors).
- And: “[a]gents are ways for the future to influence the past (via the agent’s model of the future)” (Garrabrant, 2021; von Foerster et al., 1951).

We are most in agreement with the simplistic presentation of agents as _ways_ for the future to influence the past: if the systems we are interested in house possible generative models (see: [[Generative model]]), then this sentence makes sense.
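To make that reading a bit more tangible, here is a minimal sketch, assuming a system whose “model of the future” is a single scalar expectation updated by a delta rule. The class name `GenerativeAgent`, the action threshold, and the learning rate are our own toy inventions, not Garrabrant’s or anyone else’s published formalism; the point is only the loop in which the modelled future steers present action while incoming surprise rewrites the model the past had settled.

```python
class GenerativeAgent:
    """Toy sketch: a one-scalar 'model of the future', delta-rule updated."""

    def __init__(self, expected_reward: float = 0.5, lr: float = 0.3):
        self.expected_reward = expected_reward  # the modelled future
        self.lr = lr

    def act(self) -> str:
        # The expected future steers the present action.
        return "engage" if self.expected_reward > 0.4 else "withdraw"

    def update(self, observed_reward: float) -> None:
        # Incoming surprise (prediction error) rewrites what past
        # experience had taught: a "way" for the anticipated future
        # to reach back into the learned past.
        self.expected_reward += self.lr * (observed_reward - self.expected_reward)


agent = GenerativeAgent()
for reward in [0.0, 0.0, 0.0]:  # the world stops paying off
    print(agent.act(), round(agent.expected_reward, 2))
    agent.update(reward)
print(agent.act())  # expectation, hence behaviour, has shifted
```

The arithmetic is beside the point; what matters is that the only route from past learning to present action runs through the model of the future.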
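Kenton et al.’s definition itself hinges on a counterfactual probe, and a toy version of that probe can be written down, though the paper’s actual machinery is causal, graph-based discovery, not this. In the hedged sketch below, every name (`Mechanism`, `adaptive_agent`, `fixed_system`, `would_adapt`) is our own illustration: we intervene on how actions influence the world and check whether the policy changes.

```python
from typing import Callable

Mechanism = Callable[[str], float]  # how an action influences the world

def adaptive_agent(world: Mechanism) -> str:
    """Chooses whichever action its model of the mechanism favours."""
    return max(["left", "right"], key=world)

def fixed_system(world: Mechanism) -> str:
    """Emits the same action no matter how actions influence the world."""
    return "left"

def would_adapt(system: Callable[[Mechanism], str]) -> bool:
    """The counterfactual probe: intervene on the mechanism, watch the policy."""
    actual = lambda a: 1.0 if a == "left" else 0.0    # 'left' pays off
    altered = lambda a: 1.0 if a == "right" else 0.0  # intervention: now 'right' does
    return system(actual) != system(altered)

print(would_adapt(adaptive_agent))  # True: it would adapt, so an "agent" here
print(would_adapt(fixed_system))    # False: mere mechanism, on this test
```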
Incoming uncertainty changes the past: it changes the learning, adaptive generative model. In any case, take note of the “would” modality in Kenton et al.’s proposal, the counterfactual probed in the sketch above: in order for such conditionals to be examined, we need another agent. The problems inherent to the idea of agency therefore remain psychosocial, perspectival (see: [[Vantagepointillism]] and [[Xpectator]]). All our (toy) examples, even when aiming for the cleanest of formal isolations—from liars to skeptics to demons to prisoners to chess to Merlin and Arthur to Alice and Bob, etc.—seem to depend on (partial) observers to tell the tale.

A presentation of these problems in delineating social agency and individual-collective dialectics is succinctly stated by Lucy Suchman (2023, p. 26). We are in full agreement:

> ... the concept of autonomy ... is very much tied to [17th century Enlightenment ideas] of the individual as being self-determined and independent. ... it carries with it that kind of valorization of the individual.
>
> Things get complicated when the idea of self-determination, for example, gets applied to collectives. We have autonomous universities, and we have the Zapatistas in Chiapas, Mexico, who are struggling for self-determination for themselves as a people, not as individuals. ... the shift to collectives takes us into the realm of relations, and then to the reconceptualization of autonomy. ... the idea of autonomy and independence is not about capacities that are inherent in the individual actor—or by that reasoning in the individual device or machine. Rather, our capacities for action come out of our relations, both with each other interactionally and with the material circumstances in which we act, the environments of our action. ... The problem is not that we as people lack abilities, but that we live in normatively configured environments that disable us. [A] relational approach to thinking about autonomy [is] incredibly important.

These power dynamics are treated in more detail in [[11 Post-Control Script-Societies]].

### Motivated coupling

Some experiments have shown how the *effects* of a sense of exertion of control motivate xpectators to (continue to) engage in purposeful action (Penton et al., 2018). Interestingly, all these words lead nowhere; they bottom out at things we cannot define: how do we understand control? Motivation? Purpose? Etc. Desires are opaque, and chasing them takes us to their evolutionary foundations, or similar speculations. Difficult questions.

See also: [[Structural coupling]]. More on coupling and system-distinction in: [[Markov blanket]].

This entry will continue in the future.

### Footnotes