**Links to**: [[Inclination]], [[Disposition]], [[Belief]], [[10 Bias, or Falling into Place]], [[03 Semantic noise]], [[Reference]], [[Modulations]]. The concept of _bias_ is treated in the entry “Falling into Place.”

### [[Question]]: Is bias just a problem of understanding initial conditions and attractors, or do we mean-intend-intuit something else by it?

### [[Postulate]]: Even if it is, we are each stuck at different attractors when trying to understand this.

_The problem of bias in AI_: calls for attention to bias in the development of non-human cognitive systems are necessary and effective, yet they often tacitly smuggle in an assumption of possible objectivity, neutrality, or common sense (a specific _image of thought_). Attention to the inevitability of bias (to embeddedness, provenance, histories, ontological models, implicit or explicit desired outcomes, etc.) is more important than calls to mitigate it. Attempts to computationally model differently biased systems and their possible effects when coupled to other systems are paramount here.

_Short presentation; numbering corresponds to slides, which are missing here._

01 Ambiguity is generative (i.e., all-encompassing, infinite) _and_ reductive (i.e., opaque, vague), depending on how we choose to approach it. When our chances of distinguishing between two things are split 50/50, each with probability .5, both options are equally likely, or _equiprobable_. At the same time, this means we gain no valuably distinguishing difference, no _information_.

02 Here, even though there is no motion in this image {rotating snakes}, your visual system interprets the scene as one very much in motion.
There is something brain-body systems do to make meaning emerge out of ambiguity, even (sometimes _especially_) at a lower, “pre-cognitive” level like this one, where you cannot modulate (note that I avoid “choose” or “decide”) whether you interpret it as motion or not.

03 Of course, this is also evident at the level of words and concepts: as we look deeper into the structures of language, things get increasingly complex (eternal regresses, tautologies, etc.) and irreparably _ambiguous_.

04 “The trophy would not fit in the suitcase because it was too big.” What was too big? To get around this ambiguity, we draw meaning from past experiences with suitcases.

05 The same holds even for words that seem to represent ‘stable’ objects: take pretty much any word, deconstruct it, trace its etymology and analyze it, and you will find spatiotemporal (_fundamentally_ gravitational) metaphors at its origins. And what is a metaphor, or by and large any analogy, if not a pair of things defining each other: something rather ambivalent, indeed.

06 So, to return to the start: if anything has defined the recent discourse on the organization and representation of data, it is the concept of _bias_, and how to avoid it. At large, in computer science and engineering, ambivalence has always been that which needs to be avoided, precisely because we tend to prefer _organization_, because “information” is treated as a mere pattern, considered meaning-agnostic (as in information theory).

07 But the context makes the information, right? And the context is that of “bias”: whatever makes a system be organized as a system, unavoidably so. And while it is good to be aware of this, it seems most efforts cannot look beyond the idea of simply neutralizing bias, toward an objective, universalizing realm that should apply to all.
08 Political awareness is important: it is a good thing that many information-organization debates in AI and data science have become deeply attuned to the concept of bias as something to be avoided. But, again, these debates more often than not speak from a perspective “biased” by the idea of objectivity, and thus universality.

09 Bias should instead imply a degree of seeing _as_; it should be embraced, not bypassed, so that we may modulate it. Nothing can possibly exist in isolation; as Juarrero holds, _context changes everything_ (2023). Perhaps we are biased to think things can be context-free because we are able to represent them like this, where isolation seems realistic and attainable {diagrams}.

10 To give one very simple example of bias: {spoon-feeding robot}.

11 Perceptual biases operate at such a fundamental somatic level that we might not even recognize them. You may be surprised to find that a neural network trained to predict the next few frames of videos actually predicted the following image as being in motion {rotating snakes}.

12 Even something as fundamental as this effect gets transferred onto our designs without us realizing how. We should acknowledge bias affirmatively: doing so will increase awareness of our differences, so that we can actually consider each other in more depth and, most importantly, with attention to and care for assumptions, provenances, directions, etc.
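Slide 01's equiprobability point can be grounded in Shannon's binary entropy, which is maximal exactly when two options are equally likely. A minimal sketch (the function name `binary_entropy` and the entropy framing are my additions, not part of the slides):

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy, in bits, of a binary choice with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no surprise at all
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# At p = 0.5 the two options are equiprobable: uncertainty is maximal,
# and no distinguishing difference -- no information -- is available in advance.
print(binary_entropy(0.5))   # 1.0 bit: maximal ambiguity
print(binary_entropy(0.99))  # ~0.08 bits: nearly certain, little ambiguity left
```

On this reading, the "generative" face of ambiguity is simply maximal entropy: every outcome remains open, and precisely for that reason nothing is yet distinguished.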