**Links to**: [[Learning]], [[06 Principle of Sufficient Interest]], [[Cuts from E Principle of Sufficient Interest]], [[Cognition]], [[Adaptation]], [[Evolution]], [[Education]].
### [[Postulate]]: Experiencing _as_.
If there’s one thing I’ve (really appreciated having) learned (only as I turned 30 or so), it is that learning how to learn is a very valuable process. Having metalearned versus _not_ having done so is like cooking without a blindfold versus with one. I feel I became aware of this really late.
But then, I also haven’t experienced many situations in which I was taught how to learn. It is presumed that children know how to learn and learn how to know (_Turing take the wheel_). And everyone acts _as if_ it’s business as usual. But the world is mad.
It has also seemed to me, along the (meta)learning journey, that children get taught most facts about the universe as if they were unchanging. This contradicts the basic principle behind our supposed success as a species: eternal adaptation by ever more innovative (and often contradictory) means. Perhaps one of the first things they should tell you, when you’re just about starting to speak, is: “_some_ aspects of crude matter look as if they are unchanging, and persist, somehow. Everything else is up for grabs.”
We seem to seek explanation by exploiting the behavior of a (semi-)witnessed phenomenon under the predictability of another witnessed phenomenon, which implies yet another phenomenon, and another, and another, and eventually an assumed regularity over spacetime, hence predictability. _Hume take the wheel._ (See also: [[05 Prediction]] and [[Predicate]]).
But, again, why are concepts such as regularity or repeatability transcendental biases in scientific parlance and performance? If no two things are the same (see [[Equivalence]], [[Difference]]), is the only “safe” bet to continue to assume that this is so, in order to control specific, limited aspects of the phenomena that happen to us? _Pascal take the wheel._
[[Sameness]] permeates everything. Difference compels.
This note continues, dealing with PP and learning, but the draft version cannot be published in its current form; it’s a mess. More soon.
%%
[[ErasmusX meeting]] presentation, March 2024
1. What, exactly, is learning? I would like to ask you to reflect for a few seconds. If you tried to learn anything about learning just now, you might have realized that learning is, at the bare minimum, **change**. You start with some given knowledge, and you want to change that knowledge, for whatever reason.
2. In the context of machine learning, there are a lot of learning problems. Researchers who get sidelined and fired from major places like Google, etc. keep presenting these problems, but nobody seems to be learning. READ OUT SLIDE
3. I want to show you a specific study that I wrote about, which represents one of the very first baby steps of contemporary language modelling. This is a Winograd schema: a natural-language-processing test for machines that has been compared to the Turing test, that is, something we consider important for machine learning to accomplish.
1. The idea here is that humans understand the difference between the two sentences, while natural-language processors such as LLMs could find this challenging because there is an ambiguous pronoun there (a minimal sketch of one such schema follows after this list). READ SLIDE
2. However: if we understand learning to be the capacity to change, then it actually becomes very difficult to say that humans are capable of disambiguating these schemas! Why? Because times change, ideas change, and the context is always highly variable when it comes to communication. Demonstrators, for example at the Occupy sessions here at Erasmus, are not violent. In that case, it was the university that was violent, as we know.
3. Unfortunately, much of machine learning has not learned this, and this is why LLMs are often very, very uncreative and disappointing.
4. The authors of the groundbreaking paper that led to things like the language models we use now also fail to understand this when they say that “humans do not require large supervised datasets to be competent, fluid, etc.”: this is absolutely wrong. Humans exist in learning networks of sociality, such as the home, the neighborhood, the institution and their histories, and these are MASSIVE datasets, which require a lot of supervision.
5. This is where it becomes interesting to think about bias, and I would like to already pass this question on to Joao (whose work I greatly appreciate and strongly support), because: it is impossible to be unbiased when every statement we make, every move inside or outside language, is always situated and has unavoidable preferences, conscious or unconscious, known or unknown. How do we make sure we work towards acknowledging this, rather than letting ourselves be fooled by the promise of an impossible objectivity or neutrality?
6. The joke remains the same: why do we want to increase scale and speed for bad decisions? This is the main question behind language modelling today. Do we need scaling up, or do we actually need to rethink what, exactly, we are scaling?
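To make the Winograd schema in point 3 concrete, here is a minimal, illustrative sketch (in Python) of what one such schema pair looks like as data: two near-identical sentences, one ambiguous pronoun, and an “intended” referent. The wording follows Winograd’s classic councilmen/demonstrators example; the data structure and the `intended` field are my own illustrative choices, not anyone’s benchmark code, and, as argued above, the “expected” answer is itself historically and contextually loaded.

```python
# Illustrative sketch only: a Winograd schema pair, following Winograd's classic
# councilmen/demonstrators example. Changing a single word ("feared" vs.
# "advocated") flips the referent of the ambiguous pronoun, which is why the
# test is meant to require contextual knowledge rather than surface statistics.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class WinogradSchema:
    sentence: str                  # sentence containing an ambiguous pronoun
    pronoun: str                   # the pronoun to be resolved
    candidates: Tuple[str, str]    # the two possible referents
    intended: str                  # the answer the test expects (itself contestable)

SCHEMA_PAIR = [
    WinogradSchema(
        sentence=("The city councilmen refused the demonstrators a permit "
                  "because they feared violence."),
        pronoun="they",
        candidates=("the city councilmen", "the demonstrators"),
        intended="the city councilmen",
    ),
    WinogradSchema(
        sentence=("The city councilmen refused the demonstrators a permit "
                  "because they advocated violence."),
        pronoun="they",
        candidates=("the city councilmen", "the demonstrators"),
        intended="the demonstrators",
    ),
]

for schema in SCHEMA_PAIR:
    print(schema.sentence)
    print(f"  Who is '{schema.pronoun}'? {schema.candidates[0]} or "
          f"{schema.candidates[1]}? Expected: {schema.intended}")
```

Running this simply prints both sentences and their “expected” resolutions; the point of slide 3.2 above is precisely that such expectations do not survive changing contexts.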
______
Some problems w/ NLP history (fools, unfaithful wives, but there are countless examples)
Highlights from sustainability: impact bigger than the airline industry (van Wynsberghe, Crawford, other one), and [[Highlights for impact of genAI]]
Things to check in advance
Major problems w/ VC initiatives, and especially OpenAI
Take notes from email to Gijs and Liesbeth about AI in the classroom.
https://www.erasmusx.io/project/chatgpt-in-higher-education
check also other projects https://www.erasmusx.io/projects
Chatbots (I will call them this because this is what they are: the presentation of statistics as dialogue; whether humans are any different from this is an unanswerable question at this point, and the only hint I will give here, since I go further into this in the depths of the research, is that humans can be seen as statistical machines that halt because of sleep, wear and tear, etc., and that run at a different speed than machines: this is why we build machines) are useful “sparring partners”. This is the conclusion, and the main thing I want to say today.
Capital-driven initiatives that speculate, that gamble on the future, are very volatile and thus dangerous (Durand ref). OpenAI is a problem. So is Eurudite. Moving fast and breaking things is what malfunction essentially is: [the motto “move fast and break things” is often associated with Facebook, as it was one of the company’s core values until 2014](https://www.snopes.com/fact-check/move-fast-break-things-facebook-motto/). [It reflects the idea that innovation requires taking risks and experimenting, even if it means making mistakes or disrupting the status quo](https://hbr.org/2019/12/why-move-fast-and-break-things-doesnt-work-anymore). This shameless experimentalism, which is far from scientific, is the ruling ethos behind most globe-engulfing AI research today. It should not become the template for universities. Unfortunately, Erasmus, a highly capitalist enterprise, a place where student protests bring in the armed forces who drag student bodies out of buildings, and where you cannot even eat your own food in the canteen at the campus’s central point, continues to try to convince us that this is not a place of study but a place of business: a place where you are first an obeying, paying customer, and even then there is no guarantee that you get what you pay for (examples).
Prompt engineering (what I said 10 years ago), essentially understanding how a machine works so that we can ask it the right kinds of questions, is our only way around the current crisis in the pedagogical employment of LLMs. Lady Lovelace already said this much, as quoted by Turing: a machine can only perform what we know how to ask it to perform. The rest is all gambling.
As Miriyam Aouragh explains: the paradigms of e.g. white privilege need to be countered w…
One of the knee-jerk fears about generative AI in the classroom is the classic new-technology fear: “who will be doing the thinking?!” This is a fair criticism: the automation of everything would mean we are done existing and life has become irrelevant. However, in the realm of education, it is incredibly useful to be able to interact with a dynamic, intellectual sparring partner, so long as this is done collectively, outside the paradigm of testing individual student capability. Precisely because no actor acts singularly, and because we exist in an interdependent, solidary condition as humans, we should not fear becoming “overly reliant” on somebody or something else’s efforts. We should already worry when we exist in a situation in which dramatically underpaid people support our daily activities here on campus, cleaning and keeping everything running while we lounge and distractedly consider the future of humanity. It is rather sad.
Students can be critical, and should be critical, and this criticality is what needs to be taught in order to develop the capacity to discern interesting from not-so-interesting AI-generated content.
**Take highlights from impact of genAI, education section** and note on fact-checking recommendation and comment, in order to end with:
- Would you recommend the university to invest in a calculator that only _sometimes_ gives you the right answer?
- Why would we, then, recommend a language model that contains not only plenty of inaccurate information, but is moreover a minefield of racist, sexist, and other unavoidably baked-in biases?
It is this question that I would like to discuss with you all.
- Paviljoen: can’t eat your own food
- People walking around leaving trash behind, letting the cleaners do the work
- Not leaving clean toilets behind
- etc.
- what is wrong with you?
https://www.wired.com/story/women-in-tech-openai-board/
https://www.erasmusmagazine.nl/en/2023/10/11/erasmus-has-its-own-chatgpt/ --> super sympathetic to Joao’s work, but we need a better understanding of BIAS
“Another advantage is that answers should be less biased. “Academic research is the only input. The ELM is also less America-centric. For example, ChatGPT will sometimes give American answers to Dutch legal questions.” However, Gonçalves can’t guarantee that the ELM will never use ‘racist’ language, as happened in a presentation of a Google language model. “EUR researchers sometimes conduct research involving old documents, which can be racist, so that language could be reproduced by the ELM.”
At the same time, the ELM is less heavily censored than ChatGPT. Gonçalves adds, “With ChatGPT, for example, texts containing hate speech are not permitted in answers. We want researchers to be able to research everything, also hate speech. So we look for a balance between that academic freedom and preventing the spread of hatred.””
LLMs are way overhyped (Emily Bender, the _Stochastic Parrots_ PI).
ZF: Synthetic text production is very useful when you need a string of characters that resembles something that came before. But science is all about that which is _different_ from what came before. And taking a bag of words, shaking it up, and adding a chatbot interface that sort of gives a semblance of coherence is such a scientific _lie_.
Let’s not be dragged along by the hype; let’s be a little more critical. Following Bender, let’s stop using the word AI: it is only hype. Talk about probabilistic media synthesis, using x, y, or z axioms. Explain what you are talking about, and stop contributing to the bad, capitalist hype.
One size does not fit all (Karen Hao): we cannot build a system that is based on the statistics of the past. There is no such thing as “good” prompt engineering when the probabilities being queried are based on very problematic histories (racism, sexism, etc.). There is no such thing as “unbiased”.
(Ahmed)
(Gebru)