about

On this page are some resources for Critical AI Literacy (CAIL) from my perspective. Also see: the project homepage and this press release on our work.

As we say here, CAIL is:

an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as the ability to tell apart nonsense hype from claims grounded in theoretical computer science (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such realisations are possible only when one is educated in the principles behind AI that stem from the intersection of computer and cognitive science, but they cannot be learned if interference from the technology industry goes unimpeded. Unarguably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.

CAIL also aims to repatriate university technological infrastructure and to protect our students and ourselves from deskilling — as we explain here:

Within just a few years, AI has turbocharged the spread of bullshit and falsehoods. It is not able to produce actual, quality academic work, despite the claims of some in the AI industry. As researchers, as universities, we should be clearer about pushing back against these false claims by the AI industry. We are told that AI is inevitable, that we must adapt or be left behind. But universities are not tech companies. Our role is to foster critical thinking, not to follow industry trends uncritically.
See more in — and please cite — the preprint here:

In general, have a look at the various resources here, such as the blog posts, preprints, and published papers, to understand our perspectives.

video talks & interviews

blogs, news, & opinion pieces

research

Abstract: Critical Artificial Intelligence Literacies (CAILs) is the collection of ways of thinking about and relating to so-called artificial intelligence (AI) that rejects dominant frames presented by the technology industry, by naive computationalism, and by dehumanising ideologies. Instead, CAILs centre human cognition and uphold the integrity of academic research and education. We present a selection of CAILs across research and education, which we analyse into the following non-orthogonal dimensions: conceptual clarity, critical thinking, decoloniality, respecting expertise, and slow science. Finally, we note how we see the present with and without a wider adoption of CAILs — a fundamental aspect is the assertion that AI cannot be allowed to drive change, even positive change, in education or research. Instead cultivation of and adherence to shared values and goals must guide us. Ultimately, CAILs minimally ask us to contemplate how we as academics can stop AI companies from wielding so much power.

"The important dimensions of CAILs across research and education; clockwise from 12 o’clock: Conceptual Clarity is the idea that terms should refer. Critical Thinking is deep engagement with the relationships between statements about the world. Decoloniality is the process of de-centring and addressing dominant harmful views and practices. Respecting Expertise is the epistemic compact between professionals and society. Slow Science is a disposition towards preferring psychologically, techno-socially, and epistemically healthy practices. The lines between dimensions represent how they are interwoven both directly and indirectly." (Guest et al., 2025, figure 1)

"A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI" (Guest et al., 2025, figure 1)

Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry's marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

"Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe." (Guest et al., 2025, table 1)

Artificial Intelligence (AI): The phrase 'artificial intelligence' was coined by McCarthy et al. (1955) in the context of proposing a summer workshop at Dartmouth College in 1956. They assumed significant progress could be made on making machines think like people. In the present, AI has no fixed meaning. It can be anything from a field of study to a piece of software. Resources: Avraamidou (2024), Bender and Hanna (2025), Bloomfield (1987), Boden (2006), Brennan et al. (2025), Crawford (2021), Guest (2025), Hao (2025), McCorduck (2004), McQuillan (2022), Monett (2021), Vallor (2024), and van Rooij, Guest, et al. (2024).

Artificial neural network (ANN): First proposed in McCulloch and Pitts (1943), it is a mathematical model comprising interconnected banks of units that perform matrix multiplication and non-linear functions (a minimal sketch follows this table). These statistical models are exposed to data (input-output pairs) that they aim to reproduce. While held to be inspired by the brain, such claims are tenuous or misleading. Resources: Abraham (2002), Bishop (2021), Boden (2006), Dhaliwal et al. (2024), Guest and Martin (2023, 2025a), Hamilton (1998), Stinson (2018, 2020), and Wilson (2016).

Chatbot: An engineered system that appears to converse with the user using text or voice. Speech synthesis goes back hundreds of years (Dudley 1939; Gold 1990; Schroeder 1966) and Weizenbaum's (1966) ELIZA is considered the first chatbot (Dillon 2020). Modern versions can contain ANNs in addition to hardcoded rules. Resources: Bates (2025), Dillon (2020), Elder (2022), Erscoi et al. (2023), Schlesinger et al. (2018), Strengers et al. (2024), Turkle (1984), and Turkle et al. (2006).

ChatGPT: A proprietary, closed-source chatbot created by OpenAI. The for-profit company OpenAI has been steeped in hype from inception. It does not provide source code for most of its models, violating open science principles for academic users. OpenAI reported $5 billion in losses in 2024 (Reuters 2025), and has received $13 billion from Microsoft (Levine 2024). Resources: Andhov (2025), Birhane and Raji (2022), Dupré (2025), Gent (2024), M. T. Hicks et al. (2024), Hill (2025), Jackson (2024), Kapoor et al. (2024), Liesenfeld, Lopez, et al. (2023), Mirowski (2023), Perrigo (2023), Titus (2024), and Widder et al. (2024).

Generative model: A specification of the type of statistical distribution modelled; typically contrasted with discriminative model. ANNs can be generative (e.g. Boltzmann machines) or discriminative (e.g. convolutional neural networks used for classifying images). In the context of generative AI or generative pre-trained transformer (GPT), this phrase is used inconsistently. Resources: Efron (1975), Jebara (2004), Mitchell (1997), Ng and Jordan (2001), and Xue and Titterington (2008).

Large language model (LLM): A model that captures some aspect of language, with the term 'large' denoting that the number of parameters exceeds a certain threshold. Modern chatbots are often LLMs, which use ANNs, along with a graphical interface so that users can input so-called text 'prompts.' LLMs can be generative, discriminative, or neither. Resources: Bender, Gebru, et al. (2021), Birhane and McGann (2024), Dentella et al. (2023, 2024), Leivada, Dentella, et al. (2024), Leivada, Günther, et al. (2024), Luitse and Denkena (2021), Shojaee et al. (2025a), Villalobos et al. (2024), and Wang et al. (2024).
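
The ANN entry above describes interconnected banks of units that perform matrix multiplication and non-linear functions. As a minimal sketch of what that amounts to, assuming nothing beyond plain numpy (the layer sizes, the tanh non-linearity, and the random parameters are arbitrary illustrative choices, not any particular product):

```python
# Minimal sketch of an ANN forward pass: banks of units doing matrix
# multiplication followed by a non-linearity. All sizes and the choice
# of tanh are illustrative assumptions, not any specific product or paper.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One bank of units: a matrix multiplication plus a non-linear function."""
    return np.tanh(x @ weights + bias)

x = rng.normal(size=(1, 4))                      # one toy input pattern
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # parameters of the first bank
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)    # parameters of the second bank

hidden = layer(x, w1, b1)
output = layer(hidden, w2, b2)
print(output)  # a parameterised statistical mapping from numbers to numbers
```

Training adjusts the parameters so that outputs better reproduce the input-output pairs in the data; the computation itself is ordinary linear algebra, which is part of why claims that such models are brain-like are tenuous.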

Abstract: Psychologists — from computational modellers to social and personality researchers to cognitive neuroscientists and from experimentalists to methodologists to theoreticians — can fall prey to exaggerated claims about artificial intelligence (AI). In social psychology, as in psychology generally, we see arguments taken at face value for: a) the displacement of experimental participants with opaque AI products; the outsourcing of b) programming, c) writing, and even d) scientific theorising to such models; and the notion that e) human-technology interactions could be on the same footing as human-human (e.g., client-therapist, student-teacher, patient-doctor, friendship, or romantic) relationships. But if our colleagues are, accidentally or otherwise, promoting such ideas in exchange for salary, grants, or citations, how are we as academic psychologists meant to react? Formal models, from statistics and computational methods broadly, have a potential obfuscatory power that is weaponisable, laying serious traps for the uncritical adopters, with even the term 'AI' having murky referents. Herein, we concretise the term AI and counter the five related proposals above — from the clearly insidious to those whose ethical neutrality is skin-deep and whose functionality is a mirage. Ultimately, contemporary AI is research misconduct.

"Core reasoning issues (first column), which we name after the relevant numbered section, are characterised using a plausible quote. In the second column are responses per row; also see the named section for further reading, context, and explanations." (Guest & van Rooij, 2025, table 1)

Uncritical Statement Possible Response
Lies, Damned Lies, and Statistics

"AI products are outside my expertise but I think it is useful to deploy them."
As a matter of fact, these products are statistical models, akin to logistic regression (a minimal sketch follows this table), which all psychologists, even undergraduate students, are required to have a familiarity with. Psychologists are likewise required to know the differences between models used to perform statistical inference and models of cognition, as well as basic open science principles. Therefore, it should come as no shock that assuming the mantle of the non-expert here is inappropriate; abandoning critical thinking in this way may even be a form of questionable research practice (QRP).
Displacement of Participants

"I can use AI instead of participants to perform tasks and generate data."
The provenance of the data used in these models indicates it is not ethically sourced, falling below standards for our discipline: it involves sweatshop labour and no consent for the private data used in experiments. The output can contain original input data directly (i.e. double dipping), smoothed to remove outliers and to conform to our pre-existing ideas of what it should look like (data fabrication), and it is all-round irreplicable. Psychology is meant to study humans, not patterns at the output of biased statistical models.
Outsourcing Programming to Companies

"I can use AI for programming experimental paradigms and statistical analyses."
This is an example of the field’s backsliding from adopting open science and programming skills. No formal specification will be given for code generated from a corporate-owned opaque model. The psychologist now has no reason to learn how to engineer software, and disturbingly might as well switch back to proprietary software like SPSS, which at least has documentation and explicit versions. Code at the output will be plagiarised, making it more time-consuming to check compliance with our needs than if we had written the code ourselves, and violating openness.
Ghostwriter in the Machine

"I can use AI for understanding the literature and for scholarly writing."
This practice implicates a swathe of issues akin to automating the paper mill. First, the literature is screened by corporations, which have every reason to control the output of the model to suit their needs, or minimally to ignore output issues such as sexism. Second, the fabrication of non-existent citations makes claims worse than baseless, because they appear supported by prior work. Third, the dislocation of text from the literature, since no provenance can be established, results in plagiarism.
The End of Scientific Theory

"I can outsource verbal theorising to AI or use it as a formal cognitive model."
This not only adds to the dislocation of work from its evidential and historical basis, but also impedes our theorising about the phenomena and systems under study. In this context, we are interested in human-understandable theory and theory-based models, not statistical models which provide only a representation of the data. Scientific theories and models are only useful if we, the scientists who build and use them, understand them in deep ways and they connect transparently to research questions. AI product use is absconding from scientific duty.
Equivocation of Human-Human & Human-AI

"I can study people using chatbots as if they are socially interacting."
Seeing client-therapist, student-teacher, patient-doctor, friendship, or romantic relationships as equivalent to those between people and artifacts is both a form of dehumanisation and a hollowing out of the target of study in social psychology: the relationship between people and other people. It is important to study the relations between humanity and machines and the social interactions mediated through technology — but to place interactions with chatbots in the same category as those between people assumes and risks too much.
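
The first response above ('Lies, Damned Lies, and Statistics') notes that these products are statistical models, akin to the logistic regression that psychology students already learn. As a minimal sketch of that point, assuming only standard scientific Python (numpy and scikit-learn) and simulated toy data:

```python
# Minimal sketch: a logistic regression is a statistical model whose
# parameters are estimated from data. The data here are simulated toy
# numbers, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                                    # two toy predictors
y = (X[:, 0] + X[:, 1] + rng.normal(size=100) > 0).astype(int)   # toy binary outcome

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)    # the fitted parameters
print(model.predict_proba(X[:5]))       # probabilities, not understanding
```

The fitted object is a set of parameters estimated from data. The models behind chatbot products are vastly larger, but they are statistical models in the same sense, which is why pleading non-expertise about them sits uneasily with our methods training.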

Here are some related academics' and scholars' websites (feel free to contact me to add more):

sign the Open Letter

Colleagues and I have written and published: Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia.

Please consider adapting the letter for your employer; here are some allies' efforts inspired by our letter:

have you considered not using AI?

I made some posters, in the style of this website, to draw attention to more critical thinking about AI. Download them as PDFs here. The QR code points to: