about CAIL
On this page are some resources for Critical AI Literacy (CAIL) from my perspective. Also see: the project homepage and this press release on our work.
As we say here, CAIL is:
an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as being able to tell apart nonsense hype from sound theoretical computer science claims (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such realisations are possible only when one is educated in the principles behind AI that stem from the intersection of computer science and cognitive science, and they cannot be learned if interference from the technology industry goes unimpeded. Undeniably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.
CAIL also aims to repatriate university technological infrastructure and to protect our students and ourselves from deskilling — as we explain here:
Within just a few years, AI has turbocharged the spread of bullshit and falsehoods. It is not able to produce actual, quality academic work, despite the claims of some in the AI industry. As researchers, as universities, we should be clearer about pushing back against these false claims by the AI industry. We are told that AI is inevitable, that we must adapt or be left behind. But universities are not tech companies. Our role is to foster critical thinking, not to follow industry trends uncritically.

See more at — and please cite — the preprint here:
- Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099
Here is a wonderfully done interview by Kent Anderson and Joy Moore, in which we speak at length on these topics.
In general, have a look at the various resources here, such as the blog posts, preprints, and published papers, to understand our various perspectives.
blogs
- Guest, O., van Rooij, I., Müller, B., & Suarez, M. (2025). No AI Gods, No AI Masters. https://www.civicsoftechnology.org/blog/no-ai-gods-no-ai-masters
- Merchant, B. (2025). Cognitive scientists and AI researchers make a forceful call to reject "uncritical adoption" of AI in academia. https://www.bloodinthemachine.com/p/cognitive-scientists-and-ai-researchers/
- van Rooij, I. (2025). AI slop and the destruction of knowledge. https://doi.org/10.5281/zenodo.16905560
- Suarez, M., Müller, B., Guest, O., & van Rooij, I. (2025). Critical AI Literacy: Beyond hegemonic perspectives on sustainability. https://doi.org/10.5281/zenodo.15677839
- van Rooij, I. & Guest, O. (2024). Don't believe the hype: AGI is far from inevitable. https://www.ru.nl/en/research/research-news/dont-believe-the-hype-agi-is-far-from-inevitable
research
- Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099
- Guest, O. (2025). What Does 'Human-Centred AI' Mean?. arXiv. https://doi.org/10.48550/arXiv.2507.19960
- van Rooij, I. & Guest, O. (2025). Combining Psychology with Artificial Intelligence: What could possibly go wrong?. PsyArXiv. https://osf.io/preprints/psyarxiv/aue4m
- Forbes, S. H. & Guest, O. (2025). To Improve Literacy, Improve Equality in Education, Not Large Language Models. Cognitive Science. https://doi.org/10.1111/cogs.70058
- Guest, O. & Martin, A. E. (2024). A Metatheory of Classical and Modern Connectionism. PsyArXiv. https://doi.org/10.31234/osf.io/eaf2z
- van Rooij, I., Guest, O., Adolfi, F. G., de Haan, R., Kolokolova, A., & Rich, P. (2024). Reclaiming AI as a theoretical tool for cognitive science. Computational Brain & Behavior. https://doi.org/10.1007/s42113-024-00217-5
- van der Gun, L. & Guest, O. (2024). Artificial Intelligence: Panacea or Non-intentional Dehumanisation?. Journal of Human-Technology Relations. https://doi.org/10.59490/jhtr.2024.2.7272
- Guest, O. & Martin, A. E. (2023). On Logical Inference over Brains, Behaviour, and Artificial Neural Networks. Computational Brain & Behavior. https://doi.org/10.1007/s42113-022-00166-x
- Erscoi, L., Kleinherenbrink, A., & Guest, O. (2023). Pygmalion Displacement: When Humanising AI Dehumanises Women. SocArXiv. https://doi.org/10.31235/osf.io/jqxb6
Here are some related academics' websites (feel free to contact me to add more):
- Refusing GenAI in Writing Studies: A Quickstart Guide, by Jennifer Sano-Franchini, Megan McIntyre, & Maggie Fernandes
- AGAINST AI, by Anna Kornbluh, Krista Muratore, & Eric Hayot
sign the Open Letter here!
Colleagues and I have written and published: Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia.

Below are some posters to use to attract attention to more critical thinking about AI. Download them as PDFs here. The QR code points to: Against the Uncritical Adoption of 'AI' Technologies in Academia.