It has come up a number of times, both on social media and in private conversations, that junior people in academia especially, such as PhD candidates, feel unable to avoid using AI, that is, so-called artificial intelligence chatbots or similar products (see this paper here to help understand what this means). They tell me that they feel so much pressure, direct or otherwise, from peers to use AI that they fear for their careers if they don't. If this describes you, or addresses worries you've heard from others, then this is for you.
So, I will assume that at some point, you came across work like mine here and want to protect yourself as well as be an ethical actor:
- Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099
To address worries about these pressures, and to explain why rejection in principle, which I describe here and elsewhere, is actually the only way, I dedicate this short piece to you.
So, like I said, I assume you care about the environmental impacts; about the harm to other humans, especially in the Global South, who are directly harmed by their involvement in these technologies; about the damage to creative industries; and more. I think you get these, deeply, and in addition you also worry about yourself.
I have heard from fellow Cypriots, for example, who worry that Western Europeans and USAmericans are hypocrites for asking them not to use these so-called tools. That the people telling you to abstain probably use these tools anyway, or that at least others you may compete with are using them, and so you might conclude that abstaining will harm you and your career. You are very right to worry about your career, your skills, and your ability to navigate an extremely damaged and unjust world compared to that of previous generations. And yes, most Europeans and most USAmericans, especially in academia, are indeed hypocrites.
In many senses, as is often the case with AI, we've been here before. What do I mean? Let me tell you.
During my computer science undergraduate degree, I had a huge culture shock. I guess I was naive, but it never occurred to me, until I saw it, how much my fellow students were cheating. Perhaps it's all neutral and at that stage means little, but still I think my shock captures something meaningful. What they were doing, in my first such exposure, was sharing a validation tool with each other (which only one person had written) to test their code for assignments. Impressive that an undergraduate thought to write a thorough unit test suite, I guess, in Scheme, but like I said, it made me feel uncomfortable. They took away from themselves, it seemed to me, the ability to write such a tool themselves, or even just to complete the assignment alone. I now know this is collusion, a form of cheating, a violation of academic integrity.
But feel free to leave aside the rules, or focus on their spirit. As an educator, I can promise you the whole point of such assignments is to learn through doing them yourself. If you do not do that, you have missed the opportunity to learn something, likely a useful skill; or to practice part of a skill you already have and thus refine your ability; or to test yourself on something you may already know, but of course cannot really know yet if you are still an undergraduate. Cheating here can actually damage you by robbing you of this opportunity, a chance to test yourself and receive feedback on your attempt. This is one of the reasons you came to university after all: to interact with lecturers, for them to give you feedback, and for you to learn as much as you can from it. Nobody else is harmed by you not learning something. Perversely, if I bring back your frame above of the competitive job market, the others may even benefit! Because some of them did indeed not only complete the assignment, but also thought of and built a tool to check their answers. They learned.
If you are in your late teens or early twenties and/or in an undergraduate degree programme, this might be one of the only times, if not the only time, in your life when you will have no major responsibilities other than to learn these intellectual skills. If you are in a full-time PhD, this is possibly the only time in your career — trust me! — where the largest part of your week can be dedicated to learning these foundational skills. And they will serve you well in academia, especially if you plan to stay. The skills I learned at these career stages are my most treasured and clearly ones that opened doors. "I can code this from scratch" is way more impressive than you might think, given toxic industry pro-AI nonsense. The same goes for writing clearly, reading and comprehending journal articles, thinking things through, and on and on.
So, as I have outlined above, and as you already know given how much harm the mining of rare earth minerals and the content moderation behind these models cause, using AI is not victimless. Often, not only are you the product — yes, they will steal and sell your data — but you are also robbed of learning. When the hype inevitably dies down, as it has done many times before, through AI summers and winters (see my talk here if you were not aware of these cycles), you want to have skills. You want to have a degree that matters.
This brings us to perhaps a lesser victim in your calculations: the complete destruction of the university as an award-granting body, if everybody uses AI to cheat. So when you want to advance in your career from student to academic faculty, or to use your degree in any way, something might collapse on the other end of the equation: the degree itself. If degrees become meaningless because they no longer reflect skills, then academia is damaged beyond repair, and so is society in general, in its ability to sustain expertise.
For more on this issue of respecting expertise, see this section here, which is based on:
- Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243
If you need to cite my ideas on why cheating is both self-harm and harmful to institutions and society, please cite:
- Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. https://doi.org/10.5281/zenodo.17065099
In addition, the following work also has many of the ideas from above:
- Guest, O. & van Rooij, I. (2025). Critical Artificial Intelligence Literacy for Psychologists. PsyArXiv. https://doi.org/10.31234/osf.io/dkrgj_v1
- Suarez, M., Müller, B., Guest, O., & van Rooij, I. (2025). Critical AI Literacy: Beyond hegemonic perspectives on sustainability. Zenodo. https://doi.org/10.5281/zenodo.15677839
- Guest, O. (2025). What Does 'Human-Centred AI' Mean?. arXiv. https://doi.org/10.48550/arXiv.2507.19960
- Guest, O., Suarez, M., & van Rooij, I. (2025). Towards Critical Artificial Intelligence Literacies. Zenodo. https://doi.org/10.5281/zenodo.17786243