
Give Cheats a Chance

Stone Paper Cloud · Mar 26 · 4 min read

Updated: Apr 5

Reframing the conversation about AI and cheating in the classroom


[Image: a student with a cheat sheet hidden in their hand during an exam]

I was at a friend’s house last week when her Year 10 daughter came back from school in high dudgeon; she’d been given an automatic Friday night detention because her recent essay had scored 7% on the AI checker, thereby exceeding the 4% threshold outlined in her school’s policy.


Now, for full disclosure, her essay was actually 100% AI-generated - “Mum would normally have helped, but she was out so I used ChatGPT!” I know - quite a bit to unpack there, isn’t there? You might well argue that her resentment was a little misplaced, given, you know, the facts, but she’s always had a nice line in righteous indignation. And she argued that she’d done a pretty good job of prompting GPT to beat the checker.


Not quite good enough, clearly, but I’m sure she’ll improve. As will GPT, of course.


Talk to teachers about the advent of generative AI in the classroom and the first word that comes up is always ‘cheating’. I do understand why - cheating and plagiarism are very important issues and I don’t want to suggest otherwise.


At the same time, we shouldn’t pretend cheating was invented by AI. The thousands of Kenyans earning their living from writing school and university essays for other people couldn’t have done so without a ready market.


I think the need to protect academic integrity has never been greater. My problem with the 4% policy is that it frames the conversation about AI in schools entirely within the context of cheating.


That is unhelpful for three reasons.


The first is that you enter an arms race that you can’t win, and one based entirely on mistrust. (Is there any other kind of arms race?) The better the detectors get at identifying AI-generated material, the better the generative bots get at fooling them. Now, in an abstract sense, I quite enjoy the circularity of AI bots learning from AI bots how to avoid detection by AI bots, but it’s not going to help anyone learn very much.


Initially, it is tempting to believe the detection rates claimed by the AI checkers, but the evidence to the contrary is overwhelming. If institutions like Vanderbilt University are turning off Turnitin’s AI detector and MIT Sloan is stating bluntly that ‘AI Detectors Don’t Work’, then I think we should probably take heed.


Famously, when the US Constitution was fed into GPTZero, it returned a 96.21% certainty that it had been written by AI. The .21 looks convincingly precise, doesn’t it? Until you remember that the US Constitution was probably written with a turkey feather.


It’s not just that the checkers are not as good as they claim. More worryingly, evidence is emerging that they are biased against certain groups of students. Recent research from Stanford found that writing by non-native English speakers was consistently misclassified as AI-generated, while native writing samples were judged far more accurately. There is further evidence of bias against neurodivergent students.


To be fair to them, Turnitin do discuss the problem of false positives on their website. It’s just that very few teachers will have the time or bandwidth to really get to grips with the issue. They’re much more likely to accept that accurate-looking percentage at face value.



The second problem with framing our response to AI in terms of deception and cheating is that we miss the opportunity to openly discuss the many ways that AI could be a positive force within education. I wrote recently about the rapid and accurate assistance that AI gave a student who was struggling to understand ‘Romeo and Juliet’. That’s a very simple example but the scope for generating efficiency, learning and, dare I say it, fun in the classroom is endless if we can free ourselves from the fear of cheating for long enough to imagine the possibilities.


Helping our students to develop proper AI literacy requires us to be transparent about the times when it is helpful to use AI, the times when it is unethical and the times when it is positively damaging to their learning. We often say we want our students to be more reflective about their learning; this is a perfect opportunity to put that into practice. We will miss that opportunity if we focus only on cheating.



The third problem is that we avoid confronting the ways in which task-setting and assessment should and must change in a world with generative AI.


I’ve always thought that assessment is the ‘designated driver’ of teaching, sitting in a corner nursing a Diet Coke while all the attention is lavished on curriculum and pedagogy. Maybe now is the time for assessment to steal some limelight.


If we’re honest, not many of us think deeply about what a particular task will really tell us about a student’s ability. Instead, we tend to rely on tried and tested methods, such as the good old five-paragraph essay.


But, with AI, we have to face up to the possibility that the old methods may not work so well any more. The five-paragraph essay, for example, is particularly easy for AI to generate; there are so many examples on the web that it learnt to write them at its motherboard’s knee. Maybe it’s time to bid it farewell, fond or otherwise.


We need to be more thoughtful about what we actually want to know about a student’s ability at that moment and then how we will reliably discover that. 


The key is going to be to design assessment methods that fully integrate the durable skills we always talk about when we are thinking about the curriculum but which we tend to forget about by the time we consider assessment. We know, for example, that collaborative skills are highly prized by employers. We also know that collaborative tasks tend to be more ‘AI-resistant’ than many others. How can we design rubrics that properly reward students who are able to collaborate effectively? 


This is an opportunity to re-design our practices so we shift away from those tasks that are AI-vulnerable and towards a more holistic assessment which builds a picture of a student’s abilities over time and across a much broader range of competencies.


Thinking we can go on as before and trust an AI checker to help us out isn’t an option. AI is here to stay. Our students are already using it. The sooner we embrace it positively, the better.

