Bits on Bots: To Run or Not to Run... the Detection Report?

Jennifer Duncan has been teaching English for twenty years, first at Chattanooga State Community College in Tennessee and now at Georgia State University’s Perimeter College in Atlanta, GA, where she is Associate Professor of English. For the last ten years, she has focused exclusively on online teaching, concentrating on creating authentic and meaningful learning opportunities for students in composition and literature courses.

The latest episode of Abbott Elementary – a truly hilarious show that you should probably watch instead of reading this blog – hits on the topic of AI detectors. As Jacob, a history teacher at the school, demonstrates the newest AI detector to his class of eighth graders, he discovers his colleagues have been using AI to write responses to his newsletters. He’s deeply offended, but it’s a sitcom, so we can all predict exactly how the episode will end.

The problem, of course, is that accurate AI detectors are as fictitious as the class sizes at this TV school (seriously, how does a public school class have only ten students?). The truth is that AI detectors don’t work. Don’t get me wrong; I’d love to have a tool that immediately and accurately identifies AI-generated text. I’d also love to teach in a classroom next to Sheryl Lee Ralph and have visits from a wise-cracking janitor every day, but none of these things is likely to happen.

AI image generated by craiyon.com

So, should we even bother with AI detection tools? I’m on the fence about it. Just as I can’t figure out why a school that couldn’t buy nap mats for kindergartners in season one can afford a major software upgrade, I’m not sure AI detection systems are a great use of our students’ tech fees. The news is flooded with stories of students protesting academic integrity charges based on AI detection, and students should not have to fight accusations based solely on auto-detection reports, which basically amount to one robot pointing a finger at another robot.

AI detection systems are full of false positives – a fact that can be confirmed by a quick Google search (and is there any more reliable standard than a Google search?). The news reports don’t make students stop cheating; they do make faculty and universities look like the enemy. And when the detectors get it wrong, as they often do, the false accusation fractures relationships in a way that won’t be fixed by a few jokes and a heartfelt conversation after the second commercial break. Tuition does not entitle students to specific grades, but it does entitle them to be part of a scholarly conversation with other humans who are invested in their educational growth.

That conversation is where a detection tool may be helpful – not as evidence, but as a talking point. AI detectors note common language patterns and flag them as potentially AI-generated. Put your own writing into one, and it’s likely that some portion of it will be flagged, not because you cheated but rather because language has recognizable patterns. It’s why we can finish each other’s ---.

Detectors are great at noting template-style writing – explicit announcements of what an essay will do, signpost transitions, restatements of common phrases and ideas – but this doesn’t necessarily indicate a student is cheating. We rely on patterns every day to make sense of the world; it’s how we know Jacob and his colleagues will be friends again by the show’s end.
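
For the technically curious, here is a deliberately naive Python sketch of what “flagging predictable patterns” can mean. This is not how any real detector works – commercial tools rely on statistical language models – and the phrase list and scoring below are invented stand-ins. But it shows why formulaic writing trips a flag while proving nothing about who wrote it.

```python
# Toy illustration only: real AI detectors use statistical language models,
# not phrase lists, but the underlying idea -- scoring predictability -- is similar.

TEMPLATE_PHRASES = [
    "in this essay, i will",   # explicit announcement of what the essay will do
    "in conclusion",           # signpost transition
    "firstly",
    "secondly",
    "on the other hand",
    "it is important to note",
]

def predictability_score(text: str) -> float:
    """Return the fraction of template phrases found in the text.

    A high score only means the writing leans on stock patterns -- which,
    as this post argues, may signal AI, a second-language learner, or
    simply a student taught to write to a formula.
    """
    lowered = text.lower()
    hits = sum(1 for phrase in TEMPLATE_PHRASES if phrase in lowered)
    return hits / len(TEMPLATE_PHRASES)

sample = ("In this essay, I will explain my process. Firstly, gather materials. "
          "In conclusion, following the steps carefully matters.")
print(f"Predictability score: {predictability_score(sample):.2f}")  # prints 0.50
```

Notice that a perfectly honest five-paragraph essay would score high here too – which is exactly the problem with treating a score as evidence.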

Students use patterns and predictable writing for lots of reasons: their overworked high school teachers may have used automated feedback systems that rewarded predictability; they may be second language learners who learned English through patterns; they may be neurodivergent learners. These underserved students are already at risk of educational bias, and accusing them of cheating just pushes them further outside the academic circle.

So, to run the report or not? Well, a report can be used to provide feedback, but just as students can’t hand their writing over to a bot, we can’t hand our assessment of that writing over to one either.

We can, however, use a report as a conversation starter with a student – not necessarily a conversation about cheating, but a conversation about their writing. Help students identify the patterns or word choices flagged as robotic. Then discuss whether these patterns are problematic or appropriate for the situation. Talk to them about their writing choices and acknowledge that sometimes our choices will raise an AI detector’s flag, not because they’re the wrong choice, but because they’re the predictable one. And predictability does have a place in writing. A process essay shouldn’t sound like a chapter of a William Faulkner novel – at least not if anyone is going to read it.

I certainly don’t want students using AI (to quote Barbara Howard, “That privilege is reserved for teachers”), but when they do, we should deal with them as individuals, not as computer-generated reports. Will my strategy prevent all cheating? Of course not, but, as Barbara also reminds us, “If you come back here tomorrow, ready to do your job, having not given up on yourself or that student, well, that is not a failure. Sometimes that’s what success looks like.”