Bits on Bots: The Sisyphean Syllabus


Jennifer Duncan has been teaching English for twenty years, first at Chattanooga State Community College in Tennessee and now at Georgia State University’s Perimeter College in Atlanta, GA, where she is Associate Professor of English. For the last ten years, she has focused exclusively on online teaching, concentrating on creating authentic and meaningful learning opportunities for students in composition and literature courses.

Every teacher who is talking about AI is also talking about cheating. My response in these conversations is pretty cynical: “cheaters gonna cheat.” Somewhere in ancient Sumer, scholars earnestly etched out the first cuneiform lesson plans while their students passed clay-tablet crib sheets and used their barley money to outsource their homework. Cheating is an ancient tradition, as sacred to assignments as procrastination. It’s not that I’m giving up on academic integrity; it’s just that my time and mental health can’t survive if I focus for too long on policing cheating.

The reality is that if the goal is just a grade, it is faster and easier to cheat, and with generative AI advancing faster than Hermes on roller blades, there’s very little chance that I’ll catch it. Instead, I need a modern hoplite spear to deflect the temptation to cheat, one that protects my students and me from the weaknesses that may lead them to seek solutions from ChatGPT instead of their professor.

I’m getting to a point here … syllabus policy statements. Revising our academic integrity policies has never been a more Herculean task, and, at times, it will feel like a Sisyphean one. Where do we even start? I suspect that if we start with a list of restrictions and prohibitions, especially when it comes to generative AI, we essentially challenge the cheaters to see if they can outsmart us – and they can. Instead of setting up an adversarial system, however, an academic integrity policy can be reframed as an agreement: a description of what students have the right to expect of me and of one another.

Like all modern writers tasked with creating a document for which they have no context or experience, I started my new policy statement by invoking Athena, sacrificing a USB drive, and consulting ChatGPT. It’s the 21st-century version of seeking wisdom from the Oracle, only with fewer riddles and more cat memes. To be honest (which is sort of the point), it generated some great ideas, the most compelling of which was the need for transparency.

Full transparency in teaching requires that I disclose to my students that generative AI is constantly improving, which means that course policies may change during the semester. Most importantly, any policy I create must affirm students’ rights to expect clear instructions about when and how AI is allowed, and it must offer grace when mistakes are made. Their responsibility is to disclose honestly when and how they use AI in their work. In my class, they do this by answering reflection questions before they submit each assignment, but that’s just one method. Ryan Watkins’ “From AI to A+” and Erika Martinez’s “Guidelines for Generative AI Use” each provide excellent starting points for clarifying AI usage for both teachers and students.

My disclosure strategy seems to be working, and I’ve learned a lot. The reflection questions reinforce my optimistic philosophy that most students don’t cheat out of malice or deception but out of misunderstanding or even fear. When students tell me what they have done with the AI, we can discuss what is and is not appropriate. For example, one of my students showed me how she used ChatGPT to define some rhetorical terms she was too embarrassed to admit she didn’t know; another used it to explain the point of view opposing his argument so that he could counter it. I didn’t tell my students to do either of these things, but they taught me great ways to use the tool to enter the rhetorical conversation. On the other hand, when a student showed me the “help” she received from ChatGPT, I had to address the fact that she had let the AI write nearly her entire essay; however, because she was honest – and honestly confused – we worked together to develop an appropriate plan before she redid the entire assignment. No Fs were needed; some education was, but that’s what I’m here for – literally.

So, like a Pedagogical Prometheus, I’m handing my students a fiery gift – the knowledge that generative AI exists and that I don’t know everything about it – in hopes that they’ll use it to illuminate their writing and not burn down the entire course. Together we can improve their writing and my teaching, without angering the academic gods, one honest conversation at a time.

1 Comment
RogerLeen
This is a practical and thoughtful way to deal with AI in education. I like how the writer focuses on trust and open communication instead of strict rules. Letting students explain how they use AI helps them learn what’s okay and what’s not. This approach helps students learn better and shows that AI can be useful when used correctly.