OpenAI and College Composition (Part 2)


The news about OpenAI’s ChatGPT changes on a daily basis, so good or bad ideas about how to address it may be quickly upended. Nevertheless, I would like to generate what I hope is a useful list of ways our colleagues in various writing-based fields are addressing the challenges, and opportunities, this new technology presents. In this month’s post, I’ll begin with strategies that view the artificial intelligence chatbot as, at best, a nuisance, and at worst an existential threat to the future of education—an invention that is harmful rather than salutary. Next month, I’ll focus on approaches that actively seek to engage with ChatGPT, to make it a tool for learning.

Do nothing.

In “Why I’m Not Worried About My Students Using ChatGPT,” University of Wisconsin-Madison philosophy professor Lawrence Shapiro reckons that if only about 6 of the 28 students in his class are likely to cheat with ChatGPT (his calculations are a bit mysterious), then “It makes no sense to me that I should deprive 22 students who can richly benefit from having to write papers only to prevent the other six from cheating (some of whom might have cheated even without the help of a chatbot).” He argues that “the cheaters are only hurting themselves — unless we respond to them by removing writing assignments from the syllabus.” His conclusion: “I say, who cares?”

Responding to Shapiro’s article, former English instructor Jane Leibbrand writes that “Lawrence Shapiro’s head is in the clouds.” From her experience, she says she can assure Shapiro “that close to 100 percent, if not all, of his students will use ChatGPT if they have access to it to write themes for his class. This technology is too much of a temptation for anybody not to use it.”

Whether the percentage of likely cheaters is 20% or almost 100% (how could we ever know?), doing nothing seems like a poor option. As high school English teacher Peter Greene writes in Forbes, ChatGPT is “not the apocalypse, but it’s not a nothingburger, either.”

Don’t grade.

If doing nothing is not a realistic option, what should an instructor do? One idea making the rounds is to eliminate grading altogether. Dartmouth philosophy professor and cognitive science program chair Adina Roskies is blunt: “I certainly am not interested in grading a lot of papers that are written by a machine because it’s extremely time consuming…And it’s not a learning experience for the student, it’s a huge waste of time.” She muses: “Maybe I’ll just stop grading, because it’s not about the grading. It’s about learning stuff and learning how to think about stuff.”

The case against grades—they are biased against certain groups, they cause too much stress, they don’t accurately measure performance, they take the fun out of education, etc.—has been made for years, and abolishing grading might well make academic life easier for both students and teachers. For some students, it could shift their focus from easy last-minute ploys to complete an assignment (like using ChatGPT) to the more rewarding endeavor of actually learning the course material.

However, there’s no guarantee that such a transformation would take place. If students are given course credit without having to do any work, and if there is no organized way of assessing and evaluating their progress, the meaning of a degree would, in the eyes of many future employers, be minimal.

Grade using narrative evaluations.

Many students and professors, especially in STEM fields, would openly rebel against doing away with grades, but there is an alternative: narrative or performance evaluations. This option may make more sense in the humanities where, after all, such evaluations have been around for a long time at institutions like Antioch University, Evergreen State College, and the University of California, Santa Cruz. Clearly, this method of grading would not eliminate ChatGPT cheating, but it would require instructors to provide a written overview of each student’s class performance, which could address a lack of drafts, major inconsistencies between in-class and out-of-class writing, and so on.

Use AI to fight AI.

If technology is the problem, might it also be the solution? In February, OpenAI introduced a new tool, the AI Text Classifier, which attempts to distinguish human writing from computer-generated text, and no doubt other plagiarism detection software focused on AI-generated prose will soon be on the market. Still, these products are themselves fallible. Ian Bogost notes in The Atlantic that “As OpenAI explains, the tool will likely yield a lot of false positives and negatives, sometimes with great confidence. In one example, given the first lines of the Book of Genesis, the software concluded that it was likely to be AI-generated. God, the first AI.”

Forbid the use of ChatGPT.

While the gesture may seem quixotic, many institutions have already forbidden the use of ChatGPT by their students. In January, for instance, New York City Schools banned ChatGPT. A spokesperson for the schools claimed the move was the result of “concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of contents.” Oxford and Cambridge both recently prohibited ChatGPT, declaring its use would be considered “academic misconduct.”

And yet outlawing ChatGPT will not affect the very students who are most likely to use it. It certainly won’t make the technology go away. If the solution were that simple, there would be no international furor—and no need for this blog post, or the one next month, where we’ll look at OpenAI as a source of positive discussion and learning.