OpenAI and College Composition (Part 1)

davidstarkey

The last time I remember technology and composition in such apparent conflict was in the early 1990s, when my colleagues and I wondered if the grammar and spell check tools provided by Word and WordPerfect gave students with access to these programs an unfair advantage over their less tech-savvy peers. Of course, using software to correct subject-verb agreement errors seems positively quaint in comparison with what students can accomplish using OpenAI’s ChatGPT and the many AI-driven programs that are sure to follow in its wake.

Not surprisingly, the response to artificial intelligence as a generator of text—among both teachers and media commentators—has been overwhelming. In December, novelist Stephen Marche declared “The College Essay Is Dead,” while high school teacher Daniel Herman concluded that ChatGPT signaled “The End of High-School English.” Some Twitter users were downright apocalyptic, and even OpenAI’s own CEO, Sam Altman, acknowledged, “The bad case—and I think this is important to say—is, like, lights out for all of us.”

Initially, I didn’t think these doomsayers were far wrong. One evening, a colleague and I sat down with our computers and tried to stump ChatGPT. Could AI perform a rhetorical analysis of an article she assigns each semester? It could. The grade? “This is an early assignment in the semester, so I’d say at least a ‘B.’” But how would AI do when faced with personal writing? After all, a computer program doesn’t have any life experiences to draw on, so I asked ChatGPT to write a thousand-word essay on the biggest challenge it had ever faced and what it had learned from that challenge. A couple of minutes later, I learned that AI’s biggest challenge had been the death of its mother from cancer when AI was a young teenager. The lessons learned were hardly earth-shattering—the preciousness of life, the need to stand on one’s own two (virtual) feet—but they were the sort of responses one might expect from the prompt I had posed.

Right away, my friend and I wondered: If a computer program can respond effectively to assignments like those we gave it, should those assignments be changed? Maybe our first attempts were flawed. However, as we worked variations on standard first-year essay prompts, ChatGPT kept responding in what we admitted was an “acceptable” fashion. Granted, AI was lousy when it came to documentation, and it tended to come up with the most obvious responses to our questions, but the reasoning was sound more often than not, and sentence-level errors were generally absent.

Clearly, we didn’t want to dive headlong into what a special session at CCCC calls “crisis-speak.” Philosophy professor Lawrence Shapiro argues that “the cheaters are only hurting themselves—unless we respond to them by removing writing assignments from the syllabus.” Focusing solely on plagiarism runs the risk of depriving students of the writing practice many of them so desperately need. Moreover, as Chris Gilliard and Peter Rorabaugh write in Slate:

Although plagiarism is an easy target and certainly on the minds of teachers and professors when thinking about this technology, there are deeper questions we need to engage, questions that are erased when the focus is on branding students as cheaters and urging on an A.I. bakeoff between students and teachers. Questions like: What are the implications of using a technology trained on some of the worst texts on the internet? And: What does it mean when we cede creation and creativity to a machine?

Nevertheless, pretending that AI doesn’t exist and carrying on as before is not a realistic option. Therefore, in the months to come, I’ll be looking at some of the many ways instructors are responding to one of the biggest pedagogical curveballs most writing teachers have ever faced.