OpenAI and College Composition (Part 4)

davidstarkey

My previous three posts have looked at pedagogy and plagiarism, the nuts and bolts of AI in the first-year writing classroom. In a sense, however, I have been skirting the core question for instructors and students of college composition: Do artificial intelligence programs make the teaching and learning of “academic writing” irrelevant? If ChatGPT can respond in a reasonable and clear fashion to most college writing prompts—and will doubtless do so with much more style and substance in the future—is there any need for students to take on that task themselves?

Cynically, one might argue that just as spell-check and autocorrect made proofreading skills less essential for developing writers, the far more sophisticated ChatGPT obviates the need for college composition altogether.

Of course, as any instructor knows, spelling and grammar checkers are only semi-successful without human oversight. And if a program as simple as spell-check is flawed and in need of human assessment, how much more crucial will it be for the members of Generation AI to be able to evaluate, analyze and call out AI writing gone wrong? The skills that AI exhibits with such apparent effortlessness—the ability to summarize a complex topic, say, or formulate an argument—are the same skills humans will need to assess the validity of those summaries and arguments. Indeed, the power and potential of ChatGPT will require humans to become smarter, more aware and less credulous than we have been to date.

Perhaps just as important as strengthening our cognitive skills will be the need to double down on developing our non-cognitive abilities, and instructors will have to model the behaviors they want their students to emulate. Ray Schroeder, a senior fellow at UPCEA, the online and continuing education association, urges faculty to “continuously grow our own personal, uniquely human, capabilities such as our ethos, empathy, care and insight into our fellow humans. These will continue to set us apart from AI, for a while.”

The ability to experience feelings and sensations, which no trustworthy source has yet claimed for ChatGPT, continues to be the dividing line between Us and It. “The difference between the AI and the human mind is sentience,” says Boris Steipe, a professor of molecular genetics at the University of Toronto. “If we want to teach as an academy in the future that is going to be dominated by digital ‘thought,’ we have to understand the added value of sentience—not just what sentience is and what it does, but how we justify that it is important and important in the way that we’re going to get paid for it.”

Hopefully, becoming more human will help steer us away from one of the gravest sins fomented by the AI revolution: having less trust in our students. Yes, some of them are blithely using ChatGPT without regard for the ethical implications of cheating, but many others are not, and we do the educational process a disservice when we carelessly assume the worst about what is happening outside our classrooms.

Two recent Washington Post articles describe the damaging effects of AI-related mistrust. In one, a high school senior has her essay erroneously flagged by Turnitin’s new AI-writing detector. The student, Lucy Goetz, comments that being “caught” by Turnitin, which claims only 98% accuracy, is frightening because “There is no way to prove that you didn’t cheat unless your teacher knows your writing style or trusts you as a student.” As the article’s author points out: “Unlike accusations of plagiarism, AI cheating has no source document to reference as proof.”

And then there is memoirist and teacher Brian Broome, who asked his students to write a poem, then reflexively assumed one of them had cheated because the poem was so strong yet came from “a taciturn and unassuming young male student.” When he realized his mistake, Broome apologized to the student, who was “flattered by the praise” and indicated that “he wants to write more.” Broome concludes: “I can only surmise that, because of my mistake, he now knows he’s quite good at [writing]. And if that’s what progress looks like, I’ll take it.”