ChatGPT Take 2: or Meet the New HAL, Same as the Old HAL

By Jack Solomon

[Photo: a student reading a textbook with a phone in lap, browser open to ChatGPT]

A number of years ago, in those long-lost days before Turnitin.com, I found myself experiencing a distinct sense of déjà vu as I was reading a batch of student papers written in response to an assignment in my literary criticism and theory class.  Haven't I read this sentence before? I asked myself, and decided accordingly to conduct a Google search on it to see what would happen.  You already know where this is headed:  the search turned up an entire student essay that had been posted online in response to a similar assignment at a different university.  This compelled me to look over all the papers I had already read and graded, while putting me on the alert for those that I hadn't read yet.  In the end, I found five papers, out of a class of 35 students, in which pretty much the entire online paper had been copied and presented as original student work.

I am reminded of this experience as I ponder the significance of an opinion piece by Inara Scott (an associate dean for teaching and learning in the College of Business at Oregon State University) published recently in Inside Higher Ed.  Entitled "Yes, We Are in a (ChatGPT) Crisis," Scott's essay is a clarion call to everyone in higher education about just how big a problem (not "challenge"; problem) ChatGPT already is.  Lest you think that I am exaggerating, I am going to quote an entire paragraph of Scott's in full:

"Back in January, I, like many others, thought we could design our coursework to outwit students who would rely on AI to complete their assignments. I thought we could create personalized discussion questions, meaningful and engaged essay assignments, and quizzes that were sufficiently individualized to course materials that they would be AI-proof. Turns out, I was incorrect. Particularly with the arrival of GPT-4, there is very little I can assign to my undergraduates that the computer can’t at least take a stab at. Students may have to fill in a few details and remember to delete or add some phrases, but they can avoid most of the thinking—and save a lot of time. GPT-4 can write essays, compare and contrast options, answer multiple-choice questions, ace standardized tests, and it is growing in its capacity to analyze data—even a lot of data—that is fed to it. It can write code and make arguments. It tends to make things up, including citations and sources, but it’s right a lot of the time" ("ChatGPT Is Causing an Educational Crisis (Opinion)," insidehighered.com).

What I find especially striking about Scott's observations is how they go beyond the concerns of composition instructors to encompass, at least potentially, pretty much every subject taught in our universities.  The apparent fact that ChatGPT can take multiple-choice and standardized tests, as well as write code, indicates that it has already invaded the terrain of STEM coursework, which can be heavily dependent on multiple-choice and standardized testing.  At the same time, as Scott points out, there appears to be no way that instructors of wholly online courses can control the situation at all, short of failing every exam that shows over 50% AI-generated content.  This would probably not go over very well with administrators who have come to rely on online course offerings more than ever, in what I have heard called the "post-COVID era," and who are also under extreme political pressure to show ever-increasing levels of "student success."

Now, Scott is so worried about the future of higher education in such a climate that she offers both short-term and long-term solutions to the problem as she sees it—solutions that I will let you read and judge for yourself.  For myself, I will only say here that what America is facing today is not only an educational crisis; it is a cultural crisis of immense significance.  For here is a paradigm shift to beat all paradigm shifts, a prospect that seems to fulfill the nightmarish vision of Kubrick and Clarke's 2001: A Space Odyssey.  Will our class rosters present us with students who are all (in effect) named HAL?  And with ChatGPT writing code, will even the white-collar, postindustrial workforce also be composed of indistinguishable HALs?

I would suggest asking your students to write an essay contemplating such a world, but since the likelihood is that I would only prompt a wave of AI-generated essays, I will refrain from that.  It would make a dandy in-class discussion topic, however.

And, not so by the way, this blog, though written on a computer, is entirely human authored.  But if some bot were to pick it up and toss it into the giant aggregation maw that feeds AI development, it could end up in a student essay some day in response to an assignment on artificial intelligence and culture change.

Oy vey.  Somehow, I don't think that HAL would say that.

Photo by Shantanu Kumar (2023), used under the Unsplash License.

About the Author
Jack Solomon is Professor Emeritus of English at California State University, Northridge, where he taught literature, critical theory and history, and popular cultural semiotics, and directed the Office of Academic Assessment and Program Review. He is often interviewed by the California media for analysis of current events and trends. He is co-author, with the late Sonia Maasik, of Signs of Life in the U.S.A.: Readings on Popular Culture for Writers, and California Dreams and Realities: Readings for Critical Thinkers and Writers, and is also the author of The Signs of Our Time, an introductory text to popular cultural semiotics, and Discourse and Reference in the Nuclear Age, a critique of poststructural semiotics that proposes an alternative semiotic paradigm.