“New forms of media have always caused moral panics.” So begins a 2010 op-ed Steven Pinker wrote in response to concerns about the potential negative impacts of digital technologies on habits of reading and thinking. I began assigning that essay, along with Nicholas Carr’s point-by-point response, in my FYC courses in 2011, but by 2017, their argument appeared to have lost its relevance. My students engaged, instead, in discussions of influencers, fake news, fact checkers, TikTok, social media regulations, Elon Musk, and Twitter.
But this past fall, I started to hear rumors of something new, something different, and something that might herald the demise of writing instruction (or even higher education) as we know it. This apocalyptic AI tool is called ChatGPT. Since it was made publicly available in November of 2022, we have seen a steady stream of essays, blogs, Tweets, chats, webinars, and podcasts addressing the perils and possibilities of the chatbot (as in the growing collection of resources here). At my department’s faculty meeting last month, we got a quick demonstration, followed by suggestions for policies and syllabus statements.
At that January meeting, with deadlines looming and a new semester about to begin, some of us were thinking, “What is this? I do not have time to deal with yet another disruption to my syllabus and pedagogy! Is it really such a big deal?”
And as I looked at my blog schedule for this semester, I considered asking ChatGPT to compose this post for me—knowing I could then meet the deadline and make a point (even though I wasn’t sure what the point should be). Instead, I asked for a title. Here’s what it generated: “Navigating the Boundaries: The Perils and Promise of ChatGPT in College Writing Classrooms.” It sounds catchy enough.
The best way to learn about ChatGPT is to try it for yourself. Take just a few minutes (it’s fast!) and give it one or more of your writing prompts or exam questions. Query the program about a research project or request a title for your next blog post. Or you can just ask it how it works:
Once you’ve played with the program a bit, compare experiences with colleagues and friends. I’ve found colleagues and family members using ChatGPT as a search engine (“How can I teach semicolons creatively?” “What’s a good sermon illustration on overcoming anxiety?”), as a study and preparation aid (“What sorts of job interview questions should I expect based on this job description?” “Give me some practice sentences to transcribe in IPA, along with the answers”), or as a source of writing models (“Write a thank-you note as a follow-up to my job interview”). Colleagues have also asked it for definitions, outlines, and group activities.
After a short time exploring, you will probably encounter the program’s accuracy problem. AI experts call this hallucination: at times, the program will simply make things up—sources, quotes, or statistics. I recently gave the program a discussion board prompt I use in my FYC class. Here’s what it generated:
Unfortunately, the quoted sentence does not actually appear in Alexie’s essay. So, I pointed this out to ChatGPT:
Yet again, the quoted sentence does not actually appear in the essay, although it is certainly related to the theme of the essay. After a second “confrontation,” ChatGPT stated that it had “misunderstood the prompt,” something my students have said before, too!
With that in mind, I’ve invited students to explore the technology with me this semester. They will find the “hallucinations,” and we’ll discuss privacy and integrity. My students will be assessing an essay generated by the chatbot, and they will experiment with ChatGPT as a brainstorming tool (considering if and how to cite the results). We will also ask the program to provide feedback on a draft and compare its feedback with our own. We’ll ask some tough questions: how does the technology support our thinking—and how might it limit our thinking?
And perhaps I’ll ask students to revisit the Pinker/Carr debate. After all, our technological tools can exert a profound influence on the way we interact with the world and the way we think. Pinker closed his celebration of digital technologies this way:
The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.
In the age of ChatGPT, I wonder if my colleagues and students will agree. I’m still thinking about it.
What about you? If you have insights or classroom ideas, please share!