Experiences vs. Training Data (or, Why I am Still Resisting AI in My Corequisite Classes)
Have you read Susan Bernstein’s recent post about AI? In it, she shows how she uses ChatGPT to help her students understand what writing is—and what it’s not.
If asked why I don’t encourage my students to deploy generative AI in the early stages of their writing, I can certainly articulate my skepticism. But I find that anecdotes are more powerful:
One of my corequisite students and I had an amazing conversation recently. She had submitted a draft of her difficulty paper (I’ve written about it here), in which she argued that the difficulty in the article I assigned was related to the character and ability of the author (a well-known linguist). Most of my students dislike the assigned article at first, and most will freely admit in the difficulty paper that they find the topic and style to be dreadfully dull. But I had never had a student attribute both arrogance and incompetence to the author until I received this draft.
When I challenged her on the fairness of criticizing the writing ability of a well-respected scholar, she shot back with “How could he claim to be a good writer when he couldn’t articulate his point with clarity and interest to his audience?” We then talked about who his intended audience was—clearly not first-year writers. She kept pushing, noting places that led her to characterize this scholar as someone who simply liked to hear himself talk and who was infatuated with his own ideas.
Our discussion turned to ethos: how writers use language to construct an ethos (in ways that fit their audience or discipline) and how we as readers also construct an ethos for the author (interpreting in ways that align with our disciplines or social identities). Listening to her speak, I could see how her previous literacy experiences led to her negative view of the author; she, in turn, realized that her language choices in the difficulty paper would lead her audience (a professor) to construct a particular image of her as writer—an image that she might not be happy with. After our conference, she began to revise.
Her revision was stunning in its maturity. It maintained the candor and wit that she had displayed in the first draft, but she thoughtfully negotiated with a reader who might disagree, balancing confidence with humility (and a few persistent issues with mechanics).
So, let’s go back to the question of generative AI as a “writing tool” in corequisite classes. Had my student turned to ChatGPT to assess the difficulties in this particular scholarly essay, we would never have discussed language choices, the construction of an ethos, and the ways our experiences shape our reading. I am not sure what either of us would have learned.
Synthetic text—the output of generative AI—cannot reveal a consciously constructed ethos, for the algorithm which composed it does not think. As meaning-making creatures, we may of course attribute an ethos to such synthetic text, but the text remains synthetic, “extruded” (as linguist Emily Bender calls it), and not constructed via intentional lexical, syntactic, and rhetorical decisions.
I suppose it’s tempting to suggest that turning to AI will help my student “discover” potential connections or approaches she might not otherwise have considered; after all, the training data that feeds generative AI is far more extensive than what she could have possibly experienced for herself. But she has had experiences that have shaped her thinking, not “training data.” Those experiences are rich with memory, context, nuance, and emotion—and my hope is that her writing experiences in my class will add to that store of connections, so that she can draw on those when she writes in the future.
I’ve heard that students ask ChatGPT for easy-to-understand summaries of difficult articles so that they can grasp important concepts more quickly. And certainly, there are times when I use abstracts (composed by authors) to determine whether I want to labor through a complicated text. But I wonder what is lost—and what formative experiences we will not have—when we bypass the work (and joy) of difficult reading.