Writing as a Process of Discovery with ChatGPT
Neurodivergent Teaching
Like nursing professor Kim Mitchell, I was initially intrigued by ChatGPT and, to be honest, annoyed as well. ChatGPT seemed like yet another new variable to consider in returning to in-person teaching in the wake of the pandemic, not unlike inadequate classroom heating and ventilation systems. But I wanted to avoid tapping into the deep anxiety around ChatGPT as an existential threat to my job as a writing teacher. Budget cuts and austerity are actual existential threats. ChatGPT, at least for the moment, just seemed like another new technology I would need to learn.
Just as Professor Mitchell has done, I tried an experiment early this semester and broke the process into steps. The steps are a means of organizing my thoughts, and also of explaining to first-year writing students my own processes of learning about AI. Here’s what I did, as recorded in my teaching journal:
- I asked ChatGPT to write an essay prompt for me based on the initial reading for Writing Project 1.
- I had to ask ChatGPT to do several revisions. Most of the prompts invited students to write very general essays that could avoid engaging with, or even using, the course reading for the first essay. In other words, there was a real possibility that the writers' essays would be as generic as the prompts. For example:
- For several prompts, writers could give the appearance of writing an analytic essay while offering little to no engagement with the course reading.
- When I asked ChatGPT to include a brief narrative component in the prompt, it produced a prompt that could be written as a narrative, again with no engagement with course reading.
- When I asked ChatGPT to add a component on how the 20th-century author of the reading might have revised their work to account for current events in 2024, ChatGPT replied that my request was “too speculative” to use in a writing prompt.
- I admit that ChatGPT hurt my feelings with that response, but it was very instructive for understanding the limits of AI.
- Based on ChatGPT’s inadequate responses, I tried to fix my own Writing Project 1 prompt to make it as impervious as possible to the AI’s machinations.
- This included emphasizing the speculative section as an important part of the Writing Project.
- I fed my revised prompt with specific requirements into ChatGPT. ChatGPT returned an extremely general essay that did not address the specific requirements.
- Although writers asked for model essays for Writing Project 1, I decided to wait to provide sample essays until after discovery drafts were composed.
- While conventional wisdom suggests that Gen Z learns best with specific directions, including “models,” I still want writers to have ungraded opportunities to wrestle with writing as a means of discovering their own thinking in response to a prompt.
- Is this an open invitation to use AI secretly? Even if it is, the next step in the process seeks to ameliorate that possibility.
- After discovery drafts were completed, I offered writers grading criteria for Writing Project 1, and two sample essays to grade.
- In small groups, I invited writers to discuss what grades they would give the essays, and why.
- I stressed that the essays were samples, NOT models.
- For the two samples, I used the ChatGPT essay and a strong student essay from a previous semester. I did not tell writers that the first essay was composed by AI.
- In class discussion afterward, writers and I compared notes about grades. Almost universally, both essays earned “A”s from students.
- Both essays were detailed and offered many examples. The first essay (the ChatGPT essay) even used subheadings.
- Not surprised by the results of this experiment, I revealed that the first essay was composed by ChatGPT and that it would have received an “F” because it did not follow the grading criteria or the requirements of the prompt:
- At first glance the essay seemed perfect, with no mistakes. Yet the actual views and feelings of the writer were absent. The audience could not discern the writer’s attitude toward the subject of the writing.
- The many examples were very general. The essay listed examples rather than explaining specifics.
- There were no quotes or summaries or paraphrases from the original source.
- There were no in-text citations and no Works Cited list.
- Although it seemed long, the essay was generated in 15-point font, while the requirement specified 12-point font.
So did the experiment “work”? Yes, or at least I think it did, initially. I hope that I conveyed, with the help of ChatGPT, that writing is more than a finished product. Writing is also a process of growth and discovery, and perhaps the ever-shifting processes of writing are the most difficult lessons to teach and learn. ChatGPT was great at offering products, both prompts and completed essays. But writing as an embodied experience is not (yet?) within ChatGPT’s range.
With that in mind, the experiments–and hopefully the discoveries–continue.