Bits on Bots: How to AI-Proof Any Assignment
Jennifer Duncan has been teaching English for twenty years, first at Chattanooga State Community College in Tennessee and now at Georgia State University’s Perimeter College in Atlanta, GA, where she is Associate Professor of English. For the last ten years, she has focused exclusively on online teaching, concentrating on creating authentic and meaningful learning opportunities for students in composition and literature courses.
Okay, this title is obviously a false promise. As we move through a year of AI in our classrooms (or of trying to keep AI out of our classrooms), we can agree that it’s unlikely we can AI-proof all our assignments; by the time you finish reading this blog, AI will likely have learned to do something new.
There are, of course, some instructors who will revive their old-school writing lessons, and there is absolutely nothing wrong with returning to paper-and-pen compositions. In fact, we know there are important cognitive links between the physical act of writing and learning. Handwritten assignments, however, do not fully address the problem of AI plagiarism, and students still need the skills of writing and delivering content in the digital age. Instead of implementing wacky formatting guidelines or having students hide key words in texts to prevent pasting into ChatGPT, I’m creating assignments that outsmart the AI by moving students into higher-level thinking. The basic premise of my new process is simple: find out what the AI can do and ask the students to do one thing more. In Bloom’s-speak, I ask them to perform one task higher on that ubiquitous chart.
But how do we know what AI can do? Just ask it, and, like a villain willing to reveal the plan in an elaborate monologue, AI will tell you everything you need to thwart it. Feed it a writing assignment and prompt it to break the assignment into individual tasks, identifying the cognitive skill required to complete each one. Next, ask: “Which of these cognitive skills are you able to mimic? Which of these tasks can you complete? What are your weaknesses?” ChatGPT is great at remembering, understanding, applying, analyzing, evaluating, and creating – at least, that’s what it told me when I asked which levels of Bloom’s taxonomy it was able to mimic. Recalling facts, summarizing information, identifying patterns or trends, and differentiating between fact and opinion are all part of a research process, but they are not the whole of the writing experience.
According to the monster itself, ChatGPT has no emotional intelligence or creativity. Because it was trained by indiscriminately consuming massive amounts of data, it can only imitate; it cannot generate novel ideas or solutions. It does not fully understand context the way humans do, and it has no independent critical thinking skills. So when we target our assignments at the top of the pyramid – where writing teachers tend to play anyway – we design assignments that the generative LLM can’t yet mimic.
Here's an example: I routinely ask students to create an annotated bibliography as part of their research process. What parts of this are easily produced by ChatGPT? It can summarize and evaluate the credibility of a source, and, if students take the time to feed it the entire text of all of their sources, it may even be able to identify patterns between them, so I allow the students to use the AI to do those steps. They must properly cite their AI usage, and, of course, they must take responsibility for the accuracy of the summary. If the AI hallucinates, it’s up to them to detect it.
Now, it’s time to level up the assignment. Require students to explain why they chose their sources (emotional intelligence and critical thinking) and how those sources connect to the larger argument the student is trying to make (contextual understanding). Ask them to connect each source to at least one other source, explaining how the two fit in conversation. This gets students beyond basic understanding and puts them in charge of applying and evaluating, which, in this case, also includes reflection and contextualization.
When I tested this out with my students, their annotated bibliographies deepened in terms of their critical thinking, and they were much more selective in terms of what they added to their source lists. Their focus was less on correct MLA format (a skill that is entirely useless to them outside of my class) and more on how to use sources to build their argument. Students also reported that they learned “how to use AI as a tool and not as a crutch” and that the requirement to do more self-reflection during the assignment gave them “a deeper understanding of [their] capability to be creative in [their] writing.” Wasn’t that the point of the assignment after all?