Reflection Assignments and Open Disclosure of AI
In a recent Chronicle of Higher Education piece, Marc Watkins shared an AI-Assisted Learning Template that he uses to encourage critical reflection and open disclosure of AI use on writing assignments. I find much to admire in his approach: he pushes students to consider assignment goals and the extent to which an AI tool helped them achieve those goals. In short, he asks them to reflect critically on their learning.
And to the cynics who suggest that students will just use generative AI to compose these evaluations, Watkins admits the possibility. But he suggests most students are “eager for mechanisms to show how they used AI tools,” and he clearly wants to resist “adopting intrusive and often unreliable surveillance” in his classrooms.
I applaud the learning focus embedded in his assignment design, as well as his rejection of detection-obsessed pedagogy. But I still have concerns.
First is the notion that we “teach ethical usage” primarily by ensuring students do not cheat. Discussions of ethical AI use must include transparency about what we ask the AI to do and how we use the outputs, of course, but there are other ethical issues as well. In my current corequisite FYC course, for example, we are looking at how generative AI platforms were developed and trained: was labor (particularly that of minority groups), privacy, or copyright abused in the process? Will my use of the AI expose my words or data in ways that I cannot control? If I want to use the tool to brainstorm for a few minutes, are there environmental, labor, or privacy costs to that session? Is there a risk that my use will perpetuate biases or misinformation? Ultimately, is the AI the best tool for the task at hand? If not, is using it for the sake of convenience an ethically defensible choice? My students and I have also asked this question: do we have a right to know when we are interacting with synthetic (i.e., machine-generated) text or images?
The AI-Assisted Learning Template could be modified to include some of these reflections, so that learning issues are coupled with broader ethical concerns. I have included a similar exercise with the AI-themed assignments in my FYC course this term. But another issue lingers for me: how much reflection will actually occur in assigned self-evaluations?
McGarr & O’Gallchóir (2020), Thomas and Liu (2012), Ross (2011; 2014), and Wharton (2012) are just some of the researchers who have suggested that assigned (and assessed) reflections are inherently performative: students may be just as much focused on managing their perceived image through skillful self-positioning as engaging in deep reflection. Students want to position themselves positively, as those who made a good effort and determined to learn something—deploying what Thomas and Liu (2012) call “sunshining” to characterize their efforts, challenges, potential failures, and ultimate progress.
My own research on the language of student reflections suggests that students make linguistic choices that distance them from agency over decisions and outcomes. They also foreground what is perceived as desirable: effort, open-mindedness, resourcefulness, and willingness to learn or grow. Many find it hard to acknowledge uncertainty about what they learned in a given assignment; after all, doing so might be perceived as criticism of the assignment or as a failure to learn, and either admission is risky for students. But learning insights don’t always arrive on schedule, packaged neatly for submission by the due date. Still, my students have been trained since middle school to assert with confidence (and with deferential gratitude to the teacher who provided the opportunity) that they have indeed learned, just as they were supposed to. Check that box.
I am not suggesting that the template is without value or that reflective writing needs to be scrapped altogether. FYC certainly has a rich history of reflection-infused pedagogy that cannot be ignored. But as we adopt (and adapt) technology, tools, and templates, let’s consider how to ensure that students are empowered to question those very technologies, tools, and templates, promoting honest and authentic reflection insofar as that is possible.
How do we do that? I don’t have all the answers, but I agree with Marc Watkins: we should “shift focus away from technology as a solution and invest in human capital.” Ultimately, he says, we should honor the “irreplaceable value of human-centered education.” Exactly. Thanks to Marc for opening space for this conversation.
Photo by Solen Feyissa on Unsplash