Multimodal Mondays: Practicing AI: Developing a Critical Eye


Kim Haimes-Korn is a Professor of English and Digital Writing at Kennesaw State University. She also trains graduate student teachers in composition theory and pedagogy. Kim’s teaching philosophy encourages dynamic learning, critical digital literacies, and multimodal composition. She focuses on students’ powers to create their own knowledge through language and various “acts of composition.” You can reach Kim at khaimesk@kennesaw.edu or visit her website: Acts of Composition.


Jeffrey David Greene is an Associate Professor of Professional and Creative Writing at Kennesaw State University. His research focuses on game studies, narratology, and first-year composition. He is currently the RCHSS Faculty Fellow of Academic Innovation focused on AI in Education.


This post is the second in a series in which I collaborate with my colleague Jeff Greene to reflect on AI and the issues surrounding this paradigm shift. In our last post, Questioning AI, we looked at some of the larger questions affecting policy, ethics, and pedagogy. In this post, we focus on application and offer some ideas for classroom practice. Both of our assignments help students develop a critical eye and rhetorical awareness toward AI-generated content.

Kim’s Assignment: Revising for Humanity

Finding the Humanity. Photo by Drew Dizzy Graham on Unsplash

I think the most disturbing prospect of AI for writing teachers is that it will replace human composition and get in the way of students’ abilities to think and write both critically and creatively. We are concerned about students skipping over the complicated learning processes involved in idea generation and rhetorical awareness. I have played around with AI and considered the rhetorical nature of the content. It is hard to put my finger on it, but when I read AI-generated writing or view an AI-generated design, there is a recognizable lack of authenticity. The writing is too patterned. The language is stiff. The art feels static and overly edged. It is missing a sense of humanity, a sense of human realness and creativity.

In this assignment, Revising for Humanity, I encourage students to look closely at AI-generated texts and think about what makes them human and what makes them machine. The purpose of the assignment is to get students to think about revision in a new way, through the lenses of humanity and rhetorical awareness.

Steps to the Assignment:    

  1. Have students familiarize themselves with an AI platform.  Ask them to experiment with generating texts on subjects of their interest. 
  2. Teach them “prompt engineering” through follow-up questions. Guide them to recognize the relationship between their ideas and the kinds of prompts they generate. Asking good questions gives them agency in the composition. Check out and share this article on prompt engineering from Harvard’s teaching resources.
  3. Individually or in groups, analyze a chosen AI text.  I like them to approach it rhetorically and look at style, content, and context along with other rhetorical lenses. 
  4. Identify areas in the text to “revise for humanity”: places where the voice feels flat, the language doesn’t feel right, or the content feels awkward. Have them look for patterns in style and logic. Use a Google Doc to identify areas and comment. I like to have them screenshot the marked-up version to submit in their reflection.
  5. Next, they can extend the ideas by searching for real 😉 sources that bring meaning to the text.
  6. Encourage them to substantiate through their own experiences and perspectives.
  7. Finally, ask students to revise the text based on what they have learned. 
  8. Reflect. I think this assignment is best wrapped up with reflection and class discussion in which students articulate why and how they Revised for Humanity. They can share their marked-up versions to explain their choices.

Jeff’s Assignment: Flex Your Bot**bleep** Detector

One of the scariest aspects of Gen AI is the ease with which fake information can be generated with a few keystrokes. Propagandists are already harnessing this technology to create misinformation and flood the internet with a cacophony of “bot**bleep**,” a term defined as “information created by Generative AI with no regard for truth and then used by humans (with no regard for truth either) to persuade others” (Kowalkiewicz).


With this proliferation of misinformation in mind, it is important that our students critically analyze Gen AI content from the ground up, with an understanding of how LLMs function and why AI-generated content always needs to be viewed through a critical lens. The goal of this brief exercise is to help students develop a stronger bot**bleep** detector.

Steps to the Assignment

  1. Have students start by engaging in short, extemporaneous writing on their relationship with Gen AI and research: What do they know about the technology? Do they feel that the content it generates is trustworthy? Why or why not? What part could or should AI play in knowledge generation or research?
  2. Introduce the complexities of Gen AI and the concept of “bot**bleep**” through this full article or slide deck, in which authors Timothy Hannigan, Ian P. McCarthy, and Andrew Spicer describe the process by which an AI can “hallucinate” and generate bot**bleep**. The slide deck does a particularly good job of explaining how LLMs function and how they should be treated to avoid epistemic risk.
  3. Next, we’ll try to engineer some bot**bleep**. Have students log into a free LLM (or a paid one, if supported by the university and available equitably to all) and try to get the AI to answer a research question. You may find that near-future predictive questions, such as “Please describe the economy of Paraguay in 2027,” have a high propensity to generate truthful-sounding bot**bleep** from many LLMs.
  4. Students should then probe the AI further with requests for more information, citations, and critical interrogations about the provisional knowledge that the AI has generated. Students should attempt to verify all information gathered through different sources such as direct citations (if they exist!).
  5. Finally, ask students to write reflectively on the initial information generated by the AI. Was it verifiable information from the start, or did it just sound truthful? How much of the information was fabricated? In what ways could the information be verified independently? How should this output be treated critically? How might they use (or not use) a chatbot in a research project given their knowledge of its propensity toward bot**bleep**?
  6. If you intend to have students use Gen AI for research, it may be worthwhile to discuss approaches and expectations for using these tools. One additional resource comes from the MLA-CCCC Task Force for Evaluating the Use of AI in Scholarship and Creative Activity. Their initial guidance on transparency, accuracy, source attribution, responsibility, originality, and quality makes for a good starting point in these discussions.
About the Author
Andrea A. Lunsford is the former director of the Program in Writing and Rhetoric at Stanford University and teaches at the Bread Loaf School of English. A past chair of CCCC, she has won the major publication awards in both the CCCC and MLA. For Bedford/St. Martin's, she is the author of The St. Martin's Handbook, The Everyday Writer and EasyWriter; The Presence of Others and Everything's an Argument with John Ruszkiewicz; and Everything's an Argument with Readings with John Ruszkiewicz and Keith Walters. She has never met a student she didn’t like—and she is excited about the possibilities for writers in the “literacy revolution” brought about by today’s technology. In addition to Andrea’s regular blog posts inspired by her teaching, reading, and traveling, her “Multimodal Mondays” posts offer ideas for introducing low-stakes multimodal assignments to the composition classroom.