Multimodal Mondays: Questioning AI – Where Are We Now?



Kim Haimes-Korn is a Professor of English and Digital Writing at Kennesaw State University. She also trains graduate student teachers in composition theory and pedagogy. Kim’s teaching philosophy encourages dynamic learning, critical digital literacies, and multimodal composition. She focuses on students’ powers to create their own knowledge through language and various “acts of composition.” You can reach Kim through her website: Acts of Composition






Jeffrey David Greene is an Associate Professor of Professional and Creative Writing at Kennesaw State University. His research focuses on game studies, narratology, and first-year composition. He is currently the RCHSS Faculty Fellow of Academic Innovation focused on AI in Education.




Although versions of AI technology have been around for a while, the accumulated conversations are reaching a peak as we try to make sense of the chaos, hesitantly embrace it, and question its impact. What started as a fringe conversation has worked its way into many of our cultural and educational institutions. These conversations have now reached mainstream status and are on the minds of teachers, students, and a range of professional and personal communities.

Almost a year ago I wrote the post, “What We Fear, We Draw Near: Challenging AI and Chat GPT” (March 2023).  In that post, I contemplated questions in uncharted territories with both cautious optimism and a mind towards exploration of possibilities:

We will experience disruption and this tool does present us with real concerns. It is undoubtedly a major paradigm shift that asks us to rethink much of what we know about the teaching of writing. We wonder how it will challenge issues of plagiarism and intellectual property. We recognize the potential threat to students’ abilities to write and think critically on their own. We worry about a world where creativity is merely an algorithm, and the humanity of our discipline is lost. 

So, where are we now, a year later? After an influx of articles, conferences, new tools, and worldwide conversations, we are starting to figure it out. We are approaching it from different angles: policies, practices, and new ethical considerations both in and out of the classroom. My university has created channels for discussing the implications of AI. My colleague, Jeff Greene, is at the center of these conversations on our campus and received an AI fellowship to take a deep dive into the complexities of AI and develop an AI toolbox for teachers and students. Jeff and I speak often, and lately our conversations have focused on questions arising from this work. As a teacher of multimodal composition and a mentor for new teachers in the field, these questions are important to me. I am happy to collaborate with Jeff on this post and grateful that he has allowed me to pick his brain with some of the driving questions that have defined our discussions. What follows are some of these questions along with his answers:

Q: What do multimodal composers need to know/consider about AI?

A: That there are many opportunities, pitfalls, and ethical considerations around using Gen AI for multimodal projects. On one hand, you have a technology that can help students develop images (Dall-E, Midjourney), audio (MusicGen, AudioGen), or text (ChatGPT, Claude) very rapidly, using only natural language prompts. It’s extremely powerful for developing a variety of content; on the other hand, there are significant ethical issues in how these models were developed, trained, and deployed.

Q: How can we integrate AI into our multimodal classrooms to enhance learning experiences?

A: As an instructor, the first step is to decide exactly how much (or how little) Gen AI is appropriate in your classroom given your pedagogical goals and the needs of your students. At a bare minimum, I think instructors need a syllabus statement on AI that lays out their expectations for AI use in their class. Below are several helpful resources put out by different institutions to aid you in crafting a syllabus statement:

Next, an instructor may want to consider adding specific AI statements to individual assignments. Different multimodal assignments likely require different levels of Gen AI. For some assignments, you may want to be maximally restrictive when it comes to Gen AI; for others, you may actually encourage students to use AI in specific ways, such as using Dall-E to develop images for a digital storytelling assignment.

Finally, if you’re going to develop an assignment or unit that deploys a specific Gen AI platform, consider offering class time for tutorials and experimentation, and be aware that many of these tools are in beta and can rapidly go from “free” to “paid” status. This happened to me mid-semester with an AI tool, and it threw a wrench into my course preparation and plans.

Q: What ethical considerations and conversations do we need to bring into composition classes—particularly in relation to content creation and multimodal composition?

AI-generated image from the prompt: “Make me an image of an erudite chihuahua in a lab coat grading essays.” GPT-4/Dall-E, OpenAI, 21 Mar. 2024.

A: There are so many ethical considerations when it comes to Gen AI right now that it’s hard to find a place to begin.

Students first need to understand how the technology works (on a basic level) and then the ethical considerations in terms of how these models were trained. It’s integral that everyone (not just students) understands that in building these models, companies like OpenAI fed ChatGPT massive amounts of “content” (text, art, etc.) without the consent of the millions of writers, artists, and content creators who produced it. This is a huge ethical issue that many stakeholders are currently challenging legally.

There are also ethical issues with how the human side of ChatGPT’s training was handled. For example, in order to train “toxic” material out of ChatGPT’s model, human trainers had to endure a variety of horrific content for very little pay. In addition, the tools themselves can be biased and can still display toxic or inaccurate information despite this training.

We need to encourage students to consider citation and attribution practices for both text and visual artifacts. For example, the image above was created with the prompt: “Make me an image of an erudite chihuahua in a lab coat grading essays.” It was generated using GPT-4/Dall-E and includes attribution through the citation in the caption. Creating images with your class and then discussing the copyright issues surrounding the development of Dall-E and Midjourney may be a useful way to explore the ethics of Gen AI.

For students in our classrooms, there are also originality/ownership issues when it comes to creating content with Gen AI.

For example, what does it mean to create content with AI? At what point is the content mine versus the AI’s? If I use Gen AI only as an ideation tool but create all the content myself, is the content still mine? If I write an essay in concert with Gen AI, having it punch up my sentences or suggest improvements and edits, who owns it? How do I properly cite ChatGPT? Is Dall-E actually creating anything if it’s simply deconstructing and remixing nearly endless pieces of previously published art? These questions are worth exploring with your students.

Q: How does AI impact research practices (locating sources, prompt engineering, etc.)?

A: AI can be a tremendous tool for student research as long as it’s properly introduced and contextualized. Much like conventional search engines, Gen AI can be a good starting point for a research project, whether for locating primary and secondary sources or simply for ideation. At this point, I’ve been recommending Claude over ChatGPT or Perplexity because Claude was developed using constitutional AI, which basically means it tries to avoid many of the bad things ChatGPT does, such as creating fake sources or making things up. In addition, Claude is pretty good about providing specific citations on command and about admitting when it doesn’t have a specific source for the information it has delivered.

But Gen AI can also do other cool things when it comes to research. For example, you can easily upload a spreadsheet to Gemini AI (Google’s AI tool) and have it rapidly develop visuals from a dataset. You can also give ChatGPT a rubric and have it “score” something (writing, etc.) and provide formative or summative feedback. 
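For instructors comfortable with a bit of scripting, the rubric idea can be automated. The sketch below shows one way to package a rubric and a student draft into the message format that chat-based models expect; the rubric criteria, point values, and the commented-out API call are illustrative placeholders, not a prescribed workflow.

```python
# Sketch: assembling a rubric-based feedback prompt for a chat model.
# The rubric text, point values, and model name here are placeholders.

def build_rubric_messages(rubric: str, student_text: str) -> list[dict]:
    """Package a grading rubric and a student draft into chat messages."""
    system = (
        "You are a writing tutor. Score the student's draft against each "
        "rubric criterion and give brief formative feedback. Do not rewrite "
        "the draft for the student."
    )
    user = f"Rubric:\n{rubric}\n\nStudent draft:\n{student_text}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

rubric = (
    "1. Clear thesis (4 pts)\n"
    "2. Evidence and citation (4 pts)\n"
    "3. Style and mechanics (2 pts)"
)
draft = "Video games are a storytelling medium because..."
messages = build_rubric_messages(rubric, draft)

# To send this to a model (requires an API key and the `openai` package):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

Keeping the system message explicit about *formative* feedback (rather than rewriting) is one way to align the tool with your pedagogical goals.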


These questions barely crack the surface of the complexities related to AI. Our conversations are merely a start, but we are all participants (both students and teachers) in this ongoing conversation. As Jeff points out, AI is an expanding field and conceptual framework that is constantly changing. We, as a discipline and as a culture, have much to ponder. 

Stay tuned for our next post . . . where we share a couple of hands-on, classroom activities that use AI in interesting ways.

About the Author
Andrea A. Lunsford is the former director of the Program in Writing and Rhetoric at Stanford University and teaches at the Bread Loaf School of English. A past chair of CCCC, she has won the major publication awards in both the CCCC and MLA. For Bedford/St. Martin's, she is the author of The St. Martin's Handbook, The Everyday Writer and EasyWriter; The Presence of Others and Everything's an Argument with John Ruszkiewicz; and Everything's an Argument with Readings with John Ruszkiewicz and Keith Walters. She has never met a student she didn’t like—and she is excited about the possibilities for writers in the “literacy revolution” brought about by today’s technology. In addition to Andrea’s regular blog posts inspired by her teaching, reading, and traveling, her “Multimodal Mondays” posts offer ideas for introducing low-stakes multimodal assignments to the composition classroom.