Bits Blog - Page 2
mimmoore | Author | 11-06-2024, 08:27 AM
While reviewing first drafts of a source-based essay from my corequisite students, I noticed how often they introduced quoted material without much integration or context. In general, they had mastered the use of a signal phrase, but they used such phrases without providing any context to indicate the relevance of the source or the expertise from which the author spoke (again indicating that the internet is, for many of them, a flat landscape).

In a subsequent class, I revisited source integration via short summaries and author credentials. At a few minutes after 8 o’clock that morning, I could see I was rapidly losing the class. I stopped and asked them a simple question: “How many of you took the interstate to get here this morning?” Several raised their hands. “So did you use an on-ramp when you got on the interstate?” Nods all around. I asked them what would happen if they skipped the on-ramp and tried to enter the interstate using a right turn. “We’d hit someone and cause an accident,” they laughed. Exactly: you have to speed up and merge into the flow of traffic. I explained that introducing quoted material is similar to a car merging onto the interstate: it has to merge into the flow of traffic by using the full on-ramp. The proverbial on-ramp in writing is the way you set up a context for the quoted material. Did the analogy lead to sophisticated integration of quotes? Of course not. But in the next drafts, I could see the students’ attempts, however rough, to add context.

Analogies are ubiquitous in college classrooms. After all, students make sense of new concepts and skills in light of what they already know. And over my career as a writing teacher, I have often resorted to analogies to teach writing concepts: a choppy paragraph is similar to what happens when I try to drive a car with manual transmission (being proficient only on an automatic), or like trying to climb a stairwell with missing steps.
I explain the information deluge that occurs when corequisite students neglect paragraphing or end punctuation this way: what if I am asked to drink an entire pot of coffee at one time, rather than in measured sips? I’d choke on one of my favorite things! Sentences weighed down by unneeded adjectives or adverbs remind me of swimming in the Gulf of Mexico when the seaweed is heavy: it’s hard to see where I stand because the seaweed obscures the sand. And of course, reading a text whose author has not yet found a thesis reminds me of what happens when my GPS malfunctions: I just don’t know where I am going. Or perhaps it’s more like being in a corn maze in the fall: I have no sense of where to go next because I don’t know what the big picture is. If the writer could just take the view from above, from a hovering drone, I would be much more willing to follow.

There are days when I wonder whether I am (at 57) too far removed from the experiences and digital spaces that my students share to make meaningful connections with them. But I have found that analogies cross generational divides, even as vocabulary and lived experience seem further apart. Shared laughter over my silly analogies, or my students’ adaptations of those analogies, is a reminder of what has not changed, and what we have in common. What are your favorite analogies in the writing classroom? I would love to hear from you.
guest_blogger | Expert | 11-01-2024, 01:52 PM
by Christina Davidson, University of Louisville

This post is part of an occasional series affiliated with the Writing Innovation Symposium (WIS), a regional event with national reach that takes place annually online and in Milwaukee, WI. In 2024, Christina was a Bedford/St. Martin’s WIS Fellow. Learn more below and in posts tagged “writing innovation” and “WIS.”

Large Language Models (such as ChatGPT) became widely available in November of 2022. Since that time, students have been exploring their use and are eager to learn more. As a veteran composition teacher and member of a WPA team, I hoped to find a way to address student GenAI curiosity in my own classroom. AI and Writing (2023), Sidney I. Dobrin’s classroom-friendly text, was exactly what I needed. In the second half, which focuses on “opportunities and applications,” Dobrin borrows a powerful metaphor from GenAI expert Cath Ellis, who compares approaches to writing to ways of summiting Mt. Everest: writing is akin to climbing the mountain, while using GenAI is akin to riding a helicopter to the top (60). Each option leads to the summit or goal, yet they provide contrasting experiences, and very different opportunities to learn.

Together, Dobrin’s book and Ellis’s metaphor gave me an idea for getting writers to think about GenAI and the writing process. The 75-minute workshop I designed, “Process, Post-Process, and GenAI,” starts with a focus on writing. In fact, I don’t even mention GenAI until the closing discussion. Instead, I start by asking participants to use materials I provide (i.e., blank paper, markers, colored pencils) to draw any task they perform in which several paths could lead to the same result. Here are a few memorable examples from my first workshop, which occurred at the 2024 Writing Innovation Symposium, held at Marquette University.
Thoughtful participants drew multiple ways to learn a new language, different methods for preparing rice, and various paths to explore inside an open-world game, just to name a few.

After drawing, participants share in small groups to kick off the discussion. This is a great time to “work the room” and see which drawings might be best shared with the entire group. The resulting conversation opens a dialogue in which participants analyze their own writing processes and how they might compare to one of the “paths” in the drawings. The conversation is not meant to imply favoritism toward a certain method or path (helicopters are certainly useful machines, as is Duolingo), but to encourage students to consider the writing process more critically. It’s the first step toward engaging with the most essential question at the close of the workshop: What might happen if I change my writing process to include GenAI?

On my campus, most FYC students are familiar with process-oriented pedagogy: from prewriting and drafting to revising, editing, and finally “publishing” or submitting their work. Our students might imagine each of the steps the hiker must complete before the summit is reached, just as they might imagine the work that goes into completing a final draft. However, we know the hike to the top of a mountain is rarely, if ever, a straight line, and our writing processes aren’t straightforward either. Just as the hiker may need to navigate a blocked trail, so, too, the writer must negotiate struggles in completing a draft.

As we close our discussion, I return to one of my favorite examples. At my first workshop, one participant drew the creative choices players make in open-world games. He charted several “mini-bosses” and side quests, which we might imagine as rounds of peer revision, writing center visits, or additional research conducted while writing a large paper.
If a GenAI tool could take the gamer straight to the credits, clearly much would be forfeited. Similarly, there’s much to be lost when a writer uses ChatGPT to create a “final” draft. The example also illustrates how a post-process approach to writing is highly contextual and social, two areas where GenAI just isn’t as helpful.

The workshop ends with reflective writing concerning our shared discoveries through discussion. I have been encouraged to hear how quickly FYC students identify the critical human element they wish to retain in writing. Most participants agree that LLMs can be useful for some writing tasks, but preserving agency over their personal writing often remains at the center of student concern.

Are you interested in fostering conversations about process and GenAI tools in the classroom? I would encourage you to try my exercise in your own classroom and to let students discover for themselves how the most memorable processes are the ones that meander and land in unexpected places. I often recall an exasperated FYC student who lamented, “It took forever to write this paper!” I quickly responded, “Lucky you!” because I knew this student had learned so much in that work. Following C. P. Cavafy’s poem “Ithaka,” I reminded her of the opening line: “As you set out for Ithaka, hope your journey is a long one.” A writing process full of discovery, invention, and reinvention is one I encourage in my FYC course, and although GenAI can be helpful on a step of that iterative journey toward destinations unknown, the understanding of what “these Ithakas” mean is known only to the writer who writes them.

The theme for WIS ‘25 is mise en place, a culinary term for putting things in place before cooking, especially in a professional kitchen. For us, it’s a metaphor for getting ready to write as well as a pathway to exploring the interrelationship between writing and food. Join us online or in Milwaukee, WI, January 30-31, 2025.
Proposals are welcome through 10/25 and, for undergraduate writers, through 12/13. Registration opens in early November.
guest_blogger | Expert | 10-30-2024, 10:00 AM
Jeanne Beatrix Law is a Professor of English and Director of Composition at Kennesaw State University, focusing on generative AI (genAI) and digital civil rights historiography. Her AI literacy work has global reach, including multiple presentations of her Rhetorical Prompt Engineering Framework at conferences like Open Educa Berlin and the SUNY Council on Writing. She has led workshops on ethical genAI for diverse institutions and disciplines at Eastern Michigan, Kent State, and CUNY’s AI Subgroup. She and her students have authored publications on student perceptions of AI in professional writing. Jeanne also co-authored The Writer's Loop: A Guide to College Writing and contributed to the Instructor's Guide for Andrea Lunsford's Everyday Writer. She has authored eight Coursera courses on genAI and, as a faculty mentor for the AAC&U’s AI Pedagogy Institute, advocates for ethical AI integration in both secondary and higher education.

In my inaugural post a couple of weeks ago, I began with November 2022, which is when the public first gained access to ChatGPT. Today, I want to fast-forward to August 2023 and report some data from research our Kennesaw State University team conducted. As the Chief P.I. for the project, I led a team seeking to measure first-year students’ attitudes towards generative AI (genAI) use in their academic and personal writing.[i] I had already experimented with AI-infused teaching, but this was a first attempt at systematic inquiry into how students might be using generative AI in their writing.

A Disclaimer

I completely understand the challenge of translating data into actionable steps, especially when introducing generative AI into college classrooms. We’re all navigating new ground here, working to sift through what the data tells us and how it aligns with the dynamic needs of our students and teaching goals.
It’s a learning curve, but together, we can explore strategies that are both data-informed and practically applicable. Engaging with data-rich content can often feel overwhelming, especially when every point seems crucial. Recognizing this, my aim here is to distill the most significant insights for clarity and relevance. Rather than inundating readers with exhaustive detail, I’ll focus on the key elements that reveal meaningful trends and implications. Let’s approach this data thoughtfully, with a critical eye on the broader narrative it suggests.

What We Did & What We Found

We surveyed students in the fall and spring semesters, distributing digital surveys through instructors in Composition I and Composition II courses. Around 1,550 students answered the surveys. Some of our findings were expected; for example, more than 92% of students reported being aware of generative AI. Many of our findings, however, surprised us. Almost 40% of students (averaged across the fall and spring collections) reported that they use genAI tools in their personal writing, while 35% reported using genAI in their academic writing. More than 75% of students surveyed believe that genAI is the future of writing. Beyond the numbers, the qualitative sentiment analysis gave us deeper insight into the nuanced understandings and writing practices of these students. You can read more about our preliminary (2023) insights in the DRC Blog Carnival.

Students felt that genAI was helpful for brainstorming and idea generation: it was beneficial for generating ideas, structuring thoughts, and helping overcome writer's block. They further appreciated genAI's ability to provide different perspectives or suggestions that can be developed further. Students commonly mentioned genAI as a useful tool for grammar corrections and refining sentence structure, and they seemed to view this practice as analogous to other writing aids that do not compromise academic integrity.
Several responses highlighted the utility of genAI in quickly gathering preliminary research or understanding basic concepts, which can help lay the groundwork for more in-depth investigation.

On the flip side, students reported negative sentiments towards genAI as well. Their responses indicated that they considered genAI use cheating when it was used to replace personal effort, particularly when it was used to write essays or complete assignments outright. Students emphasized that submitting work generated by genAI as one's own undermines the purpose of education and diminishes individual understanding and effort. There is a widespread belief that AI impedes the creative process and that relying on genAI for academic tasks can lead to a decline in students' own creative and critical thinking skills. Many students saw the use of genAI in academic settings as a moral issue, arguing that it promotes laziness and dishonesty. The concern is that it allows students to bypass learning and understanding, leading to a lack of genuine academic growth.

The trends in sentiment led us to speculate that, while students did find ethical use cases for generative AI, they also understood its limitations. For us, this was an “a-ha” moment: the lore around many faculty campfires, which told a narrative of “rampant student cheating,” simply wasn’t accurate. The students we surveyed demonstrated deeper understandings and uses of generative AI and sought guidance from faculty. In fact, more than one third of the students we surveyed wanted to learn more about genAI and would take a class on it.

Preliminary Takeaways

Initial analysis indicates that students need guidance on ethical AI use, and that faculty have a key opportunity to: help students scale ethical generative AI use; help students understand the human-centered ethics of AI outputs; and cultivate students’ digital literacies to prepare them to thrive in AI-infused workplaces.
It’s important to note that our research team was motivated by workplace and industry data showing trends in generative AI demands and in how teens are interacting with generative AI tools.

What’s Next?

In my next post, I will dig deeper into what we have used this data for so far. A preview: supporting students in their AI literacy journeys through rhetorical prompt engineering and OER custom GPTs. Stay tuned; thanks for reading.

[i] Our initial research team included Dr. John Havard, Dr. Laura Palmer, James Blakely, and myself. We have since added Dr. Tammy Powell, Ahlan Filtrup, and Serenity Hill. Our IRB#: FY23-559.
stuart_selber | Author | 10-29-2024, 10:00 AM
This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University. This is the fifth post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

- AI is evolutionary, not revolutionary
- AI both solves and creates problems
- AI shifts, rather than saves, time and money
- AI provides mathematical solutions to human communication problems
- AI requires students to know more, not less, about technical communication < You are here

To explain this tenet of the Manifesto, I want to share the results of a study I conducted with Johndan Johnson-Eilola and Eric York (Johnson-Eilola, Selber, and York, 2024). In our study, we analyzed the ability of ChatGPT to generate effective instructions for a consequential task: taking a COVID-19 test at home. In our analysis, we compared the output from a commercial prompt for generating these instructions to the instructions provided by the test manufacturer. We also analyzed the input, the prompt itself, to address prompt engineering issues. Although the output from ChatGPT exhibited certain conventions for procedural documentation, the human-authored instructions from the manufacturer were superior in many ways. We therefore concluded that when it comes to creating high-quality, consequential instructions, ChatGPT might be better seen as a collaborator than a competitor with human technical communicators.

Let me summarize our results for you. Although the robot did a good job of breaking down the overall task into subtasks and using the imperative voice, it was unsuccessful in several areas, some of them critical to health and safety. For example, the robot numbered headings and bulleted steps instead of numbering steps and using text attributes to build a design hierarchy with headings.
You probably recall dividing your stressed-out attention between a test kit and its instructions: bullets do nothing to help you re-find your place in the instructions as you go. More critically, the robot failed to specify the insertion depth for the nasal swab, and it also got the insertion time wrong. In addition, the robot failed to specify how long the nasal swab should be swirled in the tube or well. These problems could very well lead to incorrect test results, failing to reveal, for instance, that the user had contracted COVID-19. There were other problems with the instructions, but these were among the major ones.

The output, of course, reflects the input, and the prompt had many problems, too. For example, the prompt did not call for a document title for the instructions, a preview of the task, a list of items that users will need, or safety measures, all common conventions of the genre whose presence produces well-known usability benefits.

Possible solutions to these problems are themselves problematic. We might naively assume that the solution to a flawed prompt is simply better prompt engineering, that we should rewrite our purchased prompt to achieve better results. Using our expert knowledge of the conventions of the instruction-set genre (and, more to the point, the reasons for those conventions), we could write a new prompt to make the numbered headings unnumbered and the bulleted steps numbered (to make them more conventional and thus more usable), and to make sure the instructions are accurate and inclusive of all information (to account for the missing swab depth), including a clause to double-check that the information is accurate (to correct the errors in nasal insertion time and swab swirl time). The problem with this solution is that while we can rely on our expertise to tell us how to effectively rewrite this prompt, novice users cannot do that.
And even if users could effectively rewrite this prompt, ChatGPT may not be able to successfully replace the bullets with numbers, correct the insertion times, and so on. The solution to the errors introduced by the AI, then, is not more AI. Quite the opposite, we argued. Making a fully detailed set of instructions to correctly tell ChatGPT how to write instructions seems to defeat the purpose of using the technology. That is, if we can already write instructions that well and have the time and wherewithal to iterate the results until we are satisfied they will not harm anyone, then why have the robot write them for us in the first place? In sum, one danger of AI is not that it can replace highly routine genres but that it seems like it can. And depressingly, this might just be good enough for much of the world.

Reference

Johnson-Eilola, Johndan, Stuart A. Selber, and Eric J. York. 2024. “Can Artificial Intelligence Robots Write Effective Instructions?” Journal of Business and Technical Communication 38 (3): 199-212.

Conclusion

Rather than concluding with a summary of the five tenets (I hope I have been clear enough in discussing them), I will end this series of blog posts with a final point, one that is critical to any and all pedagogical activities in technical communication programs. As we all know, writing is a mode of learning and of meaning-making. Although we must prepare students for the day-to-day work of technical communication, we are doing more than just grading the effectiveness of final projects. As we develop AI activities and assignments, we should always keep our learning goals in mind. With those goals in mind, we can better tackle the “where” of AI, something we address throughout the new edition of Technical Communication. By this I mean where we teach students to use AI in technical communication work.
For example, I discourage the use of AI in first drafts because of the crucial role they play in helping students figure out what they want to say and how they want to say it. Where can AI assist learning and where might it create problems for learners? If we want to avoid outsourcing learning to technology, this is a key question for all of us.
andrea_lunsford | Author | 10-28-2024, 11:39 AM
Kim Haimes-Korn is a Professor of English and Digital Writing at Kennesaw State University. She also trains graduate student teachers in composition theory and pedagogy. Kim’s teaching philosophy encourages dynamic learning and critical digital literacies and focuses on students’ powers to create their own knowledge through language and various “acts of composition.” She has been a regular contributor to this Multimodal Mondays academic blog since 2014. She likes to have fun every day, return to nature when things get too crazy, and think deeply about way too many things. She loves teaching. It has helped her understand the value of amazing relationships and boundless creativity. You can reach Kim at khaimesk@kennesaw.edu or visit her website: Acts of Composition.

To me, as a teacher of writing and literature and as a humanist, banning books is about as bad as it gets. Book banning has always been a short-sighted attempt to control thinking and access through censorship. Of late, the issue has returned at an alarming rate, gaining strength and power. The challenged books often focus on issues of race, history, gender identity, and sexuality, among others. I think there is an argument for age appropriateness, but I argue that our students need to participate in conversations on controversial topics and issues of identity to understand the society in which they live and as part of their coming-of-age process. Sure, it is uncomfortable at times, but it speaks to specific human perspectives that we can only understand through exposure. Students must have opportunities to critically examine a full range of issues that relate to their human existence. They need to understand what is at stake and have their voices heard. Or, as best put by one of my students, “By banning books, society isn’t just removing paper and ink; it’s silencing stories, ideas, and voices that challenge, provoke, and ultimately teach us lessons” (Breedlove).
The American Library Association (ALA) provides some great resources for understanding the complexities of this issue. It offers lists of banned and challenged books, maps that show censorship by the numbers, and the Intellectual Freedom Blog, which helps “raise awareness of time-sensitive issues related to intellectual freedom, professional ethics, or the mission of the Office for Intellectual Freedom (OIF).” This group shares data along with legislative action and specific cases. For example, “OIF documented 4,240 unique book titles targeted for censorship, as well as 1,247 demands to censor library books, materials, and resources in 2023."

Each year, my department participates in Banned Books Week. We have speakers, activities, and book giveaways. This year, students in my literature class contributed to these efforts through the creation of a Banned Books slideshow. This is a simple but impactful project that raises awareness for my students and others who see it. Once we created the slideshow, it was distributed across campus on closed-circuit TVs and sent out as an instructional supplement for teachers to embed in their LMS. This kind of project has a public outreach and community engagement component that takes it beyond the classroom, increasing a sense of investment for students.

Here’s how my students and I created our Banned Books slideshow:

- Researching Banned Books: Students started by researching the history and current resources on banned books so that they understood the issues, actions, and challenged books. I shared the ALA site and the National Council of Teachers of English (NCTE) site on intellectual freedom along with lists of banned and challenged books. This allowed them to understand the context behind banning books as well as what is at stake.
- Choosing a Book: After students reviewed the resources, they chose a book and added it to a spreadsheet. There are so many choices! I asked students to review the list to avoid duplication. I also encouraged them to review both current and past titles along with the reasons the book was challenged.
- Creating the Slideshow: For this project, I used one of my favorite assignment tools, Google Slides, for creating collaborative presentations. Students easily created the project together. Each slide includes the title, publication year, author, why the book was banned, and why they consider it an important text. Students also included an image of the book cover and a citation. I allowed them to design their slides for visual appeal and rhetorical impact.

This project was meaningful for students as they discovered that many of their favorite and important books are banned. They couldn’t believe that some of their beloved childhood books, such as Charlotte’s Web (talking animals) or Where the Wild Things Are (child abuse), appeared on the list. They also reflected on the lessons learned from important texts like To Kill a Mockingbird, which contributes to our understanding of race and humanity, or Gender Queer, which teaches about gender identity and LGBTQ+ themes. They also recognized the importance of multicultural texts such as The Bluest Eye by Toni Morrison, The Absolutely True Diary of a Part-Time Indian by Sherman Alexie, and The Color Purple by Alice Walker, which represent a diversity of voices and perspectives.

Students learned about the processes for challenging books and participated in the current cultural conversations on this issue. When students completed the project, they felt the potential impact of their voices as their work was distributed to others for awareness and possible change. Students engaged deeply in the project and joined others who stand for the freedom to read.
guest_blogger | Expert | 10-25-2024, 03:45 PM
by Darci Thoune and Jenn Fishman

Since the inaugural Writing Innovation Symposium (WIS) in 2018, folks have been telling us that ours is an event where they feel seen, heard, and valued or, in a word, welcome. As Kaia Simon recalls in Community Literacy Journal, describing her experience at our first WIS: “It was my first year out of graduate school,” and “I remember feeling truly like a guest, like I had been invited and that my presence mattered.” Comments like these are important to any event organizers, but all of us involved in the WIS couldn’t be more proud because of the priority we place on hosting. It’s a central part of the WIS mission, and we’ve worked hard to make it one of our hallmarks.

When we started planning WIS ‘22, our first gathering of the COVID era, the importance of hosting was very much on our minds. After a year’s hiatus, we wanted to do more than simply reinstate the WIS. We wanted to amplify our hospitality, although we weren’t sure how. Enter our colleagues from Macmillan Learning. Thanks to Laura Davidson and Joy Fisher Williams, we were able to level up as hosts beyond our original capacity or our initial imaginings. Through our partnership with them, in 2022 we launched the Bedford/St. Martin’s WIS Fellows Program. It offers 3-5 early career colleagues mentorship and need-based financial support to attend the symposium as well as an opportunity to publish here on the Bits Blog. Over three years, the program has grown and grown, and in 2024, we welcomed our first international cohort of B/SM WIS Fellows. The roster includes:

- Abigayle Farrier, a lecturer in the English Department at the University of North Texas, who delivered the flashtalk "Who Let the Dog Out?: Therapy Dogs and Trauma-Informed Pedagogy" and shared a poster, "Collaging Humans: Reflecting on the Writing Process."
- Christina Davidson, a PhD student in Rhetoric and Composition and Assistant Director of Composition at the University of Louisville, who shared her workshop "Collaborative Writing with AI: Utilizing Design Thinking to Improve Classroom Outcomes."
- Emma Tam, a writer, interdisciplinary educator, and senior undergraduate at Minerva University, who joined the WIS from the UK as an online participant.
- Saurabh Anand, a PhD student in Rhetoric and Composition and Assistant Writing Center Director at the University of Georgia, who presented his poster "My Queer Heart."
- Sonakshi Srivastava, a writing tutor at Ashoka University, Sonepat, India, who shared her WIS poster "What's Attention Got To Do With It: On Reading and Notemaking as Writing Pedagogy," which was also the topic of her 2024 Watson Conference project.

This group attended WIS ‘24, Writing Human/s, both onsite and online, and they made vital contributions as writers, as writing scholars and teachers, and as colleagues. Highlights include the synergy that developed between them and their mentors, all members of the 2024 WIS Steering Committee, including Gitte Frandsen, Jenna Green, Max Gray, Patrick Thomas, and Seán McCarthy.

Today, the Macmillan-WIS partnership is one of the brightest spots in the WIS sky. You’ll see what we mean via forthcoming posts by 2024 Fellows Christina Davidson, Saurabh Anand, and Sonakshi Srivastava. We also invite you to follow the tags for WIS and writing innovation, where you’ll find additional insights from past B/SM WIS Fellows and others. Early career colleagues, including undergraduates, graduate students, recent graduates, and others who have recently joined the profession, will find information about the latest fellowship opportunities in the WIS ‘25 CFP.
In all, we hope the B/SM WIS Fellowship is a beacon that shines alongside WIS program opportunities, which include workshops, posters, small-scale performances and displays, and large-scale installations as well as flashtalks, flares, and sparks. The theme for WIS ‘25 is mise en place, a culinary term for putting things in place before cooking, especially in a professional kitchen. For us, it’s a metaphor for getting ready to write as well as a pathway to exploring the interrelationship between writing and food. Join us online or in Milwaukee, WI, January 30-31, 2025. Proposals are welcome through 10/25 and, for undergraduate writers, through 12/13. Registration opens in early November.
bedford_new_sch
Macmillan Employee
10-25-2024
10:00 AM
Chloe Cardosi

Chloe Cardosi is pursuing her PhD in Public Rhetorics and Community Engagement at the University of Wisconsin–Milwaukee. She teaches a variety of courses in writing, including FYC courses, business and technoscience writing, and rhetoric and culture. She has also served as the English 102 coordinator, assisting the Director of Composition in revising curriculum and supporting incoming Graduate Teaching Assistants in their transition to UWM’s English Department and composition program. Her research focuses on linguistic justice, public memory, and cultural rhetorics methodologies. As a first-generation student, she is invested in helping students of all levels and backgrounds find their voices as writers and in making space to amplify their work.

How do you ensure your course is inclusive, equitable, and culturally responsive?

In addition to standard accessibility practices, my main goal for inclusivity in all of my courses is fostering a sense of belonging in my students. As a writing teacher, I never want to lose sight of what an honor it is to work with students so closely and to interact with their voices and perspectives through their work. I believe that my job is not to teach students how to write, but to guide them to recognize how they already write, then get to work refining their voices so they can compose work that matters to them. This is a major reason why I welcome and encourage multiple modes of expression and community-engaged research in my assignments. I want students to know that writing is not a skill they learn just to make it through a semester and earn a good grade based on whatever idiosyncratic expectations their instructor may have—it’s a tool that will help them express themselves and their ideas, connect with others, and accomplish things in the “real world.” How this tool is wielded depends on the student and what they’re trying to accomplish with their writing.
Through the work they do in my class, I want students to see that writing can—and should—look different based on audience, purpose, genre, and so much more.

What is the most important skill you aim to provide to your students?

I want my students to feel equipped to write for a variety of rhetorical situations. All too often, students are instructed that “Standard Academic English” is the end-all, be-all way to write, the “neutral” standard all other writing either adheres to or strays from. But let’s be real: “academic” writing is not neutral, and the idea that it’s the standard is a myth that upholds exclusivity in academia. From the beginning of my classes, I’m very clear with my students that the more traditional, “standard” way of writing in academia is just one way to write, not the way. Don’t get me wrong: this isn’t simply an “anything goes” approach. I still expect students to compose projects with clarity, careful research, and effective rhetorical choices. This is simply a way to make room for other kinds of writing in the academy, and to instill confidence in students. To do this, I encourage a lot of experimentation when it comes to writing. I try to motivate students to work with topics and genres that excite them but are perhaps unfamiliar to them—like podcasts, TED Talks, creative work, or whatever else they’d like to try. I want whatever students compose in my classes to interest and excite them, and to feel usable beyond the walls of the classroom—which is why I use the assignment that I’ll describe below.

Chloe's Assignment That Works

Below is a brief synopsis of Chloe's assignment. For the full activity, see the Research Remix assignment prompt. In my College Writing and Research classes, students spend the semester researching a topic or issue in Milwaukee (where our university is located) that matters to them. The second major assignment they complete in the course is a research report on this issue.
The course then culminates in their final project: “remixing” their research report into a public-facing project that addresses a specific audience within Milwaukee. Based on the research expertise they’ve gained throughout the semester, students have to decide what information will be most useful to their chosen audience and what genre is best for presenting that information. Essentially, this project should help students see how the same research can be adapted for new genres and audiences. After using this assignment many times across many different sections of College Writing and Research, I’d identify these as the main benefits:

- Community engagement: This project helps students see themselves as participating members and stakeholders of the larger community of our city.
- Creativity: Students get to create an information product that feels more tangible and exciting to them than a more traditional research paper.
- Recognizing their identity as writers: In allowing/encouraging them to write about something that matters to them and make something they and their audience find useful, students will recognize that their perspective already has value.
stuart_selber
Author
10-22-2024
10:00 AM
This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University. This is the fourth post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

- AI is evolutionary, not revolutionary
- AI both solves and creates problems
- AI shifts, rather than saves, time and money
- AI provides mathematical solutions to human communication problems < You are here
- AI requires students to know more, not less, about technical communication

At this point, I am guessing that most technical communication teachers know how AI works, at least generally speaking—although it does things that not even developers can always understand. At times, the black box of AI can be hard to explain. AI robots are built on large language models (LLMs), which are trained on a corpus of multimodal texts measured in terabytes. OpenAI reported that the corpus for ChatGPT totaled 45 terabytes of initial data. How big is one terabyte? According to the website TechTarget.com, one terabyte of data is equivalent to 86 million pages in Microsoft Word, or 310,000 photographs, or 500 hours of movies. We are talking about a scale for training that defeats the capacity of concrete imagination. At least mine, anyway. Technical details aside, what is important for teachers to remember is that the output produced by generative AI is based on statistical probability, on pattern matching, on math performed over a massive corpus of decontextualized texts. And while the output can be useful and interesting in all sorts of ways, the field has already tried and dismissed mathematical approaches as overarching frameworks for technical communication because they are, in a word, arhetorical.
I am referring, of course, to the Shannon and Weaver (1949) mathematical model of communication, which had a good run starting in the mid-twentieth century and continues to be influential, at least obliquely, in certain popular settings and STEM contexts (for an overview and critique of this model, see Schneider, 2002; Slack, Miller, and Doak, 1993). As a reminder, this model conceptualizes communication as a linear process involving a sender, who, say, crafts an email message to a reader; an encoder, which converts the email message into binary data; a channel or network, which passes the binary data to its destination; a decoder, which reassembles the data into an email message; and the reader, who consumes the email message. It is a tidy little circuit. The possibilities for dysfunction in the circuit come from noise, which is anything that can distort the email message. Noise could come from technical difficulties, for example, or it could come from ambient conditions, which, as Thomas Rickert (2013) taught us, can actually be rhetorical. But because Shannon and Weaver separated meaning from information, all we need to do is eliminate the noise and voilà, we have success! Our field has struggled with the Shannon and Weaver mathematical model of communication for obvious reasons: it is a one-way communication model, it is a transmission model, and it models the field in very impoverished ways. Under approaches based on the Shannon and Weaver model, technical communicators are not working as meaning makers or knowledge producers in any significant sense. Instead, they are positioned as low-level workers who can probably be replaced by writing robots in some cases. I have not been able to find any mention of the communication models informing the work of AI companies, but their promises often elide the complexities of language and language use.
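To make the tidiness of this circuit concrete, here is a minimal sketch in Python. It is my own toy illustration, not anything drawn from Shannon and Weaver: a sender's message is encoded into bits, passed through a channel that may flip some of them (noise), and decoded on the other end.

```python
import random

def encode(message: str) -> str:
    """Encoder: convert the message into binary data."""
    return "".join(format(ord(ch), "08b") for ch in message)

def channel(bits: str, noise_rate: float = 0.0) -> str:
    """Channel: pass the bits along, flipping some of them (noise)."""
    return "".join(
        b if random.random() >= noise_rate else ("1" if b == "0" else "0")
        for b in bits
    )

def decode(bits: str) -> str:
    """Decoder: reassemble binary data into a message."""
    chunks = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(c, 2)) for c in chunks)

# A noiseless circuit reproduces the message exactly, which is the
# model's only notion of "success." Meaning never enters the picture.
sent = "Please revise the draft."
received = decode(channel(encode(sent), noise_rate=0.0))
assert received == sent
```

The sketch also makes the critique visible: the only failure this model can register is bit-level distortion. Whether the message is clear, persuasive, or appropriate for its reader lies entirely outside the circuit.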
In the communication circuit for AI, the possibilities for noise come from two main sources: the training data for robots and end-user prompts. All we have to do, so the thinking goes, is clean up the training data and teach people how to craft effective prompt sequences. The result will be AI-generated texts that are effective and usable—or at least effective and usable enough, a concern I will return to in the next tenet of the Manifesto. In this framing, there is little to no acknowledgement of the fundamental limitations of math as a guiding structure for communication or communication products. Put differently, there is little to no acknowledgement of the surplus of meaning in language and language use and of the interpretive capabilities required to make rhetorical sense of writing for work and school.

References

Rickert, Thomas. 2013. Ambient Rhetoric: The Attunements of Rhetorical Being. Pittsburgh: University of Pittsburgh Press.
Schneider, Barbara. 2002. “Clarity in Context: Rethinking Misunderstanding.” Technical Communication 49 (2): 210-218.
Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.
Slack, Jennifer Daryl, David James Miller, and Jeffrey Doak. 1993. “The Technical Communicator as Author: Meaning, Power, Authority.” Journal of Business and Technical Communication 7 (1): 12-36.
mimmoore
Author
10-22-2024
07:24 AM
Several years ago, I developed a Quality Enhancement Plan (QEP) for the community college where I worked at the time. Our topic was information literacy. Fortunately for us, generative AI was not yet widely known or accessible to students (or to the faculty QEP team). We used the Association of College and Research Libraries’ Framework for Information Literacy for Higher Education as one of our guiding texts. I had already seen how the internet had made information “flat” for many of my students: they did not seem to understand that information made its way to an online source via multiple routes, with varying degrees of transparency and accuracy. In short, for some students, all things lived online in basically the same way—in a flat, two-dimensional information landscape, lacking contour, context, or layers. The Information Literacy Framework, in contrast, proclaimed that “information creation is a process”; developing information literacy entails strategic evaluation of why, by whom, for whom, and how a text, video, purported fact, or image was created and shared. Fast forward to the present. The advent of widely available generative AI (by which I refer to large language models that include ChatGPT and Google’s Gemini, among many others) has exacerbated the conception of “flat information” in ways that I could not have envisioned. In fact, a colleague recently commented that in some fields, it really doesn’t matter if an artifact is human- or AI-generated; what matters is what students can do with it. I was stunned by that comment. Granted, this colleague was not speaking of “information” per se; the focus was on products such as reports, posters, datasets, tables, etc. But it raised for me a central question: does it matter whether a product or a piece of information is generated by a machine? If so, when does that provenance matter? And do I have the right to know when text (and the information in it) is the output of an algorithm or machine learning?
The proliferation of fake images and misinformation following Hurricane Helene and Hurricane Milton has been mind-boggling. Do my students know how those images and “reports” came to be or how to verify their accuracy? (Do I?) When students search online and get a coherent answer from Google’s Gemini, do they understand that it was produced based on statistical patterns of language data, not on a search of facts? Do they recognize the disclaimer that follows Gemini’s output? Do they know they can look at the blue boxes on the right and find Gemini’s sources (which still need to be understood in the context of who, why, how, and for what audience)? Do they distinguish between tools and information sources? This fall, I am trying to blend a writing-about-writing syllabus with a writing-about-generative-AI syllabus. I want my students to see that just as their writing is the result of a process (of thinking, collaborating, drafting, using tools, fact-checking, revising, editing, and other things), so also AI came to us via a process, and it has added new layers to the processes of creating and disseminating information. Asking questions about these processes—and recognizing them as processes, not just a landscape of flat products—seems to be a reasonable response to technological changes that I cannot keep up with.
guest_blogger
Expert
10-18-2024
12:00 PM
by Jenn Fishman This is the first post in an occasional series affiliated with the Writing Innovation Symposium (WIS), a regional event with national reach that Jenn leads as Chief Capacitator. Learn more below and in posts tagged “writing innovation” and “WIS.” OpenAI went public with ChatGPT not even two years ago on November 30, 2022. It’s worth pausing to think about how we, as writers and writing educators, have been affected. For old times’ sake, find a pen or a pencil and a piece of paper, and make a list. Don’t stop to correct yourself or sort the positives from the negatives. Just tell yourself all the ways that AI and GenAI have had an impact on you. Some version of this exercise might be a good question of the day or freewriting topic. It makes me think about how quickly Facebook spread twenty years ago, extending from Harvard to Columbia, Stanford, and Yale in 2004; to other colleges, universities, and high schools in 2005; and to anyone with an email address and access to the internet by the end of the next year. I was a graduate student when Facebook launched, and two years later, while I was navigating the changing face of writing and writing instruction as an assistant professor, Facebook registered its 12 millionth user. The velocity of writing change, both measured and felt, prompted the cross-institutional group of us involved in the Writing Innovation Symposium or WIS to make 2024 the year of “Writing Human/s.” For us, and perhaps for you too, writing is fundamental to our human being. So we practice it again and again, and we build lives around it. We have favorite writing tools, spaces, and snacks, and if we are lucky we have writing groups that sustain us. There is writing that stays with us, writing we feel compelled to write, and writers it is our privilege to advise, mentor, and teach. To echo Donald Murray (with a dash of Elizabeth Bishop), writers write or (say it!) 
writers must write, and students and teachers of writing must write, too. With a sense of imperative as well as a sense of play, we gathered online and in Milwaukee, Wisconsin, at Marquette University in the first days of February to affirm, explore, question, and contend with the complexity of being writing human/s in the mid-2020s. The WIS program featured workshops about AI and collaborative writing, autoethnography, mail art, and post-ChatGPT assignment design as well as shimmer stories, the social stakes of peer response, teaching in times of crisis, and ‘zines as sites of radical possibility. We also offered a series of 5-minute flashtalks on topics as varied as robot peer review, climate change, critical making, and the embodiment of emotions, problems, and solutions in writing classrooms. In addition, along with research posters and displays, WIS ‘24 featured more than two dozen flares, or 3-minute audio- and video-recorded thinkpieces by undergraduates. The opening workshop, “Multimodal Writing, Drawing, and Listening,” led by Tracey Bullington, set the scene. Tracey joined us from the doctoral program in Curriculum and Instruction at the University of Wisconsin, Madison. At WIS, she began with a simple lesson. Observing that it is difficult, if not impossible, to learn something if we believe we cannot do it, Tracey led us in a series of drawing exercises inspired, in part, by her teacher, Lynda Barry. Following Tracey’s instructions, we drew breakfast (bacon and eggs) without looking down at our index cards or felt-tipped pens. We drew self-portraits and pictures of ourselves as animals. Then, flush with evidence of our ability, we listened to one another tell stories, and (coached by Tracey) we drew our takeaways. The results were a combination of documentary-style notes, impressions, and embellishments that inscribed what and how we heard what others were saying. We were writing human/s, and we had the pictures to prove it!
Our closing activities also featured the writing arts, starting with a spoken word performance by Donnie McClendon, a PhD student in English at the University of South Florida. Through “When 4 is 6,” Donnie taught a complex lesson about remembering and forgetting by telling the story of Johnny Robinson and Virgil Ware. They were murdered the same day in 1963 that the 16th Street Baptist Church in Birmingham, Alabama, was bombed, killing Addie Mae Collins, Denise McNair, Carole Robertson, and Cynthia Wesley. We listened to their story, and then, we ended the way we began: by drawing our takeaways along with our gratitude. In the same spirit, the blogs that follow offer a coda to WIS ‘24 as well as a bridge to WIS ‘25. We hope you’ll join us here on the Bits Blog and in Milwaukee next year. The theme for WIS ‘25 is mise en place, a culinary term for putting things in place before cooking, especially in a professional kitchen. For us, it’s a metaphor for getting ready to write as well as a pathway to exploring the interrelationship between writing and food. Join us online or in Milwaukee, WI, January 30-31, 2025. Proposals are welcome through 10/25 and, for undergraduate writers, through 12/13. Registration opens in early November.
guest_blogger
Expert
10-17-2024
10:48 AM
This post is a part of our ongoing series on teaching in a post-AI classroom called Bits on Bots. Be sure to follow along with posts tagged with "Bits on Bots."
Do you remember where you were on November 30, 2022? I was wrapping up a semester with a teaching overload, probably a result of the COVID-19 faculty fatigue that was lurking around (and still is, to be quite frank). That day, students in my Composition I course were gathered around one of the classroom computers when I arrived. They were tinkering with something called ChatGPT, which had just been released. Two students looked up as I walked in and excitedly called me over to check out this new, shiny thing. “This is gonna change everything,” they proclaimed. “You can really conversate with it.”
I sat down with them and, in the span of an hour, had created a cute poem and a keto cake recipe (with the wrong ingredients, BTW). The outputs were fun, but I couldn’t see much further than “shiny new thing.” Then, a software engineering major said to me, “with the right input, I can get outputs that will help me be a better writer.” That was my Eureka moment, one where students actually told me that this tool could help them write rhetorically. So, I set out to do some tinkering myself to think about how we could facilitate effective writing while still maintaining the value of process over product — a bedrock of learning critical thinking in first-year writing courses.
Our class used Andrea Lunsford’s Everyday Writer that semester, so I started there. In her handbook, Lunsford discusses the Rhetorical Triangle as a model for brainstorming and iterating on a draft. The “iterating” part of the writing process, which asks students to re-visit text, audience, context, and communicator, helped me imagine how ChatGPT could be integrated into a student’s writerly journey. I then thought about how my students advised me to use ChatGPT: to talk to it through the content box, almost like texting a friend (an interaction computer scientists study under natural language processing, or NLP). Lunsford’s triangle and her discussion of the writing process in Section 2g made me think about how a conversation with an AI Assistant like ChatGPT could engage emerging writers in a process that mimicked an offline process. I had already employed a process that helped students produce writing that demonstrated both their critical thought and alignment with academic conventions.
TBH, the idea of “cheating” never crossed my mind and was never even mentioned in this or any of my classes by students. They were genuinely excited by the process of iterating with ChatGPT, not interested in short-cutting. If anything, they were more engaged with writing than I had seen in semesters prior. If you are interested in this aspect of the generative AI conversation, stay tuned in this space. I have some interesting data to share over the next few posts. Here’s a sneak peek: 78% of students surveyed think that AI might be the future of writing; 35% would take a specific class on how to use AI for writing. Almost all students reported some version of this mantra: we need to keep humans at the helm of AI-human collaboration.
I wanted to find a way to capture the spirit of process that was non-linear and authentically iterative, just like the impromptu conversations my students were having with ChatGPT that first day.
So, I went back to The Writer’s Loop, which I co-authored with my friend Dr. Lauren Ingraham. The Writer’s Loop is a born-digital text that describes a model of writing that is both iterative and purposefully recursive. The process we developed in that text helped me re-imagine how typed ChatGPT conversations, guided by proven rhetorical elements, could sustain critical thinking while increasing students’ engagement in the process and, most importantly, help them create writing in their own voices. Finally, I took this draft framework back to my first-year students in January 2023.
We worked through what computer scientists call a multi-shot process, which has become the Rhetorical Prompt Engineering Framework and the Ethical Wheel of Prompting. Both of these frameworks keep the human at the helm of AI collaboration. They are, of course, works in progress.
Over the next several semesters, I tested and revised the framework to meet students where they were and to help them engage fully with a model they could and would use as their base for conversing with an AI Assistant. I infused the Framework with a known generative AI process called prompt engineering. What I learned from folks who guide ethical outputs with AI assistants is similar to what students learn as they work through their own writing processes. Simply put, this AI-infused process helps students prompt their way to a useful and reliable product, without short-cutting critical thinking. In the semesters since 2023, students have overwhelmingly responded to this process in positive and creative ways. More importantly, they have used it in first-year writing classes as well as in their majors and have reported that they feel more confident not only in their writing but in their ability to communicate complex ideas to multiple audiences in diverse ways. And about curricular alignment? We can measure that as well! Stay tuned for a blog post this semester on the topic of assessing and aligning generative AI prompting to learning outcomes.
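For readers who want a concrete picture of what a multi-shot, rhetorically guided conversation might look like, here is a brief sketch in Python. The field names and sample prompts are my own hypothetical shorthand based on the rhetorical triangle, not the actual categories of the Rhetorical Prompt Engineering Framework:

```python
# Each "shot" asks the AI assistant for one stage of the writing
# process, carrying the same rhetorical context forward so that the
# student, not the machine, makes each decision.
rhetorical_context = {
    "audience": "first-year college students unfamiliar with the topic",
    "purpose": "explain why local food deserts persist",
    "genre": "an 800-word explainer for a campus magazine",
    "voice": "the student's own conversational but researched tone",
}

shots = [
    "Ask me three questions about my topic before suggesting anything.",
    "Given my answers, propose two possible angles; do not draft text.",
    "Critique my draft paragraph for audience fit; change nothing.",
]

def build_prompt(shot: str, context: dict) -> str:
    """Frame one conversational turn with explicit rhetorical elements."""
    framing = "; ".join(f"{k}: {v}" for k, v in context.items())
    return f"[Rhetorical context: {framing}]\n{shot}"

for shot in shots:
    print(build_prompt(shot, rhetorical_context))
```

The point of the scaffolding is visible in the shots themselves: each one asks the AI assistant to question, propose, or critique, never to draft, so the student's own voice stays at the center of the process.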
Just like I appreciate and respect multiple student perspectives on generative AI use, I also respect yours. We each come to this conversation with diverse skills, stories, and insights. What links us together is a commitment to serving our students to help them succeed and lead in their post-college lives. I hope that my small piece of the Bits on Bots space opens up my perspectives to you, just as I hope to learn from yours. If you are interested in trying out the Rhetorical Prompt Engineering Framework, please feel free to download it here and iterate on it. If you are interested in the Ethical Wheel of Prompting, which I will discuss more in-depth in upcoming posts, please do the same. If you just want to chat about your own perspectives, please also reach out. I’m interested to hear your GPT origin stories and how you have (or haven’t) infused AI Assistants into your own pedagogies.
Thanks for reading.
Jeanne Beatrix Law is a Professor of English and Director of Composition at Kennesaw State University, focusing on generative AI (genAI) and digital civil rights historiography. Her AI literacy work has global reach, including multiple presentations of her Rhetorical Prompt Engineering Framework at conferences like Open Educa Berlin and the SUNY Council on Writing. She has led workshops on ethical genAI for diverse institutions and disciplines at Eastern Michigan, Kent State, and CUNY’s AI Subgroup. She and her students have authored publications on student perceptions of AI in professional writing. Jeanne also co-authored The Writer's Loop: A Guide to College Writing and contributed to the Instructor's Guide for Andrea Lunsford's Everyday Writer. She has authored eight Coursera courses on genAI and advocates for ethical AI integration in both secondary and higher education spaces as a faculty mentor for the AAC&U’s AI Pedagogy Institute.
stuart_selber
Author
10-15-2024
10:00 AM
This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University. This is the third post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

- AI is evolutionary, not revolutionary
- AI both solves and creates problems
- AI shifts, rather than saves, time and money < You are here
- AI provides mathematical solutions to human communication problems
- AI requires students to know more, not less, about technical communication

One of the more compelling technology myths encourages university administrators to assume that AI platforms will automatically make people more productive and thus are a cost-effective way of doing business. This myth, which is particularly appealing in a time of shrinking fiscal resources and budgetary deficits, inspires institutional initiatives that increase enrollments and workloads but not faculty positions, use chatbot advisers to help students negotiate the complexities of academic landscapes, and use AI dashboards to interpret learning analytics that require nuanced judgment and situational understanding. Did a student spend an unusual amount of time in a distance learning module because it was helpful or confusing? Only a human in the loop can discover the answer. Right now, there is very little evidence to suggest that AI platforms actually reduce costs in any significant manner or that they enhance productivity in the ways people often claim. In fact, more than a few studies show that it is still cheaper to employ humans in many contexts and cases and that organizations and institutions are underestimating the real costs of implementing and supporting AI initiatives (see, for example, Altchek, 2024; Walch, 2024). The myth of AI as a cost- and time-reducing technology is perpetuated by work culture, especially American work culture.
Let me provide a quick example to illustrate this point. When I first started learning about generative AI, I watched a YouTube video by a financial expert who recorded himself trying to use ChatGPT to create a macro for Microsoft Excel. He captured his attempt, including all of the trial and error, for anyone to see. It took him about 20 minutes of work with TypeScript, a superset of JavaScript, but he was ultimately successful in prompting the robot to generate the macro. His response? If I had done this myself, it would have taken me all weekend. Now I can work on other things. In other words, his application of AI shifted time and money to other things. I had to chuckle about this video because at the time—this was January of 2023—a million-plus workers in France were striking because President Macron proposed to raise the retirement age from 62 to 64. Meanwhile, people in the US work hundreds of hours more per year than the average European (Grieve, 2023), and AI feeds our general obsession with productivity and work culture. (Here’s an idea, by the way: Use AI to write that macro and then take the weekend for yourself. You will be a better employee for it. But I digress.) The real issue is how to think about the nature of the shifts. That is, what are we shifting to, exactly? What you hear a lot of analysts saying is that AI can automate lower-order tasks so that people can spend more time on higher-order tasks. Technical communication teachers already prioritize higher-order tasks in both our process and assessment advice. We tell students, for example, to ignore formatting while brainstorming ideas and drafting. And we tell them that a perfectly grammatical text can still fail miserably if it does not attend to audiences, their purposes and tasks, and other rhetorical dimensions.
Indeed, for many years now, Les Perelman has been a thorn in the side of proponents of essay grading software because his high-scoring essays have been nonsensical (see Perelman, 2012, for an overview of his case against automated essay grading). But is the distinction between lower-order tasks and higher-order tasks always so sharply drawn or even static? Grammar is an obvious test case here. Commentators often think that applying grammar is a lower-order task, and sometimes it is. This view is unsurprising given the popularity and wide availability of grammar checkers. Our institutional instance of Microsoft Word embeds Grammarly as its checker, and by default, students can activate it from the main interface ribbon. In technical communication, however, our rhetorical treatment of active and passive voice indicates higher-order considerations. As technical communication teachers know, both active and passive voice are grammatically correct, but we have preferences. In most cases, we tell students, the active voice tends to work better because it emphasizes the agent or the doer of the action, human or non-human. The passive voice, in contrast, should be reserved for instances in which the agent is unknown or the agent is less important than the action. Many grammar checkers, including AI-assisted checkers like Grammarly, can help students locate the passive voice in their writing. Some checkers even advise students that the passive voice is undesirable, almost an error, but this advice is misleading. What a robot would need to decide is whether the passive voice works better than the active voice for the specific purposes at hand. This is a tough ask for the robot because the answer requires higher-order reasoning. (The new edition of Technical Communication, available this fall, helps students to navigate these questions.) Another rhetorical dimension here involves ethical considerations. People use the passive voice to help them avoid responsibility.
But I defy an AI robot to reason through the ethical considerations of active and passive voice in a complex situation with many factors. This is a simple example of the complications of assigning so-called lower-order tasks to AI in order to free up time for higher-order tasks. Humans will most certainly need to be in the loop somewhere, much if not most of the time, and what counts as a lower-order task or a higher-order task can change from situation to situation. Cost and time savings are tricky AI subjects with rhetorical aspects, not straightforward propositions based on static rules.

References

Altchek, Ana. 2024, April 24. "Meta's AI Plans Are Costing Way More than They Thought." Business Insider, n.p.

Grieve, Peter. 2023, January 6. "Americans Work Hundreds of Hours More a Year Than Europeans: Report." Money, n.p.

Perelman, Les. 2012. "Construct Validity, Length, Score, and Time in Holistically Graded Writing Assessments: The Case Against Automated Essay Scoring (AES)." In International Advances in Writing Research: Cultures, Places, Measures, edited by Charles Bazerman, Chris Dean, Jessica Early, Karen Lunsford, Suzie Null, Paul Rogers, and Don Donahue, 121-132. Denver: University Press of Colorado.

Walch, Kathleen. 2024, April 5. "What Are the Real Costs of AI Projects?" Money, n.p.
april_lidinsky
Author
10-14-2024
10:00 AM
A few weeks ago, I had the pleasure of being in the front row when Newbery Medal-winning author Kwame Alexander visited South Bend, Indiana. Alexander was on tour to promote his newest lyrical novel for young readers, Black Star, the second in "The Door of No Return" series. There were many engaged young people in the audience, and they dominated the question-and-answer period with verve. A few showered Alexander with praise for his novels, which taught them history they hadn't learned in school. Some said he'd shown them how poetry can carry a story. One high-schooler asked how she could find a publisher for her novel; he invited her to chat with him after the event, and she beamed.

And then came more tentative questions from the back of the room: "How do I become a better writer? And how can I figure out what to write?" Heads in the crowded auditorium swiveled in the direction of the speaker. I suspect many of us empathized with this brave young person who managed in several seconds to capture the core anxieties of most writers. Kwame Alexander smiled broadly and, I like to think, said a version of what we all tell our students (and ourselves!) when we're worried about our writing and paralyzed about what to write. His advice: "Read a lot. I mean, really read a lot, and pay attention to what you like and don't like." He also said, "In order to be interesting as a writer, you've got to ask a lot of questions. In order to be interesting, you've got to be ..." And since he paused and I was in the front row, I supplied a word: "Interested." It was not an original contribution, I know, but he gave me a dazzler of a smile and repeated it to the crowd: "Yes! In order to be interesting, you have to be interested." He seemed so pleased to be brainstorming with the mixed-age crowd about the hard work and satisfactions of writing. I wanted to bottle the fizz of the evening.
At this stage of the semester, many of us are working with students who struggle to find something to write about. After all, one can write about anything, but to be worth the time of the writer — and reader — that pesky "So what?" question remains the gold standard. How many of us have sat with students who are simply stuck and "can't think of anything to write about"? It's tempting to flip Kwame Alexander's point and say, "I can't make you interesting if you're not interested." But that's not very helpful. So, my co-author, Stuart Greene, and I offer a variety of actually helpful questions for students in Chapter Five of From Inquiry to Academic Writing, Fifth Edition. Channeling Kwame Alexander's advice to be interested readers so that they can be interesting writers, we help students see themselves as participants in the scholarly conversations they read about, inviting them, with scaffolded models, to:

1) Draw on your personal experience
2) Identify what is open to dispute
3) Resist binary thinking
4) Build on and extend the ideas of others
5) Read to discover a writer's frame
6) Consider the constraints of the situation

Like Kwame Alexander, I believe — and hope you do, too — that our students can develop into more interesting writers by learning to be more interested in the world. And that's a habit of mind that pays dividends far beyond our classrooms.
bedford_new_sch
Macmillan Employee
10-11-2024
10:30 AM
Eric Korankye is a PhD student in English specializing in Rhetoric and Composition at Illinois State University (ISU). He teaches Business Writing, First-Year Composition, and Advanced Composition, and also serves as the New Instructor Mentor in the ISU Writing Program, providing mentorship and pedagogical support to new writing instructors. As an international interdisciplinary researcher and teacher from Ghana, Eric is committed to designing and practicing social justice pedagogies in Composition Studies, Rhetoric, and Technical Communication, focusing on design justice, advocacy for students' language rights, legitimation of international scholarly knowledge, and working against intersectional oppression of students of color.

What do you think is the most important recent development or pedagogical approach in teaching composition?

Composition in this technocultural age has been impacted by the emergence of digital tools, technologies, multiliteracies, and the (r)evolution of rhetorical genres for composing. These digital innovations unavoidably have several implications for composition pedagogies, especially in terms of integrating online platforms for collaborative writing and multimedia composition to meet the interconnected and interdependent needs of writers in our composition classrooms.
For teaching composition, this means not only reexamining teaching practices but also actively:

1) integrating innovative assessment practices for student writers' multimodal composition artifacts,
2) (re)framing traditional perceptions about writing, genres of writing, and audiences of writing,
3) (re)situating privileged canons, theories, and practices in composition studies,
4) valuing occluded writing traditions, knowledges, and languages of student writers from minoritized backgrounds,
5) foregrounding rhetoric and multiliteracies in composition courses, and
6) navigating the affordances and constraints associated with emerging writing technologies such as artificial intelligence (AI) that have recently become topical in composition studies.

When these pedagogical practices are reinforced in the teaching of composition, student writers' perceptions of and approaches to composing will change; in the long run, students will shape their own composing and learning practices in the classroom and apply them effectively in their worlds outside the classroom.

How do you ensure your course is inclusive, equitable, and culturally responsive?

At a time when the ubiquity of multiculturalism and multilingualism is gradually phasing out the previously homogeneous demography of most composition classrooms, it is important that composition classes move beyond homogenous conceptualizations to value inclusivity, equity, and cultural responsiveness. This shift can be achieved by designing and practicing anti-racist, anti-oppressive, and culturally sustaining pedagogies which ensure that course materials, readings, activities, assignments, language practices, and classroom cultures reflect a diverse range of voices, perspectives, cultural experiences, and ways of knowing.
Instructors need to incorporate texts from authors of different backgrounds and identities, especially BIPOC authors, and include course content that represents various cultures, traditions, histories, and experiences. This (re)alignment also means valuing culturally relevant examples, contexts, case studies, and references that are relatable to students from diverse backgrounds. This can help students see their identities reflected in the course materials and establish connections to their own experiences. Instructors need to continuously create a supportive (safe and brave) classroom environment in which all students feel valued and respected. Instructors need to value students' right to their own languages, encourage open dialogue, respect differing opinions, avoid biases and stereotypes, and create opportunities for students to share their experiences and ways of knowing.

Eric's Assignment That Works

Below is a brief synopsis of Eric's assignment. For the full activity, see Professional Outlook Portfolio Project Prompt.

My "Assignment That Works" is the Professional Outlook Portfolio project, the second major project I assign in the ENG 145.13 Writing Business and Government Organizations course I teach at Illinois State University (ISU). In this project, student writers create a resume and a blog/website profile that match their education and work experiences. They also find a job posting to respond to and write a cover letter for it. After completing all these writing tasks, they create an uptake document explaining and describing their composing practices throughout the project. This project has been designed to help student writers become more critical, creative, and capable as both consumers and producers of business writing (e.g., resume/CV, job posting, business/personal website, etc.).
The goal is also to give writers hands-on experience in building a solid, marketable professional outlook for their chosen career path and exploring various business writing genres. In assigning this project, we read and annotate the project prompt together and ask questions. We workshop project ideas in subsequent classes, and feedback from student writers shows that the project gives them practical experience creating business writing genres that they can use in real life.
mimmoore
Author
10-08-2024
11:02 AM
In a recent Chronicle of Higher Education piece, Marc Watkins shared an AI-Assisted Learning Template that he uses to encourage critical reflection and open disclosure of AI use on writing assignments. I find much to admire in his approach: he pushes students to consider assignment goals and the extent to which an AI tool helped them achieve those goals. In short, he asks them to reflect critically on their learning. And to the cynics who suggest that students will just use generative AI to compose these evaluations, Watkins admits the possibility. But he suggests most students are "eager for mechanisms to show how they used AI tools," and he clearly wants to resist "adopting intrusive and often unreliable surveillance" in his classrooms. I applaud the learning focus embedded in his assignment design, as well as his rejection of detection-obsessed pedagogy. But I still have concerns.

First is the notion that we "teach ethical usage" primarily by ensuring students do not cheat. Discussions of ethical AI use must include transparency about what we ask the AI to do and how we use the outputs, of course, but there are other ethical issues as well. In my current corequisite FYC course, for example, we are looking at how generative AI platforms were developed and trained: were labor (particularly the labor of minority groups), privacy, or copyright abused in the process? Will my use of the AI expose my words or data in ways that I cannot control? If I want to use the tool to brainstorm for a few minutes, are there environmental, labor, or privacy costs to that session? Is there a risk that my use will perpetuate biases or misinformation? Ultimately, is the AI the best tool for the task at hand? If not, is using it for the sake of convenience an ethically defensible choice? My students and I have also asked this question: do we have a right to know when we are interacting with synthetic (i.e., machine-generated) text or images?
The AI-Assisted Learning Template could be modified to include some of these reflections, so that learning issues are coupled with broader ethical concerns. I have included a similar exercise with the AI-themed assignments in my FYC course this term.

But another issue lingers for me: how much reflection will actually occur in assigned self-evaluations? McGarr & O'Gallchóir (2020), Thomas and Liu (2012), Ross (2011; 2014), and Wharton (2012) are just some of the researchers who have suggested that assigned (and assessed) reflections are inherently performative: students may be just as focused on managing their perceived image through skillful self-positioning as on engaging in deep reflection. Students want to position themselves positively, as those who made a good effort and were determined to learn something, deploying what Thomas and Liu (2012) call "sunshining" to characterize their efforts, challenges, potential failures, and ultimate progress. My own research on the language of student reflections suggests that students make linguistic choices that distance them from agency over decisions and outcomes. They also foreground what is perceived as desirable: effort, open-mindedness, resourcefulness, and willingness to learn or grow. Many find it hard to acknowledge uncertainty about what they learned in a given assignment; after all, doing so might be perceived as criticism of the assignment or as failure to learn, and either perception is risky for students. But learning insights don't always arrive on schedule, packaged neatly for submission by the due date. Still, my students have been trained since middle school to assert with confidence (and deferential gratitude to the teacher who provided the opportunity) that they have indeed learned, just as they were supposed to. Check that box.

I am not suggesting that the template is without value or that reflective writing needs to be scrapped altogether.
FYC certainly has a rich history of reflection-infused pedagogy that cannot be ignored. But as we adopt (and adapt) technology, tools, and templates, let's consider how to ensure that students are empowered to question such technology, tools, and templates, promoting honest and authentic reflection insofar as that is possible. How do we do that? I don't have all the answers, but I agree with what Marc Watkins says: we should "shift focus away from technology as a solution and invest in human capital." Ultimately, he says, we should honor the "irreplaceable value of human-centered education." Exactly. Thanks to Marc for opening space for this conversation.

Photo by Solen Feyissa on Unsplash