Bits Blog - Page 2
mimmoore
Author
10-22-2024
07:24 AM
Several years ago, I developed a Quality Enhancement Plan (QEP) for the community college where I worked at the time. Our topic was information literacy. Fortunately for us, generative AI was not yet widely known or accessible to students (or to the faculty QEP team). We used the Association of College and Research Libraries’ Framework for Information Literacy for Higher Education as one of our guiding texts. I had already seen how the internet had made information “flat” for many of my students: they did not seem to understand that information made its way to an online source via multiple routes, with varying degrees of transparency and accuracy. In short, for some students, all things lived online in basically the same way—in a flat, two-dimensional information landscape, lacking contour, context, or layers. The Information Literacy Framework, in contrast, proclaimed that “information creation is a process”; developing information literacy entails strategic evaluation of why, by whom, for whom, and how a text, video, purported fact, or image was created and shared.

Fast forward to the present. The advent of widely available generative AI (by which I refer to large language models that include ChatGPT and Google’s Gemini, among many others) has exacerbated the conception of “flat information” in ways that I could not have envisioned. In fact, a colleague recently commented that in some fields, it really doesn’t matter whether an artifact is human- or AI-generated; what matters is what students can do with it. I was stunned by that comment. Granted, this colleague was not speaking of “information” per se; the focus was on products such as reports, posters, datasets, tables, etc. But it raised for me a central question: does it matter whether a product or a piece of information is generated by a machine? If so, when does that provenance matter? And do I have the right to know when text (and the information in it) is the output of an algorithm or machine learning?
The proliferation of fake images and misinformation following Hurricane Helene and Hurricane Milton has been mind-boggling. Do my students know how those images and “reports” came to be or how to verify their accuracy? (Do I?) When students search online and get a coherent answer from Google’s Gemini, do they understand that it was produced based on statistical patterns of language data, not on a search of facts? Do they recognize the disclaimer that follows Gemini’s output? Do they know they can look at the blue boxes on the right and find Gemini’s sources (which still need to be understood in the context of who, why, how, and for what audience)? Do they distinguish between tools and information sources?

This fall, I am trying to blend a writing-about-writing syllabus with a writing-about-generative-AI syllabus. I want my students to see that just as their writing is the result of a process (of thinking, collaborating, drafting, using tools, fact-checking, revising, editing, and other things), so also AI came to us via a process, and it has added new layers to the processes of creating and disseminating information. Asking questions about these processes—and recognizing them as processes, not just a landscape of flat products—seems to be a reasonable response to technological changes that I cannot keep up with.
guest_blogger
Expert
10-18-2024
12:00 PM
by Jenn Fishman

This is the first post in an occasional series affiliated with the Writing Innovation Symposium (WIS), a regional event with national reach that Jenn leads as Chief Capacitator. Learn more below and in posts tagged “writing innovation” and “WIS.”

OpenAI went public with ChatGPT not even two years ago, on November 30, 2022. It’s worth pausing to think about how we, as writers and writing educators, have been affected. For old times’ sake, find a pen or a pencil and a piece of paper, and make a list. Don’t stop to correct yourself or sort the positives from the negatives. Just tell yourself all the ways that AI and GenAI have had an impact on you. Some version of this exercise might be a good question of the day or freewriting topic.

It makes me think about how quickly Facebook spread twenty years ago, extending from Harvard to Columbia, Stanford, and Yale in 2004; to other colleges, universities, and high schools in 2005; and to anyone with an email address and access to the internet by the end of the next year. I was a graduate student when Facebook launched, and two years later, while I was navigating the changing face of writing and writing instruction as an assistant professor, Facebook registered its 12 millionth user.

The velocity of writing change, both measured and felt, prompted the cross-institutional group of us involved in the Writing Innovation Symposium, or WIS, to make 2024 the year of “Writing Human/s.” For us, and perhaps for you too, writing is fundamental to our human being. So we practice it again and again, and we build lives around it. We have favorite writing tools, spaces, and snacks, and if we are lucky we have writing groups that sustain us. There is writing that stays with us, writing we feel compelled to write, and writers it is our privilege to advise, mentor, and teach. To echo Donald Murray (with a dash of Elizabeth Bishop), writers write or (say it!) writers must write, and students and teachers of writing must write, too.

With a sense of imperative as well as a sense of play, we gathered online and in Milwaukee, Wisconsin, at Marquette University in the first days of February to affirm, explore, question, and contend with the complexity of being writing human/s in the mid-2020s. The WIS program featured workshops about AI and collaborative writing, autoethnography, mail art, and post-ChatGPT assignment design as well as shimmer stories, the social stakes of peer response, teaching in times of crisis, and ’zines as sites of radical possibility. We also offered a series of 5-minute flashtalks on topics as varied as robot peer review, climate change, critical making, and the embodiment of emotions, problems, and solutions in writing classrooms. In addition, along with research posters and displays, WIS ’24 featured more than two dozen flares, or 3-minute audio- and video-recorded thinkpieces by undergraduates.

The opening workshop, “Multimodal Writing, Drawing, and Listening,” led by Tracey Bullington, set the scene. Tracey joined us from the doctoral program in Curriculum and Instruction at the University of Wisconsin, Madison. At WIS, she began with a simple lesson. Observing that it is difficult, if not impossible, to learn something if we believe we cannot do it, Tracey led us in a series of drawing exercises inspired, in part, by her teacher, Lynda Barry. Following Tracey’s instructions, we drew breakfast (bacon and eggs) without looking down at our index cards or felt-tipped pens. We drew self-portraits and pictures of ourselves as animals. Then, flush with evidence of our ability, we listened to one another tell stories, and (coached by Tracey) we drew our takeaways. The results were a combination of documentary-style notes, impressions, and embellishments that inscribed what and how we heard what others were saying. We were writing human/s, and we had the pictures to prove it!

Our closing activities also featured the writing arts, starting with a spoken word performance by Donnie McClendon, a PhD student in English at the University of South Florida. Through “When 4 is 6,” Donnie taught a complex lesson about remembering and forgetting by telling the story of Johnny Robinson and Virgil Ware. They were murdered the same day in 1963 that the 16th Street Baptist Church in Birmingham, Alabama, was bombed, killing Addie Mae Collins, Denise McNair, Carole Robertson, and Cynthia Wesley. We listened to their story, and then, we ended the way we began: by drawing our takeaways along with our gratitude.

In the same spirit, the blogs that follow offer a coda to WIS ’24 as well as a bridge to WIS ’25. We hope you’ll join us here on the Bits Blog and in Milwaukee next year. The theme for WIS ’25 is mise en place, a culinary term for putting things in place before cooking, especially in a professional kitchen. For us, it’s a metaphor for getting ready to write as well as a pathway to exploring the interrelationship between writing and food. Join us online or in Milwaukee, WI, January 30-31, 2025. Proposals are welcome through 10/25 and, for undergraduate writers, through 12/13. Registration opens in early November.
guest_blogger
Expert
10-17-2024
10:48 AM
This post is a part of our ongoing series on teaching in a post-AI classroom called Bits on Bots. Be sure to follow along with posts tagged with "Bits on Bots."
Do you remember where you were on November 30, 2022? I was wrapping up a semester with a teaching overload, a probable result of the COVID-19 faculty fatigue that was lurking around then (and still is, to be quite frank). That day, students in my Composition I course were gathered around one of the classroom computers when I arrived. They were tinkering with something called ChatGPT, which had just been released. Two students looked up as I walked in and excitedly called me over to check out this new, shiny thing. “This is gonna change everything,” they proclaimed. “You can really conversate with it.”
I sat down with them and, in the span of an hour, created a cute poem and a keto cake recipe (with the wrong ingredients, BTW). The outputs were fun, but I couldn’t see much further than “shiny new thing.” Then, a software engineering major said to me, “with the right input, I can get outputs that will help me be a better writer.” That was my Eureka moment, one where students actually told me that this tool could help them write rhetorically. So, I set out to do some tinkering myself to think about how we could facilitate effective writing while still maintaining the value of process over product — a bedrock of learning critical thinking in first-year writing courses.
Our class used Andrea Lunsford’s Everyday Writer that semester, so I started there. In her handbook, Lunsford discusses the Rhetorical Triangle as a model for brainstorming and iterating on a draft. The “iterating” part of the writing process, which asks students to re-visit text, audience, context, and communicator, helped me imagine how ChatGPT could be integrated into a student’s writerly journey. I then thought about how my students advised me to use ChatGPT: to talk to it through the content box, almost like texting a friend (computer scientists call this natural language processing, or NLP). Lunsford’s triangle and her discussion of the writing process in Section 2g made me think about how a conversation with an AI Assistant like ChatGPT could engage emerging writers in a process that mimicked an offline process. I had already employed a process that helped students produce writing that demonstrated both their critical thought and alignment with academic conventions.
TBH, the idea of “cheating” never crossed my mind and was never even mentioned in this or any of my classes by students. They were genuinely excited by the process of iterating with ChatGPT, not interested in short-cutting. If anything, they were more engaged with writing than I had seen in semesters prior. If you are interested in this aspect of the generative AI conversation, stay tuned in this space. I have some interesting data to share over the next few posts. Here’s a sneak peek: 78% of students surveyed think that AI might be the future of writing; 35% would take a specific class on how to use AI for writing. Almost all students reported some version of this mantra: we need to keep humans at the helm of AI-human collaboration.
I wanted to find a way to capture the spirit of process that was non-linear and authentically iterative, just like the impromptu conversations my students were having with ChatGPT that first day.
So, I went back to The Writer’s Loop, which I co-authored with my friend Dr. Lauren Ingraham. The Writer’s Loop is a born-digital text that describes a model of writing that is both iterative and purposefully recursive. The process we developed in that text helped me re-imagine how typed ChatGPT conversations, guided by proven rhetorical elements, could sustain critical thinking while increasing students’ engagement in the process and, most importantly, help them create writing in their own voices. Finally, I took this draft framework back to my first-year students in January 2023.
We worked on what computer scientists call a multi-shot process, which has become the Rhetorical Prompt Engineering Framework and the Ethical Wheel of Prompting. Both of these frameworks keep the human at the helm of AI collaboration. They are, of course, a work in progress.
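For readers curious what a multi-shot exchange can look like in practice, here is a minimal, generic sketch in Python. It only assembles the message history a writer might build over several turns, one rhetorical element at a time; the function name and the stage labels (audience, purpose, context) are hypothetical illustrations, not the Rhetorical Prompt Engineering Framework itself, which is available for download separately.

```python
# Generic illustration of "multi-shot" prompting: instead of asking once,
# the writer refines a single request over several turns, each turn adding
# one rhetorical consideration. This sketch builds the conversation as plain
# data; sending it to an AI Assistant is left out on purpose.

def build_multi_shot_conversation(topic, stages):
    """Assemble a multi-turn prompt history, one refinement per rhetorical stage."""
    messages = [{"role": "user", "content": f"Help me draft a piece about {topic}."}]
    for stage, detail in stages.items():
        messages.append({
            "role": "user",
            "content": f"Revise the draft with this {stage} in mind: {detail}",
        })
    return messages

conversation = build_multi_shot_conversation(
    "campus food insecurity",
    {
        "audience": "first-year college students",
        "purpose": "persuade readers to visit the campus food pantry",
        "context": "an op-ed for the student newspaper",
    },
)
print(len(conversation))  # one opening shot plus three refinements
```

The point of the sketch is the shape of the exchange: each turn keeps the human deciding what to add next, rather than delegating the whole task in a single prompt.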
Over the next several semesters, I tested and revised the framework to meet students where they were and to help them engage fully with a model they could and would use as their base for conversing with an AI Assistant. I infused the Framework with a known generative AI process called prompt engineering. What I learned from folks who guide ethical outputs with AI assistants is similar to what students learn as they work through their own writing processes. Simply put, this AI-infused process helps students prompt their way to a useful and reliable product, without short-cutting critical thinking.

In the semesters since 2023, students have overwhelmingly responded to this process in positive and creative ways. More importantly, they have used it in first-year writing classes as well as in their majors and have reported that they feel more confident not only in their writing but in their ability to communicate complex ideas to multiple audiences in diverse ways. And about curricular alignment? We can measure that as well! Stay tuned for a blog post this semester on the topic of assessing and aligning generative AI prompting to learning outcomes.
Just like I appreciate and respect multiple student perspectives on generative AI use, I also respect yours. We each come to this conversation with diverse skills, stories, and insights. What links us together is a commitment to serving our students to help them succeed and lead in their post-college lives. I hope that my small piece of the Bits on Bots space opens up my perspectives as it does yours. If you are interested in trying out the Rhetorical Prompt Engineering Framework, please feel free to download it here and iterate on it. If you are interested in the Ethical Wheel of Prompting, which I will discuss more in-depth in upcoming posts, please do the same. If you just want to chat about your own perspectives, please also reach out. I’m interested to hear your GPT origin stories and how you have (or haven’t) infused AI Assistants into your own pedagogies.
Thanks for reading.
Jeanne Beatrix Law is a Professor of English and Director of Composition at Kennesaw State University, focusing on generative AI (genAI) and digital civil rights historiography. Her AI literacy work has global reach, including multiple presentations of her Rhetorical Prompt Engineering Framework at conferences like Open Educa Berlin and the SUNY Council on Writing. She has led workshops on ethical genAI for diverse institutions and disciplines at Eastern Michigan, Kent State, and CUNY’s AI Subgroup. She and her students have authored publications on student perceptions of AI in professional writing. Jeanne also co-authored The Writer's Loop: A Guide to College Writing and contributed to the Instructor's Guide for Andrea Lunsford's Everyday Writer. She has authored eight Coursera courses on genAI and advocates for ethical AI integration in both secondary and higher education spaces as a faculty mentor for the AAC&U’s AI Pedagogy Institute.
stuart_selber
Author
10-15-2024
10:00 AM
This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University. This is the third post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

- AI is evolutionary, not revolutionary
- AI both solves and creates problems
- AI shifts, rather than saves, time and money < You are here
- AI provides mathematical solutions to human communication problems
- AI requires students to know more, not less, about technical communication

One of the more compelling technology myths encourages university administrators to assume that AI platforms will automatically make people more productive and thus are a cost-effective way of doing business. This myth, which is particularly appealing in a time of shrinking fiscal resources and budgetary deficits, inspires institutional initiatives that increase enrollments and workloads but not faculty positions, use chatbot advisers to help students negotiate the complexities of academic landscapes, and use AI dashboards to interpret learning analytics that require nuanced judgment and situational understanding. Did a student spend an unusual amount of time in a distance learning module because it was helpful or confusing? Only a human in the loop can discover the answer.

Right now, there is very little evidence to suggest that AI platforms actually reduce costs in any significant manner or that they enhance productivity in the ways people often claim. In fact, more than a few studies show that it is still cheaper to employ humans in many contexts and cases and that organizations and institutions are underestimating the real costs of implementing and supporting AI initiatives (see, for example, Altchek, 2024; Walch, 2024). The myth of AI as a cost- and time-reducing technology is perpetuated by work culture, especially American work culture.
Let me provide a quick example to illustrate this point. When I first started learning about generative AI, I watched a YouTube video by a financial expert who recorded himself trying to use ChatGPT to create a macro for Microsoft Excel. He captured his attempt, including all of the trial and error, for anyone to see. It took him about 20 minutes of work with TypeScript, a superset of JavaScript, but he was ultimately successful in prompting the robot to generate the macro. His response? If I had done this myself, it would have taken me all weekend. Now I can work on other things. In other words, his application of AI shifted time and money to other things.

I had to chuckle about this video because at the time—this was January of 2023—a million plus workers in France were striking because President Macron proposed to raise the retirement age from 62 to 64. Meanwhile, people in the US work hundreds of hours more per year than the average European (Grieve, 2023), and AI feeds our general obsession with productivity and work culture. (Here’s an idea, by the way: Use AI to write that macro and then take the weekend for yourself. You will be a better employee for it. But I digress.)

The real issue is how to think about the nature of the shifts. That is, what are we shifting to, exactly? What you hear a lot of analysts saying is that AI can automate lower-order tasks so that people can spend more time on higher-order tasks. Technical communication teachers already prioritize higher-order tasks in both our process and assessment advice. We tell students, for example, to ignore formatting while brainstorming ideas and drafting. And we tell them that a perfectly grammatical text can still fail miserably if it does not attend to audiences, their purposes and tasks, and other rhetorical dimensions.
Indeed, for many years now, Les Perelman has been a thorn in the side of proponents of essay grading software because his high-scoring essays have been nonsensical (see Perelman, 2012, for an overview of his case against automated essay grading). But is the distinction between lower-order tasks and higher-order tasks always so sharply drawn or even static?

Grammar is an obvious test case here. Commentators often think that applying grammar is a lower-order task, and sometimes it is. This view is unsurprising given the popularity and wide availability of grammar checkers. Our institutional instance of Microsoft Word embeds Grammarly as its checker, and by default, students can activate it from the main interface ribbon. In technical communication, however, our rhetorical treatment of active and passive voice indicates higher-order considerations. As technical communication teachers know, both active and passive voice are grammatically correct, but we have preferences. In most cases, we tell students, the active voice tends to work better because it emphasizes the agent or the doer of the action, human or non-human. The passive voice, in contrast, should be reserved for instances in which the agent is unknown or the agent is less important than the action.

Many grammar checkers, including AI-assisted checkers like Grammarly, can help students locate the passive voice in their writing. Some checkers even advise students that the passive voice is undesirable, almost an error, but this advice is misleading. What a robot needs to decide is whether the passive voice works better than the active voice for the specific purposes at hand. This is a tough ask for the robot because the answer requires higher-order reasoning. (The new edition of Technical Communication, available this fall, helps students to navigate these questions.) Another rhetorical dimension here is ethical considerations. People use the passive voice to help them avoid responsibility.
But I defy an AI robot to reason through the ethical considerations of active and passive voice in a complex situation with many factors. This is a simple example of the complications of assigning so-called lower-order tasks to AI in order to free up time for higher-order tasks. Humans will most certainly need to be in the loop much if not most of the time, somewhere, and what counts as a lower-order task or a higher-order task can change from situation to situation. Cost and time savings are tricky AI subjects with rhetorical aspects, not straightforward propositions based on static rules.

References

Altchek, Ana. 2024, April 24. “Meta’s AI Plans Are Costing Way More than They Thought.” Business Insider.

Grieve, Peter. 2023, January 6. “Americans Work Hundreds of Hours More a Year Than Europeans: Report.” Money.

Perelman, Les. 2012. “Construct Validity, Length, Score, and Time in Holistically Graded Writing Assessments: The Case Against Automated Essay Scoring (AES).” In International Advances in Writing Research: Cultures, Places, Measures, edited by Charles Bazerman, Chris Dean, Jessica Early, Karen Lunsford, Suzie Null, Paul Rogers, and Don Donahue, 121-132. Denver: University Press of Colorado.

Walch, Kathleen. 2024, April 5. “What Are the Real Costs of AI Projects?” Money.
april_lidinsky
Author
10-14-2024
10:00 AM
A few weeks ago, I had the pleasure of being in the front row when Newbery Medal-winning author Kwame Alexander visited South Bend, Indiana. Alexander was on tour to promote his newest lyrical novel for young readers, Black Star, the second in “The Door of No Return” series. There were many engaged young people in the audience, and they dominated the question and answer period with verve. A few showered Alexander with praise for his novels, which taught them history they hadn’t learned in school. Some said he’d showed them how poetry can carry a story. One high-schooler asked how she could find a publisher for her novel; he invited her to chat with him after the event, and she beamed.

And then came more tentative questions from the back of the room: “How do I become a better writer? And how can I figure out what to write?” Heads in the crowded auditorium swiveled in the direction of the speaker. I suspect many of us empathized with this brave young person who managed in several seconds to capture the core anxieties of most writers.

Kwame Alexander smiled broadly, and, I like to think, said a version of what we all tell our students (and ourselves!) when we’re worried about our writing and paralyzed about what to write about. His advice: “Read a lot. I mean, really read a lot, and pay attention to what you like and don’t like.” He also said, “In order to be interesting as a writer, you’ve got to ask a lot of questions. In order to be interesting, you’ve got to be …” And since he paused and I was in the front row, I supplied a word: “Interested.” It was not an original contribution, I know, but he gave me a dazzler of a smile, and repeated it to the crowd: “Yes! In order to be interesting, you have to be interested.” He seemed so pleased to be brainstorming with the mixed-age crowd about the hard work and satisfactions of writing. I wanted to bottle the fizz of the evening.
At this stage of the semester, many of us are working with students who struggle to find something to write about. After all, one can write about anything, but to be worth the time of the writer — and reader — that pesky “So what?” question remains the gold standard. How many of us have sat with students who are simply stuck and “can’t think of anything to write about”? It’s tempting to flip Kwame Alexander’s point and say, “I can’t make you interesting if you’re not interested.” But that’s not very helpful. So, my co-author, Stuart Greene, and I offer a variety of actually helpful questions for students in Chapter Five of From Inquiry to Academic Writing, Fifth Edition. Channeling Kwame Alexander’s advice to be interested readers so that they can be interesting writers, we help students see themselves as participants in the scholarly conversations they read about, inviting them, with scaffolded models, to:

1) Draw on your personal experience
2) Identify what is open to dispute
3) Resist binary thinking
4) Build on and extend the ideas of others
5) Read to discover a writer’s frame
6) Consider the constraints of the situation

Like Kwame Alexander, I believe — and hope you do, too — that our students can develop into more interesting writers by learning to be more interested in the world. And that’s a habit of mind that pays dividends far beyond our classrooms.
bedford_new_sch
Macmillan Employee
10-11-2024
10:30 AM
Eric Korankye is a PhD English student specializing in Rhetoric and Composition at Illinois State University (ISU). He teaches Business Writing, First-Year Composition, and Advanced Composition, and also serves as the New Instructor Mentor in the ISU Writing Program, providing mentorship and pedagogical support to new writing instructors. As an international interdisciplinary researcher and teacher from Ghana, Eric is committed to designing and practicing social justice pedagogies in Composition Studies, Rhetoric, and Technical Communication, focusing on design justice, students’ language rights advocacy, legitimation of international scholarly knowledge, and working against the intersectional oppression of students of color.

What do you think is the most important recent development or pedagogical approach in teaching composition?

Composition in this technocultural age has been impacted by the emergence of digital tools, technologies, multiliteracies, and the (r)evolution of rhetorical genres for composing. These digital innovations unavoidably have several implications for composition pedagogies, especially in terms of integrating the use of online platforms for collaborative writing and multimedia composition, to meet the interconnected and interdependent needs of writers in our composition classrooms.
For teaching composition, this means not only instructors reexamining their teaching practices but also actively:

1) integrating innovative assessment practices for student writers’ multimodal composition artifacts,
2) (re)framing traditional perceptions about writing, genres of writing, and audiences of writing,
3) (re)situating privileged canons, theories, and practices in composition studies,
4) valuing occluded writing traditions, knowledges, and languages of student writers from minoritized backgrounds,
5) foregrounding composition courses on rhetoric and multiliteracies, and
6) navigating the affordances and constraints associated with emerging writing technologies such as artificial intelligence (AI) that have recently become topical in composition studies.

When these pedagogical practices are (re)inforced in the teaching of composition, student writers’ perceptions of and approaches to composing will change and, in the long run, shape their composing and learning practices in the classroom, which they can then apply effectively in their worlds outside the classroom.

How do you ensure your course is inclusive, equitable, and culturally responsive?

In contemporary times, when the ubiquity of multiculturalism and multilingualism is gradually phasing out the previous homogeneous demography in most composition classrooms, it is important that composition classes move beyond homogenous conceptualizations to value inclusivity, equity, and cultural responsiveness. This shift can be achieved by designing and practicing anti-racist, anti-oppressive, and culturally-sustaining pedagogies which ensure that the course materials, readings, activities, assignments, language practices, and classroom cultures reflect a diverse range of voices, perspectives, cultural experiences, and ways of knowing.
Instructors need to incorporate texts from authors of different backgrounds and identities, especially BIPOC authors, and include course content that represents various cultures, traditions, histories, and experiences. This (re)alignment also means instructors’ valuing culturally relevant examples, contexts, case studies, and references that are relevant and relatable to students from diverse backgrounds. This can help students see their identities reflected in the course materials and establish connections to their own experiences. Instructors need to continuously create a supportive (safe and brave) classroom environment that makes all students feel valued and respected. Instructors need to value students’ right to their own languages, encourage open dialogue, respect differing opinions, avoid biases and stereotypes, and create opportunities for students to share their experiences and ways of knowing.

Eric’s Assignment That Works

Below is a brief synopsis of Eric's assignment. For the full activity, see Professional Outlook Portfolio Project Prompt.

My “Assignment That Works” is the Professional Outlook Portfolio project, the second major project I assign in the ENG 145.13 Writing Business and Government Organizations course I teach at Illinois State University (ISU). In this project, student writers will create a resume and a blog/website profile that match their education and work experiences. They will also search for and find a job posting, which they will respond to with a cover letter. After completing all these writing tasks, they will create an uptake document, explaining and describing their composing practices throughout the project. This project has been designed to help student writers become more critical, creative, and capable as both consumers and producers of business writing (e.g., resume/CV, job posting, business/personal website, etc.).
The goal is also to give writers hands-on experience in building a solid, marketable professional outlook for their chosen career path and in exploring various business writing genres. In assigning this project, we read and annotate the project prompt together and ask questions. We workshop project ideas in subsequent classes, and the feedback from student writers shows that the project offers them hands-on experience in creating business writing genres that they can use in real life.
mimmoore
Author
10-08-2024
11:02 AM
In a recent Chronicle of Higher Education piece, Marc Watkins shared an AI-Assisted Learning Template that he uses to encourage critical reflection and open disclosure of AI use on writing assignments. I find much to admire in his approach: he pushes students to consider assignment goals and the extent to which an AI tool helped them achieve those goals. In short, he asks them to reflect critically on their learning. And to the cynics who suggest that students will just use generative AI to compose these evaluations, Watkins admits the possibility. But he suggests most students are “eager for mechanisms to show how they used AI tools,” and he clearly wants to resist “adopting intrusive and often unreliable surveillance” in his classrooms.

I applaud the learning focus embedded in his assignment design, as well as his rejection of detection-obsessed pedagogy. But I still have concerns. First is the notion that we “teach ethical usage” primarily by ensuring students do not cheat. Discussions of ethical AI use must include transparency about what we ask the AI to do and how we use the outputs, of course, but there are other ethical issues as well. In my current corequisite FYC course, for example, we are looking at how generative AI platforms were developed and trained: was labor (particularly of minority groups), privacy, or copyright abused in the process? Will my use of the AI expose my words or data in ways that I cannot control? If I want to use the tool to brainstorm for a few minutes, are there environmental, labor, or privacy costs to that session? Is there a risk that my use will perpetuate biases or misinformation? Ultimately, is the AI the best tool for the task at hand? If not, is using it for the sake of convenience an ethically defensible choice? My students and I have also asked this question: do we have a right to know when we are interacting with synthetic (i.e., machine-generated) text or images?
The AI-Assisted Learning Template could be modified to include some of these reflections, so that learning issues are coupled with broader ethical concerns. I have included a similar exercise with the AI-themed assignments in my FYC course this term.

But another issue lingers for me: how much reflection actually occurs in assigned self-evaluations? McGarr and O’Gallchóir (2020), Thomas and Liu (2012), Ross (2011, 2014), and Wharton (2012) are just some of the researchers who have suggested that assigned (and assessed) reflections are inherently performative: students may be just as focused on managing their perceived image through skillful self-positioning as on engaging in deep reflection. Students want to position themselves positively, as people who made a good effort and were determined to learn something, deploying what Thomas and Liu (2012) call “sunshining” to characterize their efforts, challenges, potential failures, and ultimate progress. My own research on the language of student reflections suggests that students make linguistic choices that distance them from agency over decisions and outcomes. They also foreground what is perceived as desirable: effort, open-mindedness, resourcefulness, and willingness to learn or grow. Many find it hard to acknowledge uncertainty about what they learned in a given assignment; after all, doing so might be perceived as criticism of the assignment or as failure to learn, and either one is risky for students. But learning insights don’t always arrive on schedule, packaged neatly for submission by the due date. Still, my students have been trained since middle school to assert with confidence (and deferential gratitude to the teacher who provided the opportunity) that they have indeed learned, just as they were supposed to. Check that box.

I am not suggesting that the template is without value or that reflective writing needs to be scrapped altogether.
FYC certainly has a rich history of reflection-infused pedagogy that cannot be ignored. But as we adopt (and adapt) technology, tools, and templates, let’s consider how to ensure that students are empowered to question such technology, tools, and templates, promoting honest and authentic reflection insofar as that is possible. How do we do that? I don’t have all the answers, but I agree with what Marc Watkins says: we should “shift focus away from technology as a solution and invest in human capital.” Ultimately, he says, we should honor the “irreplaceable value of human-centered education.” Exactly. Thanks to Marc for opening space for this conversation.

Photo by Solen Feyissa on Unsplash
stuart_selber
Author
10-08-2024
10:00 AM
This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University.

This is the second post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

- AI is evolutionary, not revolutionary
- AI both solves and creates problems ← You are here
- AI shifts, rather than saves, time and money
- AI provides mathematical solutions to human communication problems
- AI requires students to know more, not less, about technical communication

My touchstone for this tenet of the Manifesto is what John Carroll and Mary Beth Rosson (1992) called the “task-artifact cycle,” which posits that each new technology solution entails a new set of problems, and further design and invention. Some of the problems created by AI are certainly well documented. AI can generate inaccurate information, make up information (hallucinate), automate work that requires human judgment, neglect current events, reinforce bias and discrimination, and spread political disinformation. AI companies are working hard to address these problems, but their solutions can spawn other or additional problems. Consider what happened with the Gemini platform from Google. In response to concerns about perpetuating bias and discrimination, Google retrained its Gemini robot to be more sensitive to diversity issues, but one result was racially diverse images of Nazi soldiers and the Founding Fathers.

The task-artifact cycle can be illustrated at nearly every twist and turn. A good example of how this cycle has played out previously is templates, such as the templates in Microsoft Word for technical communication documents and in WordPress for websites. And it applies to AI too. After all, much of the content generated by AI is templated, reflecting conventional understandings represented as predictive patterns in a large corpus of texts.
The field has done a good bit of research on the use of templates, showing what they buy writers and outlining problems for both writing and education (see Arola, 2010; Gallagher, 2019, chapter 2). What templates buy writers is a mechanism for embodying genres, foregrounding document structures, enforcing consistency, and supporting collaboration. Templates also aid invention by signaling design dimensions of conventional genre elements. But a new problem, especially in schools, is how to account for the template in a technical communication project. Phrased as a question: How much of the template do I have to revise to make my project original enough to warrant individual academic credit? My students often ask this question, and it is easy to see why. (For student-facing guidance on templates and the use of AI, see the new edition of Technical Communication, available this fall.)

And it is a pretty safe bet that students will be asking this very question about website designs, code, and content generated by AI, especially because users typically own the copyright to AI output, at least according to AI companies. In fact, a question in the courts right now is whether robots can even own copyrights, because US law states that copyrights only protect human-made work; this is not the case globally, by the way. We will have to see what ends up being the tipping point here. If a student revises 51% of an image generated by AI, is that enough of an intervention for them to be able to claim that the image is a human-made work? Time will tell.

Plagiarism has certainly received a lot of attention in schools, and AI promises to exacerbate plagiarism problems unless they are addressed thoughtfully. By plagiarism, I of course mean cheating, in which students knowingly hand in the work of a robot as their own.
But I also mean the more interesting cases of so-called inadvertent plagiarism, in which students do not really know how to incorporate the work of a robot into their writing and communication. Teachers themselves are struggling with how to think about AI-generated content as source material for technical communication students. I already mentioned that one complicating factor has to do with copyright. In ChatGPT, for instance, users own the copyright to both the input (their prompts) and the output (the robot’s responses). This makes sense in that AI companies like OpenAI want users to feel free to use the content produced by their robots. In most cases—again, at least according to AI companies—students may very well own the copyright to what an AI program produces for them. But this is an unsettled legal distinction, not a programmatic, pedagogical, or even ethical one. Just because students own the copyright to a text does not mean they can use it in just any situation. And this has been true historically: most if not all technical communication programs do not permit students to reuse work for credit more than once without permission from the current teacher. There are limits to what copyright ownership buys students in technical communication programs.

Plagiarism is obviously an old problem, but there does seem to be something qualitatively different when AI is involved. To prove academic misconduct, a usual part of the process has been to find the plagiarized texts. To find them, teachers often use the very technology students used to cheat in the first place: Google—or another search engine. With AI, however, there are no plagiarized texts to find. Although some studies have concluded that robots themselves can plagiarize, the issue here has to do with what counts as proof for an academic integrity committee in a college or university setting.
This question of evidence is a new problem that many teachers and program directors are now struggling to overcome. I hesitate to say much more about plagiarism, especially the cheating version, because it has received outsized attention in schools, forcing many of us to respond in kind. But the issues around integrating AI as a writing resource are worth our time and attention. To foreshadow my final overall point in the last post in this series (and some of the new coverage in the latest edition of Technical Communication): What, exactly, is the “where” of AI? Where should students be able to use it as a writing resource in their academic work? And how should they acknowledge that use? Anticipating the task-artifact cycle, it will be interesting to see what sorts of problems are spawned by our answers.

References

Arola, Kristin L. 2010. “The Design of Web 2.0: The Rise of the Template, The Fall of Design.” Computers and Composition 27 (1): 4-14.

Carroll, John M., and Mary Beth Rosson. 1992. “Getting around the Task-Artifact Cycle: How to Make Claims and Design by Scenario.” ACM Transactions on Information Systems 10 (2): 181-212.

Gallagher, John R. 2019. Update Culture and the Afterlife of Digital Writing. Boulder: University Press of Colorado.
jack_solomon
Author
10-03-2024
10:00 AM
The polarization in American society today, and the way in which it is reflected in our popular culture, is a fundamental focus of the now-available 11th edition of Signs of Life in the U.S.A. And so I'd like to inaugurate my return to the Bits blog with a look at how political passion can pop up in some of the weirdest places, causing controversy in the most apparently innocuous circumstances. Allow me to explain.

As I have noted a number of times in my years of blogging here, I participate in an online hobby forum (no, it doesn't involve firearms) where many of the participants are quite conservative. In fact, that's the main reason I still visit the site: it provides me with a glimpse into a world that tends to fly under the radar of most mainstream media coverage, and which is generally invisible to most cultural analysts. My participation on the site thus offers me insights into what is going on in places outside the ordinary academic orbit, and I am sometimes mystified at first by what I see there because I do not know the codes behind a lot of the comments.

For example, in the run-up to Super Bowl LVIII there was a discussion on the site about who was going to win the game. That, of course, is only natural. A number of the comments, however, explicitly said, "I'm for any team except the Kansas City Chiefs." That was a bit odd. I mean, what's wrong with the Kansas City Chiefs? Perhaps, I thought, the hostility might be due to the Chiefs' recent domination of the Super Bowl (a sort of "damn Yankees" reaction), but I also noticed an obsessive focus on Kansas City tight end Travis Kelce and how much attention he was getting. That puzzled me until it became clear that the objection was to Kelce's relationship with Taylor Swift and all the attention the two of them were getting.
But that puzzled me too, because I thought that the kind of people who participate on the forum would adore this heartland-of-America team and the wholesome young couple at the center of it. So I was faced with a perfect opening for a semiotic analysis, one that begins with a question ("what is going on here?") rather than a conclusion. And to answer that question I had to do some research.

Now, I am sure that many of you reading this blog will already know the answer to my question, because all things Swift are well known to just about everyone. Except me. But I learned a lot in my research, including the fact that Taylor Swift is known to have voted for Joe Biden in 2020 and has generally indicated an inclination towards liberal political positions. I've also learned that long before Swift finally did announce her endorsement of Kamala Harris, her support was eagerly sought by Democrats and Republicans alike, though it was assumed that she would probably lean blue. So the explanation for all the conservative fuss about the Kansas City Chiefs on the website is a simple one: the Chiefs had become Taylor Swift's team, which meant, in the code, that the Chiefs were anti-MAGA, which is why some Republican voters are now repudiating their Swiftie pasts. But the obviousness of the conclusion only becomes apparent when you can crack the code.

As various news outlets like to say after things like presidential debates, there are five takeaways from this blog's analysis:

- Political polarization has infiltrated every nook and cranny of American popular culture.
- This polarization is often expressed in insiders' codes, which can be decoded by situating their signs in larger systems of associated phenomena that reveal what is really being said.
- A semiotic analysis best begins with a question – what is going on here? – rather than with an opinion or a pre-formed conclusion.
- One's own political views are not a part of a semiotic analysis.
- Taylor Swift is occupying a great deal of mental real estate, and not only among her fans – which says a great deal about the power of popular culture.

Image courtesy of IHeartRadioCA via Wikimedia Commons
stuart_selber
Author
10-01-2024
10:00 AM
This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University.

As we kick off the 2024 fall term, I want to offer something of a conceptual manifesto for how to think about artificial intelligence (AI) in the context of technical communication programs. I hope to provoke pedagogical and programmatic initiatives that are both productive for our students and responsible to our field. The manifesto includes five tenets, each of which will be explored in its own post:

- AI is evolutionary, not revolutionary
- AI both solves and creates problems
- AI shifts, rather than saves, time and money
- AI provides mathematical solutions to human communication problems
- AI requires students to know more, not less, about technical communication

Teachers can use these tenets as talking points for their students and to frame curricular developments and revisions in their courses and programs. Which leads me to our focus for this week.

Tenet #1: AI is evolutionary, not revolutionary

When I was working on my dissertation in the early 1990s at Michigan Technological University, hypertext was all the rage, and many scholars in our field considered it to be a revolutionary technology that promised to suddenly change the nature of textuality in central ways. I am thinking especially of scholars in the groundbreaking collections edited by Edward Barrett (1988, 1989) and Paul Delany and George Landow (1991). But what hypertext really offered us was a platform for enacting postmodern theories of writing and reading that were at least two decades old. In this respect, hypertext was more evolutionary than revolutionary in nature. Historian of science Michael Mahoney (1996) has argued quite convincingly that “Nothing is entirely new, especially in matters of scientific technology.
Innovation is incremental, and what already exists determines to a large extent what can be created” (773). We see this reality in AI itself: How can an AI chatbot generate anything entirely new when its training data comes from the historical past?

Technical communication teachers and program directors have managed to domesticate everything from microcomputers and mobile devices to production and communication platforms to course-management systems and the internet of things. We will also learn how to domesticate AI for our purposes and contexts. Historically, a popular approach to the curricular integration of technology has been to “forget technology and remember literacy,” to reference what my dissertation director Cynthia Selfe (1988) wrote in the late 1980s. What continues to be powerful about this sentiment is that it reminds us that what we already know about teaching and learning will go a long way toward helping us address artificial intelligence. This is why the AI position statement from the Association for Writing Across the Curriculum re-affirms best practices grounded in decades of writing research. So too does the AI position statement from the MLA-CCCC Joint Task Force on Writing and AI.

There is a trap, however, and that is relying too heavily on one-way literacy models as a foundation for AI initiatives. Many people simply transfer their existing assumptions, goals, and practices into AI contexts. Although it is comfortable and sensible to begin with current ways of knowing and working, such an approach is ultimately limiting because it is non-dialogic: Not only does the model assume that AI is neutral, but it fails to recognize that AI can encourage us to reconsider taken-for-granted assumptions, goals, and practices.
So, in addition to addressing the possibilities and problems of AI, we should also see this liminal moment as an opportunity to revisit the status quo and consider how AI might encourage us to reinvent certain aspects of the field, including writing processes and the roles and responsibilities of technical communicators. On the broadest level, one of the more valuable aspects of AI might end up being that it can defamiliarize the familiar, as sociologist Zygmunt Bauman (2005) might put it, at least for the foreseeable future, so that we can look anew at how we construct our professional world.

References

Barrett, Edward, ed. 1988. Text, Context, Hypertext: Writing with and for the Computer. Cambridge: MIT Press.

Barrett, Edward, ed. 1989. The Society of Text: Hypertext, Hypermedia, and the Social Construction of Information. Cambridge: MIT Press.

Bauman, Zygmunt. 2005. Liquid Life. Cambridge: Polity Press.

Delany, Paul, and George P. Landow, eds. 1991. Hypermedia and Literary Studies. Cambridge: MIT Press.

Mahoney, Michael S. 1996. “Issues in the History of Computing.” In History of Programming Languages, Volume 2, edited by Thomas J. Bergin and Richard G. Gibson, 772-81. Reading: Addison-Wesley Professional.

Selfe, Cynthia L. 1988. “The Humanization of Computers: Forget Technology, Remember Literacy.” The English Journal 77 (6): 69-71.
bedford_new_sch
Macmillan Employee
09-27-2024
07:00 AM
Katayoun Hashemin

Katayoun Hashemin is an Iranian teacher and political activist who views teaching composition and creative writing as two sides of the same coin. After earning her M.A. in TESL/TEFL from Colorado State University in 2023, she committed herself to supporting her home country through its ongoing national revolution. She writes nonfiction essays about Iran to illuminate lesser-known facts and life experiences that many do not normally associate with Iran. Her goal is to broaden non-Persian speakers' understanding of Iran’s cultural, historical, and political heritage.

What is the most important skill you aim to provide your students?

Many emphasize the importance of teaching students to read and think critically, but I believe that teaching students how to do so is even more important, as there is usually more than one way to think critically, and people’s approaches and receptiveness to persuasion vary widely with culture, personality, social priorities, identity, and context. Therefore, I prefer to design activities and group work that reveal the processes of thinking and writing. By making these processes visible, students can compare and analyze their own and their classmates' approaches. This comparison highlights similarities and differences in analysis and the level of detail required to emerge as a critical thinker and writer. Through these methods, students gain a deeper understanding of critical thinking and develop the skills necessary to apply it effectively in their writing. Additionally, it’s very important to teach students to become independent writers who are capable of spotting inadequacies in their drafts by using self-assessment checklists and predicting and responding to the audience’s reactions. To foster that independence, I teach them to craft revision plans, prioritizing changes and setting specific, actionable goals to enhance their writing.
How do you engage students in your course, whether f2f, online, or hybrid?

As a TESOL major, I've been trained to work with the SIOP (Sheltered Instruction Observation Protocol) model, which I find highly effective for engaging students in various learning environments. The SIOP model was first developed to make content like math and biology comprehensible for second language learners through hands-on activities and by aligning lesson objectives with actionable steps that guide the progression of relatable class activities that keep students involved. In a composition classroom, SIOP can be replicated by matching the two axes of content and language with rhetorical concepts and the act of composing, respectively. In other words, teaching rhetoric becomes the content, while the act of composing is the second language.

In SIOP, the key to keeping students engaged is to ensure that the material is accessible and intelligible for them. Therefore, to make rhetorical concepts comprehensible for students, I introduce these holistic and abstract concepts by comparing their components with the rather tangible and familiar content of everyday activities. For example, in one assignment, I compare the synthesis of academic sources to the process of baking a cake, where individuals need to decide what ingredients need to be mixed in what order to deliver an audience-friendly and convincing cake! Moreover, I ensure that each lesson objective is paired with an action verb. This method not only clarifies the lesson's goals but also actively engages students in the learning process by specifying what they are supposed to do, rather than just building theoretical knowledge.
An activity designed around the objective “Today I will be able to convince my audience to approve my proposal” is more likely to engage students than a more general objective like “learning persuasive writing.” This combination of actionable objectives and real-life analogies helps to engage students deeply and effectively in their learning journey.

Katayoun's Assignment that Works

During the Bedford New Scholars Summit, each member presented an assignment that had proven successful or innovative in their classroom. Below is Katayoun's explanation of her assignment, Synthesizing Sources:

This assignment is meant to help students understand the meaning and process of synthesizing various sources for writing a research paper and to apply it in crafting a thesis statement. Recognizing that grasping the idea of synthesizing can be challenging, I compare it to baking a cake! Just as a baker needs different ingredients to create a cake, students need to read a range of different sources to develop their own brand-new claim/argument. No baker can turn five different types of flour into a cake; they need milk, oil, and other ingredients too! Moreover, a good pastry chef doesn't serve raw ingredients separately (that would make for a horrible experience for the customers/audience!) but combines them in the right amounts and order to create a delicious cake that persuades customers to want more! The assignment uses numerous visuals to bring these steps to life and make the analogy as effective as possible. It concludes with students drawing a visual representation of what material each of their sources offers. This marks the beginning of the synthesis process for their own research. If all the visuals end up depicting the same scene, then perhaps the chef is using only flour and needs to consider using a range of different materials to make the research cake possible!
andrea_lunsford
Author
05-23-2024
07:00 AM
Commencement and award times in 2024 are, without doubt, a season of discontent, to say the very least. So many campuses have turned in on themselves as students, faculty, and staff raise protests, sometimes violent, against Israeli bombings or against antisemitism, or both. Cancellations of commencement ceremonies–what’s to celebrate?

And yet. As always, there are some signs of spring and pockets of hope. If we look for them. If we find and share them. And while students on my home campus have mounted significant protests, demanding changes in University investment policies, they are also completing this spring term and participating in one of my favorite spring rituals: annual writing awards. There’s an award for first-year writing and rhetoric, one for second-year writing and rhetoric, and one for writing across the disciplines—and I look forward every year to listening to presentations and reading winning essays, always so happy to celebrate student ingenuity and creativity.

This year's second-year awards for oral presentation of research go to students who have written and presented on subjects as diverse and fascinating as "What Does It Mean to Create Alongside Technology?," "Outside the World: Community, Culture, and Utopian Ideas in Antarctic Stations," "How Sex and Age Influence Medical Treatment," "Securing Farmworkers' Futures in the Age of AI," and "Citizen Science, Environmental Justice, and Lithium Mining." I was fortunate to join this year’s celebration via Zoom just two days ago, where I got to hear six students describe their research and talk about what the process of investigation and presentation had meant to them. In every case, they commented on the privilege of being able to “dig deep” into a subject they felt passionate about, as well as the chance to learn to communicate through way more than words alone. As always, I wanted to hear all their presentations and to sit and talk with each one of them.
So here near the end of the 2023-24 school year, and amidst upheaval and daily horrors, I hope that you are finding small moments of joy and connection. I am grateful for such moments, and for all teachers and students of writing, everywhere. As summer nears, I am going to take a break from this beloved blog to put my big girl pants on and tackle revisions of my books, including Easy Writer and Everything’s an Argument. So it’s nose-to-the-grindstone time! I wish you—and our poor world—a rejuvenating summer season. And peace.

"graduation caps" by j.o.h.n. walker is licensed under CC BY 2.0.
susan_bernstein
Author
05-17-2024
07:00 AM
Letter to My Students: Spring 2024 Edition

Memory Plaques
Neurodivergent Teaching

Dear Students,

The last full week of spring semester classes began with a sunny day, warm enough for summer. The quad, with its blossoming trees in white and pink, called my heart. I taught the lesson projected on the screen. You wrote for a while. Then we took a walking tour of the memory plaques around our campus. Initially I had scheduled the field trip for our penultimate class meeting, but storms were predicted for that day. We agreed to take advantage of the pleasant weather.

When my fall semester first-year writing class took this tour, we focused mostly on Mississippi Freedom Summer, 1964, and the three Civil Rights workers murdered by the KKK: James Chaney, Andrew Goodman, and Michael Schwerner. The clocktower on our campus is named after them. Andrew Goodman was a student at our college who volunteered to take part in Freedom Summer. There are other memory plaques on our campus that you asked to see: two for 9/11/2001, and one for lives lost in our community to Covid-19. The Covid-19 memory plaque is slate gray with yellow letters framed in yellow rectangles. Under the letters there is a slate gray heart outlined in yellow.

Covid-19 Memorial Plaque, Queens College, City University of New York. Photo by Susan Bernstein, October 17, 2023

I suggested that I would write reflection prompts that would allow you to connect this field trip to the final writing project. Then the clock tower chimed for 12 noon and we offered a moment of silence. Class had run overtime, and soon afterward all of us dispersed in the sunlight.

PROMPTS

As promised, please consider the following prompts for reflection:

- What did you learn from our field trip to visit the memory plaques on campus?
- In your opinion, what is the intended purpose of the memory plaques? Do you think the plaques serve their intended purpose? Why or why not?
- If you were to create your own memory plaque or other remembrance for Spring Semester 2024, what memory would you choose? What would your remembrance look like? Where would it be located? What shape or space would it take? Why?
guest_blogger
Expert
05-16-2024
10:00 AM
Jennifer Duncan has been teaching English for twenty years, first at Chattanooga State Community College in Tennessee and now at Georgia State University’s Perimeter College in Atlanta, GA, where she is Associate Professor of English. For the last ten years, she has focused exclusively on online teaching, concentrating on creating authentic and meaningful learning opportunities for students in composition and literature courses.

In the Doctor Who episode “Silence in the Library,” the Doctor and his companion, Donna, visit the Library, a planet containing printed copies of every book ever written in all of time and space.

AI Image Generated by Craiyon.com

Unfortunately, when they arrive, the Library is entirely empty—except, of course, for the intergalactic bookworms—the Vashta Nerada—who have lived in the books for generations and who now awaken to eat anyone who visits the library. There is also a second threat from the Library itself, which “saves” people by uploading them into information nodes—robotic library assistants who wear the faces of those they have consumed and offer vague answers to questions, requiring lots of follow-ups from the Doctor to get any real information.

Essentially, this is how GenAI tools like ChatGPT work. They were created by scanning vast amounts of online information indiscriminately, so, while containing lots of knowledge, they also contain both threats faced by the Doctor. In this case, the Vashta Nerada serve as my metaphor for the bias sleeping in our texts throughout history. Why is ChatGPT so biased? Because it is reflecting back to us the bias that was always hidden in our texts—in the gender stereotypes of textbooks, in the disproportionate representations of minorities in police reports, in the negative biases that have always been around for us to ignore or to claim that they represent only a fringe point of view. The GenAI data collector consumed the mass of texts available to it and reflected those biases back to us.
Like those information nodes, it shows us our own weakness by way of vague answers that require us to ask the follow-up questions, expose the bias, and find what we really want, which is, hopefully, the best of us instead of the worst. It's like a child who learned to read and was then given access to everything in a vast library with no assistance, no direction, and no guardrails to evaluate or understand what they consumed.

When my son was born, we were residents of Tennessee, so he was automatically enrolled in Dolly Parton’s Imagination Library. Every month, we received a new book in the mail, and they were great books! From The Little Engine That Could to Look Out Kindergarten, Here I Come!, these carefully curated book lists reach children as they develop intellectually and guide them as they develop their worldview. Kids learn to rely on friends in Kitty Up, that every creature needs rest in Panda Whispers, and that mom needs a minute in Llama Llama Red Pajama!

So where does this tale of two libraries leave us? Like the Doctor and Donna confronting hidden dangers in the Library, we must confront the biases ingrained in the texts and systems that shape our culture and our digital landscape. But like Dolly’s Imagination Library, we have the opportunity to curate our students’ digital literacy skills, guiding them toward a better understanding of the flaws in our culture and in GenAI technologies. It might even be possible to cultivate a generation of discerning learners capable of embracing the complexities and contradictions of our modern age, much like the Doctor traveling through time and space armed with nothing more than curiosity, optimism, and a commitment to truth. The challenge seems impossible—especially without a sonic screwdriver—but it’s worth the effort. For in the words of the Doctor, “The universe is big, it’s vast, and complicated, and ridiculous. And sometimes, very rarely, impossible things just happen and we call them miracles.”
andrea_lunsford
Author
05-16-2024
07:08 AM
It’s probably no coincidence that a lot of folks are talking about the importance of and need for conversation today. News feeds are full of shouting matches, scenes of verbal attack and counterattack; everyone seems to be talking and no one seems to be listening. Conversation? Not likely or even possible, some say.

Into this scene steps conservative columnist David Brooks with a book called How to Know a Person. Brooks says he has spent a lot of his life taking and advocating for positions with little regard for what others think or say. It has taken him years, he realizes, to learn to listen with genuine curiosity to others, especially those he doesn’t agree with. In short, he was engaging in one-way talk rather than two-way conversation. And that got him questioning his own modes of communication.

David Brooks in conversation at LBJ Library in 2022

Extensive research for his book eventually led him to identify two levels to any conversation. The first layer is the subject—what the participants are literally talking about. The second layer, which he calls the “underconversation,” is the “flow of emotion” going between the people talking. That second layer, Brooks argues, is very important: is it making the speakers feel safe? Uneasy or unsafe? Angry? Listened to and respected? Or not? Paying attention to the underconversation led Brooks to ask different questions: not “What do you think about X or Y?” but questions that keep opening doors: “Tell me more. What am I missing? Tell me more.”

Even more recently, Pulitzer Prize-winning investigative journalist Charles Duhigg takes a close look at the need for productive conversations in his Supercommunicators: How to Unlock the Secret Language of Connection.
Like Brooks, Duhigg has done a lot of research, interviewing people all over the country and working up what amounts to a series of case studies on how effective or “super” communicators manage to “connect to almost anyone.” This research taught him that a key to success is, first and foremost, understanding what kind of conversation you are having: one focused on practical issues (what is this conversation really about?), emotional issues (how does each participant feel?), or identity issues (who are we, and how are identities silently shaping the conversation?).

The rest of the book focuses on what Duhigg calls “learning conversations,” which I think of as deeply rhetorical, and shows how working through the four “rules” his research found at work in powerfully effective communicative practices leads to understanding (of both self and others) and learning. Here are the four rules, simple sounding but hard to live by:

- Pay really close attention to the kind of conversation you are having.
- Share your goals—and invite others to share theirs as well.
- Inquire about how others feel, and share your feelings too.
- Explore if and how identities are central or important to the conversation.

In case after case, Duhigg shows how listening without judgment, sharing feelings, and identifying common ground can lead to productive conversations and sometimes (!) to changed behavior. In chapter 6, titled “Our Social Identities Shape Our Worlds,” Duhigg follows Dr. Jay Rosenbloom as he conducts numerous “well baby exams” and talks with parents about vaccinations, quickly learning that some parents are eager for such immunizations while others reject them outright. Rosenbloom does his best to give good advice but generally defers to parental wishes—until Covid. As the pandemic spreads, Dr. Rosenbloom becomes more and more frantic about the number of lives being lost and the number of patients who refuse vaccination.
When he asks a senior colleague for advice, that doctor says, “Tell them you’re the doctor and you know best.” Predictably, this tactic didn’t work, and often just alienated patients and infuriated doctors. You can check out the chapter to read about how Rosenbloom learned to talk with patients, sharing values and personal stories, listening for the parts of their identities at play—and eventually finding ways to connect, and sometimes to change minds and save lives. The potential and power of conversation.

Writing classes often rest on a conversational foundation, though usually more implicitly than explicitly. What I’ve been thinking about since reading these books is how I might bring what Brooks calls the “underconversation” into focus in my classroom, and then how we could use Duhigg’s three kinds of conversations and his four rules to guide us in classroom talk—especially on topics that are uncomfortable, that trigger strong emotions, and that are often skipped over or ignored because they are “just too much.” The way to begin, I’ve found, is by having a conversation about conversation, one in which we sketch out what’s in it for us as a class (and as individuals) to learn how to connect to others, and to learn in the process.

Image via Wikimedia Commons