An AI Manifesto for Technical Communication Programs: AI both solves and creates problems


This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium, hosted by Utah Valley University on March 20, 2024.

 

This is the second post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

  1. AI is evolutionary, not revolutionary
  2. AI both solves and creates problems  < You are here
  3. AI shifts, rather than saves, time and money
  4. AI provides mathematical solutions to human communication problems
  5. AI requires students to know more, not less, about technical communication


My touchstone for this tenet of the Manifesto is what John Carroll and Mary Beth Rosson (1992) called the “task-artifact cycle,” which posits that each new technology solution entails a new set of problems, which in turn prompt further design and invention.

Some of the problems created by AI are certainly well documented. AI can generate inaccurate information, make up information (hallucinate), automate work that requires human judgment, neglect current events, reinforce bias and discrimination, and spread political disinformation.

AI companies are working hard to address these problems, but their solutions can spawn additional problems. Consider what happened with the Gemini platform from Google. In response to concerns about perpetuating bias and discrimination, Google retrained its Gemini robot to be more sensitive to diversity issues, but one result was racially diverse images of Nazi soldiers and the Founding Fathers. The task-artifact cycle can be illustrated at nearly every twist and turn.

A good example of how this cycle has played out in the past is templates, such as the templates in Microsoft Word for technical communication documents and in WordPress for websites. And the cycle applies to AI, too. After all, much of the content generated by AI is templated, reflecting conventional understandings represented as predictive patterns in a large corpus of texts. The field has done a good bit of research on the use of templates, showing what they buy writers and outlining problems for both writing and education (see Arola, 2010; Gallagher, 2019, chapter 2).

What templates buy writers is a mechanism for embodying genres, foregrounding document structures, enforcing consistency, and supporting collaboration. Templates also aid invention by signaling design dimensions of conventional genre elements.

But a new problem, especially in schools, is how to account for the template in a technical communication project. Phrased as a question: How much of the template do I have to revise to make my project original enough to warrant individual academic credit? My students often ask this question, and it is easy to see why. (For student-facing guidance on templates and the use of AI, see the new edition of Technical Communication, available this fall.) And it is a pretty safe bet that students will be asking this very question about website designs generated by AI, code generated by AI, and content generated by AI, especially because users typically own the copyright to AI output, at least according to AI companies.

In fact, a question in the courts right now is whether robots can even own copyrights, because US law states that copyright protects only human-made work; this is not the case globally, by the way. We will have to see what ends up being the tipping point here. If a student revises 51% of an image generated by AI, is that enough of an intervention for them to claim that the image is a human-made work? Time will tell.

Plagiarism has certainly received a lot of attention in schools, and AI promises to exacerbate plagiarism problems unless they are addressed thoughtfully. By plagiarism, I of course mean cheating, in which students knowingly hand in the work of a robot as their own. But I also mean the more interesting cases of so-called inadvertent plagiarism, in which students do not really know how to incorporate the work of a robot into their writing and communication. Teachers themselves are struggling with how to think about AI-generated content as source material for technical communication students.

I already mentioned that one complicating factor has to do with copyright. In ChatGPT, for instance, users own the copyright to both the input (their prompts) and the output (the robot's responses). This makes sense in that AI companies like OpenAI want users to feel free to use the content produced by their robots. In most cases (again, at least according to AI companies), students may very well own the copyright to what an AI program produces for them.

But this is an unsettled legal distinction, not a programmatic, pedagogical, or even ethical distinction. Just because students own the copyright to a text does not mean they can use it in just any situation. And this has been true historically. For example, most if not all technical communication programs do not permit students to reuse work to earn credit more than once without permission from the current teacher. There are limits to what copyright ownership buys students in technical communication programs.

Plagiarism is obviously an old problem, but there does seem to be something qualitatively different when AI is involved. To prove academic misconduct, teachers have usually had to find the plagiarized texts. To find the plagiarized texts, they often use the very technology students used to cheat in the first place: Google or another search engine. With AI, however, there are no plagiarized texts to find. Although some studies have concluded that robots themselves can plagiarize, the issue here has to do with what counts as proof for an academic integrity committee in a college or university setting. This question of evidence is a new problem that many teachers and program directors are now struggling to address.

I hesitate to say much more about plagiarism, especially the cheating version, because it has received outsized attention in schools, forcing many of us to respond in kind. But the issues around integrating AI as a writing resource are worth our time and attention. To foreshadow my final overall point in the last post in this series (and some of the new coverage in the latest edition of Technical Communication): What, exactly, is the “where” of AI? Where should students be able to use it as a writing resource in their academic work? And how should they acknowledge that use? Anticipating the task-artifact cycle, it will be interesting to see what sorts of problems are spawned by our answers.

 

References

Arola, Kristin L. 2010. “The Design of Web 2.0: The Rise of the Template, The Fall of Design.” Computers and Composition 27 (1): 4-14.

Carroll, John M., and Mary Beth Rosson. 1992. “Getting around the Task-Artifact Cycle: How to Make Claims and Design by Scenario.” ACM Transactions on Information Systems 10 (2): 181-212.

Gallagher, John R. 2019. Update Culture and the Afterlife of Digital Writing. Boulder: University Press of Colorado.
