An AI Manifesto for Technical Communication Programs: AI requires students to know more, not less, about technical communication
This series of posts is adapted from a keynote address given by Stuart A. Selber at the Teaching Technical Communication and Artificial Intelligence Symposium, hosted by Utah Valley University on March 20, 2024.
This is the fifth post in a series exploring five tenets of a Manifesto for Technical Communication Programs:
- AI is evolutionary, not revolutionary
- AI both solves and creates problems
- AI shifts, rather than saves, time and money
- AI provides mathematical solutions to human communication problems
- AI requires students to know more, not less, about technical communication < You are here
To explain this tenet of the Manifesto, I want to share the results of a study I conducted with Johndan Johnson-Eilola and Eric York (Johnson-Eilola, Selber, and York, 2024). We analyzed the ability of ChatGPT to generate effective instructions for a consequential task: taking a COVID-19 test at home. Specifically, we compared the instructions ChatGPT generated from a commercial prompt with the instructions provided by the test manufacturer. We also analyzed the input, the prompt itself, to address prompt engineering issues. Although the output from ChatGPT exhibited certain conventions of procedural documentation, the human-authored instructions from the manufacturer were superior in many ways. We therefore concluded that when it comes to creating high-quality, consequential instructions, ChatGPT might be better seen as a collaborator with, rather than a competitor to, human technical communicators.
Let me summarize our results for you. Although the robot did a good job of breaking down the overall task into subtasks and using the imperative mood, it was unsuccessful in several areas, some of them critical to health and safety. For example, the robot numbered the headings and bulleted the steps instead of numbering the steps and using text attributes to build a design hierarchy with headings. You probably recall dividing your stressed-out attention between a test kit and its instructions: bullets do nothing to help you re-find your place in the instructions as you go.
More critically, the robot failed to specify the insertion depth for the nasal swab, and it got the insertion time wrong. It also failed to specify how long the nasal swab should be swirled in the tube or well. These problems could very well lead to incorrect test results, failing to reveal, for instance, that the user has contracted COVID-19. There were other problems with the instructions, but these were among the major ones.
The output, of course, reflects the input, and the prompt had many problems, too. For example, the prompt did not call for a document title, a preview of the task, a list of items that users will need, or safety measures, all common conventions of the genre, and conventions whose presence produces well-known usability benefits.
Possible solutions to these problems are themselves problematic. We might naively assume that the solution to a flawed prompt is simply better prompt engineering: that we should rewrite our purchased prompt to achieve better results. Using our expert knowledge of the conventions of the instruction set genre (and, more to the point, the reasons for those conventions), we could write a new prompt that converts the numbered top-level items into unnumbered headings and the bulleted substeps into numbered steps (to make them more conventional and thus more usable); requires the instructions to include all necessary information (to account for the missing swab depth); and adds a clause to double-check that the information is accurate (to correct the errors in nasal insertion time and swab swirl time).
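To make this concrete, here is a minimal sketch of what such a rewritten prompt might look like, sent to a chat model through the OpenAI Python client. The prompt text is a hypothetical illustration of the genre conventions named above, not the commercial prompt we analyzed, and the model name is an assumption:

```python
# Hypothetical rewritten prompt: illustrative only, not the commercial
# prompt analyzed in the study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

revised_prompt = """Write instructions for taking an at-home COVID-19 test.
- Begin with a document title and a one-sentence preview of the task.
- Before the first step, list every item the user will need.
- State safety measures before any step that requires them.
- Group steps under unnumbered headings; number every step; do not use bullets.
- Specify exact measurements and durations, including the swab insertion
  depth, the insertion time, and how long to swirl the swab in the tube.
- Double-check that every measurement and duration is accurate."""

response = client.chat.completions.create(
    model="gpt-4o",  # model name is an assumption
    messages=[{"role": "user", "content": revised_prompt}],
)
print(response.choices[0].message.content)
```

Notice that a prompt this explicit already encodes the very genre knowledge that, as I discuss next, novice users do not have.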
The problem with this solution is that while we can rely on our expertise to tell us how to effectively rewrite this prompt, novice users cannot do that. And even if users could effectively rewrite this prompt, ChatGPT may not be able to successfully replace the bullets with numbers, correct the insertion times, and so on.
The solution to the errors introduced by the AI, then, is not more AI. Quite the opposite, we argued. Making a fully detailed set of instructions to correctly tell ChatGPT how to write instructions seems to defeat the purpose of using the technology. That is, if we can already write instructions that well and have the time and wherewithal to iterate the results until we are satisfied they will not harm anyone, then why have the robot write them for us in the first place?
In sum, one danger of AI is not that it can replace human writers of highly routine genres but that it seems like it can. And depressingly, this might just be good enough for much of the world.
Reference
Johnson-Eilola, Johndan, Stuart A. Selber, and Eric J. York. 2024. “Can Artificial Intelligence Robots Write Effective Instructions?” Journal of Business and Technical Communication 38 (3): 199–212.
Conclusion
Rather than concluding with a summary of the five tenets—I hope I have been clear enough in discussing them—I will end this series of blog posts with a final point, one that is critical to any and all pedagogical activities in technical communication programs.
As we all know, writing is a mode of learning and of meaning-making. Although we must prepare students for the day-to-day work of technical communication, we are doing more than just grading the effectiveness of final projects. As we develop AI activities and assignments, we should always keep our learning goals in mind.
With those goals in mind, we can better tackle the “where” of AI, something we address throughout the new edition of Technical Communication. By this I mean where we teach students to use AI in technical communication work. For example, I discourage the use of AI in first drafts because of the crucial role they play in helping students figure out what they want to say and how they want to say it.
Where can AI assist learning and where might it create problems for learners? If we want to avoid outsourcing learning to technology, this is a key question for all of us.