An AI Manifesto for Technical Communication Programs: AI shifts, rather than saves, time and money


This series of posts is adapted from a keynote address given by Stuart A. Selber at the Teaching Technical Communication and Artificial Intelligence Symposium, hosted by Utah Valley University on March 20, 2024.

This is the third post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

  1. AI is evolutionary, not revolutionary
  2. AI both solves and creates problems
  3. AI shifts, rather than saves, time and money  ← You are here
  4. AI provides mathematical solutions to human communication problems
  5. AI requires students to know more, not less, about technical communication

 


One of the more compelling technology myths encourages university administrators to assume that AI platforms will automatically make people more productive and are thus a cost-effective way of doing business. This myth, which is particularly appealing in a time of shrinking fiscal resources and budget deficits, inspires institutional initiatives that increase enrollments and workloads without adding faculty positions, that use chatbot advisers to help students negotiate the complexities of academic landscapes, and that use AI dashboards to interpret learning analytics requiring nuanced judgment and situational understanding. Did a student spend an unusual amount of time in a distance learning module because it was helpful or because it was confusing? Only a human in the loop can discover the answer.

Right now, there is very little evidence that AI platforms actually reduce costs in any significant way or enhance productivity in the ways people often claim. In fact, more than a few studies show that it is still cheaper to employ humans in many contexts and that organizations and institutions are underestimating the real costs of implementing and supporting AI initiatives (see, for example, Altchek, 2024; Walch, 2024).

The myth of AI as a cost- and time-reducing technology is perpetuated by work culture, especially American work culture. Let me provide a quick example to illustrate this point. When I first started learning about generative AI, I watched a YouTube video by a financial expert who recorded himself trying to use ChatGPT to create a macro for Microsoft Excel. He captured his attempt, including all of the trial and error, for anyone to see. It took him about 20 minutes of work with TypeScript, a superset of JavaScript, but he was ultimately successful in prompting the robot to generate the macro. His response? If I had done this myself, it would have taken me all weekend. Now I can work on other things. In other words, his application of AI shifted time and money to other things.
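The video does not dwell on the finished code, so here is a minimal sketch of what such an Excel macro, written as an Office Script in TypeScript, might look like. The task, the column, and the comments are my assumptions for illustration, not a reconstruction of his actual macro:

```typescript
// A hypothetical Office Script, Excel's TypeScript-based macro format.
// This sketch assumes a simple task: total the numbers in column B
// and write the sum one row below the data (data assumed to start at A1).
function main(workbook: ExcelScript.Workbook) {
  const sheet = workbook.getActiveWorksheet();
  const used = sheet.getUsedRange();
  const rowCount = used.getRowCount();

  // Read every value in column B (zero-based index 1) of the used range.
  const values = used.getColumn(1).getValues();

  // Sum the cells that hold numbers, skipping headers and blanks.
  let total = 0;
  for (const row of values) {
    const cell = row[0];
    if (typeof cell === "number") {
      total += cell;
    }
  }

  // Write the total in column B, one row below the last row of data.
  sheet.getCell(rowCount, 1).setValue(total);
}
```

Nothing here is conceptually difficult, which is the point: twenty minutes of prompting and debugging produced a routine artifact, and the time saved simply moved elsewhere.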

I had to chuckle at this video because at the time, in January of 2023, more than a million workers in France were striking over President Macron's proposal to raise the retirement age from 62 to 64. Meanwhile, people in the US work hundreds of hours more per year than the average European (Grieve, 2023), and AI feeds our general obsession with productivity and work culture.

(Here’s an idea, by the way: Use AI to write that macro and then take the weekend for yourself. You will be a better employee for it. But I digress.)

The real issue is how to think about the nature of the shifts. That is, what are we shifting to, exactly? What you hear a lot of analysts saying is that AI can automate lower-order tasks so that people can spend more time on higher-order tasks. Technical communication teachers already prioritize higher-order tasks in both our process and assessment advice. We tell students, for example, to ignore formatting while brainstorming ideas and drafting. And we tell them that a perfectly grammatical text can still fail miserably if it does not attend to audiences, their purposes and tasks, and other rhetorical dimensions. Indeed, for many years now, Les Perelman has been a thorn in the side of proponents of essay grading software because his high-scoring essays have been nonsensical (see Perelman, 2012, for an overview of his case against automated essay grading).

But is the distinction between lower-order tasks and higher-order tasks always so sharply drawn, or even static? Grammar is an obvious test case here. Commentators often assume that applying grammar rules is a lower-order task, and sometimes it is. This view is unsurprising given the popularity and wide availability of grammar checkers. Our institutional instance of Microsoft Word embeds Grammarly as its checker, and by default, students can activate it from the main interface ribbon.

In technical communication, however, our rhetorical treatment of active and passive voice shows that grammar can involve higher-order considerations. As technical communication teachers know, both active and passive voice are grammatically correct, but we have preferences. In most cases, we tell students, the active voice tends to work better because it emphasizes the agent, or doer, of the action, human or non-human. The passive voice, in contrast, should be reserved for instances in which the agent is unknown or less important than the action.

Many grammar checkers, including AI-assisted checkers like Grammarly, can help students locate the passive voice in their writing. Some checkers even advise students that the passive voice is undesirable, almost an error, but this advice is misleading. What a robot needs to decide is whether the passive voice works better than the active voice for the specific purposes at hand. This is a tough ask for the robot because the answer requires higher-order reasoning. (The new edition of Technical Communication, available this fall, helps students to navigate these questions.)
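To make the distinction concrete, consider a deliberately naive sketch of how a checker might locate passive constructions. The pattern and function below are mine, for illustration only; real checkers such as Grammarly rely on far more sophisticated parsing and language models:

```typescript
// A deliberately naive passive-voice flagger: a form of "to be" followed
// by a likely past participle. Locating candidates is mechanical.
const PASSIVE_PATTERN =
  /\b(am|is|are|was|were|be|been|being)\s+(\w+ed|done|made|given|taken|written|seen)\b/gi;

function flagPassive(text: string): string[] {
  return text.match(PASSIVE_PATTERN) ?? [];
}

// The pattern locates the passive construction easily enough:
console.log(flagPassive("The experiment was conducted by the team."));
// -> ["was conducted"]

// But it also flags a sentence in which the agent is unknown and the
// passive voice is arguably the right rhetorical choice:
console.log(flagPassive("The samples were contaminated overnight."));
// -> ["were contaminated"]
```

Finding the pattern is the lower-order task, and even this crude sketch handles it (though it would also misfire on "is red"). Deciding whether a flagged sentence should be revised requires knowing the audience, the purpose, and whether the agent needs to be named, and no pattern encodes that.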

Ethical considerations add another rhetorical dimension. People use the passive voice to help them avoid responsibility; the classic dodge "Mistakes were made" names no one who made them. But I defy an AI robot to reason through the ethical considerations of active and passive voice in a complex situation with many factors.

This is a simple example of the complications of assigning so-called lower-order tasks to AI in order to free up time for higher-order tasks. Humans will almost certainly need to be in the loop much, if not most, of the time, and what counts as a lower-order or higher-order task can change from situation to situation. Cost and time savings, in other words, are tricky AI subjects with rhetorical dimensions, not straightforward propositions governed by static rules.

 

References

Altchek, Ana. 2024, April 24. “Meta’s AI Plans are Costing Way More than They Thought.” Business Insider, n.p.

Grieve, Peter. 2023, January 6. “Americans Work Hundreds of Hours More a Year Than Europeans: Report.” Money, n.p.

Perelman, Les. 2012. “Construct Validity, Length, Score, and Time in Holistically Graded Writing Assessments: The Case Against Automated Essay Scoring (AES).” In International Advances in Writing Research: Cultures, Places, Measures, edited by Charles Bazerman, Chris Dean, Jessica Early, Karen Lunsford, Suzie Null, Paul Rogers, and Don Donahue, 121-132. Denver: University Press of Colorado.

Walch, Kathleen. 2024, April 5. “What are the Real Costs of AI Projects?” Money, n.p.