An AI Manifesto for Technical Communication Programs: AI provides mathematical solutions to human communication problems


This series of posts is adapted from a keynote address given by Stuart A. Selber for the Teaching Technical Communication and Artificial Intelligence Symposium on March 20, 2024, hosted by Utah Valley University.

This is the fourth post in a series exploring five tenets of a Manifesto for Technical Communication Programs:

  1. AI is evolutionary, not revolutionary
  2. AI both solves and creates problems
  3. AI shifts, rather than saves, time and money 
  4. AI provides mathematical solutions to human communication problems  < You are here
  5. AI requires students to know more, not less, about technical communication


At this point, I am guessing that most technical communication teachers know how AI works, at least generally speaking, although it does things that not even its developers can always understand. At times, the black box of AI can be hard to explain.

AI robots are built on large language models (LLMs), which are trained on corpora of multimodal texts measured in terabytes. OpenAI reported that the initial training corpus for ChatGPT totaled 45 terabytes of data. How big is one terabyte? According to the website TechTarget.com, one terabyte of data is equivalent to 86 million pages in Microsoft Word, 310,000 photographs, or 500 hours of movies. We are talking about a scale for training that defeats the capacity of concrete imagination. At least mine, anyway.
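
For what it is worth, a quick back-of-the-envelope calculation makes the scale a little more concrete. The sketch below simply multiplies out the TechTarget equivalences against the 45-terabyte figure; the per-terabyte numbers are rough estimates, not precise measurements.

```python
# Back-of-the-envelope arithmetic for the 45-terabyte corpus, using
# TechTarget's rough equivalences for one terabyte of data.

corpus_tb = 45                    # OpenAI's reported initial corpus size

word_pages_per_tb = 86_000_000    # Microsoft Word pages per terabyte
photos_per_tb = 310_000           # photographs per terabyte
movie_hours_per_tb = 500          # hours of movies per terabyte

print(f"Word pages:  {corpus_tb * word_pages_per_tb:,}")   # 3,870,000,000
print(f"Photographs: {corpus_tb * photos_per_tb:,}")       # 13,950,000
print(f"Movie hours: {corpus_tb * movie_hours_per_tb:,}")  # 22,500
```

That works out to nearly four billion Word pages.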

Glossing over many technical aspects, what is important for teachers to remember is that the output produced by generative AI is based on statistical probability, on pattern matching, on math applied to a massive corpus of decontextualized texts. And while the output can be useful and interesting in all sorts of ways, the field has already tried and dismissed mathematical approaches as overarching frameworks for technical communication because they are, in a word, arhetorical.
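
To make the pattern-matching point concrete, here is a toy sketch of my own, not a description of how any production system is built: a bigram model that "writes" by sampling the next word from co-occurrence counts in a tiny corpus. Real LLMs use neural networks trained on terabytes, but the underlying move is the same, which is to say that output follows statistical patterns in training data rather than an understanding of the rhetorical situation.

```python
# A toy bigram model: next-word prediction from co-occurrence counts.
# A deliberately simple stand-in for the statistical principle behind
# generative AI, not a description of any actual system.

from collections import Counter, defaultdict
import random

corpus = ("the user opens the manual the user reads the manual "
          "the user closes the manual").split()

# Count which words follow which word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: the word never had a follower in the corpus
        return None
    return random.choices(list(counts), weights=counts.values())[0]

# Generate a short, plausible-looking string of words, by math alone.
words = ["the"]
while len(words) < 8:
    word = next_word(words[-1])
    if word is None:
        break
    words.append(word)

print(" ".join(words))  # e.g., "the user reads the manual the user opens"
```

The output looks like language because it inherits the patterns of the corpus, but nothing in the math knows who is reading, why, or in what context.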

I am referring, of course, to the Shannon and Weaver (1949) mathematical model of communication, which had a good run starting in the mid-twentieth century and continues to be influential, at least obliquely, in certain popular settings and STEM contexts (for an overview and critique of this model, see Schneider, 2002; Slack, Miller, and Doak, 1993). As a reminder, this model conceptualizes communication as a linear process involving a sender, who, say, crafts an email message to a reader; an encoder, which converts the email message into binary data; a channel or network, which passes the binary data to its destination; a decoder, which re-assembles the data into an email message; and the reader, who consumes the email message. It is a tidy little circuit.

The possibilities for dysfunction in the circuit come from noise, which is anything that can distort the email message. Noise could come from technical difficulties, for example, or it could come from ambient conditions, which, as Thomas Rickert (2013) taught us, can actually be rhetorical. But because Shannon and Weaver separated meaning from information, all we need to do is eliminate the noise and, voilà, we have success!
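
A minimal sketch, assuming nothing beyond the circuit as Shannon and Weaver describe it, makes the model and its blind spot easy to see. The code below encodes an email message as binary data, passes it through a channel that can randomly flip bits (noise), and decodes it on the other end.

```python
# The Shannon and Weaver circuit as a toy pipeline: encode a message to
# binary data, transmit it through a possibly noisy channel, decode it.
# The circuit measures only whether the bits arrive intact; it has no
# concept of what the message means to the reader.

import random

def encode(message: str) -> list[int]:
    """Convert the message to a flat list of bits (UTF-8)."""
    return [int(bit) for byte in message.encode("utf-8")
            for bit in f"{byte:08b}"]

def channel(bits: list[int], noise: float = 0.0) -> list[int]:
    """Transmit the bits, flipping each one with probability `noise`."""
    return [bit ^ 1 if random.random() < noise else bit for bit in bits]

def decode(bits: list[int]) -> str:
    """Reassemble the bits into a message, marking any garbled bytes."""
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

email = "Please review the attached report."
print(decode(channel(encode(email))))              # no noise: arrives intact
print(decode(channel(encode(email), noise=0.02)))  # noise: a distorted message
```

Set the noise to zero and transmission succeeds every time. Nowhere does the circuit ask whether the message was clear, persuasive, or appropriate for its reader; that is the sense in which the model separates meaning from information.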

Our field has struggled with the Shannon and Weaver mathematical model of communication for obvious reasons: it is a one-way communication model, it is a transmission model, and it models the field in very impoverished ways. Under approaches based on the Shannon and Weaver model, technical communicators are not working as meaning makers or knowledge producers in any significant sense. Instead, they are positioned as low-level workers who can probably be replaced by writing robots in some cases.

I have not been able to find any mention of the communication models informing the work of AI companies, but their promises often elide the complexities of language and language use. In the communication circuit for AI, the possibilities for noise come from two main sources: the training data for robots and end-user prompts. All we have to do, so the thinking goes, is clean up the training data and teach people how to craft effective prompt sequences. The result will be AI-generated texts that are effective and usable—or at least effective and usable enough, a concern I will return to in the next tenet of the Manifesto.

There is, then, little to no acknowledgement of the fundamental limitations of math as a guiding structure for communication or communication products. Put differently, there is little to no acknowledgement of the surplus of meaning in language and language use, and of the interpretive capabilities required to make rhetorical sense of writing for work and school.


References

Rickert, Thomas. 2013. Ambient Rhetoric: The Attunements of Rhetorical Being. Pittsburgh: University of Pittsburgh Press.

Schneider, Barbara. 2002. “Clarity in Context: Rethinking Misunderstanding.” Technical Communication 49 (2): 210-218.

Shannon, Claude, and Warren Weaver. 1949. The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Slack, Jennifer Daryl, David James Miller, and Jeffrey Doak. 1993. “The Technical Communicator as Author: Meaning, Power, Authority.” Journal of Business and Technical Communication 7 (1): 12-36.