Developments in Artificial Intelligence and Writing

If you are following developments in AI technology, you may have read a long essay in The New York Times Magazine called “A.I. Is Mastering Language. Should We Trust What It Says?” by Steven Johnson. And if you missed this article, I recommend taking time to read it. It’s a page-turner, or at least it was for me, providing a brief history of AI along the way but concentrating primarily on Generative Pre-trained Transformer 3 (GPT-3). GPT-3 is part of a “category of deep learning known as a large language model [LLM], a complex neural net that has been trained on a titanic set of text: in GPT-3’s case, roughly 700 gigabytes of data” that includes Wikipedia, large collections of digitized texts, and so on. GPT-3 “learns” by essentially playing a “what’s the next word?” game trillions of times, using its database to hypothesize answers. (Johnson points out that most of us have encountered LLMs at work in apps like Gmail, with its autocomplete feature, which works on the same principle but at a much more basic level.)
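For readers curious to see what that “what’s the next word?” game looks like in practice, here is a minimal sketch of my own (it is not from Johnson’s article) that uses the much smaller, openly available GPT-2 model as a stand-in for GPT-3, with an arbitrary prompt:

```python
# A rough illustration of next-word prediction, using GPT-2 as a stand-in
# for GPT-3. Assumes the Hugging Face "transformers" and "torch" libraries
# are installed; the prompt is an arbitrary example.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Italo Calvino is considered a master of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model assigns a probability to every possible next word (token);
# print the five continuations it considers most likely.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Producing a whole passage, like the Calvino paragraph below, is essentially this single step repeated again and again, with each newly chosen word appended to the prompt before the next prediction is made.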

Most experts seem to agree that GPT-3 is not sentient—it is not (yet) thinking as a human does. But it is demonstrating amazing abilities to generate unique answers to complex questions. For example, Johnson reports asking GPT-3 to “Write an essay discussing the role of metafiction in the work of Italo Calvino.” Here’s (the beginning of) what GPT-3 had to say:

Italian author Italo Calvino is considered a master of metafiction, a genre of writing in which the author breaks the fourth wall to discuss the act of writing itself. For Calvino, metafiction is a way of exploring the nature of reality and the ways in which stories can shape our perceptions of the world. His novels often incorporate playful, labyrinthine structures that play with the boundaries between reality and fiction. In If on a winter’s night a traveler, for example, the reader is constantly interrupted by meta-level discussions of the act of reading and the nature of storytelling. . . .

Johnson could give GPT-3 the same prompt over and over and get a different answer every time, some that may strike you as more accurate than others but “almost all. . . remarkably articulate.” As Johnson puts it, “GPT-3 is not just a digital-age book of quotations, stringing together sentences that it borrowed directly from the internet.” Rather, it has learned to generate fairly proficient arguments—ones that could seem to have been written by a reasonably competent high school or first-year college student—by endlessly playing that “predict the next word” game. LLMs are also gaining ground in reading comprehension, scoring about the same as an “average high school student” on exam questions similar to those on the reading section of the SAT.

Johnson poses questions all writing teachers will have about LLMs: Are deep learning systems capable of true human thinking and human intelligence? If so, what will that mean for teachers of writing and reading? NYU emeritus professor Gary Marcus argues, “There’s fundamentally no ‘there’ there,” noting that the apparently stunning language skills of GPT-3 are just a smokescreen obscuring the lack of generative, coherent human thought and that “it doesn’t really understand the underlying ideas.”

In any case, this research is of tremendous importance to those of us devoted to writing instruction and writing development. Today’s systems, for instance, can use their vast databases to answer questions like “how many ingredients are in paella?” faster and more accurately than any search engine now available, a fact that could change how we teach research. But they will also challenge us in new ways to deal with plagiarism, if plagiarism even continues to exist as a viable concept when applied to machine-generated text. To return to the Italo Calvino essay: GPT-3 wrote it in half a second. And just take a look at what GPT-3 came up with when asked to “write a paper comparing the music of Brian Eno to a dolphin,” a nonsense prompt posed by Johnson that elicited a pretty amazing “essay.”

Oh, brave new world indeed!

I know that many of our colleagues in writing studies are paying very close attention to AI research and development, and for them I am grateful. As always, our technical abilities outstrip our ethical understanding of them by light years. That’s only one of the reasons that scholars of rhetoric and writing need to be part of this exciting, and concerning, conversation.

Image Credit: "Machine Learning & Artificial Intelligence" by mikemacmarketing, used under a CC BY 2.0 license

About the Author
Andrea A. Lunsford is the former director of the Program in Writing and Rhetoric at Stanford University and teaches at the Bread Loaf School of English. A past chair of CCCC, she has won the major publication awards in both the CCCC and MLA. For Bedford/St. Martin's, she is the author of The St. Martin's Handbook, The Everyday Writer, and EasyWriter; The Presence of Others and Everything's an Argument with John Ruszkiewicz; and Everything's an Argument with Readings with John Ruszkiewicz and Keith Walters. She has never met a student she didn’t like—and she is excited about the possibilities for writers in the “literacy revolution” brought about by today’s technology. In addition to Andrea’s regular blog posts inspired by her teaching, reading, and traveling, her “Multimodal Mondays” posts offer ideas for introducing low-stakes multimodal assignments to the composition classroom.