Should We Be Preparing Prompt Engineers?
Amid the super-hype surrounding ChatGPT (and similar programs) these days, ranging from the predictable Chicken Little "the sky is falling" responses to the much more deeply informed critical analyses from AI researchers like Geoffrey Hinton, it's been hard for me (and, I suspect, for many teachers of writing) to sort through all the noise surrounding generative AI in order to think hard about our discipline, writing, and its future.
Writing, like rhetoric, is a plastic art, shapeshifting to meet changing needs, circumstances, and opportunities. Plato famously decried this plasticity, charging that writing would kill memory and hamper communication. And indeed, writing did challenge the centrality of memory, which has been further marginalized by all the writing "assistants" now at writers' disposal. The slow, gradual shift from orality to literacy, and the blending of the two, accompanied huge changes in communication and commerce as well as in the way people experienced the world. The advent of the printing press marked another gigantic shift in communicative technologies, as did the subsequent industrial revolution.
Clearly, we are in the midst of another such cataclysmic shift, as machines not only aid human communication but—as hundreds of AI researchers and leaders are currently warning Congress as well as the world community—are close to taking over communication in ways that could have devastating effects. At such a time, scholars and teachers of writing need to be part of the conversation, working to define “writing” in the age of AI (and beyond), exploring what it means to be a human writer today, and making sure that scholars of writing and rhetoric are at the AI table.
In addition, we need to be asking how our responsibilities to students are changing in light of the rise of AI. What are we preparing student writers to DO, and what should we be preparing them for? That's where prompt engineering comes in: its goal is to use prompts, or really chains of prompts, to lead the burgeoning group of generative AI apps to do what the human engineers want them to do. As a brief article in Forbes recently put it, "Learning to get the best results from generative AI is a skill that needs to be learned and honed..." Easier said than done!
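The idea of a "chain of prompts" can be made concrete with a small sketch. The snippet below is purely illustrative: `call_model` is a hypothetical stand-in for whatever generative-AI API a prompt engineer would actually call, and the three-step outline-draft-revise chain is one assumed workflow, not a prescribed method.

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative-AI service.

    A real prompt engineer would replace this with an actual API call;
    here it just echoes the prompt so the chain's structure is visible.
    """
    return f"[model response to: {prompt}]"


def run_chain(topic: str) -> str:
    """Chain three prompts, feeding each response into the next prompt."""
    # Step 1: ask the model to outline the topic.
    outline = call_model(f"Outline the key points of: {topic}")
    # Step 2: feed that outline back in and ask for a draft.
    draft = call_model(f"Write a short draft following this outline: {outline}")
    # Step 3: ask the model to revise its own draft.
    revised = call_model(f"Revise this draft for clarity and concision: {draft}")
    return revised


print(run_chain("the ethics of AI in the writing classroom"))
```

The point of the sketch is that each prompt's wording, and the order of the steps, is a deliberate design choice, which is exactly the kind of rhetorical decision-making writing courses already teach.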
Assuming that human writers will, at least in the near future, be able to direct the writing that ChatGPT, Bard, and their brethren do, what is our responsibility in preparing these human writers: what abilities (and ethics) will they need to possess, and how should we teach them?
In the Forbes piece, author and futurist Bernard Marr sums up the abilities prompt engineering requires: in addition to having a grasp of subject matter and being able to attend carefully to details, such writers will need skill in communication, organization, data analysis, critical thinking, and high-level planning. At the end of the article, Marr suggests that students interested in pursuing prompt engineering look to the online course portal Udemy or to Next-Level Prompt Engineering with AI, which "promises to teach students to create effective prompts that will give them a competitive edge over everyone else trying to use AI to automate their tasks."
I plan to take a look at these online courses: I wonder whether they have an ethical component, and whether they ask students to consider the short- and long-term implications and consequences of what their instructions/prompts will yield. In the meantime, when I look at the list of skills needed to become a prompt engineer (and to earn salaries of over $300K a year), I see skills that are embedded in most if not all of the writing courses we teach. Are teachers of writing today paying attention to prompt engineering and other "new" jobs emerging as a result of AI? More important, how can we make sure that we will have a voice in creating intellectually robust and ethical curricula for students pursuing such jobs?