Gayle Yamazaki, a colleague at Macmillan who teaches psychology and works on the psychology book list as a Senior Educational Technology Advisor, has a Community blog post that makes a philosophical exploration of the implications of machine-generated conversation. Gayle looks at a human-machine transcript from research in machine conversation.
Gayle focuses on a transcript between human and machine that kicks off when the human opens with the question, "what is moral?" The transcript reads like two drunk graduate students unintentionally putting a "Who's on First?" spin on things. That is, even though the machine responses are artificially generated and sometimes flummoxing, you can imagine a reasonably realistic setting -- college bar, close to closing, after two TAs have graded a stack of midterms, table littered with beer and whiskey glasses -- where two humans might say what is ascribed to both the human and the machine in the transcript.
Go read the excerpt at Gayle's post; it's a hoot. I'll wait here for you to come back.
Oh good, you're back.
The excerpt I have below is from a different transcript in "A Neural Conversational Model," by Oriol Vinyals and Quoc V. Le, researchers at Google. Unlike Gayle, I don't want to look at whether injecting a machine with personality will make it more like us, or at least more like us when we manage to be "coherent, realistic, and relevant" as opposed to some of the other things humans can manage to be.
I want to flip the question. What does Vinyals and Le's research reveal about the human capacity for machine-like talking, reading, and writing? Vinyals and Le wrote a simple program that required few rules to make a conversation generator. The machine-generated language came from two datasets -- "a closed-domain IT helpdesk troubleshooting dataset and an open-domain movie transcript dataset" (2). So the machine lines you see in the transcript at Gayle's post came from movie transcripts. The machine lines you see in what follows came from technical support helpdesk transcripts, a narrower and more focused collection.
Both excerpts, the one Gayle shared and the one above, come from scripts. Yet in the helpdesk sample above, you can see more clearly how the machine role is scripted. If you've ever called technical support, you recognize the script's arc: a "how may I help you?" opening; queries to narrow down and diagnose the issue; a suggested solution; confirmation that the solution works; the offer of further assistance; and the well-wishing sign-off.
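The arc is rigid enough that you can sketch it as a few lines of code. Here is a toy, rule-based version -- not Vinyals and Le's neural model, and every line of dialogue below is invented for illustration:

```python
# A toy sketch of the scripted helpdesk arc: opening, narrowing query,
# suggested solution, confirmation, offer of further help, sign-off.
SCRIPT = [
    "hi, how may i help you?",                      # opening
    "what happens when you try to connect?",        # query to narrow the issue
    "could you try restarting the client?",         # suggested solution
    "is it working now?",                           # confirmation
    "is there anything else i can help you with?",  # offer of further assistance
    "have a nice day!",                             # well-wishing sign-off
]

def helpdesk_agent():
    """Yield the agent's scripted lines in order, whatever the caller says."""
    for line in SCRIPT:
        yield line

if __name__ == "__main__":
    for reply in helpdesk_agent():
        print("Machine:", reply)
```

The point of the sketch is how little the caller's side matters: the agent's half of the conversation is fixed in advance.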
Clearly there are times when we all talk like a machine -- either in deliberate scripts the way technical support agents are trained to use, or in our own small rote ways: "have a nice day," for example, is a ritual parting with strangers with whom we've interacted briefly -- tellers, cashiers, ticket agents. Good technical support agents, salespeople, funeral directors, waiters, and other people who use pat lines and scripted moves excel at making the words sound fresh, as if said for the first time and only to you because you are the special center of their attention. Yes, you are.
But we've all had (or on off days have been) waitstaff, flight attendants, call center operators, tellers, and sadly, even funeral directors who clearly didn't have their heart in the work. The delivery is often monotone; you can tell they may be off processing something else in their cybertronic acting brains. And it's not just those kinds of jobs that ask for machine-like consistency or insist on lawsuit-averse scripts.
As professors and writing teachers, we may be asked to read in machine-like ways. If you have ever scored writing for placement, program review, or other situations, you may recall (or may some day experience) getting a rubric and some writing samples. In these contexts, the assessment team explains the rubric, the prompt that generated the writing samples, and how you are to apply the rubric -- how you are to score -- as you read. Then, in a practice called "norming," the readers read, learning to all read the same way, so that any two readers are likely to apply the same score (or score range) to a given piece of writing. And so it is that we train people to read like machines.
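You can even put a number on what norming aims at. Here is a minimal sketch of checking how often two readers agree; the scores below are invented for illustration, not data from any real assessment:

```python
def agreement_rate(scores_a, scores_b, tolerance=0):
    """Fraction of papers on which two readers' scores differ by at most `tolerance`."""
    pairs = list(zip(scores_a, scores_b))
    agree = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return agree / len(pairs)

# Invented scores from two hypothetical normed readers on six papers (1-6 scale).
reader_1 = [4, 3, 5, 2, 4, 3]
reader_2 = [4, 3, 4, 2, 4, 2]

print(agreement_rate(reader_1, reader_2))               # exact agreement
print(agreement_rate(reader_1, reader_2, tolerance=1))  # agreement within one point
```

A well-normed reading session is one where numbers like these come out high: the readers have been calibrated, like instruments.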
And there is also writing we ask people to do that is very prescribed and scripted. Lawyers, for example, have less work writing wills and other legal documents because one can go to a site like Legalzoom.com and download a template. Businesses use boilerplate language. Ever see The Lazlo Letters, in which Don Novello wrote strange and odd letters to businesses and elected officials and then published those next to the boilerplate he got back? The machines above look smarter than the human agents who sent him those letters.
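Template writing really is mechanical in the literal sense. A toy fill-in-the-blanks version, in the spirit of a downloadable form -- the wording here is invented for illustration and is not real legal language:

```python
from string import Template

# A fill-in-the-blanks "document": the writing is done once, in the template;
# each new "author" only supplies the blanks.
WILL = Template(
    "I, $name, residing in $city, declare this to be my last will, "
    "and I leave my $item to $heir."
)

print(WILL.substitute(name="Jane Doe", city="Springfield",
                      item="record collection", heir="my nephew"))
```

All the prose lives in the template; the person "writing" the document contributes only a handful of proper nouns.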
In fact, for certain kinds of writing, templates have been replaced by software that actually does do the writing.
And where machines can do something, eventually machines will do it, and humans will do it less or not at all.
What does all this mean for talking, reading, and writing? Find things to talk, read, and write about that machines cannot imitate. Create conversations in classrooms that come from having students read and write about things in ways that machines cannot. Get away from rote assignments, the same old prompts. You cannot always avoid that stuff, but the more you can work away from it and into places machines cannot follow, the more fun and humane things will feel.