I’ve been following University of Pennsylvania Professor Ethan Mollick’s blog, One Useful Thing, and find his thinking and reporting on recent AI developments particularly informative. One recent post, “Centaurs and Cyborgs on the Jagged Frontier,” summarizes results from a study of professional work in what he calls “our AI-haunted age” in an effort to answer the question of whether AI is “really a big deal for the future of work.” The answer to this question: a resounding YES.
This particular research project was multidisciplinary and included scores of interviews along with a number of studies designed to test the impact of AI on knowledge work. The researchers randomized a large group of consultants and asked them to do a variety of creative, analytical, writing, and “persuasiveness” tasks for a made-up shoe company—and checked with a real-life shoe company executive to make sure the tasks were realistic.
So what did they find? In a nutshell, “for 18 different tasks, consultants using ChatGPT-4 outperformed those who did not by a lot. On every dimension. Every way we measured performance.” I’d say that’s a pretty significant finding! Moreover, the consultants did better whether or not they were already familiar with the AI tool—as judged both by real people and by AI graders (who agreed).
A second finding that I found especially interesting is that the AI tool works as a skill leveler. That is, the consultants who tested lowest at the start of the study improved their performance the most (up 43% when they used AI). Those who tested highest also performed better with AI but did not experience such a significant jump.
Finally, a third finding that jumped out at me concerns a task the team designed that was “outside the AI’s frontier, where humans with high human capital doing their job would consistently outperform AI.” Mollick says that designing such a task was very difficult but that they finally were able to use “the blind spots of AI” to make sure it would give a wrong (though convincing) answer to the problem. The surprising finding, however, was that “human consultants got the problem right 84% of the time without AI help, but [. . .] with AI, they did worse.” The investigators think this is an example of how over-reliance on AI can backfire, and they cite another experiment showing that those who used AI often “became lazy, careless, and less skilled in their own judgment.” Such over-reliance, which the researchers referred to as “falling asleep at the wheel,” gets poor results and actually harms human learning and productivity.
Mollick concludes that “people really can go on autopilot when using AI” and that “AI outputs, while of higher quality than that of humans, were also a bit homogenous and same-y in aggregate.” He urges all of us to “use AI enough for work tasks” so that we “start to understand where AI is scarily good . . . and where it falls short.” The bottom line, he says, is not whether AI is going to remake our work world but what we will make of that change.
We get to make choices about how we want to use AI help to make work more productive, interesting, and meaningful. But we have to make those choices soon, so that we can begin to actively use AI in ethical and valuable ways rather than merely reacting to technological change. And that’s a pretty tall order for teachers and students of writing. We have no time to waste!