Bits on Bots: Data-Informed Generative AI Practice: One University’s Trajectory


Jeanne Beatrix Law is a Professor of English and Director of Composition at Kennesaw State University, focusing on generative AI (genAI) and digital civil rights historiography. Her AI literacy work has global reach, including multiple presentations of her Rhetorical Prompt Engineering Framework at conferences like Open Educa Berlin and the SUNY Council on Writing. She has led workshops on ethical genAI for diverse institutions and disciplines at Eastern Michigan, Kent State, and CUNY’s AI Subgroup. She and her students have authored publications on student perceptions of AI in professional writing. Jeanne also co-authored The Writer's Loop: A Guide to College Writing and contributed to the Instructor's Guide for Andrea Lunsford's Everyday Writer. She has authored eight Coursera courses on genAI and advocates for ethical AI integration in both secondary and higher education as a faculty mentor for the AAC&U’s AI Pedagogy Institute.

 

In my inaugural post a couple of weeks ago, I began with November 2022, when the public first gained access to ChatGPT. Today, I want to fast-forward to August 2023 and report some of the data our Kennesaw State University research team collected. As the chief principal investigator for the project, I led a team seeking to measure first-year students’ attitudes towards generative AI (genAI) use in their academic and personal writing.[i] I had already experimented with AI-infused teaching, but this was our first attempt at systematic inquiry into how students might be using generative AI in their writing.

A Disclaimer

I completely understand the challenge of translating data into actionable steps, especially when introducing generative AI into college classrooms. We’re all navigating new ground here, working to sift through what the data tells us and how it aligns with the dynamic needs of our students and our teaching goals. Engaging with data-rich content can feel overwhelming, especially when every point seems crucial. Recognizing this, my aim here is to distill the most significant insights for clarity and relevance. Rather than inundating readers with exhaustive detail, I’ll focus on the key elements that reveal meaningful trends and implications. It’s a learning curve, but together we can approach this data thoughtfully, with a critical eye on the broader narrative it suggests.

What We Did & What We Found

We surveyed students in the fall and spring semesters, distributing digital surveys through instructors in Composition I and Composition II courses. Around 1,550 students answered the surveys. Some of our findings were expected; for example, more than 92% of students reported being aware of generative AI. Many of our findings, however, surprised us. Almost 40% of students (averaged across the fall and spring collections) reported using genAI tools in their personal writing, while 35% reported using genAI in their academic writing. More than 75% of students surveyed believe that genAI is the future of writing. Beyond the numbers, the qualitative sentiment analysis gave us deeper insights into the nuanced understandings and writing practices of these students. You can read more about our preliminary (2023) insights in the DRC Blog Carnival.


Students felt that genAI was helpful for brainstorming and idea generation: it could generate ideas, help structure thoughts, and help overcome writer's block. They also appreciated genAI's ability to provide different perspectives or suggestions that could be developed further.

Students commonly mentioned genAI as a useful tool for correcting grammar and refining sentence structure, and they seemed to view this practice as analogous to other writing aids that do not compromise academic integrity. Several responses also highlighted genAI's utility for quickly gathering preliminary research or understanding basic concepts, which can help lay the groundwork for more in-depth investigation.

On the flip side, students also reported negative sentiments towards genAI. Their responses indicated that they considered genAI use cheating when it replaced personal effort, particularly when it was used to write essays or complete assignments outright. Students emphasized that submitting genAI-generated work as one's own undermines the purpose of education and diminishes individual understanding and effort. Many believed that relying on genAI for academic tasks impedes the creative process and can lead to a decline in students' own creative and critical thinking skills. Many also saw the use of genAI in academic settings as a moral issue, arguing that it promotes laziness and dishonesty: it allows students to bypass learning and understanding, leading to a lack of genuine academic growth.


The trends in sentiment led us to speculate that, while students did find ethical use cases for generative AI, they also understood its limitations. For us, this was an “a-ha” moment: the lore around many faculty campfires, which told a narrative of “rampant student cheating,” simply wasn’t accurate. The students we surveyed demonstrated deeper understandings and uses of generative AI and sought guidance from faculty. In fact, more than one third of the students we surveyed wanted to learn more about genAI and would take a class on it.

Preliminary Takeaways

Initial analysis indicates that students need guidance on ethical AI use, and that faculty have a key opportunity to:

  • help students scale ethical generative AI use;
  • help students understand the human-centered ethics of AI outputs;
  • cultivate students’ digital literacies and prepare them to thrive in AI-infused workplaces.

It’s important to note that our research team was motivated by workplace and industry data showing trends in demand for generative AI skills and in how teens are interacting with generative AI tools.

What’s Next?

In my next post, I will dig deeper into how we have used this data so far. A preview: supporting students in their AI literacy journeys through rhetorical prompt engineering and OER custom GPTs. Stay tuned, and thanks for reading.

 

[i] Our initial research team included Dr. John Havard, Dr. Laura Palmer, James Blakely, and myself. We have since added Dr. Tammy Powell, Ahlan Filtrup, and Serenity Hill.

Our IRB#: FY23-559