Learning Forward – What We’re Discovering About AI Tutors
Last week, we shared why research is such a critical part of what we do at Macmillan Learning (if you missed it, catch up here). This week, we’re diving into one of the most exciting ways we’re putting that research into practice: AI-powered learning.
The AI Tutor was designed as a purpose-driven study tool to deepen students' understanding of coursework. Rather than simply providing answers, it uses a Socratic approach to help students think critically, problem-solve, and build deeper understanding. But how well does it actually work?
That’s exactly what we’re studying.
What We’re Learning So Far
When we launched the beta version of the AI Tutor shortly after ChatGPT's debut, we knew we had a lot to learn. We designed our AI Tutor Study to measure the technology's efficacy and to learn how AI tools can best support personalized and equitable learning experiences.
And after more than two million student interactions, we’ve seen some promising early results:
➡️ Improved confidence and study habits
➡️ Better problem-solving skills
➡️ More engagement, inside and outside the classroom
You can read more about these results here.
In Fall 2024, we ran our first IRB-approved efficacy studies and expect to finish analyzing the data in March 2025. Early insights already indicate improvements in student assignment scores.
What’s Next
As we head into the second semester of research, we're scaling up even further. This spring, we've enlisted 32 instructors across a range of disciplines.
We're curious whether we can replicate the results of our earlier research with a brand-new cohort. This isn't just a one-semester effort, though. We believe the scope of these IRB-approved studies reflects our commitment to personalized and inclusive learning at scale, and we plan to continue this research in Fall 2025 and beyond.
But AI Tutors are just one piece of the puzzle. What about the broader teaching strategies that shape learning?
Next week, we’ll dive into our Evidence-Based Teaching studies and explore how proven strategies—like metacognition and active learning—are making a difference for students across different disciplines.
Learn more about our overarching goals and how we think about research in part 1.