Making efficacy research more actionable: Insights from the EdTech Efficacy Research Symposium

kara_mcwilliams
Macmillan Employee

Educational technology has the potential to dramatically improve learner outcomes, but only if instructors understand what works for their students and classrooms.

Measuring the efficacy of ed tech is difficult because of the complexity and variety of educational settings. Arriving at a clear approach begins with collaboration among developers, researchers, and the instructors using these technologies.

To make efficacy research results more actionable, the University of Virginia’s Curry School of Education, in partnership with the Jefferson Education Accelerator and Digital Promise, invited 275 researchers, entrepreneurs, district and university leaders, and teachers and professors to the EdTech Efficacy Research Academic Symposium in Washington, D.C. The two-day meeting provided a forum for collaboration and the development of an action plan: to clarify what is meant by the efficacy of ed tech, and to develop more systematic approaches to measuring efficacy within complex and differing educational contexts.

One of the clearest and most widely supported recommendations was that, in order to better support instructors, a paradigm shift is needed in research: away from standalone statements of efficacy and toward building a body of evidence of a tool’s effectiveness. There was also consensus that building this sort of evidence takes time and needs to be done collaboratively by researchers and educators.

Three key themes are driving the need for a paradigm shift in efficacy research

1.) The counterfactual model is useful, but may not meet a university's or instructor's needs

There was a strong message that researchers should resist the race up the ladder of evidence to randomized controlled trials (RCTs), and instead rightsize study designs to provide insights that will help instructors.  During a panel discussion, Linda Roberts, Founding Director of the United States Office of Ed Tech, suggested that the “Gold Standard” of RCTs cannot be the only model for measuring effectiveness in ed tech, noting that digital tools are often continually evolving.  Susan Fuhrman (Teachers College) echoed these comments, reminding us that RCTs should only be conducted once a product has been in use for at least a year, so they aren’t useful for providing insights earlier in product development, when significant changes could still be made.  And Brandon Busteed of Gallup shared from personal experience that many ed tech products won’t survive the roughly seven years it takes to fully communicate the results of an RCT, and that many adoption decisions are already made in the absence of evidence.

The take-away: Conducting rapid-cycle studies that meet the standards of their design and provide actionable insights in a timely manner would better serve the needs of instructors and learners than would a rigorous RCT.

2.) Context and use cases are significant factors to consider when measuring impact

How an ed tech tool is used, and in what context, is critical to its impact on learner outcomes.  There was consensus that systematically examining instructor implementations should be a priority, along with understanding the challenges involved.  Karen Vaites (OpenUp Resources) noted that, in general, ed tech companies want to explore context, but many can’t afford to conduct on-the-ground studies in multiple institutions or to explore multiple implementation models.  Researchers and educators should work together to identify methods for measuring local impact and aggregating those results across multiple settings.  The results would help instructors understand whether an ed tech tool will work in a classroom like theirs, and a meta-analysis of these studies would be a useful addition to a product’s overall efficacy portfolio.

The take-away:  A tool’s efficacy research must start with a keen understanding of its users and use cases, and with meaningful classifications of institutions and implementation models.  Then a representative sample can be identified to conduct rapid, scalable implementation studies across contexts.
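For readers curious what aggregating local results might look like in practice, here is a minimal, hypothetical sketch of a fixed-effect meta-analysis: each local implementation study contributes an effect size and a standard error, and the results are combined with inverse-variance weights. The settings and numbers are invented for illustration and are not drawn from the symposium.

```python
# Hypothetical sketch: combining effect sizes from several local implementation
# studies with a fixed-effect, inverse-variance-weighted average (a standard
# meta-analysis approach). All study data below are made up for illustration.
import math

# Each entry: (setting label, effect size d, standard error of d)
local_studies = [
    ("Large public university, flipped sections", 0.25, 0.10),
    ("Community college, self-paced use",         0.10, 0.15),
    ("Small private college, instructor-led",     0.40, 0.20),
]

weights = [1 / se**2 for _, _, se in local_studies]  # inverse-variance weights
pooled_effect = sum(w * d for w, (_, d, _) in zip(weights, local_studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size: {pooled_effect:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

When implementation contexts differ as much as they do in education, a random-effects model, which allows the true effect to vary across settings, would often be the more defensible choice; the fixed-effect version is shown here only because it is the simplest to read.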

3.) Providing instructors with the right information at the right time to make informed decisions

Instructors often rely on a game of “telephone research” when making ed tech adoption decisions, asking friends and colleagues for advice, in part because research results are often not helpful to them.  In a lightning-round presentation, Richard Culatta (ISTE) also reminded us that research results that emerge after adoption decisions have been made are useless.  Instructors can become more informed decision makers if the research community evolves its practices to communicate more clearly and in a more timely way about what works, for whom, and why.  Linda Roberts suggested that if it takes three to five years to communicate research results, then current research should focus on the questions instructors will have three to five years from now!  These themes suggest that researchers should consider two parallel work streams: one providing immediate insights to instructors about current ed tech, and a second looking three to five years out, setting up studies to answer future questions.

The take-away: In the near term, instructors could benefit from intuitive dashboards that provide insights into their learners’ performance.  Rapid evaluations during a product’s development can deliver immediate, actionable results in brief, consumable reports.  Taken together, these artifacts can answer instructors’ immediate questions and build a body of evidence that helps frame their future questions.

A call to action

The symposium concluded with a call to collaborative action. As Aubrey Francisco (Digital Promise) commented, “No one stakeholder [is] to blame for evidence not being [the] key driver of ed tech decision making, but it is everyone’s burden.”  Attendees committed to strengthening the bridge between research and practice, and to partnering to build a body of evidence that more effectively answers the questions emerging in schools and universities.

A job well done

A huge thank you to the University of Virginia’s Curry School of Education, the Jefferson Education Accelerator, and Digital Promise, and to each of the Working Groups, who spent the past six months tackling these tough topics, conducting important research, and sharing their findings throughout the two-day symposium.  All of the presentations were informative, insightful, and inspiring.

For more of my insights from the Symposium, follow me on Twitter at @karamcwilliams and join the conversation at #ShowTheEvidence.

Also, be sure to stay tuned for my forthcoming blog series, “Impacts to Insights,” on the Macmillan Community.

About the Author
I'm passionate about researching the impact of digital technologies in higher education, and how insights can inform teaching and learning in the classroom. I conduct qualitative and quantitative investigations of how classroom interventions can improve learner outcomes and influence learning gains. I hold a doctorate in Educational Research, Measurement and Evaluation and a master’s degree in Curriculum & Instruction from Boston College. But I am most passionate about my twin sons and my two Australian Shepherds.