Minimizing Risk, Maximizing Utility: A New Method for Evaluating Ed Tech Products in Alpha


Educational technology has the potential to vastly improve teaching and learning in higher education. Rapid advancements in technology, and the pace at which new tools are developed, are leading to alpha products being tested with students in live classrooms, which can negatively affect teaching and learning. Educational technology early in development carries many constraints, and student success in higher education is critically important. It is therefore necessary to develop alternatives to requiring students to use an alpha version of a learning tool as their primary course material for a full semester. This paper discusses a new method for conducting a formative evaluation of digital learning tools. The method has proven to enable relevant, timely, and actionable insights on product optimization, implementation patterns, and professional development—without requiring the use of an alpha product in a high-stakes environment. A case study of a formative evaluation of a new digital platform, Achieve, is used to illustrate the approach. The formative evaluation of the alpha version of Achieve was a longitudinal study conducted with a set of instructors teaching a course in which the solution might be used, but the study was conducted outside of and independent from their live classrooms. The evaluation consisted of eight unique rapid-cycle evaluations, each lasting one week, that simulated the arc of a traditional semester. Qualitative and quantitative data were collected to develop insights into platform use, implementation patterns, perceptions, and expectations. Results from the evaluation were used for real-time remediation and optimization of the tool, to understand instructors' chosen implementation patterns, and to inform professional development for future users. Real-time results implemented by the development teams provided confidence among researchers and instructors that a beta version of the platform used in live classrooms in subsequent studies would not adversely impact the student and instructor experience, but rather contribute positively to important learner outcomes.