Whether in meetings, conference sessions, or written reports, we are constantly presented with assessment work that showcases clear, simple, easily interpreted results. But we seldom discuss the other side of the coin—that moment when you put significant time and effort into an analysis or project only to end up with unexpected results.
Unexpected results are not necessarily bad. Often, when results aren’t what you expected, they relate to one of the following three scenarios:
Results that don’t match expectations: In some cases, we have anecdotes, stories, or other experiences that have led to expectations of what our results should look like; however, the real results run counter to those expectations.
Results you do not want: For example, results that may undermine a key justification for a program’s existence or undermine a key source of revenue.
Results that simply do not make sense: Perhaps the data are from an annual or national assessment, or are built on solid theoretical considerations. Yet the results run counter to previous years’ results, national data, or the theory. In essence, the results just don’t seem to make sense.
At a conference last summer, I found myself commiserating with two colleagues about those moments in our offices staring at unexpected results, and the momentary panic we all experience when they appear. That conversation got me thinking about how, as a profession, we seldom discuss that moment. We save those discussions for hallway conversations with trusted, experienced colleagues. But, as a younger professional, I would have benefited from guidance on how to handle the moment when our results don’t match our expectations, contradict previous results or theory, or undermine a core program.
So, what do I usually do when this happens?
Look at the context
At a glance, campus-level results that deviate from accepted theory, national-level data, or even external expectations can be alarming. However, these external comparisons do not account for campus-level context. Sometimes, campus results can be lower than national averages when the measurement focuses on something that is not part of an institution or program’s priorities. For instance, an assessment of new student orientation may ask about campus traditions, but if an orientation program is more focused on registration and resources, the results from that campus would likely look different from campuses with long histories of campus culture and tradition. So, the initially unexpected lower results may actually make sense (and become expected?) when the context is considered.
In some cases, the high-level snapshot simply does not tell the entire story. For instance, I wrote in a previous blog about the experiences of military students. Specifically, I wrote about how a high-level look at peer connections would show that military students struggle more with making peer connections than non-military students. However, when we took the time to dig deeper within the military population, we found that current active duty, guard, or reservist military students are far more likely to have strong peer connections than those military students who were separated or discharged. In this case, digging deeper provided a more complete picture of the military student experience.
Ultimately, understanding the context of data results and digging deeper are two strategies that can prove crucial to explaining any results that seem out of place. And, despite the challenges that may come with it, unexpected results can truly be used as an opportunity to gather more information, provide additional context, and—if necessary—make changes to improve the experiences of our students.
What other strategies do you use to make sense of unexpected results?
Are you interested in other strategies for managing unexpected results? Listen to our recent webinar on what to do when you have unanticipated data results.