In a previous blog post, I discussed the important role that surveys can play in helping to predict college student attrition risk. Surveys are an important tool for understanding and predicting student retention, completion, and success. Why wouldn’t survey data be part of our work? But I’ve said it before, and I’ll say it again: while surveys can be crucial to identifying at-risk students, I would not recommend using only surveys to predict risk. Here are two simple reasons for my thinking:
Sometimes surveys get it wrong. Sometimes students struggle to come to terms with their own challenges. For example, according to the 2013-2014 Mapworks Fall Transition survey, 88% of first-year students reported they expected to make A’s or B’s during their first semester in college. The final outcome, however, tells a different story: only 53% of first-year students ended up with a grade point average of 3.0 or above.
There are many feasible explanations for this difference. For instance, when they completed the survey, it’s possible students didn’t (or couldn’t) anticipate the academic issues about to arise. Maybe the survey was administered too early for students to have identified potential problems. Perhaps the students were overly optimistic, either about their abilities or the potential outcomes. Regardless of the explanation, the outcome did not match the initial expectation. And to make matters even more interesting, this performance did not change their expectations for the future. Although nearly half of students surveyed earned a GPA below a 3.0 after their first semester, a whopping 85% reported that they did not expect to earn a GPA below a 3.0 during the following semester. So, even with poor fall-term GPAs, expectations for spring-term GPAs were not adjusted.
Overall, surveys have the potential to help us spot students with struggles such as lack of integration, homesickness, and poor study behaviors. But, as shown in the example above, just because a student hasn’t reported issues on a survey doesn’t necessarily mean the student is not at risk. So, despite the undeniable importance of surveys, they are not infallible. What can we do to supplement survey data to give us a more complete picture of our students?
Other data sources have useful, proven information. For instance, institutional records have data about pre-college experiences, enrollment patterns, academic performance, course engagement and performance, and financial aid. Departments may have data relating to campus engagement and utilization of student services. While data may be scattered across various offices on a campus, there is no doubt that these data can and should contribute greatly to our ability to predict student risk. In Mapworks, we use a variety of data points in predicting risk because these additional data points improve our ability to predict success as well. No single source of information is going to provide us with everything we need.
Let’s return to the previous example and consider one data source, enrollment in remedial courses, that could be used in combination with student grade expectations. According to data uploaded by campuses utilizing Mapworks, 77% of first-year students who were not enrolled in remedial courses during their first semester continued to their second academic year. In comparison, only 55% of students who enrolled in more than three remedial credit hours during their first term returned for their second academic year. Simply adding a data point such as remedial credits to survey questions related to course expectations and performance can improve our ability to identify at-risk students.
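To make the idea concrete, here is a toy sketch of combining a survey signal with an institutional-record signal. This is not the Mapworks model; the field names, thresholds, and student records are hypothetical, chosen only to mirror the figures above (expected GPA below 3.0, more than three remedial credit hours).

```python
# Toy illustration (not the Mapworks model): flag at-risk students by
# combining a self-reported survey signal with an institutional-record signal.
# All field names, thresholds, and student records are hypothetical.

def flag_at_risk(student):
    """Flag a student if either signal suggests risk.

    - survey signal: the student expects a term GPA below 3.0
    - records signal: more than 3 remedial credit hours in the first term
    """
    survey_risk = student["expected_gpa"] < 3.0
    records_risk = student["remedial_credits"] > 3
    return survey_risk or records_risk

students = [
    {"name": "A", "expected_gpa": 3.5, "remedial_credits": 0},  # neither signal
    {"name": "B", "expected_gpa": 3.5, "remedial_credits": 6},  # records only
    {"name": "C", "expected_gpa": 2.5, "remedial_credits": 0},  # survey only
]

flagged = [s["name"] for s in students if flag_at_risk(s)]
print(flagged)  # → ['B', 'C']
```

Note that student B reports optimistic expectations and so would slip past a survey-only rule; the remedial-credit record is what surfaces the risk, which is exactly the point of using multiple data sources.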
In the end, I think we can all agree that tackling an issue like student risk prediction is complicated—so why settle for a simplistic approach? Students are complex. The time frame we are predicting for is long, and the literature is filled with research on the countless factors that affect college student success. By utilizing the right combination of proven data sources, rather than limiting ourselves to one source or another, we can be much more confident in our ability to predict student risk.