In a previous blog post, I discussed the important role that surveys can play in helping to predict college student attrition risk. Surveys are an essential tool for understanding and predicting student retention, completion, and success, so why wouldn't survey data be part of our work? But I've said it before, and I'll say it again: while surveys can be crucial to identifying at-risk students, I would not recommend using surveys alone to predict risk. Here are two simple reasons for my thinking: students often can't foresee the problems that will put them at risk, and even when problems do arise, they rarely adjust their expectations. An example from our own survey data illustrates both points.
There are many plausible explanations for the gap between the grades students expect to earn and the grades they actually earn. For instance, when they completed the survey, students may not have anticipated, or been able to anticipate, the academic issues about to arise. Maybe the survey was administered too early for potential problems to have surfaced. Perhaps the students were overly optimistic, either about their abilities or about the likely outcomes. Regardless of the explanation, the outcome did not match the initial expectation. And to make matters even more interesting, that performance did not change students' expectations for the future. Although nearly half of the students surveyed earned a GPA below 3.0 in their first semester, a whopping 85% reported that they did not expect to earn a GPA below 3.0 the following semester. Even with a poor fall-term GPA in hand, expectations for the spring term were not adjusted.
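For readers who like to see the idea in data terms, here is a minimal sketch of how such a mismatch could be surfaced. The column names (expected_gpa, actual_gpa) and the toy records are hypothetical stand-ins, not actual survey fields:

```python
# A minimal sketch of flagging expectation-performance mismatches.
# Column names and values are hypothetical, for illustration only.
import pandas as pd

survey = pd.DataFrame({
    "student_id":   [101, 102, 103],
    "expected_gpa": [3.5, 3.2, 3.8],  # self-reported on an early-term survey
    "actual_gpa":   [2.4, 3.4, 2.1],  # registrar data after the fall term
})

# Students who expected at least a 3.0 but fell below it: the survey
# alone would never have flagged them as at risk.
mismatch = survey[(survey["expected_gpa"] >= 3.0) & (survey["actual_gpa"] < 3.0)]
print(mismatch)
```

A comparison like this is only possible once survey responses are joined with a second data source, which is exactly the point of what follows.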
Overall, surveys have the potential to help us spot students with struggles such as lack of integration, homesickness, and poor study behaviors. But, as the example above shows, a student who reports no issues on a survey is not necessarily free of risk. So, despite their undeniable importance, surveys are not infallible. What can we do to supplement survey data and get a more complete picture of our students?
Let's return to the previous example and consider one data source, enrollment in remedial courses, that could be used in combination with students' grade expectations. According to data uploaded by campuses utilizing Mapworks, 77% of first-year students who were not enrolled in remedial courses during their first semester continued to their second academic year. In comparison, only 55% of students who enrolled in more than three remedial credit hours during their first term returned for their second academic year. Simply adding a data point such as remedial credits to survey questions about course expectations and performance can improve our ability to identify at-risk students.
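To make the combination concrete, here is a hedged sketch of a simple rule-based flag that joins the two sources. Only the more-than-three-credit-hours threshold comes from the figures above; the field names and toy data are assumptions:

```python
# A sketch of combining a survey signal with an enrollment record.
# Field names and values are hypothetical; the > 3 remedial-credit-hour
# threshold mirrors the retention figures cited in the post.
import pandas as pd

students = pd.DataFrame({
    "student_id":            [101, 102, 103, 104],
    "expects_below_3_0":     [False, False, True, False],  # survey response
    "remedial_credit_hours": [0, 6, 3, 4],                 # registrar data
})

# Flag a student if either source raises a concern: a self-reported
# grade worry OR more than three remedial credit hours in the first term.
students["at_risk"] = (
    students["expects_below_3_0"] | (students["remedial_credit_hours"] > 3)
)
print(students[students["at_risk"]])
```

Note that students 102 and 104 would be missed entirely by the survey question alone; the enrollment record is what surfaces them.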
In the end, I think we can all agree that tackling an issue like student risk prediction is complicated—so why settle for a simplistic approach? Students are complex. The time frame we are predicting over is long, and the literature is filled with research on the countless factors that affect college student success. By utilizing the right combination of proven data sources, rather than limiting ourselves to one source or another, we can be much more confident in our ability to predict student risk.
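As a final, purely illustrative sketch, the same idea extends naturally to a predictive model that weighs several sources at once. The features, toy data, and choice of logistic regression here are assumptions for demonstration, not a description of any particular vendor's model:

```python
# An illustrative sketch: several data sources feeding one retention model.
# Features, toy data, and the model choice are assumptions, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: expected_gpa (survey), remedial_credit_hours (registrar),
# campus_event_logins (engagement data)
X = np.array([
    [3.8, 0, 12],
    [3.0, 6,  2],
    [3.5, 4,  1],
    [3.9, 0,  9],
    [2.8, 7,  0],
    [3.6, 0, 10],
])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = returned for the second year

model = LogisticRegression().fit(X, y)

# Estimated retention probability for a new student profile.
print(model.predict_proba([[3.4, 5, 3]])[:, 1])
```

The point is not this particular model but the shape of the approach: each added source gives the model a chance to catch students the others miss.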
Interested in exploring even further? Check out our new infographic, Using Data to Predict College Student Risk.