BLOG | How Machine Learning Is Easing OER Pain Points
Algorithms can help faculty discover and select open educational resources for a course, map the concepts covered in a particular text, generate assessment questions and more.
By David Raths
The basic definition of machine learning is that it allows a computer to learn and improve from experience without being explicitly programmed. One obvious example: the way a Netflix algorithm learns our viewing habits to suggest other shows and movies we might like. We come into contact with dozens of such machine-learning algorithms every day.
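The recommendation idea can be illustrated with a toy item-based collaborative filter: titles are compared by the ratings users gave them, and the titles most similar to one a viewer liked are suggested. The viewers, titles, and ratings below are invented, and real systems use far richer models than this.

```python
# Toy item-based recommender: rank titles by cosine similarity of their
# user-rating vectors to a title the viewer already liked.
from math import sqrt

ratings = {  # user -> {title: rating}
    "ana":  {"Drama A": 5, "Drama B": 4, "Comedy C": 1},
    "ben":  {"Drama A": 4, "Drama B": 5},
    "cara": {"Comedy C": 5, "Comedy D": 4, "Drama A": 1},
}

def item_vector(title):
    # Ratings each user gave this title (sparse vector keyed by user).
    return {u: r[title] for u, r in ratings.items() if title in r}

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[u] * b[u] for u in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def similar_to(title):
    # All other titles, most similar first.
    titles = {t for r in ratings.values() for t in r} - {title}
    v = item_vector(title)
    return sorted(titles, key=lambda t: cosine(v, item_vector(t)), reverse=True)

suggestions = similar_to("Drama A")
```

With this toy data, "Drama B" ranks first because the same viewers rated both dramas highly.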
Algorithms are even starting to make an impact on university campuses, taking on time-consuming tasks to ease faculty and administrator workloads. For example, RiteClass's predictive admissions platform uses machine learning to produce a "Prospective Student Fit Score" by ingesting data about current students and alumni. According to the company, the score measures how similar (or different) a prospective student is to those groups, helping institutions make data-driven admissions decisions.
And in support of faculty members, several efforts are underway to use machine learning to analyze the contents of open educational resources (OER) for their fit in a particular course.
California State University, Fresno has been urging its faculty members to seek out appropriate no- or low-cost course materials. The problem: Replacing costlier course material with appropriate OER content is time-consuming, said Bryan Berrett, director of the campus's Center for Faculty Excellence. To ease the process of selecting material, CSU-Fresno has been piloting an analytics solution from Intellus Learning, which has indexed more than 45 million online learning resources and can make recommendations of matching OER content. "If I am teaching an English course and I have a standard textbook, I can type the ISBN number into Intellus," explained Berrett. "Broken down by chapter, it will say here are all the OER resources that are available that match up with that content." The faculty member can then upload the resources directly into the course learning management system.
Intellus says it can also index the millions of learning objects in use at an institution and provide real-time analytics on student usage.
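As a rough illustration of the kind of chapter-to-resource matching described above, a chapter's text can be compared against catalog entries by vocabulary overlap. The catalog entries below are invented, and this bag-of-words cosine is far simpler than a production index like Intellus's, which works over tens of millions of resources.

```python
# Sketch: rank catalog entries by cosine similarity of word counts
# against a chapter's text, and return the best matches.
from collections import Counter
from math import sqrt

catalog = {  # resource title -> indexed description text (invented)
    "Intro to Rhetoric (OER)": "rhetoric audience persuasion ethos pathos logos",
    "Essay Structure Basics":  "thesis paragraph essay structure revision drafting",
    "Cell Biology Primer":     "cell membrane mitochondria organelle dna",
}

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(c * c for c in a.values())) * sqrt(sum(c * c for c in b.values()))
    return dot / norm if norm else 0.0

def recommend(chapter_text, top_n=2):
    cv = vec(chapter_text)
    ranked = sorted(catalog, key=lambda t: cosine(cv, vec(catalog[t])), reverse=True)
    return ranked[:top_n]

chapter = "this chapter covers thesis statements paragraph structure and essay revision"
matches = recommend(chapter)
```

Here the writing-focused resource wins because it shares the most vocabulary with the chapter.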
A similar homegrown effort at Penn State University has branched out into new directions, said Kyle Bowen, director of education technology services. PSU's BBookX takes a human-assisted computing approach to enable creation of open source textbooks. The technology uses algorithms to explore OER repositories and return relevant resources that can be combined, remixed and re-used to support learning goals. As instructors and students add materials to a book, BBookX learns and further refines the recommended material.
Bowen explained that the work was inspired to some degree by more nefarious uses of machine learning. Seeing researchers use algorithms to generate fake research papers raised the question: If you can do something like that to create fake papers, could you use it to create real ones, or real content? "What better problem to try to solve than looking at open content?" he said. "How could we simplify or expedite the process of generating a textbook or a textbook replacement?"
In the process of training machines to search for appropriate content, the PSU researchers discovered that algorithms often surface content the faculty member may not have known about. Even if you are an expert in a topic area, there are still elements of the field you may not be as familiar with, and the algorithm is not biased by knowledge you already have.
Describing the process of fine-tuning the algorithm, Bowen said it works less like a Google search and more like a Netflix recommendation. "With a Google search, you provide a term, and if you don't like the results you change your terms. Here you are changing how the machine is thinking about those terms," he explained. "You are telling it 'more like this, less like that,' and you keep iterating. It begins to focus on what you are looking for and what you mean by that term. It goes by the meaning the faculty member is trying to get to."
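The "more like this, less like that" loop Bowen describes resembles classic relevance feedback (Rocchio's method), in which the query vector is pulled toward results the user likes and pushed away from those they reject. The sketch below is built on that assumption with invented documents and standard textbook weights; it is not BBookX's actual algorithm.

```python
# Rocchio-style relevance feedback: nudge the query vector toward
# liked documents and away from disliked ones, then iterate.
from collections import Counter

def vec(text):
    return Counter(text.lower().split())

def refine(query_vec, liked, disliked, alpha=1.0, beta=0.75, gamma=0.25):
    new = Counter()
    for w, c in query_vec.items():
        new[w] += alpha * c
    for doc in liked:                      # pull toward liked results
        for w, c in vec(doc).items():
            new[w] += beta * c / len(liked)
    for doc in disliked:                   # push away from rejected ones
        for w, c in vec(doc).items():
            new[w] -= gamma * c / len(disliked)
    # Keep only positively weighted terms.
    return Counter({w: c for w, c in new.items() if c > 0})

q = vec("photosynthesis")
q = refine(q,
           liked=["photosynthesis chlorophyll light reaction"],
           disliked=["photosynthesis quiz answer key"])
```

After one round, terms from the liked result ("chlorophyll") enter the query while terms unique to the rejected one ("quiz") are suppressed, which is the sense in which the machine learns what the instructor means by a term.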
Although PSU is continuing its work on the OER textbook project, Bowen said, "What we uncovered was that using this machine learning approach to generate textbooks was potentially one of the least interesting things we could do with it." The institution's data scientists have moved into three other areas with the intent of taking on even more complex issues:
1) Prerequisite knowledge. In terms of sequencing how material is presented, machine learning might help instructors understand the prerequisite knowledge a person would need in order to understand a particular body of text. "We want to make sure that as you are coming into a class, the prerequisite knowledge has already been introduced," Bowen said. "You could do that yourself by charting out the concepts to see how they relate across the material. But in this case, the machine can more effectively construct concept maps and identify disconnects inside of them."
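One way to make the concept-map idea concrete is to model concepts as a dependency graph and topologically sort it, so that every concept appears only after its prerequisites; a missing or cyclic edge then shows up as a sequencing problem. The concepts and edges below are invented for illustration, not taken from PSU's tool.

```python
# Sketch: order course concepts so prerequisites always come first,
# using the standard library's topological sorter.
from graphlib import TopologicalSorter

prereqs = {  # concept -> set of concepts it depends on (invented)
    "functions":     set(),
    "limits":        {"functions"},
    "derivatives":   {"limits"},
    "chain rule":    {"derivatives"},
    "related rates": {"chain rule", "derivatives"},
}

# static_order() raises CycleError if the concept map contains a cycle,
# which would indicate a sequencing disconnect in the material.
order = list(TopologicalSorter(prereqs).static_order())
```

In the resulting order, "functions" precedes "limits", which precedes "derivatives", and so on, so no concept is used before it is introduced.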
2) Generating assessment questions. Anybody who has crafted a multiple-choice midterm or final exam knows how challenging it is to make it representative of the work and create distractors to effectively assess understanding of a topic. PSU is working on a prototype algorithm that, given an OER chapter or a textbook, can suggest multiple-choice assessments.
"This gets into an area of machine learning called adversarial learning, which comes out of security. It is how the computer identifies spam messages," Bowen said. Spam e-mails aren't real e-mails, although they are trying to look like they are — they are trying to exploit a vulnerability. With the creation of a spam filter, machine learning identifies pattern matches. "We want to do the opposite," he said. "We want to identify things that don't fit the pattern but look like they would. What are some things that might exploit gaps in someone's knowledge? What we have found is the machine creates really difficult multiple-choice tests. It shows very little mercy."
PSU has not yet begun testing this solution with faculty. "It is important to explain that it is not the goal to replace what the person is doing, but rather to assist the faculty member," Bowen said. The aim is not to have the machine generate multiple-choice assessments on the fly, but to help a faculty member craft a test that is representative of the material and to simplify the process of creating it, he added. The same holds for the prerequisite-knowledge work: it is meant to support faculty members as they think through prerequisites, not to replace them.
3) Brainstorming with your computer. A third conceptual area PSU is working on is letting the computer help you brainstorm.
"We all have friends who are really smart and who we go to to bounce ideas off of," Bowen said. Such a friend might ask if you have thought about other concepts. "You can do that with your computer," he explained. If you are thinking about a topic, the machine can say, "well based on that, have you thought about x?" It can help you brainstorm an activity and also form or prototype ideas and come out with a concept map or outline that helps you explore new areas.
"So although the original algorithm was designed to generate texts, when we look at it, these three areas are potentially higher value problems to work on. We have moved away from our original research to look at how we can provide more targeted assistance on pain points in developing OER material."