
Using computer science methods to discover how people learn

“Technology really lets you bring precision to better understanding what helps people learn, and what motivates them,” says Assistant Professor Joseph Jay Williams (photo by Ryan Perez)

The University of Toronto’s Joseph Jay Williams is using computer science methods to enhance education. 

“We need scalable methods for analyzing data and figuring out what works for learners, but these algorithmic methods have to be combined with human intelligence – like allowing instructors to provide ideas,” says Williams, who joined the University of Toronto’s department of computer science as an assistant professor this summer. 

It’s a homecoming of sorts for Williams, whose undergraduate studies were at U of T, in a program that combines courses in computer science and psychology. He had an opportunity to take a course with University Professor Emeritus Geoffrey Hinton, and the program led him to a PhD in computational cognitive science at the University of California, Berkeley. 

Williams’s research focuses on using technology to improve how people learn, as well as applying statistics and machine learning to model how people think. His approach includes designing randomized experiments, also known as A/B testing, to see what helps students learn, and involving instructors in that process. 

“In industry, companies like Google and Facebook give you different ads to see which one you click on, ad A or B. As long as you have two or more versions of something, you can see which one is effective,” says Williams. “A/B testing is a common industry term, though the scientific term is randomized experiment or randomized controlled trial, which are used to test hypotheses about learning, or to evaluate the efficacy of instruction.” 

Students learn from an instructor explaining a problem, but having students generate explanations themselves, such as explaining why they think an answer is correct, can be especially helpful, he says.

“I’ve used randomized lab experiments to investigate why that happens. Some people are asked, ‘Why do you think the answer is correct?’ Others are told, ‘Write your thoughts about the answer.’ So we compare A, why is it correct, against B, write your thoughts, and then we measure which of those prompts leads people to learn more.”
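
To make the comparison concrete, here is a minimal sketch in Python of how such a two-prompt randomized experiment could be assigned and analyzed. The prompt wording mirrors the quote above, but the post-test scores, the sample size and the use of Welch’s t-test (via scipy) are illustrative assumptions, not details of Williams’s actual studies.

```python
# A minimal sketch of a two-condition randomized experiment (A/B test).
# The scores below are hypothetical illustrations, not data from
# Williams's studies.
import random
from scipy import stats

PROMPTS = {
    "A": "Why do you think the answer is correct?",
    "B": "Write your thoughts about the answer.",
}

def assign_condition() -> str:
    """Randomly assign an incoming learner to prompt A or prompt B."""
    return random.choice(list(PROMPTS))

# Each incoming learner would be assigned at random, e.g.:
print("next learner gets prompt", assign_condition())

# Hypothetical post-test scores collected after each prompt was shown.
scores = {
    "A": [0.82, 0.74, 0.91, 0.66, 0.88, 0.79],
    "B": [0.71, 0.69, 0.77, 0.62, 0.80, 0.73],
}

# Welch's t-test compares mean post-test performance across conditions.
t_stat, p_value = stats.ttest_ind(scores["A"], scores["B"], equal_var=False)
print(f"prompt A mean = {sum(scores['A'])/len(scores['A']):.2f}, "
      f"prompt B mean = {sum(scores['B'])/len(scores['B']):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```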

The key, says Williams, is to unpack the mechanisms that underlie learning and make learners better at solving new problems. He says it’s a pity that A/B testing is routinely used to improve consumer products, but not as commonly used to improve education. 

He applies algorithms from machine learning – a subfield of artificial intelligence that allows computers to process, analyze and make decisions from data – to analyze A/B experiments and to provide better conditions for future learners. 

In one of his papers, three Harvard University instructors ran randomized A/B comparisons in the student portal to see what kinds of explanations or feedback messages students found helpful. 

Williams says one instructor designed different feedback messages, for example, ‘when you get this question wrong, think back to an earlier lesson’, allowing him to compare that approach to simply telling students why their answer wasn’t correct. 

“We get a metric of quality by asking students, ‘How helpful was this message you got on a scale from 0 to 10?’ This provides insight into what students find helpful. But how do we use this data to improve the feedback messages for future students?”

It’s at this juncture that Williams thinks machine learning can be applied, testing in real time to determine what’s most effective for learners. 
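
One standard way to apply machine learning in real time is an adaptive algorithm such as Thompson sampling, which gradually shows higher-rated feedback messages to more students while still exploring the alternatives. The sketch below is a simplified illustration under assumed details – the message names, the simulated ratings and the choice to count a rating of 7 or above as “helpful” are hypothetical, and this is not necessarily the specific algorithm used in Williams’s paper.

```python
# A minimal sketch of adaptive experimentation with Thompson sampling.
# Message names, simulated ratings and the 7-out-of-10 "helpful"
# threshold are illustrative assumptions, not details from the paper.
import random

# Each candidate feedback message (arm) keeps Beta(successes+1, failures+1).
arms = {
    "think_back_to_earlier_lesson": {"success": 0, "failure": 0},
    "explain_why_answer_is_wrong":  {"success": 0, "failure": 0},
}

def choose_message() -> str:
    """Sample a plausible helpfulness rate per message; show the best draw."""
    draws = {
        name: random.betavariate(counts["success"] + 1, counts["failure"] + 1)
        for name, counts in arms.items()
    }
    return max(draws, key=draws.get)

def record_rating(message: str, rating: int) -> None:
    """Update the chosen message with a student's 0-10 helpfulness rating."""
    if rating >= 7:  # assumed cut-off for counting the message as helpful
        arms[message]["success"] += 1
    else:
        arms[message]["failure"] += 1

# For each new student: pick a message, deliver it, then log the rating.
# In a real deployment the rating would come from the student who saw it;
# here the ratings are simulated.
for simulated_rating in [8, 4, 9, 6, 7, 10, 3, 8]:
    message = choose_message()
    record_rating(message, simulated_rating)

print(arms)
```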

For example, Williams says A/B testing has been used in MOOCs, or Massive Open Online Courses. When resources are digital and cloud-based, it’s easy to compare alternative versions, and data is readily available, he says. Algorithms can be run dynamically to discover what’s effective, a much faster process than was possible ten or twenty years ago, when studies took place in physical classrooms or labs. 

“Technology really lets you bring precision to better understanding what helps people learn, and what motivates them.”

Another part of Williams’s research addresses health behavior change. While it’s easy to say that people should exercise more and perhaps eat less sugar, Williams says A/B testing through technology can help discover exactly what messages or encouragement get people to change their behavior. 

“We can test out what messages we give people, or how we help people set goals for themselves,” he says. “In the social space, technology can enable us to keep testing out what helps people. In an ideal world, we’d build systems that never stop experimenting, and so they never stop improving.”

 
