Ryan Baker is Professor at the University of Pennsylvania and Director of the Penn Center for Learning Analytics. His lab conducts research on engagement and robust learning within online and blended learning, seeking actionable indicators that can be used today but that predict future student outcomes. Baker has developed models that automatically detect student engagement in over a dozen online learning environments, and has led the development of an observational protocol and app for field observation of student engagement that has been used by over 150 researchers in 7 countries. Predictive analytics models he helped develop have been used to benefit over a million students, over a hundred thousand people have taken MOOCs he ran, and he has coordinated longitudinal studies spanning over a decade. He has developed several learning systems and learning research technologies based on machine learning and foundation models such as large language models.
 
Baker was the founding president of the International Educational Data Mining Society, currently serves as Editor of the journal Computer-Based Learning in Context, is Associate Editor of the Journal of Educational Data Mining, founded master's programs in Learning Analytics at Teachers College Columbia University (the first such program) and the University of Pennsylvania, and was the first technical director of the Pittsburgh Science of Learning Center DataShop. Baker has co-authored published papers with over 400 colleagues and has been cited over 25,000 times.

Talk: Algorithmic Bias in Education: The Problem and the Debate About What To Do

Attend this talk via Zoom

Abstract: The advanced algorithms of learning analytics and educational data mining underpin modern adaptive learning technologies, both for assessment and for supporting learning. However, insufficient research has gone into validating whether these algorithms are biased against historically underrepresented learners. In this talk, I briefly discuss the literature on algorithmic bias in education, reviewing the evidence for how algorithmic bias impacts specific groups of learners, and the gaps in that literature -- both "known unknowns" and "unknown unknowns". I will then turn to the question of what to do once algorithmic bias has been discovered. There is a range of algorithmic approaches to addressing and reducing algorithmic bias, but there is an ongoing debate about the role of demographic variables. In particular, is it beneficial or harmful to use demographic variables as predictors? I will discuss some of the arguments in favor of this practice, and some of the key risks and concerns in doing so.