Niloufar Salehi is an assistant professor in the School of Information at UC Berkeley, where she is a member of Berkeley AI Research (BAIR). She studies human-computer interaction, with research spanning education, healthcare, and restorative justice. Her research interests are social computing, human-centered AI, and, more broadly, human-computer interaction (HCI). Her work has been published and received awards in premier venues including ACM CHI, CSCW, and EMNLP, and has been covered in VentureBeat, Wired, and the Guardian. She is a W. T. Grant Foundation scholar for her work on promoting equity in student assignment algorithms. She received her PhD in computer science from Stanford University in 2018.

Attend this talk via Zoom

Abstract: How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, can produce unpredictable results. This uncertainty becomes even more pronounced in domains where verification is challenging, such as machine translation, and where reliance depends on adherence to community values, such as student assignment algorithms. Providing users with guidance on when to rely on a system is difficult because models can produce a wide range of outputs (e.g., text), error boundaries are highly stochastic, and automated explanations may themselves be incorrect. In this talk, I will first focus on the case of healthcare communication to share approaches to improving the reliability of ML-based systems by guiding users to gauge reliability and recover from potential errors. Next, I will focus on the case of student assignment algorithms to examine modeling assumptions and perceptions of fairness in AI systems.