Niloufar Salehi is an assistant professor in the School of Information at UC Berkeley, where she is a member of Berkeley AI Research (BAIR). Her research interests are social computing, human-centered AI, and, more broadly, human-computer interaction (HCI), with work spanning education, healthcare, and restorative justice. Her work has been published and received awards in premier venues including ACM CHI, CSCW, and EMNLP, and has been covered in VentureBeat, Wired, and the Guardian. She is a W. T. Grant Foundation scholar for her work on promoting equity in student assignment algorithms. She received her PhD in computer science from Stanford University in 2018.
Abstract: How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, can produce unpredictable results. This uncertainty becomes even more pronounced in areas where verification is challenging, such as machine translation, and where reliance depends on adherence to community values, such as student assignment algorithms. Providing users with guidance on when to rely on a system is challenging because models can create a wide range of outputs (e.g., text), error boundaries are highly stochastic, and automated explanations themselves may be incorrect. In this talk, I will first focus on the case of healthcare communication to share approaches to improving the reliability of ML-based systems by guiding users to gauge reliability and recover from potential errors. Next, I will focus on the case of student assignment algorithms to examine modeling assumptions and perceptions of fairness in AI systems.