Building trust in AI systems

Artificial Intelligence (AI) auditing and accountability research investigates how to make AI systems more accountable and transparent through innovative auditing approaches. Our researchers explore critical challenges in evaluating AI systems, from examining potential biases in algorithms to assessing their societal impact, and develop new frameworks and methodologies for AI accountability, focusing on how these complex systems can be effectively monitored, evaluated, and improved.

Key research areas include developing standards for AI auditing, creating tools for detecting algorithmic bias, and designing approaches that promote transparency while protecting intellectual property. This work aims to ensure AI systems serve their intended purposes while upholding ethical principles and promoting fairness.

Faculty exploring AI auditing and accountability

Allison Koenecke
Assistant Professor of Information Science
koenecke@cornell.edu
Aditya Vashistha
Assistant Professor of Information Science
adityav@cornell.edu
Angelina Wang
Assistant Professor of Information Science
angelina.wang@cornell.edu