Andrew Selbst is an attorney and academic, currently a postdoctoral scholar at the Data & Society Research Institute and a visiting fellow at the Yale Information Society Project.
Selbst studies the effects of technological change on legal institutions and structures, with a particular focus on how technology disrupts society’s traditional understandings of civil rights and civil liberties. He is currently interested in the legal and social effects of machine learning algorithms, or what many people now call artificial intelligence. Consequential decisions are increasingly made on the basis of correlations that models uncover in patterns of human behavior. This technology upends much of what we think we understand about reasoning, causation, culpability, procedural rights, and discrimination. By combining insights from the technical and legal literatures, Selbst works to identify the specific regulatory problems posed by machine learning and how to solve them.
Talk: "Fairness and Abstraction in Sociotechnical Systems"
Abstract: A primary goal of the FAT* community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal goals such as fairness, justice, and due process. Bedrock concepts in computer science such as abstraction and modular design are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce "fair" outcomes. In this paper, however, we contend that these concepts render technical interventions ineffective, inaccurate, and sometimes dangerously misguided when they enter the societal context that surrounds decision-making systems. We outline this mismatch with five "traps" that fair-ML work can fall into even as it attempts to be more context-aware than traditional data science. We draw on studies of sociotechnical systems in Science and Technology Studies to explain why such traps occur and how to avoid them. Finally, we suggest ways in which technical designers can mitigate the traps by refocusing design on process rather than solutions, and by drawing abstraction boundaries to include social actors rather than purely technical ones.
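To make concrete the kind of pipeline-stage intervention the abstract refers to, here is a minimal, hypothetical sketch of one common technique: a post-processing step that equalizes selection rates across two groups (demographic parity). The data, group labels, and target rate are illustrative assumptions, not taken from the talk or the paper.

```python
# Hypothetical sketch of a fairness-aware post-processing intervention
# (demographic parity via per-group thresholds). All data and names are
# illustrative assumptions; this is not code from the talk or the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores for two groups (assumed, for illustration only).
scores_a = rng.normal(0.6, 0.15, size=1000)
scores_b = rng.normal(0.5, 0.15, size=1000)

target_rate = 0.3  # desired selection rate for both groups (an assumption)

# Choose a separate threshold per group so both selection rates match.
thresh_a = np.quantile(scores_a, 1 - target_rate)
thresh_b = np.quantile(scores_b, 1 - target_rate)

selected_a = scores_a >= thresh_a
selected_b = scores_b >= thresh_b

print(f"Group A: selection rate {selected_a.mean():.2f} at threshold {thresh_a:.2f}")
print(f"Group B: selection rate {selected_b.mean():.2f} at threshold {thresh_b:.2f}")
```

The paper's argument is precisely that a self-contained fix like this abstracts away the surrounding social context: who sets the target rate, what being "selected" means downstream, and how people adapt to the rule. That mismatch is what the five "traps" describe.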