Jen is a doctoral student in philosophy at the University of Oxford. Her research focuses on the prospects and implications of artificial moral agency.
Talk: Artificial Moral Behavior
Abstract: We should not deploy autonomous weapons systems. We should not try to program ethics into self-driving cars. We should not replace judges with algorithms. Arguments of this sort (that is, arguments against the use of AI systems in particular decision contexts) often appeal to the same reason: AI systems should not be deployed in such situations because they are not moral agents. But it's not always clear why a lack of moral agency is relevant to questions about using AI systems in these circumstances. In this talk, I argue that even if AI systems are accurate and reliable in making morally laden decisions, we do something wrong when we delegate such decisions to AI. Specifically, I argue for the following view: Delegating certain decisions to AI systems is wrong because doing so turns events that should be moral actions into, at best, moral behaviors. That is, when we outsource decisions to entities that are not moral agents, we change the status of those decisions in a morally significant way. This view helps explain why questions about responsibility for AI-caused harms are so difficult to answer, and it can motivate guidelines for when it's permissible to delegate decisions to AI systems.