Jen is a doctoral student in philosophy at the University of Oxford. Her research focuses on the prospects and implications of artificial moral agency.

Attend this talk via Zoom

Talk: Artificial Moral Behavior

Abstract: We should not deploy autonomous weapons systems. We should not try to program ethics into self-driving cars. We should not replace judges with algorithms. Arguments of this sort—that is, arguments against the use of AI systems in particular decision contexts—often point to the same reason: AI systems should not be deployed in such situations because AI systems are not moral agents. But it’s not always clear why a lack of moral agency is relevant to questions about using AI systems in these circumstances. In this talk, I argue that even if AI systems are accurate and reliable in making morally laden decisions, we do something wrong when we delegate such decisions to AI. Specifically, I argue for the following view: Delegating certain decisions to AI systems is wrong because doing so turns events that should be moral actions into, at best, moral behaviors. That is, when we outsource decisions to entities that are not moral agents, we change the status of those decisions in a morally significant way. This view can help us understand why questions about responsibility for AI-caused harms are so difficult to answer, and it can motivate guidelines for when it’s permissible to delegate decisions to AI systems.