Peter Henderson is a joint JD-PhD (Computer Science) candidate at Stanford University, advised by Dan Jurafsky. He is also an Open Philanthropy AI Fellow, a Graduate Student Fellow at the Stanford RegLab working closely with Daniel E. Ho, and a technical advisor at the Institute for Security and Technology. Previously, he received his M.Sc. from McGill University, advised by Joelle Pineau and David Meger. He has worked at the California Supreme Court, Amazon AWS & Alexa, and Meta Fundamental AI Research.

Talk: Aligning Machine Learning, Law, and Policy for Responsible Real-World Deployments

Abstract: Machine learning (ML) is being deployed across a vast array of real-world applications with profound impacts on society. ML can have positive impacts, such as aiding in the discovery of new cures for diseases and improving government transparency and efficiency. But it can also be harmful: reinforcing authoritarian regimes, scaling the spread of disinformation, and exacerbating societal biases. As we rapidly move toward systemic use of ML in the real world, there are many unanswered questions about how to successfully use ML for social good while preventing its potential harms. Many of these questions inevitably require pursuing a deeper alignment between ML, law, and policy. Are certain algorithms truly compliant with current laws and regulations? Is there a better design that can make them more attuned to the regulatory and policy requirements of the real world? Are laws, policies, and regulations sufficiently informed by the technical details of ML algorithms, or will they be ineffective and out of sync? In this talk, I will discuss ways to bring together ML, law, and policy to address these questions. I will draw on real-world examples throughout, including a unique collaboration with the Internal Revenue Service. I will show how investigating questions of alignment between ML, law, and policy can advance core research in ML, as well as how we might develop new algorithms that expand policy and regulatory options. It is my hope that the tools discussed in this talk will lead to more effective and responsible ways of deploying ML in the real world, so that we steer toward positive impacts and away from potential harms.