David Robinson is currently a Visiting Scientist in the AI Policy and Practice Initiative, in Cornell's College of Computing and Information Science. His research centers on the design and management of algorithmic decisionmaking, particularly in the public sector. He believes that effective governance of these sociotechnical systems will require collaboration and mutual adaptation by the legal and technical communities, leading to changes in both institutional and algorithmic design, as well as the generation and use of new types of data. Through his work, he aims to contribute to that effort.

He is a managing director and cofounder of Upturn, a Washington, DC-based public interest organization that promotes equity and justice in the design, governance, and use of digital technology. Upturn's research and advocacy combine technical fluency and creative policy thinking to confront patterns of inequity, especially those rooted in race and poverty.

David previously served as the inaugural associate director of Princeton University’s Center for Information Technology Policy, a joint venture between the university’s School of Engineering and its Woodrow Wilson School of Public and International Affairs. 

David holds a JD from Yale Law School and bachelor's degrees in philosophy from Princeton and Oxford, where he was a Rhodes Scholar.

Talk: "Danger Ahead: Risk Assessment and the Future of Bail Reform"


Abstract: In the last five years, lawmakers in all 50 states have made changes to their pretrial justice systems. Reform efforts aim to shrink jails by incarcerating fewer people, particularly poor, low-risk defendants and racial minorities. Many places are embracing pretrial risk assessment instruments — statistical tools that use historical data to forecast which defendants can safely be released — as a centerpiece of reform. But these instruments, as they are currently built and used, cannot safely be assumed to support reformist goals of reducing incarceration and addressing racial and poverty-based injustice. Existing scholarship and debate center on how the instruments themselves may reinforce racial disparities, and on how their opaque algorithms may frustrate due process interests. In this talk, I will highlight three underlying challenges that have yet to receive the attention they require. First, today's risk assessment tools make what I call "zombie predictions." That is, predictive models trained on data from older bail regimes are blind to the risk-reducing benefits of recent bail reforms. Second, the "decision-making frameworks" that mediate the court system's use of risk estimates embody crucial moral judgments, yet currently escape appropriate public scrutiny. Third, in the long term, these tools risk giving an imprimatur of scientific objectivity to ill-defined concepts of "dangerousness," may entrench the Supreme Court's historically recent blessing of preventive detention for dangerousness, and could pave the way for an increase in preventive detention.