Professor Kate Crawford is a leading international scholar of the social implications of artificial intelligence. She is a Research Professor at USC Annenberg in Los Angeles, a Senior Principal Researcher at Microsoft Research in New York, an Honorary Professor at the University of Sydney, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. Her latest book, Atlas of AI (Yale, 2021), won the Sally Hacker Prize from the Society for the History of Technology and the ASIS&T Best Information Science Book Award, and was named one of the best books of 2021 by New Scientist and the Financial Times. Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations. Her project Anatomy of an AI System, with Vladan Joler, is in the permanent collections of the Museum of Modern Art in New York and the V&A in London; it won the Design of the Year Award in 2019 and was included in the Design of the Decades by the Design Museum of London. Her collaboration with the artist Trevor Paglen, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. She has advised policy makers at the United Nations, the White House, and the European Parliament, and she currently leads the Knowing Machines Project, an international research collaboration that investigates the foundations of machine learning.

Talk: Generative Machines and Ground Truth

Abstract: We are living in a period of rapid acceleration for generative AI, as large language models and text-to-image diffusion models are deployed in a multitude of everyday contexts. From ChatGPT’s training set of hundreds of billions of words to LAION-5B’s corpus of almost 6 billion image-text pairs, these vast datasets – scraped from the internet and treated as “ground truth” – play a critical role in shaping the epistemic boundaries that govern machine learning models. Yet training data is beset with complex social, political, and epistemological challenges. What happens when data is stripped of context, meaning, and provenance? How does training data limit what machine learning systems can interpret about the world, and how they interpret it? And most importantly, what forms of power do these approaches enhance and enable? This lecture is an invitation to reflect on the epistemic foundations of generative AI, and to consider the wide-ranging impacts of the current generative turn.