Dr. Kate Starbird is an Associate Professor in the Department of Human Centered Design & Engineering (HCDE) and Director of the Emerging Capacities of Mass Participation (emCOMP) Laboratory. She is also adjunct faculty in the Paul G. Allen School of Computer Science & Engineering and the Information School, and a data science fellow at the eScience Institute. Her research examines how people use social media to seek, share, and make sense of information after natural disasters (such as earthquakes and hurricanes) and man-made crises (such as acts of terrorism and mass shootings). More recently, her work has shifted to focus on the spread of disinformation during such events.
Title: Reflections on Disinformation, Democracy, and Free Expression
Abstract: Disinformation has become a hot topic in recent years. Depending upon the audience, the problem of pervasive deception online is viewed as a critical societal challenge, an overblown moral panic, or a smokescreen for censoring conservatives. Drawing upon empirical research on the 2016 and 2020 U.S. elections, in this talk I’ll describe how disinformation “works” within online spaces, show how we’re all vulnerable to spreading it, and highlight three interrelated reasons why it’s such a difficult challenge to address. The first, noted by scholars and purveyors of disinformation across history, is that disinformation exploits democratic societies’ commitments to free expression. The second is that online disinformation is participatory, taking shape as collaborations between witting agents and unwitting crowds of sincere believers. And the third is that working to address disinformation is adversarial: the people who benefit from manipulating information spaces do not want that manipulation addressed. I’ll note how the latter has recently manifested in efforts to redefine “censorship” to include a broad range of activities that are themselves speech, from academic research into online mis- and disinformation, to platform moderation, to information literacy programs. I’ll conclude by presenting a range of potential interventions for reducing the impact of harmful disinformation that respect and support free expression while also empowering people and platforms to be more resilient to exploitation.