Leading scholars discuss today's biggest challenges

The Information Science Colloquium (INFO 7090) regularly hosts leading scholars from around the world to discuss topics ranging from mobile sensing and virtual reality to HCI and workplace surveillance. 

Event Archive

Browse past lectures. If you wish to view an event listing prior to 2024, please email comm-office [at] bowers.cornell.edu.

9.3.25 3-4 p.m.
Title: What Roles Will Humans Play in the Future of Data Annotation?
Speaker: Kenneth Huang, Associate Professor at the Pennsylvania State University's College of Information Sciences and Technology
Attend this talk via Zoom

9.17.25 3-4 p.m.
Title: Feeling Like a State: China, America, and Control in the Age of AI
Speaker: Silvia Margot Lindtner, Professor in the School of Information at the University of Michigan and Director of the Center for Ethics, Society, and Computing (ESC)
Attend this talk via Zoom

10.1.25 3-4 p.m.
Title: Designing AI Things, Physically
Speaker: Sang Leigh, Assistant Professor at Cornell University
Attend this talk via Zoom

10.15.25 3-4 p.m.
Title: Citation Analysis and Academic Discrimination in the 1970s: Entangled Histories
Speaker: Alex Csiszar, Professor, Department of the History of Science, Harvard University
Attend this talk via Zoom

10.22.25 3-4 p.m.
Title: Shaping Self-Regulated Learners in the LLM Era: Evidence, Models, and Design
Speaker: Conrad Borchers, Ph.D. student, Human-Computer Interaction Institute (HCII), Carnegie Mellon University
Attend this talk via Zoom

10.29.25 3-4 p.m.
Title: Misinformation on WhatsApp: Insights from a large data donation program
Speaker: Kiran Garimella, Rutgers School of Communication and Information
Attend this talk via Zoom

11.5.25 3-4 p.m.
Title: Avoiding Traps, Tracking Decoys: The Political Economy of AI
Speaker: danah boyd, Partner Researcher at Microsoft Research
Attend this talk via Zoom


11.12.25 3-4 p.m.
Title: The surprising resilience and uncertain future of Community Notes on X
Speaker: Alexios Mantzarlis, Cornell Tech
Attend this talk via Zoom


11.19.25 3-4 p.m.
Title: “What do you know? You’re a robot”: Anthropocentrism, anthropomorphism and defensive stereotypes as strategies to resist AI persuasion
Speaker: Claire Wardle, Associate Professor, Department of Communication, Cornell University
Attend this talk via Zoom

IS Field Talk with Jose Sanchez
12.3.25 3 p.m.
Location: Cornell Tech, NYC (Room BLM 301) - Livestream in Ithaca (CIS 350)
Speaker: Jose Sanchez, Associate Professor, Cornell Tech
Attend this talk via Zoom

12.5.25 1 p.m.
Location: Computing and Information Science Building, Room 250
Title: Law and Technology: A Methodical Approach
Speaker: Ryan Calo, Lane Powell and D. Wayne Gittinger Professor, University of Washington School of Law
Attend this talk via Zoom

04.17.25: Bowers Distinguished Speaker Series - Julie E. Cohen, Georgetown University Law Center
Speaker: Julie E. Cohen
Host: Cornell Bowers CIS Distinguished Speaker Series
Abstract: Theoretical accounts of power in networked digital environments typically do not give systematic attention to the phenomenon of oligarchy—to extreme concentrations of material wealth deployed to obtain and protect durable personal advantage. The biggest technology platform companies are dominated to a singular extent by a small group of very powerful and extremely wealthy men who have played uniquely influential roles in structuring technological development in particular ways that align with their personal beliefs and who now wield unprecedented informational, sociotechnical, and political power. Developing an account of oligarchy and, more specifically, of tech oligarchy within contemporary political economy therefore has become a project of considerable urgency.


03.07.25: The Language of Creation: How Generative AI Challenges Intuitions—and Offers New Possibilities
Speaker: J.D. Zamfirescu-Pereira
Abstract: Generative AI systems enable creation—of text, images, code, and more—through humanlike language interfaces, promising to transform how we design and create. Yet, these interfaces often lead users to apply intuitions about understanding and reasoning that then lead them astray. In this talk, I’ll examine these misaligned intuitions and how they can get in the way of better leveraging unique Generative AI capabilities, such as proposing large sets of diverse design alternatives and drawing connections across domains. Our case studies in chatbot design and interactive programming illustrate (1) how humans approach instructing LLM behavior by drawing on human-human instructional experiences—and how those approaches can fail to serve users' goals; and (2) ways that design tools can leverage AI's strengths while addressing these tensions, through new explicit structures and speculative generation. Finally, I'll present a vision exploring the computational and design scaffolding needed to support the expansion of our power to use language to create.


03.05.25: Advancing Responsible AI with Human-Centered Evaluation
Speaker: Sunnie Kim
Abstract: As AI technologies are increasingly transforming how we live, work, and communicate, AI evaluation must take a human-centered approach to realistically reflect real-world performance and impact. In this talk, I will discuss how to advance human-centered evaluation, and subsequently, responsible development of AI, by integrating knowledge and methods from AI and HCI. First, using explainable AI as an example, I will highlight the challenges and necessity of human (as opposed to automatic) evaluation. Second, I will illustrate the importance of contextualized evaluation with real users, revisiting key assumptions in explainable AI research. Finally, I will present empirical insights into human-AI interaction, demonstrating how users perceive and act upon common AI behaviors (e.g., LLMs providing explanations and sources). I will conclude by discussing the implications of these findings and future directions for responsible AI development.


02.21.25: From Agents to Optimization: User Interface Understanding and Generation
Speaker: Jason Wu
Abstract: A grand challenge in human-computer interaction (HCI) is constructing user interfaces (UIs) that make computers useful for all users across all contexts. UIs today are static, manually-constructed artifacts that limit how users and external software can interact with them. In this talk, I describe two types of machine learning approaches that transform interfaces into dynamic, computational objects that can be optimized for users, applications, and contexts. I first discuss my contributions to UI Understanding, a class of approaches that allow machines to reliably understand the semantics (content and functionality) of UIs using the same input/output modalities as humans (e.g., visual perception of rendered pixels and mouse input). I show how these capabilities enhance user interaction and unlock new possibilities for systems such as assistive technology, software testing, and UI automation. Next, I discuss my contributions to UI Generation, which enables machines to automatically construct and adapt UIs for users, applications, and contexts.

02.07.25: AI-assisted Interventions to Promote Healthy and Sustainable Habits
Speaker: Kristina Gligorić
Abstract: AI tools create new opportunities to assist policy-makers. For example, enabling healthy and sustainable diets is key to addressing preventable diseases and climate change. How can computational methods help policy-makers in developing interventions that address these and other societal challenges?
I develop causal inference tools for explaining decision-making, and AI and LLM methods for implementing novel interventions, advancing both computational social science and natural language processing. I will describe how social factors affect decision-making in campus communities (PNAS Nexus '24) and talk about work that mined these insights to assist chefs and food scientists by revising menus and products. I apply the same causal inference and LLM tools to other societal issues affecting localized communities. For instance, I will describe how we can help neighbors get along online by developing novel interventions on social media (PNAS '24). These new causal inference and LLM methods advance the foundations for technology to support policy-making and interventions.
These studies have had a real impact, affecting thousands of people within a university campus and hundreds of thousands of online users, in partnerships with real-world organizations and companies. My work expands how AI tools can positively affect society across a spectrum of everyday activities, improving communication, health, and sustainability.


02.05.25: Modern Foundations of Social Prediction
Speaker: Juan Carlos Perdomo
Abstract: Machine learning excels at pattern recognition. Yet, when we deploy learning algorithms in social settings, we do not just aim to detect patterns; we use predictions to shape outcomes. This dynamic interplay, where we build systems using historical data to influence future behavior, underscores the role of prediction as both a lens and engine of social patterns. It also inspires us as researchers to explore new questions about which patterns we can influence, how to design prediction systems, and how to evaluate their impacts on society.
I will begin by presenting insights from my work on performative prediction: a learning-theoretic framework that places the dynamic aspects of social prediction on firm mathematical foundations. In the second half, I will present an empirical case study evaluating the impact of a risk prediction tool used to allocate interventions to hundreds of thousands of public school students each year. I’ll end with some discussion of future work and the challenges that lie ahead.


01.29.25: The new behavioral science of belief change?
Speaker: Tom Costello
Abstract: Our social institutions — science, liberal democracy, trial by jury — assume that humans change their minds in response to sufficiently compelling information, allowing us to access truth via deliberation. Yet across the behavioral sciences, interventions that seek to change minds (e.g., shift attitudes and beliefs, correct misinformation) by leveraging factual information are notoriously ineffective, especially for salient topics related to ideology, identity, and coalitional interests. I will argue that much of this inefficacy is not attributable to motivated reasoning (i.e., the typical explanation for humans’ unwillingness to change their minds) but rather that individuals’ belief systems are sufficiently heterogeneous and complex to confound one-size-fits-all attempts at persuasive argumentation. Mounting genuinely compelling arguments at scale is trickier than it appears. My work helps solve this problem by leveraging a novel pipeline for information-focused interactions between humans and generative AI models that (a) measures participants’ beliefs in great detail and (b) delivers high-density factual argumentation (which proves crucial for effectiveness) that bears precisely on said beliefs. These interactions dramatically and durably reduce false beliefs, such as conspiracy theories (d = 1.1, with effects enduring for 2 months) and vaccine skepticism (d = 0.78), shift anti-immigrant prejudice (d = 0.20), and increase voting intentions (d = 0.85) — among other promising findings. I will share these findings and articulate a vision for a new behavioral science of belief change that recognizes beliefs as high dimensional person-specific phenomena, using both computational cognitive science and emerging technologies to account for this complexity. This approach sheds new light on the human mind while helping solve enduring social challenges.


01.24.25: Broadening AI Access through Human-Centered Natural Language Interfaces
Speaker: Kaitlyn Zhou
Abstract: In this talk, I will present the novel dynamics of human interaction with large language models (human-LM interaction), focusing on how these systems shape human decision-making, trust, and reliance. As the world seeks to integrate the innovations of foundation models into everyday work and life, my mission is to design human-centered natural language interfaces to augment human intelligence and democratize access to AI. My work pioneers key advancements in natural language processing and human-computer interaction by: 1) uncovering core algorithmic risks in current human-LM interactions, 2) articulating the factors that complicate human-AI interactions, and 3) proposing new human-LM interactions to serve the needs of a broader population.

12.16.24: How Venture Capital Shapes Technology Design, Development, and Use
Speaker: Benjamin Shestakofsky
Abstract: This talk presents findings from my recent book, Behind the Startup. I draw on nineteen months of participant-observation research inside a successful Silicon Valley company to illuminate the relationship between financial systems and on-the-ground processes of technology design, development, and use. Venture capital investors push nascent tech firms to scale as quickly as possible to inflate the value of their asset. I show how investors’ demands systematically generated organizational problems that managers addressed by combining high-tech systems with a low-wage, globally distributed workforce. With its focus on the financialization of innovation, Behind the Startup explains how the gains generated by tech startups are funneled into the pockets of a small cadre of elite investors and entrepreneurs, leaving workers and users to bear many of the costs and risks associated with technology development. To promote innovation that benefits the many rather than the few, I argue that efforts to build more equitable technologies must be complemented by changes to the organizational, financial, and policy infrastructures that support them.


12.13.24: The Trade Policy Knot: Making Open Hardware Across Borders
Speaker: Verónica Uribe del Águila
Abstract: Unlike software designers, Internet of Things designers must negotiate global supply chains to realize their visions. Through a case study of open hardware design in Mexico, I show how technology trade policies—and trade policy in general—have shaped and will continue to shape computing designs and design practices (hardware, firmware, and software).
I draw on two and a half years of ethnography in the Mexican Bajío, North America’s largest industrial corridor and an important site of global computing supply chains. At this site, new trade agreements offer small tech entrepreneurs, some of them open hardware developers and IoT designers, an opportunity to reorganize the distribution of current and future benefits, risks, and rights attached to private and State-supported technological projects. I show how information technology designers realize their devices at the intersection of trade policies, global supply chain volatilities, and the materialities of the digital world.

12.11.24: Tech’s Right Turn: The Rise of Reactionary Politics in Silicon Valley and Online
Speaker: Becca Lewis
Abstract: In recent years, journalists and scholars alike have observed a “right turn” in Silicon Valley and online. In fact, this “turn” is neither an anomaly nor a recent development. In this talk, I show how groups with reactionary social goals have long harnessed emerging digital information technologies towards ideological ends. I focus on a group of conservative activists who brought their ideas to bear on Silicon Valley and its digital technologies in the 1980s and 1990s. Operating primarily within a loose network of think tanks and media publications, these activists embraced the world of high technology as a space for building right-wing power and a vehicle for restoring older social orders. Ultimately, I show how these groups’ efforts brought conservatism into the Information Age—and how they helped shape our contemporary information systems.

11.20.24: Communities of interest
Speaker: Moon Duchin
Abstract: In the world of political redistricting, many states have a rule on the books that the lines should take "communities of interest" into account. But what in the world does this mean? Both the problem of identifying salient communities and the problem of what it means to "respect" or "reflect" them are wildly hard and interesting challenges that combine geography, sociology, natural language processing, and theories of political representation. I'll set the stage theoretically and then tell some tales from the trenches about what it looks like to take this districting principle seriously.


11.06.24: How Easy Access to Statistical Likelihoods of Everything Will Change Interaction with Computers
Speaker: Jeffrey P. Bigham
Abstract:  The recent arrival of impressive large language models and coding assistants has led to speculation that the way we interact with computers would dramatically (and quickly!) change. That hasn’t really happened… yet, but we are at an inflection point where we can influence interaction for both better and, potentially, worse. In this talk, I’ll use examples from our research to highlight four coming challenges and opportunities in how we interact with computers in (i) maintaining user agency, (ii) designing user interfaces that encourage responsibility, (iii) making computer systems accessible, and (iv) designing, generating, and navigating user interfaces automatically.
The future of human-computer interaction will be both more familiar and less familiar than we think; this talk is intended to help develop your sense of what is likely to be and which futures you want to build.


10.30.24: Understanding “Knowledge”: How Social Epistemology Can Help HCI and AI Researchers to Shape the Future of Generative AI
Speaker: Amy Bruckman
Abstract: What is “knowledge” and how do we find it in the presence of a growing number of epistemically unreliable agents? In the title chapter of my book Should You Believe Wikipedia?, I explain how social epistemology can help researchers better understand complex information networks. In this talk, I’ll extend this analysis to explain the impact of unreliable content from generative AI, and outline next steps for us as researchers.


10.23.24: The State of Design Knowledge in Human-AI Interaction
Speaker: Krzysztof Gajos
Abstract: AI-powered applications are exciting because of their potential to support people in unprecedented ways but they are also particularly challenging to design right. While some specialized design knowledge related to Human-AI Interaction already exists, the production of this knowledge is not keeping up with the pace at which new AI-powered applications are invented. Consequently, without much fanfare or deliberation (or recognition of the fact!), some critical knowledge gaps are getting filled with reasonable-sounding but unverified assumptions. I will present a series of experiments (related to predictive text entry and AI-supported decision making) demonstrating that several of the key assumptions, upon which a lot of research projects and products rest, are wrong. I will then describe recent projects that build on corrected knowledge foundations and share some early promising results. I conclude with two calls to action for our field. First, we need to engage in critical technical practice, i.e., explicitly name, assess and correct (if necessary) the hidden assumptions of our field. Second, with Human-AI Interaction being a relatively new field but one that many people depend on, we need a greater investment in systematic production, synthesis and dissemination of reliable design knowledge for Human-AI Interaction.


10.09.24: Studying GenAI as a Cultural Technology: Provocations for Understanding the Cultural Entanglements of AI
Speaker: Rida Qadri
Abstract: This talk argues for the need to study Generative AI as a “cultural technology”, tracing the complex and multifaceted interactions of AI pipelines with human cultures. This entanglement necessitates that we both situate generative AI within a broader cultural context and be specific about its technological mechanisms. I bring up questions of what it might mean to historicize generative AI within a lineage of ‘cultural technologies’, like the camera and telegraph, and propose methods for studying generative AI that learn from the dynamic and contingent nature of these technologies' cultural impact. This talk shows how we can take a more historicized, deliberative, and interdisciplinary approach to grappling with GenAI’s role in our collective cultural worlds.

10.02.24: Algorithmic Governance: Auditing Online Systems for Bias and Misinformation
Speaker: Tanu Mitra
Abstract: Large-scale online systems are fundamental to how people consume information. The emergence of large language models and industrial applications like ChatGPT have further exacerbated people’s dependency on online systems as their primary information source. Yet, they pose significant epistemic risks characterized by the prevalence of harmful misinformation and biased content. The risks are further amplified by the algorithms powering these online platforms, be it YouTube’s video recommendation or generative AI powered LLM interfaces. How do we systematically investigate algorithmic bias and misinformation? How do we govern algorithmic systems to safeguard against problematic content? In this talk, I will present a series of algorithmic audit studies. The first one audits the search and recommendation algorithms of social platforms like YouTube and Amazon for misinformation, while the second audits LLMs for cultural bias, particularly in the context of the Global South. I will end with ideas for how we can develop effective long-term algorithmic governance, the challenges in doing so and the new governance challenges and opportunities that are emerging with the recent advances in the field of large language models.


09.25.24: AGI is Coming… Is HCI Ready?
Speaker: Meredith Ringel Morris
Abstract: We are at a transformational junction in computing, in the midst of an explosion in capabilities of foundational AI models that may soon match or exceed typical human abilities for a wide variety of cognitive tasks, a milestone often termed Artificial General Intelligence (AGI). Achieving AGI (or even closely approaching it) will transform computing, with ramifications permeating through all aspects of society. This is a critical moment not only for Machine Learning research, but also for the field of Human-Computer Interaction (HCI).
In this talk, I will define what I mean (and what I do NOT mean) by “AGI.” I will then discuss how this new era of computing necessitates a new sociotechnical research agenda on methods and interfaces for studying and interacting with AGI. For instance, how can we extend status quo design and prototyping methods for envisioning novel experiences at the limits of our current imaginations? What novel interaction modalities might AGI (or superintelligence) enable? How do we create interfaces for computing systems that may intentionally or unintentionally deceive an end-user? How do we bridge the “gulf of evaluation” when a system may arrive at an answer through methods that fundamentally differ from human mental models, or that may be too complex for an individual user to grasp? How do we evaluate technologies that may have unanticipated systemic side-effects on society when released into the wild?


09.18.24: Generative Agents: Interactive Simulacra of Human Behavior
Speaker: Michael Bernstein
Abstract: Effective models of human attitudes and behavior can empower applications ranging from immersive environments to social policy simulation. However, traditional simulations have struggled to capture the complexity and contingency of human behavior. I argue that modern artificial intelligence models allow us to re-examine this limitation. I make my case through generative agents: computational software agents that simulate believable human behavior. Generative agents enable us to populate an interactive sandbox environment inspired by The Sims, a small town of twenty-five agents. Our generative agent architecture empowers agents to remember, reflect, and plan. Extending my line of argument, I explore how we might reason about the accuracy of these models, and how modeling human behavior and attitudes can help us design more effective online social spaces, understand the societal disagreement underlying modern AI models, and better embed societal values into our algorithms.


09.04.24: Technopopulism and the Assault on Indian Democracy
Speaker: Joyojeet Pal
Abstract: The idea of technocracy in politics, which presents rational management of policy and administration as a means of legitimacy, has taken on a new populist logic in India’s digital age. In this talk, I argue that technology, and specifically the use of digital technology and its accompanying language of modernity, has been presented as an aspirational form of governance, and as a cover for charismatic leadership in the last three decades. I frame contemporary articulations of this Indian configuration of technopopulism within aspirations related to the technology industry and computing artifacts since the 1990s and trace its progress through the branding and public outreach of Indian politicians like Chandrababu Naidu and Narendra Modi. I propose that social media in particular has exacerbated the purchase of Indian technopopulism, in which a politician’s performance of the language of technological modernity is used to obfuscate underlying institutional capture. In conclusion, I discuss the ways in which technopopulism provides social elites normative cover for supporting a political system that works in their favor.

05.01.24: New Opportunities at the Intersection of Graphics, Vision, & HCI
Speaker: Abe Davis
Abstract: Imagine a world where any computational problem can be solved with enough of the right data. We may never quite live in that world, but recent trends are certainly bringing us closer to it. This raises an interesting question: when data offers a reliable solution, what parts of a problem remain hard? And what role should we as humans play in solving it? These questions provide broad motivation for much of the research in my group, which brings together expertise in computer graphics, vision, and human-computer interaction to explore new opportunities at the intersection of these fields. In this talk, I will discuss several projects and how they address three shifting roles that humans play in technology. The first set of work will focus on using Augmented Reality (AR) to create guided data capture systems. The second part will focus on developing new interactive tools for content creation. And finally, I will end by discussing ways to combat the use of these new content creation tools for spreading misinformation.

04.26.24: Reflections on Disinformation, Democracy, and Free Expression
Speaker: Dr. Kate Starbird
Host: Cornell Bowers CIS Distinguished Speaker Series
Abstract: Disinformation has become a hot topic in recent years. Depending upon the audience, the problem of pervasive deception online is viewed as a critical societal challenge, an overblown moral panic, or a smokescreen for censoring conservatives. Drawing upon empirical research of U.S. elections (2016 and 2020), in this talk, I’ll describe how disinformation “works” within online spaces, show how we’re all vulnerable to spreading it, and highlight three (interrelated) reasons why it’s such a difficult challenge to address. The first, noted by scholars and purveyors of disinformation across history, is that disinformation exploits democratic societies’ commitments to free expression. The second is that online disinformation is participatory, taking shape as collaborations between witting agents and unwitting crowds of sincere believers. And the third is that working to address disinformation is adversarial — i.e. the people who benefit from manipulating information spaces do not want that manipulation addressed. I’ll note how the latter has recently manifested as efforts that redefine “censorship” to include a broad range of activities — from academic research into online mis- and disinformation, to platform moderation, to information literacy programs — that are themselves speech. I’ll conclude by presenting a range of potential interventions for reducing the impact of harmful disinformation that respect and support free expression while also empowering people and platforms to be more resilient to exploitation.

04.12.24: Echo Chambers, Filter Bubbles, and Rabbit Holes: Measuring the Impact of Online Platforms
Speaker: Ronald Robertson
Abstract: Measuring both sides of users’ interactions with online platforms—both what users are shown and what users do—is a crucial yet often overlooked aspect in the evaluation of widespread concerns around filter bubbles, echo chambers, and rabbit holes. This talk provides an overview of several studies aimed at evaluating such concerns through an interdisciplinary set of approaches and a specific focus on web search engines (e.g., Google Search). These approaches include behavioral experiments, algorithm audits, and new types of digital trace data. We will primarily focus on the last approach in that list, which we used in a recent Nature paper to study the role of Google Search in spreading partisan and unreliable news. In that paper, we reframed “echo chambers” and “filter bubbles” as concerns about user choice and algorithmic curation, developed a browser extension to measure both exposure (what users were shown) and engagement (what users did), paired these measures with demographic surveys, and deployed our data collection tools during the 2018 and 2020 U.S. elections. In both study waves, we found that users’ engagement choices contained more identity-congruent and unreliable news sources than their exposure to such news within Google Search, especially for participants who identified as strong Republicans. To highlight the platform-agnostic nature of this approach to studying online behavior, we will also briefly cover a recent paper published in Science Advances that used the same approach to reframe questions around “rabbit holes” and examine the role that YouTube plays in the spread of alternative and extremist content. These projects add to the limited number of studies examining ecological exposure, provide support for the consistent finding that interactions with problematic content are rare and concentrated among a small number of individuals, and highlight the importance of measuring both user choice and algorithmic curation when studying the impact of online platforms.

03.27.24: Data Values: Digital Surveillance and the New Epistemology of Psychiatry
Speaker: Mira Vale
Abstract: Recent years have seen a surge of efforts to adapt machine learning techniques for healthcare. Data-intensive tools hold great potential to advance medical discovery and precision, but critics ask how these tools will affect care delivery, medical expertise, and health inequality. This talk investigates this transition within digital psychiatry, a field of research and patient care that uses machine learning and other data-intensive techniques to study mental illness and provide mental healthcare. Drawing on three years of ethnographic research and interviews with digital psychiatry researchers across the United States, I analyze how researchers develop data values: moral sentiments around objectivity, precision, quantification, and automation. While psychiatry has historically emphasized clinical judgment, digital psychiatry shifts the basis of professional authority in psychiatry by valorizing data. As digital psychiatrists seek to make psychiatry scientific, they privilege data modeling and devalue psychiatry’s traditional paradigms like clinical expertise and patients’ self-reports about their symptoms and experiences. Ultimately, I demonstrate how digital data enhances psychiatry’s capacity to produce knowledge while data values narrow psychiatry’s ways of knowing. Amidst calls for an “ethics of AI,” this talk sheds light on how ethics are enacted in practice as they become institutionalized as values, standardized as professional norms, and internalized as intuitions.

03.20.24: Artistic Vision: Interactive Computational Guidance for Developing Expertise
Speaker: Jane E
Abstract: Computer scientists have long worked towards the vision of human-AI collaboration for augmenting human capabilities and intellect. My work contributes to this vision by asking: How can computational tools not only help a user complete a task, but also help them develop their own domain expertise while doing so?
I investigate this question by designing new interactive tools for domains of artistic creativity. My work is inspired by the fact that expert artists have trained their eyes to “see” in ways that embed their expert domain knowledge—in this case, core artistic concepts. As instructors, experts have also designed approaches to intentionally communicate their vision to their students. My work designs creativity tools that leverage these expert structures to help novices develop this expert-like “artistic vision”—specifically through providing guidance to scaffold their design processes. In this talk, I will demonstrate my approach through tools for photography and visual design that embed such guidance and the underlying design principles. I will show that these tools are able to scaffold novices to be more aware of these artistic concepts during their creative process.

03.13.24: Why the First Amendment Protects Misinformation, and Why It Should Continue to Do So
Speaker: Jeff Kosseff
Host: Cornell Bowers CIS Distinguished Speaker Series
Abstract: From lies about vaccines to false claims that elections are rigged, misinformation poses serious challenges for the United States. But the First Amendment protects a great deal of speech that could broadly be considered to be misinformation. In this talk, Jeff Kosseff, a cybersecurity law professor at the United States Naval Academy and author of the recent book, Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation, argues that the First Amendment should continue to provide strong protections for false speech. The harms of misinformation, while substantial, pale in comparison to the potential abuse that would come with greater government control over speech. Kosseff argues that non-regulatory solutions, such as media literacy and revitalized local media, are preferable to increased censorship.


03.08.24: Operationalizing Responsible Machine Learning: From Equality Towards Equity
Speaker: Angelina Wang
Abstract: With the widespread proliferation of machine learning, there arises both the opportunity for societal benefit as well as the risk of harm. Approaching responsible machine learning is challenging because technical approaches may prioritize a mathematical definition of fairness that correlates poorly to real-world constructs of fairness due to too many layers of abstraction. Conversely, social approaches that engage with prescriptive theories may produce findings that are too abstract to effectively translate into practice. In my research, I bridge these approaches and utilize social implications to guide technical work. I will discuss three research directions that show how, despite the technically convenient approach of considering equality acontextually, a stronger engagement with societal context allows us to operationalize a more equitable formulation. First, I will introduce a dataset tool that we developed to analyze complex, socially-grounded forms of visual bias. Then, I will provide empirical evidence to support how we should incorporate societal context in bringing intersectionality into machine learning. Finally, I will discuss how in the excitement of using LLMs for tasks like human replacement, we have neglected to consider the importance of human positionality. Overall, I will explore how we can expand a narrow focus on equality in responsible machine learning to encompass a broader understanding of equity that substantively engages with societal context.


03.06.24: Making Better Decisions with Human-AI Teams
Speaker: Hussein Mozannar
Abstract: AI systems, including large language models (LLMs), are augmenting the capabilities of humans in settings such as healthcare and programming. I first showcase preliminary evidence of the productivity gains of LLMs in programming tasks. To identify opportunities for model improvements, I developed a taxonomy of how programmers interact with a popular LLM extension, GitHub Copilot. This taxonomy reveals how much programming behavior has changed; in particular, time spent verifying LLM suggestions dominates other activities. I then show how we can leverage human feedback to improve the interaction. A key question is how the human knows when to rely on the AI or when to ignore its suggestions. I propose an onboarding procedure that allows users to have an accurate mental model of the AI for effective collaboration. However, in other settings where human resources are limited (healthcare), we might have to deploy AI selectively without human oversight. I show how to design AI systems that can predict on their own or defer the decision to humans when best to do so. Finally, I discuss how this line of work can build up to a vision of re-imagining human workflows with AI-enabled operating systems.

03.01.24: Generative AI for the Cyberphysical World
Speaker: Andrew Spielberg
Abstract: Complex cyberphysical systems are increasingly part of our modern world, in home robotics, heavy machinery, medical devices, and beyond. The behavior of these systems is jointly driven by their components, form, and on-board artificial intelligence. As computing and advanced manufacturing techniques expand the types of systems we can build and what built systems around us can do, we require design tools that cut through increasingly complex, often intractable possibilities. Those tools should be accurate, optimizing, explorative, and enable physical realization, with the goal of ideating and fabricating machines that approach the diversity and capability of biological life.
In this talk, I will discuss solutions for co-designing dynamical cyberphysical systems over their physical morphology and embodied artificial intelligence. In particular, I will discuss efficient methods for co-optimizing and co-learning morphology and control, digital fabrication methods that leverage spatially programmable materials for function, and data-driven modeling for overcoming the sim-to-real gap. These methods will be tied together in a vision for computational invention.

02.28.24: Content Curation in Online Platforms
Speaker: Manoel Horta Ribeiro
Abstract: Online platforms like Facebook, Wikipedia, Amazon, and LinkedIn are embedded in the very fabric of our society. They “curate” content: they moderate, recommend, and monetize it, and, in doing so, can impact people’s lives positively or negatively. This talk will highlight the need to go beyond how these curation practices are currently designed and tested. I will argue that academic research can and should guide policy and best practices by discussing two projects I worked on during my doctorate.
First, I will describe a large natural experiment on Facebook that allowed measuring the causal effect of removing rule-breaking comments on users’ subsequent behavior. Second, I will present results on the efficacy of “deplatforming” Parler, a large social media website, on its users’ information diets. Finally, I will discuss future research directions on improving online platforms, emphasizing the opportunities and challenges posed by the popularization of generative AI. Altogether, my work indicates that we can improve online platforms—and, by extension, our lives—if we rigorously investigate the causal effect of content curation practices.


02.23.24: Augment Human Thought and Creativity with the Power of AR and AI
Speaker: Ryo Suzuki
Abstract: Today's AI interfaces are predominantly confined to computer screens, leaving current AI systems unable to engage with and respond to our physical world. My research goal is to shift this paradigm towards a real-world-oriented human-AI interaction, where AI is blended into our everyday lives by seamlessly integrating augmented reality (AR) and AI interfaces. Toward this goal, I have explored three research directions: 1) Machine learning-driven ubiquitous tangible interfaces: transforming everyday objects into ubiquitous tangible interfaces via AR-integrated interactive machine learning, 2) AI-powered interactive content creation: converting static augmented reality content into interactive content with AI-powered automated content generation, and 3) AI-mediated augmented communication: enhancing human-to-human communication through an AI-mediated augmented reality assistant. Building upon these themes, I also outline my future research directions that focus on incorporating recent advances in generative AI, large language models, and advanced computer vision models into intelligent augmented reality interfaces. In the long run, I believe this seamless integration of AR and AI will significantly enhance human thought and creativity, as it allows us to think and collaborate through our entire body and space, rather than confining ourselves to small rectangular screens. My vision is that the future of computers and AI will no longer be a “tool” for thought, but rather a dynamic “space” for thought, where people can live inside and think through tangible and spatial exploration, just like what we do in a science museum today. Toward this vision, I discuss how the future of computing could transform our entire living world into a dynamic space for thought with the power of AR and AI.

02.21.24: Dynamic Allocation of Scarce Resources
Speaker: Afshin Nikzad
Abstract: In many matching problems, such as the allocation of organs and public housing, agents are matched to resources over time. A common question in such problems is whether to match agents quickly to lower waiting times, or slowly and more carefully to make more or higher-quality matches. I study this tradeoff first in kidney exchanges, where the participants’ information is easily observable or elicitable by the planner, and second in matching markets where there is hidden information. A greedy policy, which attempts to match agents upon arrival, ignores the benefit that waiting agents provide by facilitating future matchings. However, I prove that in kidney exchanges the tradeoff between a “thicker” market and faster matching vanishes in large markets: the greedy policy leads to shorter waiting times and more agents matched than any other policy. I also empirically confirm this in data from the National Kidney Registry. Greedy matching achieves as many transplants as commonly used policies (1.8% more than monthly batching) and shorter waiting times (16 days faster than monthly batching). This conclusion can change in markets where the agents have private information about their willingness-to-wait for higher quality matches. I will discuss optimal solutions through information design, where we discover tradeoffs between distributional equality and allocative efficiency.

02.14.24: High-stakes decisions from low-quality data: Learning and planning for wildlife conservation
Speaker: Lily Xu
Abstract: Wildlife poaching pushes countless species to the brink of extinction, with animal population sizes declining by an average of 70% since 1970. To aid rangers in preventing poaching in protected areas around the world, we have developed a machine learning system to predict poaching hotspots and plan ranger patrols. In this talk, we present technical advances in multi-armed bandits and robust reinforcement learning, guided by research questions that emerged from on-the-ground challenges in deploying this system. We also discuss bridging the gap between research and practice, from field tests in Cambodia to large-scale deployment through integration with SMART, the leading platform for protected area management used by over 1,200 wildlife parks worldwide.


01.31.24: Designing Reliable Human-AI Interactions: Translating Languages and Matching Students
Speaker: Niloufar Salehi
Abstract: How can users trust an AI system that fails in unpredictable ways? Machine learning models, while powerful, can produce unpredictable results. This uncertainty becomes even more pronounced in areas where verification is challenging, such as in machine translation, and where reliance depends on adherence to community values, such as student assignment algorithms. Providing users with guidance on when to rely on a system is challenging because models can create a wide range of outputs (e.g. text), error boundaries are highly stochastic, and automated explanations themselves may be incorrect. In this talk, I will first focus on the case of health-care communication to share approaches to improving the reliability of ML-based systems by guiding users to gauge reliability and recover from potential errors. Next, I will focus on the case of student assignment algorithms to examine modeling assumptions and perceptions of fairness in AI systems.


01.24.24: Algorithmic Governance
Speaker: Sarah Cen
Abstract: Over the past several years, we have begun facing questions of algorithmic governance: the process of deciding when and how we should regulate algorithms. Algorithmic governance is a rich area of research that has both societal and operational significance, as it will determine not only how algorithms are permitted to intervene on our lives, but also how organizations are permitted to develop and deploy algorithms. In this talk, I will discuss three components of algorithmic governance, then illustrate them through a case study on social media regulation. Within the context of social media, I will focus on how social media platforms filter (or curate) the content that users see. I will demonstrate a way to operationalize regulations on algorithmic filtering that is mindful of the legal landscape on social media. I will further show that operationalizing such regulations does not necessarily impose a performance cost on social media platforms.