Tarleton Gillespie is a senior principal researcher at Microsoft Research, and an affiliated associate professor in the Department of Communication and Department of Information Science at Cornell University. He is the author of Wired Shut: Copyright and the Shape of Digital Culture (MIT, 2007), co-editor of Media Technologies: Essays on Communication, Materiality, and Society (MIT, 2014), and author of Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale, 2018).
Talk: The Hidden Normativities of Generative AI
Abstract: Generative AI tools like OpenAI's ChatGPT, Microsoft's Bing Chat, and Google's Bard promise to create content for us, from scratch, using large language models trained on massive corpora of content from the Internet and other digital archives. Much of what has been said beyond that is hype. But it is conceivable that generative AI could become a substantial source of creative work - either as the author itself or by providing fodder to those who create. In other words, what if, alongside the authors and journalists and songwriters and commentators, AI tools too will "generate" the content of our cultural landscape? If so, then alongside questions of responsible AI, we must consider the concerns central to the study of media representation, industries, and publics. In addition to the allocative harms of algorithmic systems, we must also ask about representational harms (Katzman et al, 2021).
I will share a modest study that investigates to what degree, if at all, generative AI tools produce non-normative cultural representations when contested cultural categories like gender, sexuality, race, and class are left underspecified. What kinds of stories do they tend to generate - not once, but in the aggregate? If generative AI may substantially populate our public discourse, dependent as it is on the contours of the past public discourse on which it was trained, how will it reproduce or undercut the normative tendencies of that discourse?