Conversational agents (CAs) such as Alexa and Siri are designed to answer questions, offer suggestions – and even display empathy.
But these agents are powered by large language models (LLMs) that ingest massive amounts of human-produced data, and so they can be prone to the same biases as the humans who produced that data.
Researchers from Cornell Tech, the Cornell Ann S. Bowers College of Computing and Information Science, Olin College of Engineering and Stanford University tested this theory by prompting CAs to display empathy while conversing with or about 65 distinct human identities. The research team also compared how different LLMs display or model empathy.