A color graphic with a purple background and white microchip lines

Cornell’s American Indian and Indigenous Studies Program and the Redistributive Computing Systems Group (RCSG) will present a series of talks this Friday exploring the intersection of Indigenous worldviews and computational technologies.

“Indigenous Computing” will be held 1:30 to 4:30 p.m. Friday, April 28, in Gates Hall 114, with a virtual attendance option via Zoom. The event includes talks from Indigenous people working in computer science, information science, and genetics. Registration is encouraged.

“Indigenous people need technologies designed with, by, and in support of our unique lived experiences, identities, knowledge, beliefs, and politics,” said Marina Johnson-Zafiris, a member of the Mohawk Nation and a doctoral student in the field of information science. “As we will see through the speakers, this materializes in different ways – through our interventions, as models, as apps, as protocols – each of them representing our own understandings of Native sovereignty and Indigenous futures.”

Johnson-Zafiris will present “Computing Along the Two Row,” a reference to the treaty between the Haudenosaunee and European colonial settlers recorded in a purple and white beaded belt called the Two Row wampum belt.

Western science has been incredibly successful by adopting a universalist approach – traditionally, good scientists check their identity at the door, said Christopher Csíkszentmihályi, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science and RCSG director.

“While this works well for physics, it creates many problems when science rubs up against the human realm,” he said. “Applied technology like computation benefits when diverse perspectives and experiences are mindfully brought to bear in its creation and implementation.”

“One of the central motivations in the creation of this event is to highlight Native people in the field of computing, a space we have largely been invisibilized,” Johnson-Zafiris said. “Our work provides critical interventions that center principles of relationship and accountability, principles that must come to the forefront of computer and information science research.”

The full schedule is as follows:

1:30 p.m. – Welcome to “Indigenous Computing” from Troy Richardson (Saponi/Tuscarora), associate professor of philosophy of education and American Indian and Indigenous Studies at Cornell

1:45 p.m. – “Computing Along the Two Row” by Marina Johnson-Zafiris (Mohawk), doctoral student in the field of information science at Cornell

2:15 p.m. – “Indigenous Language AI” by Daniela Ramos Ojeda (Nahua) ’25, a computer science major at Cornell

2:45 p.m. – “Developing a Nahuatl Language Translator” by Eduardo Lucero, professor at the Tecnologico Nacional de Mexico, Apizaco, and Sergio Khalil Bello García, senior iOS software engineer at Bitso

3:15 p.m. – “Cultivating Connection: Understanding Relatedness and Kin within Maize Quantitative Genetics” by Merritt Khaipho-Burch (Oglala), doctoral student in the field of plant breeding and genetics at Cornell

3:45 p.m. – “Modeling Dispossession, Youth Homelessness, and Integrated Climate Assessments with and for Indigenous Communities” by Mike Charles (Diné), Cornell Provost’s New Faculty Fellow and incoming assistant professor of biological and environmental engineering

4 p.m. – Closing comments on Indigenous protocol

4:20 p.m. – Coffee and extended discussions

Date Posted: 4/26/2023
A color photo of the trash barrel robot

How do New Yorkers, who are not known for their politeness, react to robots that approach them in public looking for trash? Surprisingly well, actually.

Cornell researchers built and remotely controlled two trash barrel robots – one for landfill waste and one for recycling – at a plaza in Manhattan to see how people would respond to the seemingly autonomous robots. Most people welcomed them and happily gave them trash, though a minority found them to be creepy. The researchers now have plans to see how other communities behave. If you’re a resident of New York City, these trash barrel robots may be coming soon to a borough near you.

A team led by Wendy Ju, associate professor at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a member of the Department of Information Science in the Cornell Ann S. Bowers College of Computing and Information Science, constructed the robots from a blue or gray barrel mounted on recycled hoverboard parts. They equipped the robots with a 360-degree camera and operated them using a joystick. 

“The robots drew significant attention, promoting interactions with the systems and among members of the public,” said co-author Frank Bu, a doctoral student in the field of computer science. “Strangers even instigated conversations about the robots and their implications.”

Bu and Ilan Mandel, a doctoral student in the field of information science, presented the study, “Trash Barrel Robots in the City,” in the video program at the ACM/IEEE International Conference on Human-Robot Interaction last month.

In the video footage and interviews, people expressed appreciation for the service the robots provided and were happy to help move them when they got stuck, or to clear away chairs and other obstacles. Some people summoned the robot when they had trash – waving it like a treat for a dog – and others felt compelled to “feed” the robots waste when they approached.  

However, several people voiced concerns about the cameras and public surveillance. Some raised middle fingers to the robots and one person even knocked one over.

People tended to assume that the robots were “buddies” who were working together, and some expected them to race each other for the trash. As a result, some people threw their trash into the wrong barrel.

This type of research, where a robot appears autonomous but people are controlling it from behind the scenes, is called a Wizard of Oz experiment. It’s helpful during prototype development because it can alert researchers to potential problems autonomous robots are likely to encounter when interacting with humans in the wild.

Ju had previously deployed a trash barrel robot on the Stanford University campus, where people had similarly positive interactions. For New York City, she had initially envisioned new types of mobile furniture, such as chairs and coffee tables.

“When we shared with them the trash barrel videos that we had done at Stanford, all discussions of the chairs and tables were suddenly off the table,” Ju said. “It’s New York! Trash is a huge problem!”

Now, Ju and her team are expanding their study to encompass other parts of the city. “Everyone is sure that their neighborhood behaves very differently,” Ju said. “So, the next thing that we're hoping to do is a five boroughs trash barrel robot study.” 

Michael Samuelian, director of the Urban Tech Hub at Cornell Tech, has helped the team make contact with key partners throughout the city for the next phase of the project.

Wen-Ying “Rei” Lee, a doctoral student in the field of mechanical and aerospace engineering, also contributed to the study.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/19/2023
A color photo showing young people using their cell phones

Youth in the United States are targets of cross-platform digital abuse from peers, strangers, offline acquaintances and even relatives, with threats ranging from harassment and sexual violence to financial fraud, according to a new collaborative study and call-to-action from Cornell and Google researchers.

Aided by firsthand accounts from 36 youth aged 10 to 17 and 65 parents, educators, social workers and other youth advocates, researchers identified the need for more resources to educate youth and parents on digital abuse. They call for better communication and coordination among adult stakeholders in implementing sound protective practices.

The study also calls for human-computer interaction (HCI) scholars to study and develop better tools to safeguard youth online, where nearly half of American teenagers experience some form of digital abuse, according to Pew Research.

“We really need to take a closer look at the types of things that young people are experiencing online, because these experiences are not just child problems anymore,” said Diana Freed, a doctoral student in the field of information science and lead author of “Understanding Digital-Safety Experiences of Youth in the U.S.,” which will be presented at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in Hamburg, Germany this month. “Young people are experiencing what are typically thought of as adult issues, like financial fraud and sexual violence.”

Youth in the study reported being harassed online by peers, intimate partners, acquaintances and strangers. Harassment could involve fielding toxic comments or having fake social media accounts set up without their authorization, but could also shift into more serious forms of digital abuse – like receiving intimate images they didn’t request – or escalate into threats in the physical world.

“Once your nudes get sent out, you’re done. It’s going to spread,” a youth said in the study. “I’ve seen videos spread from state to state in literally five minutes.”

“She told me she needed money for her child,” said another, detailing a financial scam. “I gave out my bank card and also my online banking code. When I stopped, she started harassing and threatening me.”

Just as today’s youth live and seamlessly move between offline and online worlds, threats often follow them from platform to platform, said Natalie Bazarova, M.S. ’05, Ph.D. ’09, professor of communication in the College of Agriculture and Life Sciences and director of the Cornell Social Media Lab.

“The porousness of barriers between digital platforms and online and physical worlds underscores how easily threats can escalate by crossing social contexts and amplifying harms,” she said.

While kids navigate complex and sometimes risky digital lives, for parents and educators alike, there are few formal options for support and resources to educate themselves and kids on potential online harms, researchers found.

“Whether it was the teachers or the parents, they didn’t really understand exactly what social media applications young people were using, let alone how to address the problems,” Freed said.

In many instances, parents’ knowledge about the platforms their kids frequent was limited to information pulled from quick web searches or conversations with friends – far from ironclad sources, she added.

“Some parents would tell us, ‘Online gaming is very safe, but a particular social media app is not safe.’ But is there an open chat on the gaming platform? Can anyone join it? Do you know who your kids are communicating with?” Freed said. “Well-meaning parents can have a very difficult time understanding what questions to ask their kids to improve safety.”

Among their recommendations, researchers call for better educational resources, such as more robust digital safety educational programs in schools and more accessible, actionable resources, like Common Sense Education’s Digital Citizenship curriculum for educators and Social Media Test Drive, a Cornell-led project that Bazarova co-founded and directs. Other recommendations include engaging youth in app and platform design and improving digital-abuse reporting processes on the online apps and platforms young people frequent.

“We may assume, because they’re digital natives, that kids will just know how to protect themselves online,” Freed said. “That’s leaving a lot on young people, families and schools.”

Other co-authors are: Eunice Han ’21; Sunny Consolvo, Patrick Gage Kelley, and Kurt Thomas, all of Google; and Dan Cosley of the National Science Foundation.

This research was supported in part by the National Science Foundation and the USDA’s National Institute of Food and Agriculture.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/18/2023
A color photo of someone presenting a project to another person

After a three-year hiatus, Bits On Our Minds (BOOM), a showcase of cutting-edge digital technology projects created by Cornell students, returns to campus for its 25th anniversary. The event will be held 4-6 p.m. on Thursday, April 27, in the Duffield Hall atrium.

BOOM offers student teams and individuals the opportunity to showcase their projects, which will include games, robotics, autonomous vehicles, mobile phone apps, and more, to the Cornell community and beyond. The Cornell Ann S. Bowers College of Computing and Information Science is sponsoring the event, but BOOM is open to participants from across the university, and the wider Ithaca community is invited to attend.

The first BOOM took place in 1998, making this year the 25th anniversary of the event. Due to the COVID-19 pandemic, however, the event was paused from 2020 to 2022.

A color graphic showing the 2023 BOOM logo

BOOM provides the rare opportunity for students to network with representatives from industry and receive feedback on their work. Several corporate sponsors will be in attendance and participants are encouraged to hone their elevator pitches and meet with sponsors  in a reception following the event. This year, the sponsors include Boeing, Sandia National Labs, LinkedIn, Goldman Sachs, EY, Air Liquide and Pepsi.

Teams will also be competing for cash awards, with projects judged on their novelty, performance, quality of engineering, difficulty, social benefits and presentation. Teams selected by the judges will receive a commemorative trophy and $750.

BOOM is free and open to the public.

Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story was originally published in the Cornell Chronicle.

Date Posted: 4/17/2023
A color photo of an autonomous bus in Linköping, Sweden

The town of Linköping, Sweden, has a small fleet of autonomous electric buses that carry riders along a predetermined route. The bright vehicles, emblazoned with the tagline, “Ride the Future,” have one main problem: Pedestrians and cyclists regularly get too close, causing the buses to brake suddenly, and making riders late for work.

Researchers saw this problem as an opportunity to design new ways of using sound to help autonomous vehicles navigate complex social situations in traffic. Sound remains underexplored as a tool for enabling autonomous vehicles and robots to interact with humans and with each other.

The research team found that jingles and beeps effectively move people out of the way. But more importantly, they discovered it’s the timing of the sound – not the sound itself – that allows the bus to meaningfully communicate with people in traffic.

“If we want to create sounds for social engagement, it's really about shifting the focus from ‘what’ sound to ‘when’ sound,” said study co-author Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS).

Lead author Hannah Pelikan, a recent visiting scholar in the Department of Information Science at Cornell Bowers CIS and doctoral student at Linköping University, presented their study, “Designing Robot Sound-In-Interaction: The Case of Autonomous Public Transport Shuttle Buses,” on March 15 at the 2023 ACM/IEEE International Conference on Human-Robot Interaction. The work received a nomination for the best design paper award.

The researchers designed potential bus sounds through an iterative process: They played sounds through a waterproof Bluetooth speaker on the outside of the bus, analyzed video recordings of the resulting interactions, and used that information to select new sounds to test. Either the researchers or a safety driver, who rides along in case the bus gets stuck, triggered the sounds to warn pedestrians and cyclists.

Initially, the researchers tried humming sounds that became louder as people got closer, but low-pitched humming blended into the road noise and a high-pitched version irritated the safety drivers. The repeated sound of a person saying “ahem” was also ineffective. 

They found that “The Wheels on the Bus” and a similar jingle successfully signaled cyclists to clear out before the brakes engaged. The song also elicited smiles and waves from pedestrians, possibly because it reminded them of an ice cream truck, and may be useful for attracting new riders, they concluded.

Standard vehicle noises – beeps and dings – also worked to grab people’s attention; repeating or speeding up the sounds communicated that pedestrians needed to move farther away.

In analyzing the videos, Pelikan and Jung saw that regardless of which sound they played, the timing and duration were most important for signaling the bus’ intentions – just as the honk of a car horn can be a warning or a greeting. A sound that is too late can become incomprehensible, and is ignored as a result.

These insights came from applying conversation analysis, an interdisciplinary approach influenced by sociology, anthropology, and interactional linguistics, which has not been used previously for robot sound design. By transcribing the pedestrians’ reactions in the video recordings in great detail, the researchers were able to see the moment-by-moment impact of the sounds during a traffic interaction.

“We looked very much at the interaction component,” Pelikan said. “How can sound help to make a robot, bus, or other machine explainable in some way, so you immediately understand?”

The study’s approach represents a new way of designing sound that is applicable to any autonomous system or robot, the researchers said. While most sound designers work in quiet labs and create sounds to convey specific meanings, this approach uses the bus as a laboratory to test how people will respond to the sounds in the wild.

“We’ve approached sound design all wrong in human-robot interaction for the past decades,” Jung said. “We wanted to really rethink this and bring in a new perspective.”

Pelikan and Jung said their findings also underline another important factor for  autonomous vehicle design: Traffic is a social phenomenon. While societies may have established rules of the road, people are constantly communicating through their horns, headlights, turn signals and movements. Pelikan and Jung want to give autonomous vehicles a better way to participate in the conversation.

The research received funding from the Swedish Research Council and the National Science Foundation.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

This story was originally published in the Cornell Chronicle.

Date Posted: 4/17/2023
A color photo of a man and woman standing outside

Social media companies need content moderation systems to keep users safe and prevent the spread of misinformation, but these systems are often based on Western norms, and unfairly penalize users in the Global South, according to new research at Cornell.

Farhana Shahid, a doctoral student in the field of information science in the Cornell Ann S. Bowers College of Computing and Information Science, who led the research, interviewed people from Bangladesh who had received penalties for violating Facebook’s community standards. Users said the content moderation system frequently misinterpreted their posts, removed content that was acceptable in their culture and operated in ways they felt were unfair, opaque and arbitrary.

Shahid said existing content moderation policies perpetuate historical power imbalances that existed under colonialism, when Western countries imposed their rules on countries in the Global South while extracting resources.

“Pick any social media platform and their biggest market will be somewhere in the East,” said co-author Aditya Vashistha, assistant professor of information science in Cornell Bowers CIS.  “Facebook is profiting immensely from the labor of these users and the content and data they are generating. This is very exploitative in nature, when they are not designing for the users, and at the same time, they’re penalizing them and not giving them any explanations of why they are penalized.”

Shahid will present their work, “Decolonizing Content Moderation: Does Uniform Global Community Standard Resemble Utopian Equality or Western Power Hegemony?” in April at the Association for Computing Machinery (ACM) CHI Conference on Human Factors in Computing Systems.

Even though Bengali is the sixth most common language worldwide, Shahid and Vashistha found that content moderation algorithms performed poorly on Bengali posts. The moderation system flagged certain swear words in Bengali while allowing the same words in English. The system also repeatedly missed important context: when one student joked “Who is willing to burn effigies of the semester?” after final exams, his post was removed on the grounds that it might incite violence.

Another common complaint was the removal of posts that were acceptable in the local community but violated Western values. When a grandmother affectionately called a child with dark skin a “black diamond,” the post was flagged for racism, even though Bangladeshis do not share the American concept of race. In another instance, Facebook deleted a 90,000-member group that provided support during medical emergencies because it shared personal information – phone numbers and blood types included in members’ emergency blood donation requests.

The researchers also found inconsistent moderation of religious posts. One user felt the removal of a photo of the Quran lying in the lap of a Hindu goddess with the words, “No religion teaches to disrespect the holy book of another religion,” was Islamophobic. But another user said he reported posts calling for violence against Hindus and was notified the content did not violate community standards.

The restrictions imposed by Facebook had real-life consequences. Several users were barred from their accounts – sometimes permanently – resulting in lost photos, messages and online connections. People who relied on Facebook to run their businesses lost income during the restrictions, and some activists were silenced when opponents maliciously and incorrectly reported their posts.

Participants reported feeling “harassed,” and frequently did not know which post violated the community guidelines, or why it was offensive. Facebook does employ some local human moderators to remove problematic content, but the arbitrary flagging led many users to assume that moderation was entirely automatic. Several users were embarrassed by the public punishment and angry that they could not appeal, or that their appeal was ignored.

“Obviously, moderation is needed, given the amount of bad content out there, but the effect isn’t equally distributed for all users,” Shahid said. “We envision a different type of content moderation system that doesn’t penalize people, and maybe takes a reformative approach to better educate the citizens on social media platforms.”

Instead of a universal set of Western standards, Shahid and Vashistha recommended that social media platforms consult with community representatives to incorporate local values, laws and norms into their moderation systems. They say users also deserve transparency regarding who or what is flagging their posts and more opportunities to appeal the penalties.

“When we’re looking at a global platform, we need to examine the global implications,” Vashistha said. “If we don’t do this, we’re doing grave injustice to users whose social and professional lives are dependent on these platforms.”

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/13/2023
A color photo of a man smiling for a photo

Cornell Ann S. Bowers College of Computing and Information Science announced the appointment of David Mimno as chair of the Department of Information Science, effective January 1, 2024.  

“I am delighted to welcome David as the next chair,” said Kavita Bala, dean of Cornell Bowers CIS. “His dedication to the department and expertise make him the ideal candidate to continue the exciting trajectory of our growing IS department. I look forward to working with him to advance the missions of the department and college.”

Mimno has been a valued member of Cornell’s faculty for nearly a decade and has made numerous positive contributions to the academic community. He holds a Ph.D. from the University of Massachusetts, Amherst, and was previously the head programmer at the Perseus Project at Tufts University and a researcher at Princeton University. His machine learning research has been supported by the National Endowment for the Humanities and the National Science Foundation (NSF). In 2016, he was awarded the prestigious Sloan Research Fellowship from the Alfred P. Sloan Foundation, and in 2017, the NSF’s Faculty Early Career Development (CAREER) Award.

“I got into machine learning research because I saw how transformational computation could be in giving people new ways to connect with the world,” said Mimno. “I joined Cornell’s Department of Information Science because I found a community of people who shared this vision of technology and society. As computation becomes ever more present in every aspect of our lives, with all the good and bad effects that it could have, I can't imagine a more exciting and important time to lead this department.”

Mimno succeeds David Williamson, professor in the School of Operations Research and Information Engineering, who has held the role since July 2021 and is extending his appointment until the end of the year.

Date Posted: 4/13/2023
A photo collage showing eyeglasses and a man wearing them

It may look like Ruidong Zhang is talking to himself, but in fact the doctoral student in the field of information science is silently mouthing the passcode to unlock his nearby smartphone and play the next song in his playlist.

It’s not telepathy: It’s the seemingly ordinary, off-the-shelf eyeglasses he’s wearing, called EchoSpeech – a silent-speech recognition interface that uses acoustic-sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.

Developed by Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab, the low-power, wearable interface requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone, researchers said.

Zhang is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.

In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and a mouse.

Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm, also developed by SciFi Lab researchers, then analyzes these echo profiles in real time, with about 95% accuracy.
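
In broad strokes, active acoustic sensing of this kind works by emitting an inaudible sweep from a tiny speaker and cross-correlating what the microphones pick up against that emitted signal, so reflections off the lips and jaw appear as peaks at different delays. The Python sketch below illustrates only that general idea and is not the SciFi Lab’s actual pipeline; the sample rate, sweep frequencies, frame length and function names are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of active acoustic sensing, not the EchoSpeech pipeline:
# emit an inaudible chirp, cross-correlate the microphone signal against it,
# and treat the correlation magnitudes as an "echo profile" whose peaks shift
# as the lips and jaw move. All parameters here are assumptions.

FS = 48_000        # assumed sample rate (Hz)
CHIRP_MS = 12      # assumed chirp duration (ms)

def make_chirp(f0=18_000.0, f1=21_000.0, fs=FS, ms=CHIRP_MS):
    """Linear frequency sweep, above most people's hearing range."""
    t = np.arange(int(fs * ms / 1000)) / fs
    k = (f1 - f0) / (ms / 1000)                  # sweep rate (Hz per second)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

def echo_profile(mic_frame, chirp):
    """Cross-correlate one audio frame with the chirp.

    Each index of the result corresponds to a round-trip delay; stacking
    profiles over successive frames yields a 2-D map that a small neural
    network could classify into silent commands.
    """
    corr = np.correlate(mic_frame, chirp, mode="valid")
    return np.abs(corr) / (np.linalg.norm(chirp) + 1e-9)

if __name__ == "__main__":
    chirp = make_chirp()
    mic_frame = np.random.randn(3 * len(chirp))  # stand-in for real microphone audio
    profile = echo_profile(mic_frame, chirp)
    print(profile.shape)                         # one value per candidate delay
```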

“We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science and director of the SciFi Lab.

“We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”

The SciFi Lab has developed several wearable devices that track body, hand and facial movements using machine learning and wearable, miniature video cameras. Recently, the lab has shifted away from cameras and toward acoustic sensing to track face and body movements, citing improved battery life; tighter security and privacy; and smaller, more compact hardware. EchoSpeech builds off the lab’s similar acoustic-sensing device called EarIO, a wearable earbud that tracks facial movements.

Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible, Cheng Zhang said. There also are major privacy concerns involving wearable cameras – for both the user and those with whom the user interacts, he said.

Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time, said François Guimbretière, professor in information science in Cornell Bowers CIS and a co-author.

“And because the data is processed locally on your smartphone instead of uploaded to the cloud,” he said, “privacy-sensitive information never leaves your control.”

Battery life improves dramatically, too, Cheng Zhang said: 10 hours with acoustic sensing versus 30 minutes with a camera.

The team is exploring commercializing the technology behind EchoSpeech, thanks in part to Ignite: Cornell Research Lab to Market gap funding.

In forthcoming work, SciFi Lab researchers are exploring smart-glass applications to track facial, eye and upper body movements.

“We think glass will be an important personal computing platform to understand human activities in everyday settings,” Cheng Zhang said.

Other co-authors were information science doctoral student Ke Li, Yihong Hao ’24, Yufan Wang ’24 and Zhengnan Lai ’25. This research was funded in part by the National Science Foundation.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/06/2023
A color graphic with the text "Artificial Intelligence" "AI" with futuristic background

People have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool, a group of Cornell researchers has found.

Postdoctoral researcher Jess Hohenstein, M.S. ’16, M.S. ’19, Ph.D. ’20, is lead author of “Artificial Intelligence in Communication Impacts Language and Social Relationships,” published April 4 in Scientific Reports.

Co-authors include Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), and Rene Kizilcec, assistant professor of information science (Cornell Bowers CIS).

Generative AI is poised to impact all aspects of society, communication and work. Every day brings new evidence of the technical capabilities of large language models (LLMs) like ChatGPT and GPT-4, but the social consequences of integrating these technologies into our daily lives are still poorly understood.

AI tools have potential to improve efficiency, but they may have negative social side effects. Hohenstein and colleagues examined how the use of AI in conversations impacts the way that people express themselves and view each other.

“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” Jung said. “We do not live and work in isolation, and the systems we use impact our interactions with others.”

In addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.

“I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” Hohenstein said. “This illustrates the persistent overall suspicion that people seem to have around AI.”

For their first experiment, co-author Dominic DiFranzo, a former postdoctoral researcher in the Cornell Robots and Groups Lab and now an assistant professor at Lehigh University, developed a smart-reply platform the group called “Moshi” (Japanese for “hello”), patterned after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, unveiled in 2016. Smart replies are generated from LLMs to predict plausible next responses in chat-based interactions.
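
For readers unfamiliar with the smart-reply pattern the study builds on, the general flow is straightforward: after each incoming message, a language model proposes a few short candidate responses, and the sender either taps one or types their own. The Python sketch below illustrates only that generic flow; `Message`, `suggest_replies` and `compose_reply` are hypothetical stand-ins, not the actual implementation of Moshi or Allo.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

def suggest_replies(history: list[Message], n: int = 3) -> list[str]:
    """Placeholder for a language-model call that proposes n short replies.

    A real system would condition on the full conversation; this stub only
    looks at the last message so the sketch stays self-contained.
    """
    last = history[-1].text.strip().lower() if history else ""
    if last.endswith("?"):
        return ["Good question.", "I think so.", "I'm not sure yet."][:n]
    return ["Sounds good.", "Tell me more.", "I see your point."][:n]

def compose_reply(history: list[Message], typed: str | None = None) -> Message:
    """Offer smart replies, falling back to whatever the user typed."""
    options = suggest_replies(history)
    text = typed if typed is not None else options[0]
    return Message(sender="me", text=text)

if __name__ == "__main__":
    history = [Message("partner", "Should the city invest more in public transit?")]
    print(suggest_replies(history))   # candidate replies shown to the user
    print(compose_reply(history))     # user accepts the first suggestion
```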

A total of 219 pairs of participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.

The researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).

But participants suspected by their partners of responding with smart replies were evaluated more negatively than those thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.

In a second experiment, 299 randomly assigned pairs of participants were asked to discuss a policy issue in one of four conditions: no smart replies; the default Google smart replies; smart replies with a positive emotional tone; and ones with a negative emotional tone. The presence of positive and Google smart replies caused conversations to have more positive emotional tone than conversations with negative or no smart replies, highlighting the impact that AI can have on language production in everyday conversations.

“While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”

Said Jung: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”

Other co-authors included Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science at Cornell Bowers CIS; Karen Levy, associate professor of information science (Cornell Bowers CIS); and collaborators from Lehigh University and Stanford University.

This work was supported by the National Science Foundation.

By Tom Fleischman, Cornell Chronicle

This story was originally published in the Cornell Chronicle.

Date Posted: 4/06/2023
A color photo of a woman smiling for a photo

Jenny Fu has had a longstanding interest in emotions. “I’m fascinated by the idea of what makes people happy,” said Fu, a third-year doctoral student in the field of information science in the Cornell Ann S. Bowers College of Computing and Information Science. 

This fascination has prompted her to embark on research that seeks to support people’s long-term happiness. Her current work in the Robots in Groups lab, directed by Malte Jung, associate professor of information science and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow, aims to uncover the impacts of artificial intelligence-mediated communication on stress and anxiety levels of university students. 

AI-mediated communication is any written or visual communication in which an AI agent acts as a “middle man,” modifying, generating, or adding to the message. AI-generated images posted to social media, for example, are one form of AI-mediated communication.

“I'm interested in allowing people to understand the landscape as well as the impact of this technology in the long term,” said Fu. “How can we use technology to have a better social connection and a more meaningful one that allows us to increase and promote our long-term happiness?”

AI changes the nature of social interactions and can impact our quality of social connections, Fu said. She pointed to the rise of remote work amid the COVID-19 pandemic as an example, noting that “remote working does not simulate the same face-to-face interaction, emotions, and friendship bonding with other people.” 

Impression-formation and self-expression through AI-mediated communication are two phenomena Fu hopes to study, particularly in the context of profile creation on websites and apps. She is interested in the ways people express themselves through profile creation using AI-generated text and the way others form impressions based on these AI-mediated profiles. Fu is also interested in analyzing the levels of anxiety produced as a result of both these expressions and impressions. 

Fu offers two examples of how AI media like text auto-complete and visual filters could compound or regulate a person’s anxiety, depending on how they are used. A socially anxious person wouldn’t appreciate an AI system suggesting “playing sports with friends” as an activity to add to their social media profile, as this could trigger intrusive thoughts that they have no one to play sports with, Fu said.

On the other hand, AI could be used to reduce a person’s anxiety. Take public speaking, Fu said, where the old axiom for anxious speakers is to imagine addressing an audience of vegetables. Alternate reality smart glasses, which can allow people to alter their visual realities, can make it possible for someone to create an audience of cucumbers and carrots. But the glasses can also change the dynamic of the interaction on the receiving end since the audience does not share the speaker’s view of themselves as vegetables. 

Fu seeks to use various psychological metrics to measure a person’s level of anxiety in response to advancements in AI, which is one aspect of technostress – the distress caused by the effects of technological advancements. Measuring anxiety levels in relation to AI-mediated communication is admittedly quite challenging, Fu said, and she will use a variety of methods such as surveys, interviews, and experimental observations to measure technostress.

For now, Fu’s research into AI-mediated communication could head in a whole host of directions, though ultimately she hopes to reveal strategies for reducing general anxiety. She wants to address the question, “How can we generate insight to inform the future design of technology to support people – to make them more prosocial, as well as make them feel more socially connected and supported?”

By Olivia Pluska, a guest writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/04/2023
