A photo collage showing eyeglasses and a man wearing them

It may look like Ruidong Zhang is talking to himself, but in fact the doctoral student in the field of information science is silently mouthing the passcode to unlock his nearby smartphone and play the next song in his playlist.

It’s not telepathy: It’s the seemingly ordinary, off-the-shelf eyeglasses he’s wearing, called EchoSpeech – a silent-speech recognition interface that uses acoustic sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands, based on lip and mouth movements.

Developed by Cornell’s Smart Computer Interfaces for Future Interactions (SciFi) Lab, the low-power, wearable interface requires just a few minutes of user training data before it will recognize commands and can be run on a smartphone, researchers said.

Zhang is the lead author of “EchoSpeech: Continuous Silent Speech Recognition on Minimally-obtrusive Eyewear Powered by Acoustic Sensing,” which will be presented at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) this month in Hamburg, Germany.

“For people who cannot vocalize sound, this silent speech technology could be an excellent input for a voice synthesizer. It could give patients their voices back,” Zhang said of the technology’s potential use with further development.

In its present form, EchoSpeech could be used to communicate with others via smartphone in places where speech is inconvenient or inappropriate, like a noisy restaurant or quiet library. The silent speech interface can also be paired with a stylus and used with design software like CAD, all but eliminating the need for a keyboard and a mouse.

Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving soundwaves across the face and sensing mouth movements. A deep learning algorithm, also developed by SciFi Lab researchers, then analyzes these echo profiles in real time, with about 95% accuracy.
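To make the idea concrete, here is a minimal sketch of that kind of pipeline: correlate each emitted chirp with the microphone recording to form an "echo profile," then classify profiles with a small neural network. This is an illustration only – the chirp, profile length, and network architecture are assumptions, not the SciFi Lab's actual system.

```python
# Hypothetical sketch of an echo-profile classifier for silent-speech commands.
# NOT the SciFi Lab's implementation: signal parameters and architecture are
# illustrative assumptions.

import numpy as np
import torch
import torch.nn as nn

NUM_COMMANDS = 31   # the article reports up to 31 unvocalized commands
PROFILE_LEN = 256   # assumed number of samples in one echo profile

def echo_profile(chirp: np.ndarray, recording: np.ndarray) -> np.ndarray:
    """Cross-correlate the emitted chirp with the microphone recording to
    estimate how sound reflected off the face (one common sonar front end)."""
    return np.correlate(recording, chirp, mode="valid")[:PROFILE_LEN]

class EchoClassifier(nn.Module):
    """Tiny 1-D CNN mapping an echo profile to one of the command labels."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, NUM_COMMANDS),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy end-to-end pass on synthetic data; real use would train on a few
# minutes of per-user recordings, as the article notes.
chirp = np.sin(np.linspace(0, 40 * np.pi, 128))
recording = np.random.randn(PROFILE_LEN + len(chirp) - 1)
profile = torch.tensor(echo_profile(chirp, recording), dtype=torch.float32)
logits = EchoClassifier()(profile.view(1, 1, -1))
print("predicted command id:", logits.argmax(dim=1).item())
```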

“We’re moving sonar onto the body,” said Cheng Zhang, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science and director of the SciFi Lab.

“We’re very excited about this system,” he said, “because it really pushes the field forward on performance and privacy. It’s small, low-power and privacy-sensitive, which are all important features for deploying new, wearable technologies in the real world.”

The SciFi Lab has developed several wearable devices that track body, hand and facial movements using machine learning and wearable, miniature video cameras. Recently, the lab has shifted away from cameras and toward acoustic sensing to track face and body movements, citing improved battery life; tighter security and privacy; and smaller, more compact hardware. EchoSpeech builds on the lab’s similar acoustic-sensing device called EarIO, a wearable earbud that tracks facial movements.

Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible, Cheng Zhang said. There also are major privacy concerns involving wearable cameras – for both the user and those with whom the user interacts, he said.

Acoustic-sensing technology like EchoSpeech removes the need for wearable video cameras. And because audio data is much smaller than image or video data, it requires less bandwidth to process and can be relayed to a smartphone via Bluetooth in real time, said François Guimbretière, professor in information science in Cornell Bowers CIS and a co-author.

“And because the data is processed locally on your smartphone instead of uploaded to the cloud,” he said, “privacy-sensitive information never leaves your control.”

Battery life improves dramatically, too, Cheng Zhang said: Ten hours with acoustic sensing versus 30 minutes with a camera.

The team is exploring commercializing the technology behind EchoSpeech, thanks in part to Ignite: Cornell Research Lab to Market gap funding.

In forthcoming work, SciFi Lab researchers are exploring smart-glass applications to track facial, eye and upper body movements.

“We think glass will be an important personal computing platform to understand human activities in everyday settings,” Cheng Zhang said.

Other co-authors were information science doctoral student Ke Li, Yihong Hao ’24, Yufan Wang ’24 and Zhengnan Lai ’25. This research was funded in part by the National Science Foundation.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/06/2023
A color graphic with the text "Artificial Intelligence" and "AI" on a futuristic background

People have more efficient conversations, use more positive language and perceive each other more positively when using an artificial intelligence-enabled chat tool, a group of Cornell researchers has found.

Postdoctoral researcher Jess Hohenstein, M.S. ’16, M.S. ’19, Ph.D. ’20, is lead author of “Artificial Intelligence in Communication Impacts Language and Social Relationships,” published April 4 in Scientific Reports.

Co-authors include Malte Jung, associate professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS), and Rene Kizilcec, assistant professor of information science (Cornell Bowers CIS).

Generative AI is poised to impact all aspects of society, communication and work. Every day brings new evidence of the technical capabilities of large language models (LLMs) like ChatGPT and GPT-4, but the social consequences of integrating these technologies into our daily lives are still poorly understood.

AI tools have potential to improve efficiency, but they may have negative social side effects. Hohenstein and colleagues examined how the use of AI in conversations impacts the way that people express themselves and view each other.

“Technology companies tend to emphasize the utility of AI tools to accomplish tasks faster and better, but they ignore the social dimension,” Jung said. “We do not live and work in isolation, and the systems we use impact our interactions with others.”

In addition to greater efficiency and positivity, the group found that when participants think their partner is using more AI-suggested responses, they perceive that partner as less cooperative, and feel less affiliation toward them.

“I was surprised to find that people tend to evaluate you more negatively simply because they suspect that you’re using AI to help you compose text, regardless of whether you actually are,” Hohenstein said. “This illustrates the persistent overall suspicion that people seem to have around AI.”

For their first experiment, co-author Dominic DiFranzo, a former postdoctoral researcher in the Cornell Robots and Groups Lab and now an assistant professor at Lehigh University, developed a smart-reply platform the group called “Moshi” (Japanese for “hello”), patterned after the now-defunct Google “Allo” (French for “hello”), the first smart-reply platform, unveiled in 2016. Smart replies are generated from LLMs to predict plausible next responses in chat-based interactions.
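In rough terms, a smart-reply feature can be sketched as follows: feed the conversation so far to a language model and sample a few short candidate replies for the user to pick from. The "Moshi" platform itself is not public, so the model choice (a small gpt2 stand-in) and prompt format below are assumptions for illustration.

```python
# Illustrative smart-reply sketch, not the study's "Moshi" platform: a small
# language model proposes short candidate next replies for a chat.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in LLM

def smart_replies(chat_history: list[str], n: int = 3) -> list[str]:
    """Sample n short candidate replies given the conversation so far."""
    prompt = "\n".join(chat_history) + "\nReply:"
    outputs = generator(
        prompt,
        max_new_tokens=12,       # smart replies are short by design
        num_return_sequences=n,
        do_sample=True,          # sampling yields varied candidates
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

history = ["A: What do you think about the new transit policy?",
           "B: It could reduce congestion downtown."]
for reply in smart_replies(history):
    print("suggested:", reply)
```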

A total of 219 pairs of participants were asked to talk about a policy issue and assigned to one of three conditions: both participants can use smart replies; only one participant can use smart replies; or neither participant can use smart replies.

The researchers found that using smart replies increased communication efficiency, positive emotional language and positive evaluations by communication partners. On average, smart replies accounted for 14.3% of sent messages (1 in 7).

But participants whom their partners suspected of responding with smart replies were evaluated more negatively than those thought to have typed their own responses, consistent with common assumptions about the negative implications of AI.

In a second experiment, 299 randomly assigned pairs of participants were asked to discuss a policy issue in one of four conditions: no smart replies; the default Google smart replies; smart replies with a positive emotional tone; and ones with a negative emotional tone. Conversations with positive or default Google smart replies had a more positive emotional tone than conversations with negative or no smart replies, highlighting the impact that AI can have on language production in everyday conversations.

“While AI might be able to help you write,” Hohenstein said, “it’s altering your language in ways you might not expect, especially by making you sound more positive. This suggests that by using text-generating AI, you’re sacrificing some of your own personal voice.”

Said Jung: “What we observe in this study is the impact that AI has on social dynamics and some of the unintended consequences that could result from integrating AI in social contexts. This suggests that whoever is in control of the algorithm may have influence on people’s interactions, language and perceptions of each other.”

Other co-authors included Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science at Cornell Bowers CIS; Karen Levy, associate professor of information science (Cornell Bowers CIS); and collaborators from Lehigh University and Stanford University.

This work was supported by the National Science Foundation.

By Tom Fleischman, Cornell Chronicle

This story was originally published in the Cornell Chronicle.

Date Posted: 4/06/2023
A color photo of a woman smiling for a photo

Jenny Fu has had a longstanding interest in emotions. “I’m fascinated by the idea of what makes people happy,” said Fu, a third-year doctoral student in the field of information science in the Cornell Ann S. Bowers College of Computing and Information Science. 

This fascination has prompted her to embark on research that seeks to support people’s long-term happiness. Her current work in the Robots in Groups lab, directed by Malte Jung, associate professor of information science and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow, aims to uncover the impacts of artificial intelligence-mediated communication on stress and anxiety levels of university students. 

AI-mediated communication is any type of communication, written or visual, that has an agent or “middle man” to modify, generate, or add to it. For example, AI-generated images, which can be posted to social media, are one form of AI-mediated communication.     

“I'm interested in allowing people to understand the landscape as well as the impact of this technology in the long term,” said Fu. “How can we use technology to have a better social connection and a more meaningful one that allows us to increase and promote our long-term happiness?”

AI changes the nature of social interactions and can impact our quality of social connections, Fu said. She pointed to the rise of remote work amid the COVID-19 pandemic as an example, noting that “remote working does not simulate the same face-to-face interaction, emotions, and friendship bonding with other people.” 

Impression-formation and self-expression through AI-mediated communication are two phenomena Fu hopes to study, particularly in the context of profile creation on websites and apps. She is interested in the ways people express themselves through profile creation using AI-generated text and the way others form impressions based on these AI-mediated profiles. Fu is also interested in analyzing the levels of anxiety produced as a result of both these expressions and impressions. 

Fu offers two examples of how AI tools like text autocomplete and visual filters could compound or regulate a person’s anxiety, depending on their use. A socially anxious person wouldn’t appreciate an AI system suggesting “playing sports with friends” as an activity to add to their social media profile, as this could cause intrusive thoughts that they have no one to play sports with, Fu said.

On the other hand, AI could be used to reduce a person’s anxiety. Take public speaking, Fu said, where the old axiom for anxious speakers is to imagine addressing an audience of vegetables. Alternate-reality smart glasses, which allow people to alter their visual realities, could make it possible for someone to create an audience of cucumbers and carrots. But the glasses can also change the dynamic of the interaction on the receiving end, since the audience does not share the speaker’s view of them as vegetables.

Fu seeks to use various psychological metrics to measure a person’s level of anxiety in response to advancements in AI, one aspect of technostress – the distress caused by technological advancements. Measuring anxiety levels in relation to AI-mediated communication is admittedly quite challenging, Fu said, and she will use a variety of methods, such as surveys, interviews, and experimental observations, to do so.

As of now, Fu’s research into AI-mediated communication could head in a whole host of directions, though ultimately she hopes to reveal strategies for reducing general anxiety among people. She wants to address the question, “How can we generate insight to inform the future design of technology to support people – to make them more prosocial, as well as make them feel more socially connected and supported?”

By Olivia Pluska, a guest writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/04/2023
A color photo of a woman smiling for a photo

Hospitals have begun using “decision support tools” powered by artificial intelligence that can diagnose disease, suggest treatment, or predict a surgery’s outcome. But no algorithm is correct all the time, so how do doctors know when to trust the AI’s recommendation?

A new study led by Qian Yang, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science, suggests that if AI tools can counsel the doctor like a colleague – pointing out relevant biomedical research that supports the decision – then doctors can better weigh the merits of the recommendation.

The researchers will present the new study, “Harnessing Biomedical Literature to Calibrate Clinicians’ Trust in AI Decision Support Systems,” at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems.

Previously, most AI researchers have tried to help doctors evaluate suggestions from decision support tools by explaining how the underlying algorithm works, or what data was used to train the AI. But an education in how AI makes its predictions wasn’t sufficient, Yang said. Many doctors wanted to know if the AI had been validated in clinical trials, which typically does not happen with these tools.

“A doctor’s primary job is not to learn how AI works,” Yang said. “If we can build systems that help validate AI suggestions based on clinical trial results and journal articles, which are trustworthy information for doctors, then we can help them understand whether the AI is likely to be right or wrong for each specific case.”

To develop this system, the researchers first interviewed nine doctors across a range of specialties, and three clinical librarians. They discovered that when doctors disagree on the right course of action, they track down results from relevant biomedical research and case studies, taking into account the quality of each study and how closely it applies to the case at hand.  

Yang and her colleagues built a prototype of their clinical decision tool that mimics this process by presenting biomedical evidence alongside the AI’s recommendation. They used GPT-3, a pre-trained large language model, to find and summarize relevant research. ChatGPT, the better-known offshoot of GPT-3, is tailored for human dialogue.
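A minimal sketch of that retrieve-and-summarize pattern appears below. The prototype used GPT-3; here a naive keyword ranker and a small open summarization model stand in, and the toy "literature" entries are placeholders, not real studies.

```python
# Illustrative retrieve-and-summarize sketch, not the paper's GPT-3 pipeline:
# rank studies relevant to the AI's suggestion, then return short summaries
# a clinician can scan. The "literature" entries are placeholders.

from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

literature = [
    "A randomized controlled trial of drug X in 412 adults with condition Y "
    "found a 23% reduction in symptom severity at 12 weeks, with mild side "
    "effects reported in a minority of participants in the treatment arm.",
    "A retrospective case series followed 57 elderly patients receiving "
    "treatment Z for condition W, reporting improved outcomes but noting "
    "that comorbidities complicated interpretation of the results.",
]

def supporting_evidence(ai_suggestion: str, top_k: int = 1) -> list[str]:
    """Rank studies by keyword overlap with the AI's suggestion, then return
    short summaries of the best matches."""
    words = set(ai_suggestion.lower().split())
    ranked = sorted(literature,
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return [summarizer(doc, max_length=40, min_length=10)[0]["summary_text"]
            for doc in ranked[:top_k]]

print(supporting_evidence("Recommend drug X for an adult with condition Y"))
```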

“We built a system that basically tries to recreate the interpersonal communication that we observed when the doctors give suggestions to each other, and fetches the same kind of evidence from clinical literature to support the AI’s suggestion,” Yang said.

The interface for the decision support tool lists patient information, medical history, and lab test results on one side, with the AI’s personalized diagnosis or treatment suggestion on the other, followed by relevant biomedical studies. In response to doctor feedback, the researchers added a short summary for each study, highlighting details of the patient population, the medical intervention, and the patient outcomes, so doctors can quickly absorb the most important information.

The research team developed prototype decision support tools for three specialties – neurology, psychiatry, and palliative care – and asked three doctors from each specialty to test out the prototype by evaluating sample cases.

In interviews, doctors said they appreciated the clinical evidence, finding it intuitive and easy to understand, and preferred it to an explanation of the AI’s inner workings.

“It's a highly generalizable method,” Yang said. This type of approach could work for all medical specialties and other applications where scientific evidence is needed, such as Q&A platforms to answer patient questions or even automated fact checking of health-related news stories. “I would hope to see it embedded in different kinds of AI systems that are being developed, so we can make them useful for clinical practice,” Yang said.

Co-authors on the study include doctoral students Yiran Zhao and Stephen Yang in the field of information science, and Yuexing Hao in the field of human behavior design. Volodymyr Kuleshov, assistant professor at the Jacobs Technion-Cornell Institute at Cornell Tech and in computer science in Cornell Bowers CIS, Fei Wang, associate professor of population health sciences at Weill Cornell Medicine, and Kexin Quan of the University of California, San Diego also contributed to the study.

The researchers received support from the AI2050 Early Career Fellowship and Multi-Investigator Seed Grants from Cornell and Weill Cornell Medicine.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/04/2023
A color photo of a woman smiling for a photo

Faculty members exploring topics ranging from isolation-induced aggression in female mice to the group dynamics of improvisational comedy troupes to the policy decisions that shape homelessness have been named 2023-24 fellows by the Cornell Center for Social Sciences (CCSS).

The 14 faculty members, representing 13 departments and eight colleges and schools, were nominated by their deans. The program seeks to nurture the careers of Cornell’s most promising faculty members in the social sciences by providing time and space for high-impact social scientific scholarship that results in ambitious projects with real-world impact, scholarly publications and external grant funding.

Fellows receive course release, allowing them to spend a semester in residence at CCSS to focus on their research.

“This is the largest cohort of faculty fellows we have ever had, and we are excited to see the results of the ambitious research and collaborations,” said Peter Enns, the Robert S. Harrison Director of CCSS.

The 2023-24 faculty fellows:

Natasha Raheja, Anthropology (College of Arts and Sciences): Majority-Minority Politics across the India-Pakistan Border

Mathieu Taschereau-Dumouchel, Economics (A&S): Dynamic Propagation in Production Networks

Bryn Rosenfeld, Government (A&S): Risky Politics and Political Participation under Authoritarian Rule

Kristin Roebuck, History (A&S): Remember Girl Zero: Trafficked Women, Imperial Men, and the Ends of Abolition   

Katherine Tschida, Psychology (A&S): Role of social touch in regulating susceptibility to isolation-induced aggression.    

Nicolas Bottan, Economics (Cornell Jeb E. Brooks School of Public Policy): Social comparisons and economic decisions 

Adriana Reyes, Sociology (Brooks School): Understanding Americans’ Attitudes towards Caregiving for Older Adults

Chuan Liao, Global Development (College of Agriculture and Life Sciences): Circular Bionutrient Economy for AgriFood System Transition in Kenya   

Gili Vidan, Information Science (Cornell Ann S. Bowers College of Computing and Information Science): Technologies of Trust: The Making of Electronic Authentication in Postwar U.S.   

Cindy Hsin-Liu Kao, Human Centered Design (College of Human Ecology): Understanding the Social Aspects of On-Skin Interface Usage

Tristan Ivory, International and Comparative Labor (ILR School): Africa Futures Project: Socioeconomic and Geographic Mobility of Ghanaian, Kenyan, and South African Youth   

Brian Lucas, Organizational Labor (ILR): An Inductive Study of Creative Idea Elaboration in Improvisational Comedy Groups

Heeyon Kim, Hotel Administration (Cornell SC Johnson College of Business): Disrupting a Winner-Take-All Market: Pathways for Increasing Status Mobility in the Art World

Charley Willison, Public and Ecosystem Health (College of Veterinary Medicine): Invisible Policymaking: The Hidden Actors Shaping Homelessness

In addition, the Cornell Center for Social Sciences has launched a new Collaborative Fellowship initiative. This program is designed to foster interdisciplinary teamwork and provide support as small groups of Cornell social scientists work toward specific project outputs. This round, CCSS is funding two Collaborative Fellowship groups, one in summer 2023 and another in summer 2024.

The Collaborative Fellowship projects:

Summer 2023

Jocelyn Poe, City and Regional Planning (Architecture, Art, and Planning) and Jaleesa Reed, Human Centered Design (CHE): Black femininity placed: An exploration of beauty and placemaking in L.A.

Summer 2024

Brittany Bond, Organizational Behavior (ILR); Sunita Sah, Johnson Graduate School of Management (SC Johnson); and Duanyi Yang, Labor Relations, Law, and History (ILR): Organizational Interventions to Alleviate Burnout and Promote Well-Being

By Amy Escalante ’24, a student assistant for the Cornell Center for Social Sciences.

This story was originally published in the Cornell Chronicle.

Date Posted: 3/23/2023
A color graphic showing the Schmidt Futures and Cornell Bowers CIS logos

Ten Cornell postdoctoral researchers who plan to harness the power of artificial intelligence (AI) in areas like materials discovery, physics, biological sciences, and sustainability sciences have been named Eric and Wendy Schmidt AI in Science Postdoctoral Fellows, a Schmidt Futures program.

The announcement of the inaugural cohort comes on the heels of Cornell being selected as one of nine universities worldwide to join the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a $148 million program that is part of a larger $400 million effort from Schmidt Futures to support AI researchers.

Under this fellowship program, the Cornell University AI for Science Institute (CUAISci) will recruit and train a cohort of up to 100 postdoctoral fellows over the next six years in the fields of natural sciences and engineering. Part of the university’s larger Artificial Intelligence Radical Collaboration, the institute comprises Cornell faculty and researchers from diverse fields who seek to apply AI for scientific discovery, with sustainability being the overarching goal.

“Artificial Intelligence is poised to significantly advance fundamental research in a broad range of scientific disciplines. These fellowships are critical in equipping the next generation of scientists with the AI tools and knowledge they need to tackle some of the hardest scientific problems of our time,” said Kavita Bala, dean of the Cornell Ann S. Bowers College of Computing and Information Science. “Together with the Cornell AI Initiative, this inaugural cohort positions Cornell as a leader in AI-enabled scientific research and education.”

“AI is redefining the boundaries of what we thought was possible for a machine, unleashing its full potential to take on human capabilities such as vision and language,” said Carla Gomes, the Ronald C. and Antonia V. Nielsen Professor in Cornell Bowers CIS and co-director of CUAISci. “With its limitless capacity for progress and innovation, AI is set to transform the world of science and usher in a new era of discovery."

“Both Cornell University and Schmidt Futures are committed to training innovators across disciplines to think big and apply cutting-edge AI tools to solve today's most urgent and grand challenges,” added Fengqi You, the Roxanne E. and Michael J. Zak Professor in Energy Systems Engineering and co-director of CUAISci.

This year’s inaugural Schmidt AI in Science Postdoctoral Fellows are:

• Benjamin Decardi-Nelson, systems engineering, studies plant biology-informed AI to unveil the dynamic complexity of plant-microclimate interactions in artificial environments on Earth and in space for sustainable food production.

• Eliot Miller, Lab of Ornithology, explores the use of automated acoustic identification to inform species distribution models for birds.

• Itay Griniasty, physics, studies how programmable materials can be designed into microscopic machines, and how information geometry uncovers hidden relations and the generalizability of climate simulations of extreme precipitation.

• Felipe Pacheco, ecology and evolutionary biology, studies how to use AI to solve sustainability challenges in the Water-Food-Energy Nexus.

• Alexandros Polyzois, chemistry and chemical biology, aims to develop an AI system to conquer one major remaining barrier toward understanding the chemistry of life: the identification of the millions of unknown chemicals in living organisms, including humans, which will enable paradigm-shifting advances in physiology and medicine.

• Vikram Thapar, chemical and biomolecular engineering, studies multi-scale AI and computational methods from fully atomistic to a machine learning model to identify specific chemical compounds that can self-assemble into desirable, geometrically complex nanostructures.

• Ralitsa Todorova, neurobiology and behavior, works on recording neuronal activity during decision making and sleep, and using machine learning to decode mental imagery in animals.

• Tianyu Wang, applied and engineering physics, studies optical neural networks, which utilize optics instead of electronics to execute machine learning algorithms more efficiently and quickly for data processing and image sensing.

• Xin Wang, chemical and biomolecular engineering, focuses on liquid crystal-based sensors for detecting gases, microplastics, proteins, and other chemicals using advanced deep learning and computer vision techniques.

• Yu Zhou, School of Integrative Plant Science, studies the response of dryland ecosystems to climate change. Zhou’s project uses AI techniques to study the pattern of model-data mismatches, the underlying causes, and ultimately to improve state-of-the-art process-based models.

Applications for the next cohort of the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program, are now being accepted. Review of applications starts on April 15, 2023, and will continue until all spots are filled. For more information, visit CUAISci's fellowship information page.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 3/23/2023
A color photo of someone presenting research to a crowd

An eager crowd packed Gates Hall for the Association of Computer Science Undergraduates’ (ACSU) Research Night on Monday, March 13. The event showcased the latest work from students across the Cornell Ann S. Bowers College of Computing and Information Science.

Held every semester, Research Night exists to encourage undergraduate students to pursue research opportunities. Historically, organizers of the event have targeted computer science students, but now welcome students from each of Cornell Bowers CIS’s three departments: computer science, information science, and statistics and data science.

Through research, students can work at the “frontier of knowledge,” said Kavita Bala, dean of Cornell Bowers CIS, in her opening remarks. “The kind of research we’re doing in Cornell – in computing, in CS, and more broadly in the college – is really upending traditional, centuries-old institutions,” Bala noted, citing the fields of finance, transportation, and healthcare as examples.

Following opening remarks, a panel of undergraduate researchers – Emmett Breen ’24, Benny Rubin ’25, and Yolanda Wang ’25 – answered audience questions and discussed how they got involved with research, the advantages and disadvantages of research as compared to an industry internship, and the experience they gained beyond technical skills.

Justin Hsu, assistant professor of computer science, moderated the Q&A session, which included questions submitted by attendees via an online portal. 

Panelists encouraged attendees to pursue research opportunities and noted that the barriers to entry are lower than students may imagine.

“In a research lab, there are different kinds of jobs,” said Wang, who studies computer vision and generative models, a type of AI model that creates new text, images, or videos based on training data. “You don’t need to delve into the deepest, most theoretical thing from the very beginning.”

Both Wang and Breen reached out to professors after their first-year fall semesters and were told they needed to take more courses. However, options existed for both of them: Wang did human-computer interaction research over the summer, while Breen – who studies systems and networking – was assigned a project by the professor he continues to work with, to help him prepare and gain more skills.

“People who don’t rush through the curriculum [aren’t] at a disadvantage at all, as long as you make an effort to find whatever area of computer science you’re most interested in,” Breen said.

After the panel, attendees engaged with graduate researchers who presented their work during a poster session. The researchers recognized the opportunity at ACSU Research Night to increase the profile of their work.

Presenting during the poster session, Wentao Guo ’22, M.Eng ’23, was excited specifically by the chance to bring “attention to my work,” which involves deep learning models.

Joy Ming, a doctoral student in the field of information science, presented human-computer interaction research that seeks to support healthcare workers who work in the homes of older adults and people with disabilities.

“A lot of the work that they’re doing is really undervalued or invisible, and so my project’s goal is to make that a little more visible using data collection and data analysis,” Ming said.

Interested students can take part in research during the academic year, either for course credit or pay. In addition, the Bowers Undergraduate Research Experience (BURE) is accepting applications until March 27. Formerly known as the Computer Science Undergraduate Research Program (CSURP), the 10-week summer program provides students with guidance from faculty and Ph.D. students, funding of up to $5,000, a series of talks on technical and career topics, and social experiences with other program participants.

By Chris Walkowiak ’26, a student writer for the Cornell Ann S. Bowers College of Computing and Information Science’s communications team.

Date Posted: 3/16/2023
A color photo of a man standing outside, smiling for a photo

Human assumptions regarding language usage can lead to flawed judgments of whether language was AI- or human-generated, Cornell Tech and Stanford researchers found in a series of experiments.

While individuals’ proficiency at detecting AI-generated language was a toss-up across the board, people were consistently influenced by the same verbal cues, leading to the same flawed judgments.

Participants could not differentiate AI-generated from human-generated language, erroneously assuming that mentions of personal experiences and the use of “I” pronouns indicated human authors. They also thought that convoluted phrasing was AI-generated.

“We learned something about humans and what they believe to be either human or AI language,” said Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science at the Cornell Ann S. Bowers College of Computing and Information Science. “But we also show that AI can take advantage of that, learn from it and then produce texts that can more easily mislead people.”

Maurice Jakesch, Ph.D. ’22, a former member of Naaman’s Social Technologies Lab at Cornell Tech, is lead author of “Human Heuristics for AI-Generated Language Are Flawed,” published March 7 in Proceedings of the National Academy of Sciences. Naaman and Jeff Hancock, professor of communication at Stanford University, are co-authors.

The researchers conducted three main experiments and three more to validate the findings, involving 4,600 participants and 7,600 “verbal self-presentations” – profile text people use to describe themselves on social websites. The experiments were patterned after the Turing test, developed in 1950 by British mathematician Alan Turing, who devised the test to measure a machine’s ability to exhibit intelligent behavior equal to or better than a human.

Instead of testing the machine, the new study tested humans’ ability to detect whether the exhibited intelligence came from a machine or a human. The researchers trained multiple AI language models to generate text in three social contexts where trust in the sender is important: professional (job application); romantic (online dating); and hospitality (Airbnb host profiles).

In the three main experiments, using two different language models, participants identified the source of a self-presentation with only 50% to 52% accuracy. But the responses, the researchers discovered, were not random, as the agreement between respondents’ answers was significantly higher than chance, meaning many participants were drawing the same flawed conclusions.
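A toy simulation illustrates how accuracy can sit near chance while agreement rises well above it: raters who apply the same cue agree with one another even when the cue does not track the truth. The numbers below are simulated, not the study's data.

```python
# Simulated illustration (not the study's data) of above-chance agreement
# among raters who share a heuristic.

import numpy as np

rng = np.random.default_rng(0)
n_raters, n_items = 50, 100

# Pure guessers: expected pairwise agreement is 0.5.
random_votes = rng.integers(0, 2, size=(n_raters, n_items))

# Heuristic-driven raters: each follows a shared cue 80% of the time.
shared_cue = rng.integers(0, 2, size=n_items)
follows = rng.random((n_raters, n_items)) < 0.8
heuristic_votes = np.where(follows, shared_cue,
                           rng.integers(0, 2, size=(n_raters, n_items)))

def mean_pairwise_agreement(votes: np.ndarray) -> float:
    """Average fraction of items on which a pair of raters answers alike."""
    pairs = [(votes[i] == votes[j]).mean()
             for i in range(len(votes)) for j in range(i + 1, len(votes))]
    return float(np.mean(pairs))

print("random raters:   ", round(mean_pairwise_agreement(random_votes), 3))
print("heuristic raters:", round(mean_pairwise_agreement(heuristic_votes), 3))
```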

The researchers conducted an analysis of the heuristics (the process by which a conclusion is reached) participants used in deciding whether language was AI- or human-generated, first by asking participants to explain their judgments, then following up with a computational analysis that confirmed these reports. People cited mentions of family and life experiences, as well as the use of first-person pronouns, as evidence of human language.

However, such language is equally likely to be produced by AI language models.

“People’s intuition goes counter to the current design of these language models,” Naaman said. “They produce text that is statistically probable – in other words, language that is common. But people tended to associate uncommon language with AI, a behavior that AI systems can then exploit to create language, as we call it, ‘more human than human.’”

In three pre-registered validation experiments, the authors show that, indeed, AI can exploit people’s heuristics to produce text that people more reliably rate as human-written than actual human-written text.

People’s reliance on flawed heuristics in identifying AI-generated language, the authors wrote, is not necessarily indicative of increased machine intelligence. It doesn’t take superior intelligence, they said, to “fool” humans – just a well-placed personal pronoun, or a mention of family.

The authors note that while humans’ ability to discern AI-generated language might be limited, language models that are “self-disclosing by design” would let the user know that the information is not human-generated while preserving the integrity of the message.

This could be achieved either through language that is clearly nonhuman (avoiding the use of informal speech) or through “AI accents” – a dedicated dialect that could “facilitate and support people’s intuitive judgments without interrupting the flow of communication,” they wrote.

Hancock, a faculty member at Cornell from 2002-15, said this work is “one of the last nails in the coffin” of the Turing test era.

“As a way of thinking about whether something’s intelligent or not,” he said, “our data pretty clearly show that, in pretty important ways of being human – that is, describing yourself professionally, romantically or as a host – it’s over. The machine has passed that test.”

Naaman said this work – particularly relevant with the arrival of AI tools such as ChatGPT – highlights the fact that AI will increasingly be used as a tool to facilitate human-to-human communication.

“This is not about us talking to AI. It’s us talking to each other through AI,” he said. “And the implications that we show on trust are significant: People will be easily misled and will easily distrust each other – not AI.”

Funding for this work came from the National Science Foundation and the German National Academic Foundation.

By Tom Fleischman, Cornell Chronicle

This story was originally published in the Cornell Chronicle.

Date Posted: 3/10/2023
Two color photos of two men with the NSF logo centered above them

The National Science Foundation (NSF) has selected two faculty from the Cornell Ann S. Bowers College of Computing and Information Science to receive Faculty Early Career Development (CAREER) Awards.

Immanuel Trummer, assistant professor of computer science, and Cheng Zhang, assistant professor of information science, will each receive approximately $600,000 over the next five years to support their research. NSF provides these sustaining grants to early-career scientists it believes will advance their fields and serve as role models within their institutions.

Trummer’s work focuses on improving database performance through tuning – a series of decisions about how a database processes information internally. Specifically, he leverages large language models to support automated database performance tuning. The performance of database management systems depends on many tuning decisions, including settings for internal configuration parameters and the creation of auxiliary data structures. Making such decisions by hand is difficult, which has motivated the development of automated tuning tools.

However, crucial information for database tuning is often contained in text documents, such as the database manual or descriptions of specific data sets and their properties. The current generation of tuning tools cannot exploit such information, limiting their effectiveness. The latest generation of text processing methods – large language models based on the Transformer architecture – can often extract information from text with little to no task-specific training data. Trummer plans to exploit such methods to parse text relevant for database tuning, extracting information that helps to guide automated tuning efforts.
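As a rough illustration of the idea, the sketch below scans manual-style text for parameter hints. In Trummer's proposal a Transformer-based language model would perform the extraction, so the regex, parameter names, and sample text here are stand-in assumptions.

```python
# Stand-in sketch: mine natural-language documentation for database tuning
# hints. A Transformer-based language model would replace the toy regex in
# the actual research; names and sample text are illustrative only.

import re

manual = """
For analytics workloads, set shared_buffers to 25% of system memory.
Increase work_mem when queries sort large intermediate results.
"""

HINT_PATTERN = re.compile(
    r"(?:set|increase|decrease)\s+(\w+)(?:\s+to\s+([^,.\n]+))?",
    re.IGNORECASE,
)

def extract_tuning_hints(text: str) -> list[dict]:
    """Return (parameter, suggested value) hints found in the text."""
    hints = []
    for match in HINT_PATTERN.finditer(text):
        parameter, value = match.group(1), match.group(2)
        hints.append({"parameter": parameter,
                      "value": value or "workload-dependent"})
    return hints

for hint in extract_tuning_hints(manual):
    print(hint)
```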

As the director of the Smart Computer Interfaces for Future Interactions (SciFi) lab, Zhang designs intelligent, privacy-sensitive, and minimally obtrusive wearables that can predict and understand human behavior and intentions in daily activities. Currently, computers struggle to recognize everyday activities due to the lack of high-quality behavioral data, such as body postures. Zhang's team addresses this through wearables endowed with artificial intelligence (AI)-powered active acoustic sensing that track and interpret human body postures of the hands, limbs, face, eyes, and tongue. The research aims to bridge the gap using cutting-edge AI techniques to enable applications in human activity recognition, telemedicine, and improving computer accessibility for individuals with hearing or speech impairments. Ultimately, the SciFi Lab seeks to create systems that function efficiently in real-world settings while protecting user privacy.

Trummer and Zhang join two other Cornell Bowers CIS faculty members who recently received NSF CAREER Awards: Sumanta Basu, assistant professor of statistics and data science, and Tapomayukh Bhattacharjee, assistant professor of computer science.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 3/06/2023
A color photo of a woman smiling for a photo

Yongqi (Kay) Zhang ’22 is understandably excited. She’s settling into her new apartment in a new city – Rogers, Arkansas – and she’s already entertaining thoughts about getting a dog.

In a few short days, the recent graduate of the Cornell Information Science master of professional studies (MPS) program will begin her career as a user experience (UX) designer for Tyson Foods.

“The MPS program is a preparation kit for a full-time career,” she said. “It gave me more skills and knowledge of UX design, provided project experiences that make for a good portfolio, and taught me how to network.”

Her path to Arkansas by way of Ithaca began in south China. There, she majored in computer science at the Chinese University of Hong Kong. Intent on pursuing a master’s degree in UX, Zhang went online to find the best programs and discovered Cornell Information Science’s leading MPS, a one-year professional master’s program where students receive elite education from tech leaders and, come graduation, stand out in the competitive job market. The program’s flexibility, its capstone projects with real companies, and UX being one of four optional focus areas available to students – all of it appealed to Zhang.

“Other programs may have a set of required courses, and the selections are limited,” she said. But within Cornell’s MPS in information science, “you can choose a lot. Although I wanted to be a UX designer, it was also important to me to have other opportunities and possibilities to explore.”

In Spring 2022, she arrived at Cornell’s Ithaca campus with a firm plan for her year of MPS studies. She would take the bulk of her courses in the spring, ready her portfolio during the summer, and juggle a manageable course load while applying for jobs during her final fall semester. That’s precisely what Zhang did.

“I was prepared, and I felt I had plenty of time to do job searching [in the fall],” she said. “Having a plan is important.”

Not all of it was stress-free. Being an international student new to upstate New York and adjusting to speaking English regularly were tough transitions for Zhang, who describes herself as shy. And during her job search, receiving rejection letters – or no response at all – was frustrating and demoralizing. Weekly meet-ups with friends from her local church helped her navigate these challenges.

“You need to have some activities to relieve stress,” said Zhang, when mulling advice to students. “I went to church and talked with people. Managing mental health is important.”

Zhang may have had a firm plan in place for her studies at Cornell, but there was enough space for pleasant surprises. She branched out from her information science courses and took electives in consumer behavior and entrepreneurship, both of which became favorites.

As for courses within the information science curriculum, she noted two that were particularly illuminating. Qualitative User Research and Design Methods (INFO 5400) – led by Gilly Leshed, senior lecturer in information science – bolstered her UX research skills. “Being a UX designer means being an effective UX researcher too,” she said.

Professional Career Development (INFO 5905) – led by Rebecca Salk, the MPS career advisor – readied Zhang for the job hunt, from preparing her portfolio and navigating interviews to effective networking.

For the program’s capstone project course, MPS Project Practicum (INFO 5900), Zhang and her teammates partnered with TISTA, a company specializing in providing IT services to federal, state, and local governments, to build a platform to support mental health among veterans.

“Having a complete project like that in my portfolio was helpful for my job interview,” she said. INFO 5900 “was more than just building the project; my teammates and I learned a lot about communication and the importance of interpersonal skills.”

To prospective students considering Cornell’s MPS program in information science, Zhang’s advice is to apply with clear career goals.

“Set your plan early,” she said, and “go for it.”

Connect with Kay Zhang on LinkedIn.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 3/01/2023
