
Speak a little too haltingly and with long pauses, and a speech-to-text transcriber might put harmful, violent words in your mouth, Cornell researchers have discovered.

Date Posted: 6/18/2024

A new, adaptive statistical model developed by a research team involving Cornell will make clinical trials safer and more effective, and – unlike most models – is precise enough to identify when a subset of a trial population is harmed by the treatment.

Developed by researchers from MIT, Microsoft, and Cornell, the model – Causal Latent Analysis for Stopping Heterogeneously (CLASH) – leverages causal machine learning, which uses artificial intelligence to statistically determine the true cause and effect among variables. It continually crunches incoming participant data and alerts trial practitioners if the treatment is causing harm to only a segment of trial participants.

By comparison, most statistical models used to determine when to stop trials early are broadly applied across trial participants and don’t account for heterogeneous populations, researchers said. This can result in harms going undetected in a clinical trial: If an experimental drug is causing serious side effects in elderly patients who make up 10 percent of the trial population, it’s unlikely a statistical model would detect such harm, and the trial would likely continue, exposing those patients to even more harm, researchers said.

“We can’t just be looking at the averages,” said Allison Koenecke, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science. She is the senior author of “Should I Stop or Should I Go: Early Stopping with Heterogeneous Populations,” which was presented at the 37th Conference on Neural Information Processing Systems (NeurIPS) last December in New Orleans. “You have to look at these different subgroups of people. Our method quantifies and identifies harms to minority populations and brings them to light to practitioners who can then make decisions on whether to stop trials early.”

CLASH is designed to work with a variety of statistical stopping tests, which are used by researchers in randomized experiments as guideposts for whether to continue or end clinical trials early. Researchers said CLASH can also be used in A/B testing, a user-experience test comparing variations of a particular feature to find out which is best.

“We wanted to design a method that would be easy for practitioners to use and incorporate into their existing pipelines,” said Hammaad Adam, a doctoral student who studies machine learning and healthcare equity at MIT and the paper’s lead author. “You could implement some version of CLASH with 10 or 20 lines of code.”
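The paper's full procedure is more involved, but the core idea – checking for harm within each subgroup of participants rather than only in the pooled trial population – can be sketched in a few lines of Python. The data layout, the one-sided z-test, and the threshold below are illustrative assumptions only, not the CLASH algorithm itself, which uses causal machine learning to identify harmed subgroups rather than relying on pre-specified ones.

    # Illustrative sketch only: a subgroup-aware early-stopping check.
    # The z-test rule, threshold, and data layout are assumptions for
    # illustration and are not the CLASH algorithm described in the paper.
    import numpy as np
    from scipy import stats

    def harm_zscore(outcomes, treated):
        """Z-score for the excess adverse-event rate in the treated arm vs. control."""
        t, c = outcomes[treated == 1], outcomes[treated == 0]
        diff = t.mean() - c.mean()
        se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
        return diff / se if se > 0 else 0.0

    def check_stop(outcomes, treated, groups, alpha=0.01):
        """Flag any subgroup whose treated arm shows significantly more harm."""
        flagged = []
        for g in np.unique(groups):
            mask = groups == g
            if (treated[mask] == 1).sum() < 2 or (treated[mask] == 0).sum() < 2:
                continue  # not enough data in this subgroup yet
            z = harm_zscore(outcomes[mask], treated[mask])
            if stats.norm.sf(z) < alpha:  # one-sided test for harm
                flagged.append((g, float(z)))
        return flagged  # an empty list means no subgroup triggers a stop

A check like this would be run at each interim look at the accumulating data; in a real trial, the threshold would also have to account for repeated looks and multiple subgroups so that trials are not stopped spuriously.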

“We need to make sure that harms in minority populations are not glossed over by statistical methods that simply assume all people in an experiment are the same,” Koenecke said. “Our work gives practitioners the tools they need to appropriately consider heterogeneous populations and ensure that minority groups are not being disproportionately harmed.”

Along with Adam and Koenecke, paper authors are: Fan Yin and Huibin (Mary) Hu of Microsoft Corporation, and Neil Tenenholtz, Lorin Crawford, and Lester Mackey of Microsoft Research.

This research was supported through the Cornell Bowers CIS Strategic Partnership Program with LinkedIn.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 6/18/2024

Using experiments with COVID-19-related queries, Cornell sociology and information science researchers found that in a public health emergency, most people pick out and click on accurate information.

Although higher-ranked results are clicked more often, they are not more trusted, and misinformation does not damage trust in accurate results that appear on the same page. In fact, banners warning about misinformation decrease trust in misinformation somewhat but decrease trust in accurate information even more, according to “Misinformation Does Not Reduce Trust in Accurate Search Results, But Warning Banners May Backfire” published in Scientific Reports on May 14.

Date Posted: 6/10/2024

Twins Alsa Khan and Muhammad Jee explain how their AI platform, Mr. EzPz, could help to make artificial intelligence more reliable for students as well as educators.

Date Posted: 5/29/2024

More than 1,200 graduates from the Cornell Ann S. Bowers College of Computing and Information Science – the largest graduating class in the college’s history – were honored in separate department recognition ceremonies held Friday, May 24, and Saturday, May 25, in Barton Hall.

“You are architects of the future,” said Kavita Bala, dean of Cornell Bowers CIS, to the new graduates. “You should couple this feeling of limitless opportunity with a sense of responsibility, not just to innovate but to actively shape a future where technology serves as a force for enduring good.” 

Date Posted: 5/28/2024

On April 26, the Office of Diversity, Equity, and Inclusion in the Cornell Ann S. Bowers College of Computing and Information Science held its annual Diversity, Equity, Inclusion, and Belonging (DEIB) Awards ceremony in the Statler ballroom.

The DEIB Awards recognize exceptional leadership by Cornell Bowers CIS faculty and students and honor those who have made outstanding contributions toward uplifting the Bowers CIS community. 

Date Posted: 5/23/2024

Graduation is finally here, and the 2024 Cornell Bowers CIS graduates have so much to be proud of. They have pursued their passions through coursework and impactful research, immersed themselves in new experiences, and created a future full of opportunity.

Hear from undergraduate and graduate students below in their own words, as they look back on their foundational years at Bowers CIS.

Date Posted: 5/22/2024

From platforms that screen mortgage applications and résumés to tools that predict how likely defendants are to re-offend, AI systems and the algorithms behind them are relied on to make quick and efficient decisions in areas with major consequences, including healthcare, hiring, and criminal justice.

Though the potential for AI is immense, its early adoption has been beset by recurring challenges: a home-loan processing algorithm was far more likely to deny applications from people of color than from white applicants; hiring algorithms meant to screen applicants are increasingly being used without a hard look under the hood; and AI-powered software used by the U.S. criminal justice system was twice as likely to falsely predict future criminality in Black defendants as in white defendants.

Now more than ever – at the dawn of an artificial intelligence (AI)-assisted future – Cornell’s leadership in AI and in areas of ethics and fairness in technology is both guiding the development of better, fairer AI and shaping the minds of future AI innovators.

Date Posted: 5/20/2024

If the use of drones becomes as prevalent as Mehrnaz Sabet believes, our skies are about to get much more congested, and machine learning could be the answer for safe and autonomous aerial traffic flow.

To help on this front, NASA has awarded a multidisciplinary team led by Sabet, a doctoral student in the field of information science, an $80,000 grant to develop new coordination and communication models for drones that will someday deliver goods and even transport people. 

As part of the grant and in coordination with Cornell, the team has launched a crowdfunding page to help raise additional funds to support the project. 

“Traditional air-traffic control won’t work when there are going to be many different drones in the air,” said Sabet, whose research explores collaborative, autonomous drones. “We’re advancing algorithms that enable drones, whether they are piloted by humans or artificial intelligence, to coordinate with each other and navigate the airspace safely and autonomously.” 

Sabet is the principal investigator on the project, “Learning Cooperative Policies for Adaptive Human-Drone Teaming in Shared Airspace,” and leads a team of seven Cornell undergraduate and Master of Engineering students from the departments of computer science and information science in the Cornell Ann S. Bowers College of Computing and Information Science, and electrical and computer engineering in Cornell Engineering.

Sanjiban Choudhury, assistant professor of computer science, and Susan Fussell, professor in the departments of information science and communication, will serve as faculty mentors.

The grant comes from NASA’s University Student Research Challenge program, which awards students whose research projects tackle the biggest technical challenges in aviation. 

“This opportunity is a great example of how impactful collaborations can be fueled by industry or new programs from leading organizations like NASA that support innovative research projects from students for real-world applications,” said Sabet, who handpicked her teammates, all of whom are involved with CUAir, Cornell Cup Robotics, or Cornell Mars Rover.

Along with Sabet, student team members are: Aaron Babu ’26, Marcus Lee ’26, Joshua Park ’26, Francis Pham ’25, Owen Sorber M.Eng ’24, Roopak Srinivasan M.Eng ’24, and Austin Zhao ’26.

To learn more about this project, visit the project website.

Louis DiPietro is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 5/13/2024

Conversational agents (CAs) such as Alexa and Siri are designed to answer questions, offer suggestions – and even display empathy.

But these agents are powered by large language models (LLMs) that ingest massive amounts of human-produced data, and thus can be prone to the same biases as the humans from whom the information comes.

Researchers from Cornell Tech, the Cornell Ann S. Bowers College of Computing and Information Science, Olin College of Engineering, and Stanford University tested this theory by prompting CAs to display empathy while conversing with or about 65 distinct human identities. The research team also compared how different LLMs display or model empathy.
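The study’s exact prompts, models, and measures are not detailed here, but the basic audit pattern – posing the same empathy-eliciting request to a model across many identity descriptors and comparing the responses – can be sketched roughly as follows. The identity list, prompt template, and the query_model and score_empathy stand-ins below are placeholders for illustration, not the researchers’ protocol.

    # Rough illustration of an identity-conditioned empathy audit.
    # IDENTITIES, PROMPT, query_model, and score_empathy are placeholders,
    # not the prompts, models, or measures used in the study.

    IDENTITIES = ["a Deaf person", "a refugee", "an elderly widower"]  # the study covered 65 identities

    PROMPT = ("My friend, {identity}, just lost their job and is feeling hopeless. "
              "Please respond with empathy.")

    def query_model(prompt: str) -> str:
        """Stand-in for the conversational agent or LLM under test."""
        return "I'm so sorry to hear that. That sounds really hard."

    def score_empathy(response: str) -> float:
        """Toy proxy score; a real audit would use a validated empathy measure."""
        cues = ("sorry", "understand", "hear you", "that sounds")
        return sum(cue in response.lower() for cue in cues) / len(cues)

    def run_audit():
        scores = {}
        for identity in IDENTITIES:
            response = query_model(PROMPT.format(identity=identity))
            scores[identity] = score_empathy(response)
        return scores

Comparing the resulting scores across identities is what reveals whether a model’s expressed empathy varies systematically depending on whom it is talking with or about.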


Date Posted: 5/08/2024
