
Research takes time.

“On top of classes and extracurricular commitments, I often struggle to find enough time for research during the semester,” said James Kim ’25, a computer science and math major.

But this summer, thanks to the Bowers Undergraduate Research Experience (BURE), Kim, along with 60 of his undergraduate peers from the Cornell Ann S. Bowers College of Computing and Information Science, can give research the time it requires. In the process, Kim is discovering a career path.

Date Posted: 8/14/2024

Four faculty members and four doctoral students from the Cornell Ann S. Bowers College of Computing and Information Science are the latest recipients of annual grants from the college’s five-year partnership with LinkedIn.

This year’s award winners – the third cohort from the Cornell Bowers CIS-LinkedIn strategic partnership – will advance research in areas including algorithmic fairness, reinforcement learning, and large language models.

Launched in 2022 with a multimillion-dollar grant from LinkedIn, the Cornell Bowers CIS-LinkedIn strategic partnership provides funding to faculty and doctoral students advancing research in artificial intelligence. Awards to doctoral students include academic year funding and discretionary funds. The five-year partnership also supports initiatives and student groups that promote diversity, equity, inclusion and belonging.  

Faculty award winners

Sarah Dean, assistant professor of computer science, believes the algorithms that power social network platforms are too short-sighted. The models anticipate short-term engagement, like clicks, but fail to capture longer-term impacts, like a user’s growing distaste for clickbait headlines or for educational content that no longer serves their skill set. In “User Behavior Models for Anticipating and Optimizing Long-term Impacts,” Dean seeks to develop models that can anticipate long-term user dynamics and algorithms that can optimize long-term impacts.

Michael P. Kim, assistant professor of computer science, will explore fairness in algorithmic predictive models in his project, “Prediction as Intervention: Promoting Fairness when Predictions have Consequences.” Today's predictive algorithms can influence the outcomes they are meant to predict. For instance, an algorithm that connects job seekers with relevant companies also makes it more likely that those job seekers will be hired. Kim's project aims to understand how such algorithms can cause harm by overlooking individuals from marginalized groups, and also how deliberate predictions can open new opportunities.

Jennifer J. Sun, assistant professor of computer science, aims to leverage large language models (LLMs) to process text data from veterinarians at the Cornell College of Veterinary Medicine. The goal of her project, “Learning and Reasoning Reliably from Unstructured Text,” is to develop an LLM-based system that synthesizes the text data into actionable insights for improving animal care, such as predicting surgical complications. Sun also aims to develop algorithms that could scale to industry-level applications, for example in tasks such as skills matching and career recommendations.

Daniel Susser will explore misalignments between the ways different actors conceptualize and reason about privacy-enhancing technologies (PETs) – statistical and computational tools designed to help data collectors process and learn from personal information while simultaneously protecting individual privacy. In “Navigating Ethics and Policy Uncertainty with Privacy-Enhancing Technologies,” Susser will develop shared frameworks for data subjects, researchers, companies, and regulators to better reason, deliberate, and communicate about the use of PETs in real-world contexts. 

Doctoral student award winners

Zhaolin Gao, a doctoral student in the field of computer science advised by Wen Sun and Thorsten Joachims, aims to improve reinforcement learning from human feedback (RLHF), a technique used to train large language models. Gao’s project is called “Aligning Language Model with Direct Natural Policy Optimization.”
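In broad terms, RLHF fits a reward model to human preference comparisons between candidate responses and then tunes the language model to score well under that reward. As a point of reference only, below is a minimal sketch of the pairwise preference objective commonly used to fit such reward models; the feature vectors are invented for illustration, and this is not the algorithm Gao's project proposes.

# Illustrative sketch of the pairwise (Bradley-Terry-style) preference loss
# often used to fit RLHF reward models. All values here are made up.
import numpy as np

def reward(features, w):
    """Toy linear reward model: score a response from its feature vector."""
    return features @ w

def preference_loss(w, chosen, rejected):
    """Negative log-probability that the human-preferred ('chosen') response
    outscores the 'rejected' one under the current reward model."""
    margin = reward(chosen, w) - reward(rejected, w)
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log sigmoid(margin)

# Two hypothetical responses, each reduced to a 3-dimensional feature vector.
chosen = np.array([0.9, 0.2, 0.4])    # response the annotator preferred
rejected = np.array([0.1, 0.8, 0.3])  # response the annotator rejected
w = np.zeros(3)                        # untrained reward model

print(preference_loss(w, chosen, rejected))  # about 0.693 before any training

Minimizing this loss over many comparisons pushes the reward model to rank preferred responses higher; the language model is then optimized against the learned reward.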

Kowe Kadoma, a doctoral student in the field of information science advised by Mor Naaman, studies how feelings of inclusion and agency impact user trust in artificial intelligence. In her project, “The Effects of Personalized LLMs on Users’ Trust,” Kadoma will expand on existing research that finds LLMs often produce language with limited variety, which may frustrate or alienate users. The goal is to improve LLMs so that they produce more personalized language that matches users’ language style.

Abhishek Vijaya Kumar, a doctoral student in the field of computer science advised by Rachee Singh, will develop systems and algorithms to efficiently share the memory and compute resources on multi-GPU clusters. The goal of the project, called “Responsive Offloaded Tensors for Faster Generative Inference,” is to improve the performance of memory- and compute-bound generative models.

Linda Lu, a doctoral student in the field of computer science advised by Karthik Sridharan, will explore privacy through “machine unlearning,” a paradigm that gives users the ability to have their personal data removed from the large language models trained on it. Lu’s project is called “A New Algorithmic Design Principle for Privacy in Machine Learning.”

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

 

Date Posted: 7/23/2024

Aditya Vashistha creates culturally aware artificial intelligence technologies to improve social and economic outcomes for underserved communities, including more than 250,000 community health workers, low-literate people and blind users of social media.

“I design technologies for the left behind – the 85% of the world limited to a low income, who are working in oppressive conditions and living in societies with deep social, digital and health inequities,” he said.


Date Posted: 7/10/2024

Using experiments with COVID-19-related queries, Cornell sociology and information science researchers found that in a public health emergency, most people pick out and click on accurate information.

Although higher-ranked results are clicked more often, they are not more trusted, and misinformation does not damage trust in accurate results that appear on the same page. In fact, banners warning about misinformation decrease trust in misinformation somewhat, but they decrease trust in accurate information even more, according to “Misinformation Does Not Reduce Trust in Accurate Search Results, But Warning Banners May Backfire,” published in Scientific Reports on May 14.


Date Posted: 7/09/2024

The Center for Teaching Innovation (CTI) supports members of the Cornell University teaching community, from teaching assistants and postdoctoral fellows to lecturers and professors, with a full complement of individualized services, programs, institutes, and campus-wide initiatives.

Date Posted: 7/02/2024

Computer scientist and health equity scholar Emma Pierson has an impact well beyond academia, publishing widely in such outlets as The New York Times, FiveThirtyEight and Wired. Pierson is an assistant professor at Cornell Tech, the Jacobs Technion-Cornell Institute and Technion.

Date Posted: 7/02/2024

Sterling Williams-Ceci is a doctoral student in information science from Ithaca, New York. She earned her B.A. from Cornell University in psychology and the College Scholar Program, and she now studies the influence of AI on people’s thoughts about societal issues under the guidance of Michael Macy and Mor Naaman at Cornell and Cornell Tech, respectively.

Date Posted: 7/02/2024

Portobello, a new driving simulator developed by researchers at Cornell Tech, blends virtual and mixed realities, enabling both drivers and passengers to see virtual objects overlaid in the real world.

This technology opens up new possibilities for researchers to conduct the same user studies both in the lab and on the road – a novel concept the team calls “platform portability.”

Date Posted: 6/27/2024

The Google Cyber NYC Institutional Research Program has awarded funding to seven new Cornell projects aimed at improving online privacy, safety, and security.

Additionally, as part of this broader program, Cornell Tech has launched the Security, Trust, and Safety (SETS) Initiative to advance education and research on cybersecurity, privacy, and trust and safety.

Cornell is one of four New York institutions participating in the Google Cyber NYC program, which is designed to provide solutions to cybersecurity issues in society, while also developing New York City as a worldwide hub for cybersecurity research. 

"The threats to our digital safety are big and complex," said Greg Morrisett, the Jack and Rilla Neafsey Dean and Vice Provost of Cornell Tech and principal investigator on the program. "We need pioneering, cross-disciplinary methods, a pipeline of new talent, and novel technologies to safeguard our digital infrastructure now and for the future. This collaboration will yield new directions to ensure the development of safer, more trustworthy systems."

The seven newly selected research projects from Cornell are:

  • Protecting Embeddings, Vitaly Shmatikov, professor of computer science at Cornell Tech.

Embeddings are numerical representations of inputs, such as words and images, fed into modern machine learning (ML) models. They are a fundamental building block of generative ML and knowledge retrieval systems, such as vector databases. Shmatikov aims to study security and privacy issues in embeddings, including their vulnerability to malicious inputs and unintended leakage of sensitive information, and to develop new solutions to protect embeddings from attacks. (A brief illustrative sketch of embedding-based retrieval appears after the project list.)

  • Improving Account Security for At-Risk Users (renewal), Thomas Ristenpart, professor of computer science at Cornell Tech, with co-PI Nicola Dell, associate professor of information science at the Jacobs Technion-Cornell Institute at Cornell Tech.

Online services often employ account security interfaces (ASIs) to communicate security information to users, such as recent logins and connected devices. ASIs can be useful for survivors of intimate partner violence, journalists, and others whose accounts are more likely to be attacked, but bad actors can spoof devices on many ASIs. Through this project, the researchers will build new cryptographic protocols for identifying devices securely and privately, to prevent spoofing attacks on ASIs, and will investigate how to make ASIs more effective and improve their user interfaces.

  • From Blind Faith to Cryptographic Certification in ML, Michael P. Kim, assistant professor of computer science.

Generative language models, like ChatGPT and Gemini, demonstrate great promise, but also pose new risks to users by producing misinformation and abusive content. In existing AI frameworks, individuals must blindly trust that platforms implement their models responsibly to address such risks. Kim proposes to borrow tools from cryptography to build a new framework for trust in modern prediction systems. He will explore techniques to enable platforms to earn users' trust by proving that their models mitigate serious risks.

  • Making Hardware Comprehensively Secure Against Spectre — by Construction (renewal), Andrew Myers, professor of computer science.

In this renewed project, Myers will continue his work to design secure and efficient hardware systems that are safe from Spectre and other "timing attacks." This type of attack can steal sensitive information, such as passwords, from hardware by analyzing the time required to perform computations. Myers is developing new hardware description languages – programming languages that describe the behavior or structure of digital circuits – that prevent timing attacks by construction. (A small software illustration of a timing side channel appears after the project list.)

  • Safe and Trustworthy AI in Home Health Care Work, Nicola Dell, with co-PIs Deborah Estrin, professor of computer science at Cornell Tech, Madeline Sterling, associate professor of medicine at Weill Cornell Medicine, and Ariel Avgar, the David M. Cohen Professor of Labor Relations at the ILR School.

This team will investigate the trust, safety, and privacy challenges related to implementing artificial intelligence (AI) in home health care. AI has the potential to automate many aspects of home health services, such as patient–care worker matching, shift scheduling, and tracking of care worker performance, but the technology carries risks for both patients and care workers. Researchers will identify areas where the use of AI may require new oversight or regulation, and explore how AI systems can be designed, implemented, and regulated to ensure they are safe, trustworthy, and privacy-preserving for patients, care workers, and other stakeholders.

  • AI for Online Safety of Disabled People, Aditya Vashistha, assistant professor of information science.

Vashistha will evaluate how AI technologies can be leveraged to protect people with disabilities from receiving ableist hate online. In particular, he will analyze the effectiveness of platform-mediated moderation, which primarily uses toxicity classifiers and language models to filter out hate speech.

  • DEFNET: Defending Networks With Reinforcement Learning, Nate Foster, professor of computer science, with co-PI Wen Sun, assistant professor of computer science.

Traditionally, security has been seen as a cat-and-mouse game, where attackers exploit vulnerabilities in computer networks and defenders respond by shoring up weaknesses. Instead, Foster and Sun propose new, automated approaches that will use reinforcement learning – an ML technique in which an agent learns, through trial and feedback, to make decisions that maximize a long-term reward – to continuously defend the network. They will focus their work at the network level, training and deploying defensive agents that can monitor network events and configure devices such as routers and firewalls to protect data and prevent disruptions in essential services, as sketched below.
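To make that loop concrete, here is a minimal, purely illustrative Q-learning agent in Python. The two-state "network," the actions, and the rewards are invented for this sketch; it shows only the generic trial-and-feedback pattern described above, not DEFNET's actual method.

# A toy tabular Q-learning loop illustrating the reinforcement-learning
# pattern referenced in the DEFNET description. The "network" is a stub;
# states, actions, and rewards are invented for illustration only.
import random

states = ["normal", "suspicious_traffic"]
actions = ["allow", "block"]
q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Stub environment: blocking suspicious traffic is rewarded, while
    blocking normal traffic (a false positive) is penalized."""
    if state == "suspicious_traffic":
        reward = 1.0 if action == "block" else -1.0
    else:
        reward = -0.5 if action == "block" else 0.1
    next_state = random.choice(states)
    return reward, next_state

state = "normal"
for _ in range(5000):
    if random.random() < epsilon:      # explore
        action = random.choice(actions)
    else:                              # exploit current estimates
        action = max(actions, key=lambda a: q[(state, a)])
    reward, next_state = step(state, action)
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = next_state

print(q)  # the agent learns to block only when traffic looks suspicious

In a real system the state would summarize live network telemetry and the actions would reconfigure routers or firewalls, but the learning signal follows the same pattern.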
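The embeddings at the center of Shmatikov's project can likewise be illustrated with a toy retrieval example. The document names and vectors below are made up; in practice embeddings come from trained models, and it is because those vectors can encode more about their inputs than intended that protecting them matters.

# Minimal sketch of embedding-based retrieval, the building block that
# Shmatikov's project studies. All vectors and names here are hypothetical.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-dimensional embeddings for three stored documents.
index = {
    "doc_a": np.array([0.9, 0.1, 0.0, 0.2]),
    "doc_b": np.array([0.1, 0.8, 0.3, 0.0]),
    "doc_c": np.array([0.2, 0.2, 0.9, 0.1]),
}

query = np.array([0.85, 0.15, 0.05, 0.1])  # embedding of an incoming query

# A vector database answers the query by returning the most similar vectors.
ranked = sorted(index, key=lambda name: cosine(query, index[name]), reverse=True)
print(ranked)  # doc_a ranks first because its vector is closest to the query

The security questions arise because an attacker who obtains the stored vectors, or who can submit malicious inputs, may be able to infer sensitive details about the underlying documents.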
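Finally, the timing attacks addressed by Myers' project can be previewed with a software analogy. Spectre itself exploits speculative execution in processor hardware, which this toy example does not model; it only shows the underlying principle that when running time depends on secret data, measuring time can reveal the secret. The secret value and function names are hypothetical.

# Software analogy of a timing side channel. The naive check returns as soon
# as a character mismatches, so its running time leaks how many leading
# characters of a guess are correct; the constant-time check does not.
import hmac

SECRET = "hunter2"  # hypothetical secret

def naive_check(guess):
    """Leaky comparison: exits at the first mismatched character."""
    if len(guess) != len(SECRET):
        return False
    for g, s in zip(guess, SECRET):
        if g != s:
            return False
    return True

def constant_time_check(guess):
    """Comparison whose running time does not depend on where a mismatch occurs."""
    return hmac.compare_digest(guess, SECRET)

print(naive_check("hunter2"), constant_time_check("hunter2"))  # True True

Hardware-level timing channels are harder to close than this software example suggests, which is why Myers' approach builds the guarantees into the hardware description language itself.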

Under director Alexios Mantzarlis, formerly a principal at Google’s Trust and Safety Intelligence team, the newly formed SETS Initiative at Cornell Tech will focus on threats ranging from ransomware and phishing of government officials to breaches of personal information and digital harassment. 

"There are new vectors of abuse every day," said Mantzarlis. He emphasizes that the same vulnerabilities exploited by state actors that threaten national security can also be used by small-time scammers. “If a system is unsafe and your data is leaky, that same system will be a locus of harassment for users.” 

Additionally, SETS will serve as a physical and virtual hub for academia, government, and industry to tackle emerging online threats.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 6/25/2024

Amid the unpredictability and occasional chaos of emergency rooms, a robot has the potential to assist health care workers and support clinical teamwork, Cornell and Michigan State University researchers found.

The research team’s robotic crash cart prototype highlights the potential for robots to assist health care workers in bedside patient care and offers designers a framework to develop and test robots in other unconventional areas.

“When you're trying to integrate a robot into a new environment, especially a high stakes, time-sensitive environment, you can't go straight to a fully autonomous system,” said Angelique Taylor, assistant professor in information science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science. “We first need to understand how a robot can help. What are the mechanisms in which the robot embodiment can be useful?”

Taylor is the lead author of “Towards Collaborative Crash Cart Robots that Support Clinical Teamwork,” which received a best paper honorable mention in the design category at the Association for Computing Machinery (ACM)/Institute of Electrical and Electronics Engineers (IEEE) International Conference on Human-Robot Interaction in March.

The paper builds on Taylor’s ongoing research exploring robotics and team dynamics in unpredictable health care settings, like emergency and operating rooms.

Within the medical field, robots are used in surgery and other health care operations with clear, standardized procedures. The Cornell-Michigan State team, however, set out to learn how a robot can support health care workers in fluid and sometimes chaotic bedside situations, like resuscitating a patient who has gone into cardiac arrest.

The challenges of deploying robots in such unpredictable environments are immense, said Taylor, who has been researching the use of robotics in bedside care since her days as a doctoral student. For starters, patient rooms are often too small to accommodate a stand-alone robot, and current robotics are not yet robust enough to perceive, let alone assist within, the flurry of activity amid emergency situations. Furthermore, beyond the robot’s technical abilities, there remain critical questions concerning its impact on team dynamics, Taylor said.

But the potential for robotics in medicine is huge, particularly in relieving workloads for health care workers, and the team’s research is a solid step in understanding how robotics can help, Taylor said.

The team developed a robotic version of a crash cart, which is a rolling storage cabinet stocked with medical supplies that health care workers use when making their rounds. The robot is equipped with a camera, automated drawers, and – continuing Cornell Bowers CIS researchers’ practice of “garbatrage” – a repurposed hoverboard for maneuvering around.

Through a collaborative design process, researchers worked with 10 health care workers and learned that a robot could benefit teams during bedside care by providing guidance on medical procedures, offering feedback, and tracking tasks, as well as by managing medications, equipment, and medical supplies. Participants favored a robot with “shared control,” wherein health care workers maintain their autonomy over decision-making while the robot serves as a safeguard, monitoring for possible mistakes in procedures.

“Sometimes, fully autonomous robots aren’t necessary,” said Taylor, who directs the Artificial Intelligence and Robotics Lab (AIRLab) at Cornell Tech. “They can cause more harm than good.”

As with similar human-robot studies she has conducted, Taylor said participants expressed concern over job displacement. But she doesn’t foresee it happening.

“Health care workers are highly skilled,” she said. “These environments can be chaotic, and there are too many technical challenges to consider.”

Paper coauthors are Tauhid Tanjim, a doctoral student in the field of information science at Cornell, and Huajie Cao and Hee Rin Lee, both of Michigan State University. 

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 6/24/2024
