Hospitals have begun using “decision support tools” powered by artificial intelligence that can diagnose disease, suggest treatment, or predict a surgery’s outcome. But no algorithm is correct all the time, so how do doctors know when to trust the AI’s recommendation?

A new study led by Qian Yang, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science, suggests that if AI tools can counsel the doctor like a colleague – pointing out relevant biomedical research that supports the decision – then doctors can better weigh the merits of the recommendation.

The researchers will present the new study, “Harnessing Biomedical Literature to Calibrate Clinicians’ Trust in AI Decision Support Systems,” at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems.

Previously, most AI researchers tried to help doctors evaluate suggestions from decision support tools by explaining how the underlying algorithm works, or what data was used to train the AI. But an education in how AI makes its predictions wasn't sufficient, Yang said. Many doctors wanted to know if the AI had been validated in clinical trials, which typically does not happen with these tools.

“A doctor’s primary job is not to learn how AI works,” Yang said. “If we can build systems that help validate AI suggestions based on clinical trial results and journal articles, which are trustworthy information for doctors, then we can help them understand whether the AI is likely to be right or wrong for each specific case.”

To develop this system, the researchers first interviewed nine doctors across a range of specialties, and three clinical librarians. They discovered that when doctors disagree on the right course of action, they track down results from relevant biomedical research and case studies, taking into account the quality of each study and how closely it applies to the case at hand.  

Yang and her colleagues built a prototype of their clinical decision tool that mimics this process by presenting biomedical evidence alongside the AI's recommendation. They used GPT-3, a pre-trained large language model, to find and summarize relevant research. ChatGPT, which is tailored for human dialogue, is the better-known offshoot of GPT-3.

“We built a system that basically tries to recreate the interpersonal communication that we observed when the doctors give suggestions to each other, and fetches the same kind of evidence from clinical literature to support the AI’s suggestion,” Yang said.

The interface for the decision support tool lists patient information, medical history, and lab test results on one side, with the AI’s personalized diagnosis or treatment suggestion on the other, followed by relevant biomedical studies. In response to doctor feedback, the researchers added a short summary for each study, highlighting details of the patient population, the medical intervention, and the patient outcomes, so doctors can quickly absorb the most important information.

The research team developed prototype decision support tools for three specialties – neurology, psychiatry, and palliative care – and asked three doctors from each specialty to test out the prototype by evaluating sample cases.

In interviews, doctors said they appreciated the clinical evidence, finding it intuitive and easy to understand, and preferred it to an explanation of the AI’s inner workings.

“It’s a highly generalizable method,” Yang said. This type of approach could work for all medical specialties and other applications where scientific evidence is needed, such as Q&A platforms to answer patient questions or even automated fact checking of health-related news stories. “I would hope to see it embedded in different kinds of AI systems that are being developed, so we can make them useful for clinical practice,” Yang said.

Co-authors on the study include doctoral students Yiran Zhao and Stephen Yang in the field of information science, and Yuexing Hao in the field of human behavior design. Volodymyr Kuleshov, assistant professor at the Jacobs Technion-Cornell Institute at Cornell Tech and in computer science in Cornell Bowers CIS, Fei Wang, associate professor of population health sciences at Weill Cornell Medicine, and Kexin Quan of the University of California, San Diego also contributed to the study.

The researchers received support from the AI2050 Early Career Fellowship and the Cornell and Weill Cornell Medicine’s Multi-Investigator Seed Grants.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 4/04/2023

Faculty members exploring topics ranging from isolation-induced aggression in female mice to the group dynamics of improvisational comedy troupes to the policy decisions that shape homelessness have been named 2023-24 fellows by the Cornell Center for Social Sciences (CCSS).

The 14 faculty members, representing 13 departments and eight colleges and schools, were nominated by their deans. The program seeks to nurture the careers of Cornell’s most promising faculty members in the social sciences by providing time and space for high-impact social scientific scholarship that results in ambitious projects with real-world impact, scholarly publications and external grant funding.

Fellows receive course release, allowing them to spend a semester in residence at CCSS to focus on their research.

“This is the largest cohort of faculty fellows we have ever had and we are excited to see the results of the ambitious research and collaborations,” said Peter Enns, the Robert S. Harrison Director of CCSS.

The 2023-24 faculty fellows:

Natasha Raheja, Anthropology (College of Arts and Sciences): Majority-Minority Politics across the India-Pakistan Border

Mathieu Taschereau-Dumouchel, Economics (A&S): Dynamic Propagation in Production Networks

Bryn Rosenfeld, Government (A&S): Risky Politics and Political Participation under Authoritarian Rule

Kristin Roebuck, History (A&S): Remember Girl Zero: Trafficked Women, Imperial Men, and the Ends of Abolition   

Katherine Tschida, Psychology (A&S): Role of social touch in regulating susceptibility to isolation-induced aggression

Nicolas Bottan, Economics (Cornell Jeb E. Brooks School of Public Policy): Social comparisons and economic decisions 

Adriana Reyes, Sociology (Brooks School): Understanding Americans’ Attitudes towards Caregiving for Older Adults

Chuan Liao, Global Development (College of Agriculture and Life Sciences): Circular Bionutrient Economy for AgriFood System Transition in Kenya   

Gili Vidan, Information Science (Cornell Ann S. Bowers College of Computing and Information Science): Technologies of Trust: The Making of Electronic Authentication in Postwar U.S.   

Cindy Hsin-Liu Kao, Human Centered Design (College of Human Ecology): Understanding the Social Aspects of On-Skin Interface Usage

Tristan Ivory, International and Comparative Labor (ILR School): Africa Futures Project: Socioeconomic and Geographic Mobility of Ghanaian, Kenyan, and South African Youth   

Brian Lucas, Organizational Behavior (ILR): An Inductive Study of Creative Idea Elaboration in Improvisational Comedy Groups

Heeyon Kim, Hotel Administration (Cornell SC Johnson College of Business): Disrupting a Winner-Take-All Market: Pathways for Increasing Status Mobility in the Art World

Charley Willison, Public and Ecosystem Health (College of Veterinary Medicine): Invisible Policymaking: The Hidden Actors Shaping Homelessness

In addition, the Cornell Center for Social Sciences has launched a new Collaborative Fellowship initiative. This program is designed to foster interdisciplinary teamwork and provide support as small groups of Cornell social scientists work toward specific project outputs. This round, CCSS is funding two Collaborative Fellowship groups, one in summer 2023 and another in summer 2024.

The Collaborative Fellowship projects:

Summer 2023

Jocelyn Poe, City and Regional Planning (Architecture, Art, and Planning) and Jaleesa Reed, Human Centered Design (CHE): Black femininity placed: An exploration of beauty and placemaking in L.A.

Summer 2024

Brittany Bond, Organizational Behavior (ILR); Sunita Sah, Johnson Graduate School of Management (SC Johnson); and Duanyi Yang, Labor Relations, Law, and History (ILR): Organizational Interventions to Alleviate Burnout and Promote Well-Being

By Amy Escalante ’24, a student assistant for the Cornell Center for Social Sciences.

This story was originally published in the Cornell Chronicle.

Date Posted: 3/23/2023

Ten Cornell postdoctoral researchers who plan to harness the power of artificial intelligence (AI) in areas like materials discovery, physics, biological sciences, and sustainability sciences have been named Eric and Wendy Schmidt AI in Science Postdoctoral Fellows, a Schmidt Futures program.

The announcement of the inaugural cohort comes on the heels of Cornell being selected as one of nine universities worldwide to join the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a $148 million program that is part of a larger $400 million effort from Schmidt Futures to support AI researchers.

Under this fellowship program, the Cornell University AI for Science Institute (CUAISci) will recruit and train a cohort of up to 100 postdoctoral fellows over the next six years in the fields of natural sciences and engineering. Part of the university’s larger Artificial Intelligence Radical Collaboration, the institute comprises Cornell faculty and researchers from diverse fields who seek to apply AI for scientific discovery, with sustainability being the overarching goal.

“Artificial Intelligence is poised to significantly advance fundamental research in a broad range of scientific disciplines. These fellowships are critical in equipping the next generation of scientists with the AI tools and knowledge they need to tackle some of the hardest scientific problems of our time,” said Kavita Bala, dean of the Cornell Ann S. Bowers College of Computing and Information Science. “Together with the Cornell AI Initiative, this inaugural cohort positions Cornell as a leader in AI-enabled scientific research and education.”

“AI is redefining the boundaries of what we thought was possible for a machine, unleashing its full potential to take on human capabilities such as vision and language,” said Carla Gomes, the Ronald C. and Antonia V. Nielsen Professor in Cornell Bowers CIS and co-director of CUAISci. “With its limitless capacity for progress and innovation, AI is set to transform the world of science and usher in a new era of discovery."

“Both Cornell University and Schmidt Futures are committed to training innovators across disciplines to think big and apply cutting-edge AI tools to solve today's most urgent and grand challenges,” added Fengqi You, the Roxanne E. and Michael J. Zak Professor in Energy Systems Engineering and co-director of CUAISci.

This year’s inaugural Schmidt AI in Science Postdoctoral Fellows are:

• Benjamin Decardi-Nelson, systems engineering, studies plant biology-informed AI to unveil the dynamic complexity of plant microclimate interactions in artificial environments on Earth and in space for sustainable food production.

• Eliot Miller, Lab of Ornithology, explores the use of automated acoustic identification to inform species distribution models for birds.

• Itay Griniasty, physics, studies how programmable materials can be designed into microscopic machines, and how information geometry uncovers hidden relations and the generalizability of climate simulations of extreme precipitation.

• Felipe Pacheco, ecology and evolutionary biology, studies how to use AI to solve sustainability challenges in the Water-Food-Energy Nexus.

• Alexandros Polyzois, chemistry and chemical biology, aims to develop an AI system to conquer one major remaining barrier toward understanding the chemistry of life: the identification of the millions of unknown chemicals in living organisms, including humans, which will enable paradigm-shifting advances in physiology and medicine.

• Vikram Thapar, chemical and biomolecular engineering, studies multi-scale AI and computational methods from fully atomistic to a machine learning model to identify specific chemical compounds that can self-assemble into desirable, geometrically complex nanostructures.

• Ralitsa Todorova, neurobiology and behavior, works on recording neuronal activity during decision making and sleep, and using machine learning to decode mental imagery in animals.

• Tianyu Wang, applied and engineering physics, studies optical neural networks, which utilize optics instead of electronics to execute machine learning algorithms more efficiently and quickly for data processing and image sensing.

• Xin Wang, chemical and biomolecular engineering, focuses on liquid crystal-based sensors for detecting gases, microplastics, proteins, and other chemicals using advanced deep learning and computer vision techniques.

• Yu Zhou, School of Integrative Plant Science, studies the response of dryland ecosystems to climate change. Zhou’s project uses AI techniques to study the pattern of model-data mismatches, the underlying causes, and ultimately to improve state-of-the-art process-based models.

Applications for the next cohort of the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship, a Schmidt Futures program, are being accepted now. Review of applications starts on April 15, 2023, and will continue until all spots are filled. For more information, visit CUAISci's fellowship information page.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 3/23/2023

An eager crowd packed Gates Hall for the Association of Computer Science Undergraduates’ (ACSU) Research Night on Monday, March 13, showcasing the latest work from students across the Cornell Ann S. Bowers College of Computing and Information Science.

Held every semester, Research Night exists to encourage undergraduate students to pursue research opportunities. Historically, organizers of the event have targeted computer science students, but now welcome students from each of Cornell Bowers CIS’s three departments: computer science, information science, and statistics and data science.

Through research, students can work at the “frontier of knowledge,” said Kavita Bala, dean of Cornell Bowers CIS, in her opening remarks. “The kind of research we’re doing in Cornell – in computing, in CS, and more broadly in the college – is really upending traditional, centuries-old institutions,” Bala noted, citing the fields of finance, transportation, and healthcare as examples.

Following opening remarks, a panel of undergraduate researchers – Emmett Breen ’24, Benny Rubin ’25, and Yolanda Wang ’25 – answered audience questions and discussed how they got involved with research, the advantages and disadvantages of research as compared to an industry internship, and the experience they gained beyond technical skills.

Justin Hsu, assistant professor of computer science, moderated the Q&A session, which included questions submitted by attendees via an online portal. 

Panelists encouraged attendees to pursue research opportunities and noted the barriers to entry are lower than students may imagine.

“In a research lab, there are different kinds of jobs,” said Wang, who studies computer vision and generative models, a type of AI model that creates new text, images, or videos based on training data. “You don’t need to delve into the deepest, most theoretical thing from the very beginning.”

Both Wang and Breen reached out to professors after their first-year fall semesters and were told they needed to take more courses. However, options existed for both of them: Wang did human-computer interaction research over the summer, while Breen – who studies systems and networking – was assigned a project by the professor with whom he continues to do research, to help him prepare and gain more skills.

“People who don’t rush through the curriculum [aren’t] at a disadvantage at all, as long as you make an effort to find whatever area of computer science you’re most interested in,” Breen said.

After the panel, attendees engaged with graduate researchers who presented their work during a poster session. The researchers recognized the opportunity at ACSU Research Night to increase the profile of their work.

Presenting during the poster session, Wentao Guo ’22, M.Eng ’23, was excited specifically by the chance to bring “attention to my work,” which involves deep learning models.

Joy Ming, a doctoral student in the field of information science, presented human-computer interaction research that seeks to support healthcare workers who work in the homes of older adults and people with disabilities.

“A lot of the work that they’re doing is really undervalued or invisible, and so my project’s goal is to make that a little more visible using data collection and data analysis,” Ming said.

Interested students can take part in research during the academic year, either for course credit or pay. In addition, the Bowers Undergraduate Research Experience (BURE) is accepting applications until March 27. Formerly known as the Computer Science Undergraduate Research Program (CSURP), the 10-week summer program provides students guidance from faculty and Ph.D. students, funding of up to $5,000, a series of talks on technical and career projects, and social experiences with other program participants.

By Chris Walkowiak ‘26, a student writer for the Cornell Ann S. Bowers College of Computing and Information Science’s communications team.

Date Posted: 3/16/2023

Human assumptions regarding language usage can lead to flawed judgments of whether language was AI- or human-generated, Cornell Tech and Stanford researchers found in a series of experiments.

While individuals’ proficiency at detecting AI-generated language was generally a toss-up across the board, people were consistently influenced by the same verbal cues, leading to the same flawed judgments.

Participants could not differentiate AI-generated from human-generated language, erroneously assuming that mentions of personal experiences and the use of “I” pronouns indicated human authors. They also thought that convoluted phrasing was AI-generated.

“We learned something about humans and what they believe to be either human or AI language,” said Mor Naaman, professor at the Jacobs Technion-Cornell Institute at Cornell Tech and of information science at the Cornell Ann S. Bowers College of Computing and Information Science. “But we also show that AI can take advantage of that, learn from it and then produce texts that can more easily mislead people.”

Maurice Jakesch, Ph.D. ’22, a former member of Naaman’s Social Technologies Lab at Cornell Tech, is lead author of “Human Heuristics for AI-Generated Language Are Flawed,” published March 7 in Proceedings of the National Academy of Sciences. Naaman and Jeff Hancock, professor of communication at Stanford University, are co-authors.

The researchers conducted three main experiments and three more to validate the findings, involving 4,600 participants and 7,600 “verbal self-presentations” – profile text people use to describe themselves on social websites. The experiments were patterned after the Turing test, developed in 1950 by British mathematician Alan Turing, who devised the test to measure a machine’s ability to exhibit intelligent behavior equal to or better than a human.

Instead of testing the machine, the new study tested humans’ ability to detect whether the exhibited intelligence came from a machine or a human. The researchers trained multiple AI language models to generate text in three social contexts where trust in the sender is important: professional (job application); romantic (online dating); and hospitality (Airbnb host profiles).

In the three main experiments, using two different language models, participants identified the source of a self-presentation with only 50% to 52% accuracy. But the responses, the researchers discovered, were not random, as the agreement between respondents’ answers was significantly higher than chance, meaning many participants were drawing the same flawed conclusions.
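
The "not random" finding hinges on inter-rater agreement: if everyone guessed by coin flip, pairs of raters would agree only about half the time, whereas a shared heuristic pushes agreement well above that even when accuracy stays at chance. A small simulation (invented raters and cues, not the study's data) illustrates the distinction:

```python
import random

random.seed(0)

def pairwise_agreement(labels_per_item):
    """Fraction of rater pairs that give the same label, pooled over items."""
    total, agree = 0, 0
    for labels in labels_per_item:
        n = len(labels)
        for i in range(n):
            for j in range(i + 1, n):
                total += 1
                agree += labels[i] == labels[j]
    return agree / total

# Each item either contains a first-person-pronoun cue or not.
items = [{"first_person": random.random() < 0.5} for _ in range(200)]

# Raters who all apply the same (flawed) heuristic: "human" iff the cue appears.
heuristic_raters = [
    [("human" if item["first_person"] else "ai") for _ in range(5)]
    for item in items
]

# Raters guessing uniformly at random.
random_raters = [
    [random.choice(["human", "ai"]) for _ in range(5)] for _ in items
]

print(pairwise_agreement(heuristic_raters))  # 1.0 - everyone applies the same rule
print(pairwise_agreement(random_raters))     # close to 0.5
```

Because AI text can carry the cue just as easily as human text, the heuristic raters agree perfectly with each other while still being no more accurate than the random ones, which is the pattern the study reports.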

The researchers conducted an analysis of the heuristics (the process by which a conclusion is reached) participants used in deciding whether language was AI- or human-generated, first by asking participants to explain their judgments, then following up with a computational analysis that confirmed these reports. People cited mentions of family and life experiences, as well as the use of first-person pronouns, as evidence of human language.

However, such language is equally likely to be produced by AI language models.

“People’s intuition runs counter to the current design of these language models,” Naaman said. “They produce text that is statistically probable – in other words, language that is common. But people tended to associate uncommon language with AI, a behavior that AI systems can then exploit to create language that is, as we call it, ‘more human than human.’”

In three pre-registered validation experiments, the authors show that, indeed, AI can exploit people’s heuristics to produce text that people more reliably rate as human-written than actual human-written text.

People’s reliance on flawed heuristics in identifying AI-generated language, the authors wrote, is not necessarily indicative of increased machine intelligence. It doesn’t take superior intelligence, they said, to “fool” humans – just a well-placed personal pronoun, or a mention of family.

The authors note that while humans’ ability to discern AI-generated language might be limited, language models that are “self-disclosing by design” would let the user know that the information is not human-generated while preserving the integrity of the message.

This could be achieved either by a language that is clearly nonhuman (avoiding the use of informal speech) or through “AI accents” – a dedicated dialect that could “facilitate and support people’s intuitive judgments without interrupting the flow of communication,” they wrote.

Hancock, a faculty member at Cornell from 2002-15, said this work is “one of the last nails in the coffin” of the Turing test era.

“As a way of thinking about whether something’s intelligent or not,” he said, “our data pretty clearly show that, in pretty important ways of being human – that is, describing yourself professionally, romantically or as a host – it’s over. The machine has passed that test.”

Naaman said this work – particularly relevant with the arrival of AI tools such as ChatGPT – highlights the fact that AI will increasingly be used as a tool to facilitate human-to-human communication.

“This is not about us talking to AI. It’s us talking to each other through AI,” he said. “And the implications that we show on trust are significant: People will be easily misled and will easily distrust each other – not AI.”

Funding for this work came from the National Science Foundation and the German National Academic Foundation.

By Tom Fleischman, Cornell Chronicle

This story was originally published in the Cornell Chronicle.

Date Posted: 3/10/2023

The National Science Foundation (NSF) has selected two faculty from the Cornell Ann S. Bowers College of Computing and Information Science to receive Faculty Early Career Development (CAREER) Awards.

Immanuel Trummer, assistant professor of computer science, and Cheng Zhang, assistant professor of information science, will each receive approximately $600,000 over the next five years to support their research. NSF provides these sustaining grants to early-career scientists who they believe will advance their fields and serve as role models within their institutions.

Trummer’s work focuses on improving database performance through tuning, a series of decisions about how a database processes information internally. Specifically, he leverages large language models to support automated database performance tuning.

The performance of database management systems depends on various tuning decisions, including settings for internal configuration parameters as well as the creation of auxiliary data structures. Making such decisions by hand is difficult, which has motivated the development of automated tuning tools. However, crucial information for database tuning is often contained in text documents, such as the database manual or text describing specific data sets and their properties. The current generation of tuning tools cannot exploit such information, making them inefficient. However, the latest generation of text processing methods – large language models based on the Transformer architecture – is often able to extract information from text with little to no task-specific training data. Trummer plans to exploit such methods to parse relevant text for database tuning, extracting information that helps to guide automated tuning efforts.
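
To make the extraction step concrete, here is a minimal sketch of pulling tuning hints out of manual text. The parameter names (shared_buffers, random_page_cost, work_mem) are illustrative PostgreSQL-style settings, and the regular expression is a deliberately simplified stand-in for the language-model extraction Trummer's research actually targets:

```python
import re

# Toy manual excerpt; a real pipeline would feed pages of documentation
# to a large language model rather than a regular expression.
manual = """
For write-heavy workloads, set shared_buffers to 25% of system memory.
If the working set fits in RAM, set random_page_cost to 1.1.
Increase work_mem for queries with large sorts.
"""

# Stand-in extractor: find "set <parameter> to <value>" recommendations.
# A Transformer-based model would handle far more varied phrasing,
# including the "Increase work_mem ..." advice this pattern misses.
pattern = re.compile(r"set (\w+) to ([\w%]+(?:\.\d+)?)")
hints = dict(pattern.findall(manual))
print(hints)
# {'shared_buffers': '25%', 'random_page_cost': '1.1'}
```

Hints extracted this way could then seed an automated tuner's search space instead of having the tuner explore every configuration blindly.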

As the director of the Smart Computer Interfaces for Future Interactions (SciFi) lab, Zhang designs intelligent, privacy-sensitive, and minimally obtrusive wearables that can predict and understand human behavior and intentions in daily activities. Currently, computers struggle to recognize everyday activities due to the lack of high-quality behavioral data, such as body postures. Zhang's team addresses this through wearables endowed with artificial intelligence (AI)-powered active acoustic sensing that track and interpret human body postures of the hands, limbs, face, eyes, and tongue. The research aims to bridge the gap using cutting-edge AI techniques to enable applications in human activity recognition, telemedicine, and improving computer accessibility for individuals with hearing or speech impairments. Ultimately, the SciFi Lab seeks to create systems that function efficiently in real-world settings while protecting user privacy.

Trummer and Zhang join two other Cornell Bowers CIS faculty who recently received NSF CAREER Awards: Sumanta Basu, assistant professor of statistics and data science, and Tapomayukh Bhattacharjee, assistant professor of computer science.

By Patricia Waldron, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 3/06/2023

Yongqi (Kay) Zhang ‘22 is understandably excited. She’s settling into her new apartment in a new city – Rogers, Arkansas – and she’s already entertaining thoughts about getting a dog.

In a few short days, the recent graduate of the Cornell Information Science master of professional studies (MPS) program will begin her career as a user experience (UX) designer for Tyson Foods.

“The MPS program is a preparation kit for a full-time career,” she said. “It gave me more skills and knowledge of UX design, provided project experiences that make for a good portfolio, and taught me how to network.”

Her path to Arkansas by way of Ithaca began in south China. There, she majored in computer science at the Chinese University of Hong Kong. Intent on pursuing a master’s degree in UX, Zhang went online to find the best programs and discovered Cornell Information Science’s leading MPS, a one-year professional master’s program where students receive elite education from tech leaders and, come graduation, stand out in the competitive job market. The program’s flexibility, its capstone projects with real companies, and UX being one of four optional focus areas available to students – all of it appealed to Zhang.

“Other programs may have a set of required courses, and the selections are limited,” she said. But within Cornell’s MPS in information science, “you can choose a lot. Although I wanted to be a UX designer, it was also important to me to have other opportunities and possibilities to explore.”

In Spring 2022, she arrived at Cornell’s Ithaca campus with a firm plan for her year of MPS studies. She would take the bulk of her courses in the spring, ready her portfolio during the summer, and juggle a manageable course load while applying for jobs during her final fall semester. That’s precisely what Zhang did.

“I was prepared, and I felt I had plenty of time to do job searching [in the fall],” she said. “Having a plan is important.”

Not all of it was stress-free. Being an international student new to upstate New York and adjusting to speaking English regularly were tough transitions for Zhang, who describes herself as shy. And during her job search, receiving rejection letters or no responses to applications was frustrating and demoralizing. Weekly meet-ups with friends from her local church helped her navigate these challenges.

“You need to have some activities to relieve stress,” said Zhang, when mulling advice to students. “I went to church and talked with people. Managing mental health is important.”

Zhang may have had a firm plan in place for her studies at Cornell, but there was enough space for pleasant surprises. She branched out from her information science courses and took electives in consumer behavior and entrepreneurship, both of which became favorites.

As for courses within the information science curriculum, she noted two that were particularly illuminating. Qualitative User Research and Design Methods (INFO 5400) – led by Gilly Leshed, senior lecturer in information science – bolstered her UX research skills. “Being a UX designer means being an effective UX researcher too,” she said.

Professional Career Development (INFO 5905) – led by Rebecca Salk, the MPS career advisor – readied Zhang for the job hunt, from preparing her portfolio and navigating interviews to effective networking.

For the program’s capstone project course, MPS Project Practicum (INFO 5900), Zhang and her teammates partnered with TISTA, a company specializing in providing IT services to federal, state, and local governments, to build a platform to support mental health among veterans.

“Having a complete project like that in my portfolio was helpful for my job interview,” she said. INFO 5900 “was more than just building the project; my teammates and I learned a lot about communication and the importance of interpersonal skills.”

To prospective students considering Cornell’s MPS program in information science, Zhang advised applying with clear career goals.

“Set your plan early,” she said, and “go for it.”

Connect with Kay Zhang on LinkedIn.

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 3/01/2023

Undergraduate students and future innovators who recently chose the Cornell Ann S. Bowers College of Computing and Information Science as their academic home were recognized and celebrated during the college’s New Majors Welcome event.

More than 200 newly declared majors to Cornell Bowers CIS enjoyed dinner and conversation – and received red and black college scarves and beanies – at the event, which was held Wednesday, Feb. 15, in the Statler Hotel Ballroom. Roughly 30 faculty members and several support staff from the college’s three departments – computer science, information science, and statistics and data science – ate with students and shared knowledge about the departments and the many opportunities available to Cornell Bowers CIS undergraduates, whether in academics, research, or clubs.

Attendees heard from student leaders in campus groups like Women in Computing at Cornell, the Association of Computer Science Undergraduates, Underrepresented Minorities in Computing, and the Information Science Student Association, as well as staffers from the college’s Student Service Office and Office of Diversity, Equity, and Inclusion.

“You are joining our community at one of the most exciting times in the fields of computer science, information science and statistics and data science,” said Kavita Bala, dean of Cornell Bowers CIS, in her opening remarks. “New computing and information technologies are poised to upend centuries-old institutions like transportation, healthcare, commerce, urban infrastructure, finance, and more.” 

Undergraduates are admitted to Cornell through one of three admitting colleges: Cornell Engineering, the College of Agriculture and Life Sciences, or the College of Arts and Sciences. They then find their way to Cornell Bowers CIS by declaring majors such as biometry and statistics, information science, computer science, statistical science, or information science, systems, and technology.

A graphic outlining the undergrad opportunities available at Cornell Bowers CIS

More students than ever are flocking to Cornell Bowers CIS to gain fundamental knowledge and skills needed to make an impact on our increasingly connected world. Today, the college’s undergraduate majors total more than 2,000, reflecting the relevance of the college’s leading, interdisciplinary education.

“There are so many fields you can get involved with,” Bala said, “and when you get that degree and go out there, you will have an incredible and lasting real-world impact.”

Date Posted: 2/27/2023
A color photo of a woman smiling for a photo

Emily Tseng is a doctoral student in information science (IS) from Singapore and Nashville, Tennessee. She earned a bachelor’s degree from Princeton University, with a focus on global health and mathematical modeling for infectious disease epidemics, and now studies human-computer interaction, machine learning, and data privacy at Cornell.

What is your area of research and why is it important?

I work broadly in human-computer interaction (HCI) and social computing, borrowing approaches from machine learning, computer security and privacy, and global health. I’m interested in how we build computational tools for new systems of caregiving. In fields like medicine, health, and social work, our efforts to improve people’s lives involve gathering large amounts of data on their trauma or pain, analyzing it using machine learning methods that are increasingly less subject to human oversight, and then using that analysis to make design and policy decisions. I want us to be able to do this rigorously and with attention to core human values like privacy, agency, and equity. To me, this is a combination of evolution in our data analysis techniques (e.g., privacy-preserving machine learning), our approaches to gathering and curating datasets (e.g., informed consent), and our technology design frameworks (e.g., participatory design). So far, I’ve worked specifically with text-based psychotherapy platforms, mobile data collection systems in home health care, and computer security and privacy for survivors of intimate partner violence.

What are the larger implications of this research?

Care is core to society, and care systems are increasingly being remade as systems for data collection and analysis—consider, for example, how much time your doctor spends checking boxes on your electronic health record. This means we can do amazing things with that data, like developing new therapeutics, forecasting epidemics, and improving care for underrepresented people—just look at the explosion in machine learning for health. But this also means we’re collecting more data about more people than ever before, and the history of research shows us that can often be extractive and harmful. My goal is to ensure we build the data-driven future of care systems responsibly and ensure this future is uplifting for data subjects and for broader society.

What does it mean to you to have been awarded a Microsoft Research Ph.D. Fellowship?

I’m thrilled to have been selected—it’s an honor and a massive statement of confidence for my work. More importantly, these awards always reflect a community’s worth of effort, and in my Ph.D. I have had the privilege of extraordinary support from my advisors, my peers, and the field of IS. I’m immensely grateful to them. I also take seriously my responsibility to pay it forward—I wrote up a blog post on my approach to fellowship applications that I hope can help other junior scholars. 

You received a best paper award at CHI 2022. Can you tell us about the paper?

It was a tremendous honor to be recognized by the CHI community! “Care Infrastructures for Digital Security and Privacy in Intimate Partner Violence” was a project I led out of the IPV Tech Research Group, which is spearheaded here at Cornell Tech by co-PIs Nicki Dell and Tom Ristenpart. Our research group is interested in characterizing the many ways digital technology exacerbates intimate partner violence (IPV): domestic violence, harassment, stalking, and other abuse by a current or former spouse or intimate partner. We pull that knowledge through to interventions that help victims at the Clinic to End Tech Abuse (CETA), a volunteer organization where survivors get 1:1 computer security and privacy support from consultants trained in the specific threat model of IPV. We call this approach “clinical computer security.”

Services like CETA can be a huge benefit to survivors facing targeted and persistent digital threats, but they’re often delivered as one-off tech support. In this paper, we presented an eight-month study of an approach to computer security and privacy inspired by the feminist ethic of care: think less Geek Squad and more primary care physician. We built various technical and social systems to make this work and used them in CETA to support 72 survivors over eight months (from December 2020 to August 2021). Then, through a reflexive qualitative study, we examined how well the model worked in practice. Our paper shows we were able to help more survivors and meet increasing demand for this type of care—but to grow the service, we need to reckon with tensions like ensuring safe connections to survivors, adapting to their changing needs, establishing boundaries on consultants’ time, and assessing risks in the face of uncertainty.

Since this paper was published, we’ve been using the protocol in CETA, and as of February 2023 we’ve helped over 300 survivors. In ongoing work, we’re building on this infrastructure to continue research with survivors on the many ways IPV affects their lives and to explore how the research process can be made both data-driven and participatory—stay tuned for that.

What are your hobbies or interests outside of your research or scholarship?

In 2020 I started playing soccer in various pickup groups in NYC as a pandemic coping mechanism. Now it’s a full-blown hobby: I play several times a week and follow all sorts of leagues. Talk to me about the women’s game especially!

Why did you choose Cornell to pursue your degree?

I was attracted to the interdisciplinarity baked into the IS Ph.D. Cornell offers top-flight training in computer science—especially machine learning—and exposure to cutting-edge ML across both industry and academia. Cornell also offers top-flight training in how to think about technology’s social contexts and consequences—and will have you critically examining what the words ‘data’ and ‘technology’ even mean. It can sometimes be frustrating to operate between traditions, but I personally find it freeing to pursue a research question from any and all angles and to learn from peers seeking to do the same.

Date Posted: 2/21/2023
A color photo of a woman smiling for a photo

Phoebe Sengers, professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science and science & technology studies in the College of Arts and Sciences, has been elected to the CHI Academy, an honorary group of leading scholars in the field of human-computer interaction.

Led by the Association for Computing Machinery Special Interest Group on Computer–Human Interaction (SIGCHI), the CHI Academy annually elects a cohort of scholars whose contributions have helped shape the discipline and/or industry, as well as spurred further research and innovation. Sengers is one of eight selected for this year’s cohort. She joins fellow Cornellians Sue Fussell, professor of information science in Cornell Bowers CIS and communication in the College of Agriculture and Life Sciences, and Tanzeem Choudhury, professor of information science and the Roger and Joelle Burnell Professor in Integrated Health and Technology at the Jacobs Technion-Cornell Institute at Cornell Tech, who were elected to the CHI Academy in 2016 and 2022, respectively.

Sengers’ work integrates ethnographic and historical analysis of the social implications of technology with design methods to suggest alternative future possibilities. Her approach brings critical, qualitative scholarship into close conversation with technology design practice, asking what ways of being and values are left out of the imagination of technology design, and what new alternatives for design may appear when we take them into account. Her current research focuses on technology on the rural and remote periphery, identifying how urban assumptions in the design of infrastructure tend to sideline rural communities; she is using this work to develop an alternative design imaginary that centers small-scale places.

At Cornell, Sengers directs the Culturally Embedded Computing research group. She is also affiliated with the departments of Computer Science, Visual Studies, and Art, a member of the Cornell Institute for Digital Agriculture, and a faculty fellow of the Atkinson Center for Sustainability.

Her many awards and honors include a Cornell Public Voices Fellowship (2017), a fellowship in the Cornell Society for the Humanities (2007), a National Science Foundation (NSF) CAREER award (2003), and a Fulbright Fellowship (1998).

By Louis DiPietro, a writer for the Cornell Ann S. Bowers College of Computing and Information Science.

Date Posted: 2/16/2023