The Tech/Law Colloquium speaker for Tuesday, November 7, will be Ifeoma Ajunwa, an assistant professor at Cornell University’s Industrial and Labor Relations School (ILR) and a faculty associate member of Cornell Law School. Ajunwa studies how the law and private firms respond to job applicants or employees perceived as “risky.” She examines the legal parameters for assessing such risk and how technology and organizational behavior mediate firms’ efforts to reduce it. She also examines the sociological processes by which such risk is constructed, the discursive ways risk assessment is deployed in the maintenance of inequality, and the ethical issues that arise when firms offset risk onto employees.
Talk: Hiring by Algorithm
Watch the livestream here.
Abstract: In the past decade, advances in computing processes such as data mining and machine learning have prompted corporations to rely on algorithmic decision making, on the presumption that such decisions are efficient and fair. The use of these technologies in the hiring process represents a particularly sensitive legal arena. In this Article, I note the increasing use of automated hiring platforms by large corporations and how such technologies might facilitate unlawful employment discrimination, whether through (inadvertent) disparate impact on protected classes or through the technological capability to substitute facially neutral proxies for protected demographic details. I also parse some of the proposed technological solutions to discrimination in hiring and examine their potential for unintended outcomes. I argue that technology-based solutions should be employed only in support of legislative and litigation-driven redress mechanisms that encourage employers to adopt fair hiring practices. I make the policy argument that auditing automated hiring platforms should be a mandated business practice serving the ends of equal opportunity in employment. Notably, akin to Professor Ayres’ and Professor Gerarda’s Fair Employment Mark, employers that subject their automated hiring platforms to external audits could receive a certification that distinguishes them in the labor market. Finally, borrowing from tort law, I argue that an employer’s failure to audit its automated hiring platforms for disparate impact should serve as prima facie evidence of discriminatory intent under Title VII.