Gierad Laput is a Ph.D. candidate at Carnegie Mellon University’s School of Computer Science. His research in HCI lies at the intersection of interactive systems, sensing, and applied machine learning. He designs, builds, and evaluates novel interactive technologies that greatly enhance input expressivity for users and contextual awareness for devices. His research has been recognized with a Google Ph.D. Fellowship, a Swartz Entrepreneurial Fellowship, a Qualcomm Innovation Fellowship, an Adobe Research Fellowship, and a Disney Research Fellowship. He is a recipient of the Fast Company Innovation by Design Award, along with six Best Paper Awards and nominations at premier venues in human-computer interaction, and he serves as Editor-in-Chief of XRDS, ACM’s flagship magazine for students.
Talk: Context-Driven Implicit Interactions
Abstract: As computing proliferates into everyday life, systems that understand people’s context of use are of paramount importance. Regardless of whether the platform is a mobile device, a wearable, or embedded in the environment, context offers an implicit dimension that will become highly important if we are to power more human-centric experiences. Context-driven sensing will become a foundational element for many high-impact applications, from specific domains such as elder care, health monitoring, and empowering people with disabilities, to much broader areas such as smart infrastructures, robotics, and novel interactive experiences for consumers.
In this talk, I discuss the construction and evaluation of sensing technologies that can be practically deployed and yet still greatly enhance contextual awareness, primarily drawing upon machine learning to unlock a wide range of applications. I attack this problem area on two fronts: 1) supporting sensing expressiveness via context-sensitive wearable devices, and 2) achieving general-purpose sensing through sparse environment instrumentation. I discuss algorithms and pipelines that extract meaningful signals and patterns from sensor data to enable high-level abstraction and interaction. I also discuss system and human-centric challenges, and I conclude with a vision of how rich contextual awareness can enable more powerful experiences across broader domains.
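To make the sensing-plus-machine-learning approach described above concrete, the sketch below shows a toy context-classification pipeline: raw sensor windows are summarized with simple time- and frequency-domain features and fed to an off-the-shelf classifier. The sensor type, window size, feature set, and context labels are illustrative assumptions, not the speaker's actual system or pipeline.

```python
# Illustrative sketch only: featurize raw sensor windows and train a classifier
# to infer a coarse "context" label. All parameters here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

WINDOW = 256  # samples per analysis window (assumed ~2.5 s at ~100 Hz)

def featurize(window: np.ndarray) -> np.ndarray:
    """Summarize one (WINDOW x 3) block of 3-axis accelerometer samples
    with simple time- and frequency-domain statistics."""
    spectrum = np.abs(np.fft.rfft(window, axis=0))
    return np.concatenate([
        window.mean(axis=0),    # per-axis mean
        window.std(axis=0),     # per-axis variability
        spectrum.mean(axis=0),  # average spectral magnitude
        spectrum.max(axis=0),   # dominant spectral peak
    ])

# Synthetic stand-in data: 500 windows, each labeled with one of three
# hypothetical contexts (e.g., 0=idle, 1=walking, 2=typing).
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, WINDOW, 3))
labels = rng.integers(0, 3, size=500)

X = np.stack([featurize(w) for w in raw])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In a real deployment, the featurization and model would be tailored to the sensor modality and target contexts, and the synthetic data above would be replaced with labeled recordings from actual devices or environments.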