Ryo Suzuki is an Assistant Professor of Computer Science at the University of Calgary, where he directs the Programmable Reality Lab. Before joining the University of Calgary, he earned his PhD at the University of Colorado Boulder in 2020. His research mission is to enhance human thought and creativity by transforming the entire living environment into a dynamic space for thought, where people can think through tangible and spatial exploration with real objects in the real world, rather than with virtual objects on screens. Since 2016, he has published more than thirty full papers at top HCI and robotics venues, including CHI, UIST, IROS, and ICRA, and has received three paper awards. He regularly serves as a program committee member for CHI and UIST, among other venues. He is also currently a part-time researcher at Google AR and a visiting professor at Tohoku University. Previously, he worked as a research intern at Stanford University, UC Berkeley, the University of Tokyo, Adobe Research, and Microsoft Research.
Abstract: Today's AI interfaces are predominantly confined to computer screens, leaving current AI systems unable to engage with and respond to our physical world. My research goal is to shift this paradigm toward real-world-oriented human-AI interaction, where AI is blended into our everyday lives by seamlessly integrating augmented reality (AR) and AI interfaces. Toward this goal, I have explored three research directions: 1) machine learning-driven ubiquitous tangible interfaces: transforming everyday objects into ubiquitous tangible interfaces via AR-integrated interactive machine learning; 2) AI-powered interactive content creation: converting static augmented reality content into interactive experiences through AI-powered automated content generation; and 3) AI-mediated augmented communication: enhancing human-to-human communication through AI-mediated augmented reality assistants. Building on these themes, I also outline future research directions that focus on incorporating recent advances in generative AI, large language models, and advanced computer vision models into intelligent augmented reality interfaces. In the long run, I believe this seamless integration of AR and AI will significantly enhance human thought and creativity, as it allows us to think and collaborate through our entire body and space, rather than confining ourselves to small rectangular screens. My vision is that the future of computers and AI will no longer be a "tool" for thought, but rather a dynamic "space" for thought, in which people can live and think through tangible and spatial exploration, much as we do in a science museum today. Toward this vision, I discuss how the future of computing could transform our entire living world into a dynamic space for thought with the power of AR and AI.