Daniel Leithinger is an Assistant Professor of Computer Science at the ATLAS Institute at the University of Colorado Boulder. His research in Human-Computer Interaction (HCI) focuses on the design of novel shape-changing interfaces and spaces. Together with his team of graduate researchers at the THING Lab, he combines vision-driven and user-centered design with storytelling and engineering to produce award-winning works published at academic conferences such as ACM CHI, UIST, DIS, and TEI. Daniel received his PhD at the MIT Media Lab in 2015 and co-founded Lumii (now Fathom Optics), a glasses-free 3D display and print company. He has received design awards from Red Dot, Fast Company, the International Design Excellence Awards, and Laval Virtual, and has exhibited works at Ars Electronica, SIGGRAPH, and the Cooper Hewitt Design Museum.

Talk: "Designing Elastic Space"

Watch this talk via Zoom // passcode: 357582

Abstract: We are at the cusp of emerging digital realities that promise virtual environments in which to play, work, and learn online. But interacting with them comes at the cost of separating us from the real world around us, and whenever we try to integrate the physical with the digital, friction emerges. What if, instead, the physical world were as malleable as the virtual, with objects that dynamically transform to support the astonishing dexterity and expressiveness of the human body? My talk outlines how this vision of an Elastic Space has informed a series of prototypes ranging from shape-shifting tables to transforming swarm robots. My research addresses technical challenges through novel materials, actuators, and modular robotics. Yet even more important than these technical developments is democratizing how we design Elastic Space. If reprogrammable physical environments are to become a reality, we cannot rely on the intuition of a few select decision makers. Instead, we need to join forces with diverse artists, designers, and stakeholders to develop equitable, collaborative design tools for rapid ideation and iteration. My talk proposes how we can accomplish this through a combination of low-fidelity prototyping, mixed reality, and storytelling.