Sang-won's research focuses on robotic interfaces that work together with human users. His work takes the form of interactive installations, mechatronic devices for visual and musical exploration, and novel digital interfaces. His vision is to create synergy between machines and humans, with technology becoming a natural extension of our hands that empowers us. In this way, he challenges the fear and criticism that AI and automation will replace human endeavors, by showing the beautiful outcomes we and machines can accomplish together.

The impact of his research spans publications at CHI, TEI, and NIME, journals including LEONARDO and IEEE Pervasive Computing, design awards, and art exhibitions. Several of his works have received the Fast Company Innovation by Design Award and have been shown in art exhibitions at SIGGRAPH Asia, CHI, TEI, and elsewhere. Recently, he exhibited at the Asia Culture Center alongside some of the most prominent new media artists. In 2014, he was an artist-in-residence at Microsoft Research Studio 99, where he created Remnance of Form, an interactive installation with transforming shadows. His work has received extensive media coverage from BBC, WIRED, Discovery, Fast Company, and others.

He currently works at ArtMatr as a creative director, collaborating with artists such as Barnaby Furnas using the company's robotic printing technology. He received his PhD from the MIT Media Lab in 2018. Prior to that, he was a software engineer at Samsung Electronics, where he led the software development of eyeCan, an open-source DIY eye-mouse designed for people with motor disabilities. This project became the foundation of Samsung's C-LAB. The eyeCan project was covered by major newspapers in Korea, and he was invited to speak at TEDx events, the Seoul Digital Forum, and the Tech Plus Forum. He received his Bachelor's and Master's degrees from KAIST, focusing on 3D computer vision and machine learning.

Talk: Integrated Human-Machine Expression

Abstract: Throughout history, we have augmented our physical abilities with machines. Some of the earliest examples go back to the 13th century, when flying machines and the ideas behind today's exoskeletons were first conceived. Today, as technology permeates every aspect of our lives, it is not hard to envision a much closer integration of machines into the tasks we carry out.

This talk will present robotic interfaces that illustrate the varied ways machine actions can be coordinated with our hands in artistic and musical contexts. Each consists of multiple iterations of actual, tested designs: a series of collaborative human-drone drawing systems enabling novel expressive capabilities; a series of semi-automated guitar systems enabling extended musical expression as well as new instrument-learning opportunities; and finally an oil-based painting machine that is an ongoing work. The experiments performed with these prototypes give insight into the impact of such robotic integration on the human user: the user is nudged to adapt to the new condition and recalibrate the expectations associated with a given input action; the division of roles allows the user to explore and understand aspects beyond their existing skills or physical limits; and the robotic extension inspires practice outside their regular routine. The insights from these experiments point to various paths for future research, regarding the interface design, control, and agency of such collaborative systems, and the new aesthetics and creative processes they make viable.