MovIPrint, 01/2017 – present, Studio TiX, London

Collaboration: Yen-Ting Cho, Yen-Ling Kuo

Body movement enables humans to interact with their environment and make sense of the world. Whilst previous research has examined body movement as a way to create fabricable models, it treats the human body merely as a means of control. We argue that full-body movement generates further dynamics that are not yet being captured and utilized, and that these dynamics help create more organic and unexpected crafts using existing fabrication methods. Based on our explorations of fabrication and motion-tracking systems, we present MovIPrint, a framework that encodes full-body movement into fabricable models. This work was presented at ACM Multimedia (ACMMM 2019).
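MovIPrint's actual encoding is described in the paper; purely as an illustration of the general idea, the Python sketch below sweeps a printable tube along a joint trajectory and writes it out as an OBJ mesh. The trajectory here is synthetic, and all names and parameters are hypothetical; a real pipeline would feed in positions from the motion-tracking system.

```python
import numpy as np

def sweep_mesh(trajectory, radius=0.05, sides=8):
    """Sweep a circular cross-section along a joint trajectory,
    returning vertices and quad faces for a printable tube."""
    verts, faces = [], []
    for i, p in enumerate(trajectory):
        # Tangent of the path, used to orient the cross-section.
        t = trajectory[min(i + 1, len(trajectory) - 1)] - trajectory[max(i - 1, 0)]
        t /= np.linalg.norm(t) + 1e-9
        # Build an orthonormal frame around the tangent.
        a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(t, a); u /= np.linalg.norm(u)
        v = np.cross(t, u)
        for k in range(sides):
            ang = 2 * np.pi * k / sides
            verts.append(p + radius * (np.cos(ang) * u + np.sin(ang) * v))
    # Connect consecutive cross-sections with quads.
    for i in range(len(trajectory) - 1):
        for k in range(sides):
            a0 = i * sides + k
            a1 = i * sides + (k + 1) % sides
            faces.append((a0, a1, a1 + sides, a0 + sides))
    return np.array(verts), faces

def write_obj(path, verts, faces):
    """Write the mesh in OBJ format (vertex indices are 1-based)."""
    with open(path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x:.5f} {y:.5f} {z:.5f}\n")
        for face in faces:
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# Synthetic stand-in for a tracked hand-joint trajectory (e.g. a Kinect skeleton).
t = np.linspace(0, 4 * np.pi, 200)
hand = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
v, f = sweep_mesh(hand)
write_obj("movement_tube.obj", v, f)
```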

[Taiwan Design Expo’17]


MovISee, 03/2012 – present, Studio TiX, London

Collaboration: Yen-Ting Cho, Yen-Ling Kuo

MovISee is software that lets people create personal visual outputs. We use a depth camera to create a mixed-reality space in which people explore selected information and ultimately discover how their body movement can produce composite, customized visual outputs. This work was presented at SIGGRAPH Asia 2016.
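As a loose illustration of turning depth data into imagery (not MovISee's actual rendering), the sketch below maps a single depth frame to a colour image, so nearer surfaces, typically the mover's body, stand out. The frame here is synthetic; a real deployment would read frames from a depth camera SDK.

```python
import numpy as np

def depth_to_visual(depth, near=0.5, far=4.0):
    """Map a depth frame (metres) to an RGB image: nearer surfaces
    get warmer colours, with banded green contours for texture."""
    d = np.clip((depth - near) / (far - near), 0.0, 1.0)
    rgb = np.empty(depth.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (255 * (1.0 - d)).astype(np.uint8)                     # red: proximity
    rgb[..., 1] = (255 * np.abs(np.sin(6 * np.pi * d))).astype(np.uint8)  # green: contour bands
    rgb[..., 2] = (255 * d).astype(np.uint8)                              # blue: distance
    return rgb

# Synthetic stand-in for one depth frame.
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
depth = 2.0 + 1.5 * np.sin(xx / 40.0) * np.cos(yy / 40.0)
frame = depth_to_visual(depth)
print(frame.shape, frame.dtype)  # (240, 320, 3) uint8
```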

[Official Website]


Face Interface, 06/2014 – 09/2014, National Taiwan University, Taiwan

Collaboration: Da-Yuan Huang, Chiao-Yin Hsiao

An on-body interface turns the human body into a touch surface. However, such interfaces are usually limited by a small number of touch widgets and by the mnemonics needed for command mappings. This work presents the face as an on-body interface. Physical affordances on the face provide placeholders for various touch widgets, enabling rich commands, while the semantics behind each facial feature suggest natural anchors for establishing command mappings.
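As a toy illustration of semantic command mapping (not the mappings actually studied in this work), the sketch below binds facial regions to commands whose meanings echo the region, and dispatches a detected face-touch to a command. All region and command names are hypothetical.

```python
# Hypothetical mapping: each facial feature doubles as a touch widget,
# with its semantics suggesting the command (eye -> vision, ear -> audio).
FACE_WIDGETS = {
    "left_eye":    "toggle_camera",
    "right_eye":   "take_screenshot",
    "nose":        "go_home",
    "left_cheek":  "previous_item",
    "right_cheek": "next_item",
    "chin":        "confirm",
    "left_ear":    "volume_down",
    "right_ear":   "volume_up",
}

def dispatch(touched_region: str) -> str:
    """Resolve a detected face-touch region into a command string."""
    command = FACE_WIDGETS.get(touched_region)
    if command is None:
        raise ValueError(f"no widget bound to {touched_region!r}")
    return command

print(dispatch("nose"))  # go_home
```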


Driver Assistance System, 01/2013 – 07/2014, National Taiwan University, Taiwan

Collaboration: Chun-Kang Peng, Kuan-Wen Chen, Yong-Sheng Chen

This project uses wide-angle fisheye cameras to significantly reduce the number of cameras required to build a vehicle surround-view monitoring system. To overcome fisheye lens distortion, we calibrated the cameras to obtain accurately rectified views. We also integrated depth images to adjust the image projection model, removing ghosting and distortion when combining images from multiple cameras. Given the high-quality, perspective-corrected images, we designed an assistive driving system that better visualizes vehicles and obstacles on the road from third-person viewpoints. This work was presented at the Asian Conference on Computer Vision (ACCV ’14).
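As a minimal sketch of the rectification step only, the snippet below undistorts one fisheye frame with OpenCV's fisheye camera model. The intrinsics and distortion coefficients are placeholders (in practice they come from a calibration pass such as cv2.fisheye.calibrate), and the project's depth-aware multi-camera fusion goes well beyond this.

```python
import cv2
import numpy as np

# Placeholder intrinsics; real values come from fisheye calibration.
K = np.array([[280.0, 0.0, 640.0],
              [0.0, 280.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([-0.05, 0.01, 0.0, 0.0])  # equidistant-model distortion coeffs

# Stand-in frame; a real system would grab frames from the camera.
fisheye_img = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
h, w = fisheye_img.shape[:2]

# Build the per-pixel remapping once, then rectify every frame with it.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(fisheye_img, map1, map2,
                      interpolation=cv2.INTER_LINEAR,
                      borderMode=cv2.BORDER_CONSTANT)
cv2.imwrite("rectified_frame.jpg", rectified)
```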


i-m-Cave, 11/2013 – 03/2014, National Taiwan University, Taiwan

Collaboration: Da-Yuan Huang, Shen-Chi Chen, Li-Erh Chang, Po-Shiun Chen

i-m-Cave is a multi-touch tabletop system for virtually touring the Mogao Caves. To recreate the experience of such a tour, we conducted a field study and identified two key design considerations: exploration and restoration. For exploration, we developed a tangible figurine capable of human-like neck extension and flexion, which users control to freely visit every corner of the virtual caves. For restoration, users restore digital artifacts and observe their rejuvenation and aging with mid-air hand gestures and mobile devices, as if shuttling back and forth between the present and the past. This work was presented at the IEEE International Conference on Multimedia and Expo (ICME ’14).
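Purely as a toy illustration of the restoration interaction (not the system's implementation), the sketch below maps a mid-air hand position to a point on an artifact's timeline and to a texture-blend factor for the renderer. All ranges, years, and names are invented.

```python
def hand_to_year(hand_x, x_min=-0.4, x_max=0.4, year_min=700, year_max=2014):
    """Map horizontal mid-air hand position (metres, e.g. from a depth
    sensor) to a point on the artifact's hypothetical timeline."""
    t = min(max((hand_x - x_min) / (x_max - x_min), 0.0), 1.0)
    return round(year_min + t * (year_max - year_min))

def blend_weight(year, year_min=700, year_max=2014):
    """Blend factor between the restored texture (0.0) and the
    aged texture (1.0) of a mural, fed to the renderer."""
    return (year - year_min) / (year_max - year_min)

year = hand_to_year(0.1)
print(year, round(blend_weight(year), 2))  # sweep the hand to scrub time
```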

[Official Website]


TouchSense, 08/2013 – 12/2013, National Taiwan University, Taiwan

Collaboration: Da-Yuan Huang, Ming-Chang Tsai, Ying-Chao Tung, Min-Lun Tsai, Li-Wei Chan

This project, TouchSense, expands the touchscreen input vocabulary by distinguishing which areas of the user's finger pad contact the touchscreen. It requires minimal touch input area and minimal movement, making it especially suitable for wearable devices such as smart watches and smart glasses. Results from two human-factor studies showed that users could tap a touchscreen with five or more distinct areas of their finger pads, and that they could tap with more distinct areas closer to their fingertips. We developed a TouchSense smart-watch prototype using inertial measurement sensors and built two example applications, a calculator and a text editor. We also collected user feedback via an explorative study. This work was presented at CHI ’14.
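The prototype's actual classifier is described in the paper; as a rough sketch of how an inertial sensor could separate pad areas, the code below thresholds the finger's pitch, estimated from the gravity direction in the accelerometer frame at the moment of the tap. The thresholds and area names are hypothetical.

```python
import math

# Hypothetical pitch thresholds (degrees) separating five pad areas,
# from the fingertip (steep pitch) down to the pad base (flat).
THRESHOLDS = [60, 45, 30, 15]
AREAS = ["tip", "upper_pad", "mid_pad", "lower_pad", "base"]

def pitch_from_accel(ax, ay, az):
    """Estimate finger pitch from the gravity vector measured by the
    accelerometer at tap time (finger assumed momentarily static)."""
    return math.degrees(math.atan2(-ax, math.hypot(ay, az)))

def classify_tap(ax, ay, az):
    """Map one accelerometer reading to a finger-pad contact area."""
    pitch = pitch_from_accel(ax, ay, az)
    for threshold, area in zip(THRESHOLDS, AREAS):
        if pitch >= threshold:
            return area
    return AREAS[-1]

print(classify_tap(-0.9, 0.1, 0.4))  # steep finger -> "tip"
```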

[Video]