Open access
Author
Date
2019
Type
Doctoral Thesis
ETH Bibliography
yes
Abstract
Capturing and modeling dynamic 3D shapes is a core problem in computer graphics and essential in many application areas. In this thesis, we present input devices and complementary algorithms to digitize movement and deformation. We design our devices to be self-sensing, meaning that they rely on internal sensors only and therefore require neither cameras nor any other external setup. This makes them highly mobile and unaffected by issues that typically plague vision-based systems, such as lighting changes, fast motions, objects outside the field of view and, above all, occlusions.
In the first part, we address the problem of articulated 3D character animation. We present a modular and tangible input device with embedded Hall effect sensors; in contrast to existing hardware solutions, our design is not prone to gimbal lock. We demonstrate in a user experiment that this leads to a speedup by a factor of two. Furthermore, we introduce an algorithm that derives small, easily controllable input devices from professional rigs, together with a mapping from the devices' reduced degrees of freedom to the full degrees of freedom of the unmodified rigs. We discuss a variety of animation results created with characters available online.
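To make the reduced-to-full mapping concrete, the following is a minimal sketch of one way such a mapping could be realized, assuming a simple linear least-squares fit on example pose pairs. The function names and synthetic data are illustrative assumptions; the thesis' actual mapping may be considerably more involved.

```python
import numpy as np

# Hypothetical illustration: map a device's reduced pose parameters
# (e.g. a few joint angles) to the full rig parameters via a linear
# least-squares fit on example pose pairs. This only sketches the
# general idea of a reduced-to-full mapping.

def fit_reduced_to_full_map(reduced_poses, full_poses):
    """reduced_poses: (n, k), full_poses: (n, m) with k << m."""
    # Append a constant column so the map includes an offset term.
    X = np.hstack([reduced_poses, np.ones((reduced_poses.shape[0], 1))])
    # Solve X @ W ~= full_poses in the least-squares sense.
    W, *_ = np.linalg.lstsq(X, full_poses, rcond=None)
    return W

def apply_map(W, reduced_pose):
    x = np.append(reduced_pose, 1.0)
    return x @ W

# Example: 3 device angles driving a 20-parameter rig (synthetic data).
rng = np.random.default_rng(0)
reduced = rng.uniform(-1, 1, size=(50, 3))
full = reduced @ rng.normal(size=(3, 20)) + 0.1
W = fit_reduced_to_full_map(reduced, full)
print(apply_map(W, reduced[0]).shape)  # (20,)
```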
In the second part, we introduce a method to capture dense surface deformations without requiring a line of sight. To that end, we propose a soft, stretchable sensor that densely measures its local area stretch. Moreover, we contribute a fabrication pipeline for such sensors that uses only tools readily available in modern fablabs. The sensor concept and fabrication are verified in a series of controlled experiments. Finally, a wearable sensor prototype paired with a data-driven prior is employed to capture moving body parts, such as a wrist or an elbow, and objects, such as an inflating and deflating balloon.
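As an illustration of how dense stretch readings could be combined with a data-driven prior, the sketch below assumes a linear PCA subspace built from example deformations and a least-squares projection of the readings onto it. The variable names and the choice of a PCA prior are assumptions, not the thesis' exact formulation.

```python
import numpy as np

# Hypothetical sketch: build a PCA basis from example stretch maps as a
# data-driven prior, then find the basis coefficients whose predicted
# per-cell stretch best matches the (noisy) sensor readings.

def build_prior(example_stretch_maps, n_components=8):
    X = np.asarray(example_stretch_maps)          # (n_examples, n_cells)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                # mean and basis (k, n_cells)

def fit_to_readings(mean, basis, readings):
    # Least-squares coefficients of the readings in the prior's subspace.
    coeffs, *_ = np.linalg.lstsq(basis.T, readings - mean, rcond=None)
    return mean + coeffs @ basis                  # regularized stretch field

rng = np.random.default_rng(1)
examples = rng.normal(1.0, 0.05, size=(100, 64))  # synthetic area-stretch data
mean, basis = build_prior(examples)
noisy = examples[0] + rng.normal(0, 0.02, size=64)
print(fit_to_readings(mean, basis, noisy).shape)  # (64,)
```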
In the third part, we propose a glove for accurate hand pose estimation. It builds on the stretch-sensor array concept introduced in the second part. The resulting glove features 44 sensors and is fully soft, stretchable and thin. We use a data-driven model that exploits the spatial layout of the sensor array itself. The glove's abilities are demonstrated in a series of ablative experiments exploring different models and calibration methods.
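The following minimal PyTorch sketch illustrates how a data-driven model could exploit the sensors' spatial layout, by arranging the 44 readings on a small 2D grid and applying a tiny convolutional network. The grid shape, network size and the 21-parameter output are illustrative assumptions, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: place the 44 stretch readings on a padded 8x6 grid
# (4 unused cells stay zero) so a small conv net can use neighbourhood
# structure, then regress an assumed 21-parameter hand pose.
N_SENSORS, GRID_H, GRID_W, N_JOINT_PARAMS = 44, 8, 6, 21

class GlovePoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * GRID_H * GRID_W, N_JOINT_PARAMS),
        )

    def forward(self, readings):                      # readings: (batch, 44)
        grid = readings.new_zeros(readings.shape[0], 1, GRID_H, GRID_W)
        grid.view(readings.shape[0], -1)[:, :N_SENSORS] = readings
        return self.net(grid)                         # (batch, 21)

model = GlovePoseNet()
print(model(torch.rand(2, N_SENSORS)).shape)          # torch.Size([2, 21])
```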
Permanent link
https://doi.org/10.3929/ethz-b-000388351
Publication status
published
External links
Search print copy at ETH Library
Publisher
ETH Zurich
Organisational unit
03911 - Sorkine Hornung, Olga / Sorkine Hornung, Olga
Funding
162958 - Deformation and Motion Modeling using Modular, Sensor-based Input Devices (SNF)