Journal: ACM Transactions on Graphics
Abbreviation
ACM Trans. Graph.
Publisher
Association for Computing Machinery
Search Results
Publications 1–10 of 251
- Wallpaper Pattern Alignment along Garment Seams
  Item type: Journal Article
  ACM Transactions on Graphics
  Wolff, Katja; Sorkine-Hornung, Olga (2019)

- Semantic Soft Segmentation
  Item type: Journal Article
  ACM Transactions on Graphics
  Aksoy, Yağiz; Oh, Tae-Hyun; Paris, Sylvain; et al. (2018)

- Star-Shaped Metrics for Mechanical Metamaterial Design
  Item type: Journal Article
  ACM Transactions on Graphics
  Martínez, Jonàs; Skouras, Mélina; Schumacher, Christian; et al. (2019)

- A system for retargeting of streaming video
  Item type: Conference Paper
  ACM Transactions on Graphics ~ Proceedings of ACM SIGGRAPH Asia 2009, Yokohama, Japan
  Krähenbühl, Philipp; Lang, Manuel; Hornung, Alexander; et al. (2009)

- Capturing Subjective First-Person View Shots with Drones for Automated Cinematography
  Item type: Journal Article
  ACM Transactions on Graphics
  Ashtari, Amirsaman; Stevšić, Stefan; Nägeli, Tobias; et al. (2020)
  We propose an approach to capture subjective first-person view (FPV) videos by drones for automated cinematography. FPV shots are intentionally not smooth to increase the level of immersion for the audience, and are usually captured by a walking camera operator holding traditional camera equipment. Our goal is to automatically control a drone in such a way that it imitates the motion dynamics of a walking camera operator and, in turn, captures FPV videos. For this, given a user-defined camera path, orientation, and velocity, we first present a method to automatically generate the operator's motion pattern and the associated motion of the camera, considering the damping mechanism of the camera equipment. Second, we propose a general computational approach that generates the drone commands to imitate the desired motion pattern. We express this task as a constrained optimization problem, where we aim to fulfill high-level user-defined goals while imitating the dynamics of the walking camera operator and taking the drone's physical constraints into account. Our approach is fully automatic, runs in real time, and is interactive, which provides artistic freedom in designing shots. It does not require a motion capture system, and works both indoors and outdoors. The validity of our approach has been confirmed via quantitative and qualitative evaluations.

- Computational Stereo Camera System with Programmable Control Loop
  Item type: Journal Article
  ACM Transactions on Graphics
  Heinzle, Simon; Greisen, Pierre; Gallup, David; et al. (2011)

- Rig animation with a tangible and modular input device
  Item type: Journal Article
  ACM Transactions on Graphics
  Glauser, Oliver; Ma, Wan-Chun; Panozzo, Daniele; et al. (2016)

- A Temporal Coherent Topology Optimization Approach for Assembly Planning of Bespoke Frame Structures
  Item type: Journal Article
  ACM Transactions on Graphics
  Wang, Ziqi; Kennel-Maushart, Florian; Huang, Yijiang; et al. (2023)
  We present a computational framework for planning the assembly sequence of bespoke frame structures. Frame structures are one of the most commonly used structural systems in modern architecture, providing resistance to gravitational and external loads. Building frame structures requires traversing through several partially built states. If the assembly sequence is planned poorly, these partial assemblies can exhibit substantial deformation due to self-weight, slowing down or jeopardizing the assembly process. Finding a good assembly sequence that minimizes intermediate deformations is an interesting yet challenging combinatorial problem that is usually solved by heuristic search algorithms. In this paper, we propose a new optimization-based approach that models sequence planning using a series of topology optimization problems. Our key insight is that enforcing temporally coherent constraints in the topology optimization can lead to sub-structures with small deformations while staying consistent with each other to form an assembly sequence. We benchmark our algorithm on a large data set and show improvements in both performance and computational time over greedy search algorithms. In addition, we demonstrate that our algorithm can be extended to handle assembly with static or dynamic supports. We further validate our approach by generating a series of results at multiple scales, including a real-world prototype with a mixed reality assistant using our computed sequence and a simulated example demonstrating a multi-robot assembly application.

- EgoLocate: Real-Time Motion Capture, Localization, and Mapping with Sparse Body-mounted Sensors
  Item type: Journal Article
  ACM Transactions on Graphics
  Yi, Xinyu; Zhou, Yuxiao; Habermann, Marc; et al. (2023)
  Human and environment sensing are two important topics in Computer Vision and Graphics. Human motion is often captured by inertial sensors, while the environment is mostly reconstructed using cameras. We integrate the two techniques together in EgoLocate, a system that simultaneously performs human motion capture (mocap), localization, and mapping in real time from sparse body-mounted sensors, including 6 inertial measurement units (IMUs) and a monocular phone camera. On one hand, inertial mocap suffers from large translation drift due to the lack of a global positioning signal. EgoLocate leverages image-based simultaneous localization and mapping (SLAM) techniques to locate the human in the reconstructed scene. On the other hand, SLAM often fails when the visual features are poor. EgoLocate involves inertial mocap to provide a strong prior for the camera motion. Experiments show that localization, a key challenge for both fields, is largely improved by our technique compared with the state of the art in each field. Our code is available for research at https://xinyu-yi.github.io/EgoLocate/.

- Linear Subspace Design for Real-Time Shape Deformation
  Item type: Journal Article
  ACM Transactions on Graphics
  Wang, Yu; Jacobson, Alec; Barbič, Jernej; et al. (2015)
  We propose a method to design linear deformation subspaces, unifying linear blend skinning and generalized barycentric coordinates. Deformation subspaces cut down the time complexity of variational shape deformation methods and physics-based animation (reduced-order physics). Our subspaces feature many desirable properties: interpolation, smoothness, shape-awareness, locality, and both constant and linear precision. We achieve these by minimizing a quadratic deformation energy, built via a discrete Laplacian inducing linear precision on the domain boundary. Our main advantage is speed: subspace bases are solutions to a sparse linear system, computed interactively even for generously tessellated domains. Users may seamlessly switch between applying transformations at handles and editing the subspace by adding, removing, or relocating control handles. The combination of fast computation and good properties means that designing the right subspace is now just as creative as manipulating handles. This paradigm shift in handle-based deformation opens new opportunities to explore the space of shape deformations.