Search Results
- LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking
  Conference Paper, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  Multi-sensor fusion of multi-modal measurements from commodity inertial, visual, and LiDAR sensors to provide robust and accurate 6DOF pose estimation holds great potential in robotics and beyond. In this paper, building upon our prior work (i.e., LIC-Fusion), we develop a sliding-window filter-based LiDAR-Inertial-Camera odometry with online spatiotemporal calibration (i.e., LIC-Fusion 2.0), which introduces a novel sliding-window ...
- Scalable Point Cloud-based Reconstruction with Local Implicit Functions
  Conference Paper, 2020 International Conference on 3D Vision (3DV).
  Surface reconstruction from point clouds has been a well-studied research topic with applications in computer vision and computer graphics. Recently, several learning-based methods were proposed for 3D shape representation through implicit functions which, among others, can be used for point cloud-based reconstruction. Although delivering compelling results for synthetic object datasets of manageable size, they fail to represent larger ...
- To Learn or Not to Learn: Visual Localization from Essential Matrices
  Conference Paper, 2020 IEEE International Conference on Robotics and Automation (ICRA).
  Visual localization is the problem of estimating the pose of a camera within a scene and is a key technology for autonomous robots. State-of-the-art approaches for accurate visual localization use scene-specific representations, resulting in the overhead of constructing these models when applying the techniques to new scenes. Recently, learned approaches based on relative pose estimation have been proposed, carrying the promise of easily adapting to new ...
- Precomputed Radiance Transfer for Reflectance and Lighting Estimation
  Conference Paper, 2020 International Conference on 3D Vision (3DV).
  Decomposing scenes into reflectance and lighting is an important task for applications such as relighting, image matching, or content creation. Advanced light transport effects like occlusion and indirect lighting are often ignored, leading to subpar decompositions in which the albedo needs to compensate for insufficiencies in the estimated shading. We show how to account for these advanced lighting effects by utilizing precomputed radiance ...
- KAPLAN: A 3D Point Descriptor for Shape Completion
  Conference Paper, 2020 International Conference on 3D Vision (3DV).
  We present a novel 3D shape completion method that operates directly on unstructured point clouds, thus avoiding resource-intensive data structures like voxel grids. To this end, we introduce KAPLAN, a 3D point descriptor that aggregates local shape information via a series of 2D convolutions. The key idea is to project the points in a local neighborhood onto multiple planes with different orientations. In each of those planes, point ...
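The geometric operation this snippet describes, projecting a local point neighborhood onto several planes with different orientations, can be illustrated generically. The sketch below is only that generic operation, not the paper's KAPLAN descriptor; the function name and the axis-aligned choice of plane normals are assumptions for illustration.

```python
import numpy as np

def project_to_planes(neighborhood, center, normals):
    """Project a local point neighborhood onto planes through `center`.

    Each plane is given by a normal vector; a point's projection onto
    the plane is the point minus its component along the plane normal.
    """
    local = neighborhood - center          # work in the local frame
    projections = []
    for n in normals:
        n = n / np.linalg.norm(n)          # ensure a unit normal
        # remove the component of each point along the plane normal
        proj = local - np.outer(local @ n, n)
        projections.append(proj)
    return projections

# three axis-aligned planes as an illustrative set of orientations
normals = np.eye(3)
pts = np.random.default_rng(0).normal(size=(64, 3))
planes = project_to_planes(pts, pts.mean(axis=0), normals)
```

After projection, every point set lies exactly in its plane, i.e. has zero component along that plane's normal.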
- Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion
  Conference Paper, 2020 International Conference on 3D Vision (3DV).
  Most current scene flow methods choose to model scene flow as a per-point translation vector without differentiating between static and dynamic components of 3D motion. In this work we present an alternative method for end-to-end scene flow learning by joint estimation of non-rigid residual flow and ego-motion flow for dynamic 3D scenes. We propose to learn the relative rigid transformation from a pair of point clouds followed by ...
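The decomposition this abstract describes, total scene flow as a rigid ego-motion component plus a non-rigid residual, can be written compactly. The sketch below is a minimal illustration of that composition only, not the paper's learned model; all names are assumptions.

```python
import numpy as np

def compose_scene_flow(points, R, t, residual):
    """Total flow = rigid ego-motion flow + non-rigid residual flow.

    points   : (N, 3) points in the first frame
    R, t     : rigid relative transformation between the two frames
    residual : (N, 3) per-point non-rigid residual flow
    """
    rigid_flow = points @ R.T + t - points
    return rigid_flow + residual

rng = np.random.default_rng(1)
pts = rng.normal(size=(100, 3))
theta = 0.1                                # small rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 0.0])
residual = np.zeros_like(pts)              # a fully static scene
flow = compose_scene_flow(pts, R, t, residual)
```

With a zero residual the composed flow moves every point exactly by the rigid transformation, which is the static-scene special case.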
- Revisiting Radial Distortion Absolute Pose
  Conference Paper, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems
  Conference Paper, 2020 IEEE International Conference on Robotics and Automation (ICRA).
  In this paper, we present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras, which has a 360° coverage of stereo observations of the environment. For more practical and accurate reconstruction, we first introduce improved, lightweight deep neural networks for omnidirectional depth estimation, which are faster and more accurate ...
- RoutedFusion: Learning Real-Time Depth Map Fusion
  Conference Paper, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  The efficient fusion of depth maps is a key part of most state-of-the-art 3D reconstruction methods. Besides requiring high accuracy, these depth fusion methods need to be scalable and real-time capable. To this end, we present a novel real-time capable machine learning-based method for depth map fusion. Similar to the seminal depth map fusion approach by Curless and Levoy, we only update a local group of voxels to ensure real-time ...
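The Curless and Levoy approach the abstract refers to maintains, per voxel, a weighted running average of truncated signed distances. A minimal sketch of that classical update, assuming flat per-voxel arrays (the paper replaces parts of this pipeline with learned components):

```python
import numpy as np

def tsdf_update(D, W, d_new, w_new):
    """Curless-Levoy style weighted running average of signed distances.

    D, W  : current TSDF values and accumulated weights (per voxel)
    d_new : truncated signed distances from the incoming depth map
    w_new : per-voxel weights of the new observation
    """
    W_out = W + w_new
    # weighted average of the old value and the new observation
    D_out = (W * D + w_new * d_new) / np.maximum(W_out, 1e-8)
    return D_out, W_out

# integrate two depth observations into a tiny 4-voxel volume
D = np.zeros(4)
W = np.zeros(4)
D, W = tsdf_update(D, W, np.array([0.1, 0.2, -0.1, 0.0]), np.ones(4))
D, W = tsdf_update(D, W, np.array([0.3, 0.2, -0.1, 0.0]), np.ones(4))
```

With equal weights, each voxel holds the mean of its observations: the first voxel ends at (0.1 + 0.3) / 2 = 0.2 with an accumulated weight of 2.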
- Why Having 10,000 Parameters in Your Camera Model Is Better Than Twelve
  Conference Paper, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  Camera calibration is an essential first step in setting up 3D computer vision systems. Commonly used parametric camera models are limited to a few degrees of freedom and thus often do not fit complex real lens distortion well. In contrast, generic camera models allow for very accurate calibration due to their flexibility. Despite this, they have seen little use in practice. In this paper, we argue that this should change. We ...
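The low-parameter models the abstract contrasts with are typically pinhole intrinsics plus a handful of Brown-Conrady distortion coefficients. A minimal sketch of such a parametric projection, with illustrative names and values (the paper argues for generic, much higher-dimensional models instead):

```python
import numpy as np

def project_pinhole(X, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Pinhole projection with Brown-Conrady radial/tangential distortion.

    X : (N, 3) points in camera coordinates (Z > 0).
    Returns (N, 2) pixel coordinates.
    """
    x = X[:, 0] / X[:, 2]                  # normalized image coordinates
    y = X[:, 1] / X[:, 2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2  # radial distortion factor
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.stack([fx * xd + cx, fy * yd + cy], axis=1)

pts = np.array([[0.0, 0.0, 2.0],           # on the optical axis
                [0.5, -0.2, 2.0]])
uv = project_pinhole(pts, fx=500, fy=500, cx=320, cy=240, k1=-0.1)
```

A point on the optical axis projects to the principal point (cx, cy) regardless of the distortion coefficients, which is a quick sanity check for any such model.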