HyperSLAM: A Generic and Modular Approach to Sensor Fusion and Simultaneous Localization and Mapping in Continuous-Time
Metadata only
Date
2020
Type
Conference Paper
ETH Bibliography
yes
Abstract
Within recent years, Continuous-Time Simultaneous Localization And Mapping (CTSLAM) formalisms have become subject to increased attention from the scientific community due to their vast potential in facilitating motion-corrected feature reprojection and direct unsynchronized multi-rate sensor fusion. They also hold the promise of yielding better estimates in traditional sensor setups (e.g. visual, inertial) when compared to conventional discrete-time approaches. Related works mostly rely on cubic, C²-continuous, uniform cumulative B-splines to exemplify and demonstrate the benefits inherent to continuous-time representations. However, as this type of spline gives rise to continuous trajectories by blending uniformly distributed SE(3) transformations in time, it is prone to under- or overparametrize underlying motions with varying volatility and prohibits dynamic trajectory refinement or sparsification by design. In light of this, we propose employing a more generalized and efficient non-uniform split interpolation method in ℝ × SU(2) × ℝ³ and commence with the development of 'HyperSLAM', a generic and modular CTSLAM framework. The efficacy of our approach is exemplified in proof-of-concept simulations based on a visual, monocular setup. © 2020 IEEE
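The cumulative B-spline blending the abstract refers to can be illustrated with a minimal sketch: a cubic, C²-continuous, uniform cumulative B-spline that blends rotations (here represented as unit quaternions, i.e. elements of SU(2)) via scaled exponential increments. This is an assumption-laden illustration of the general technique, not the paper's HyperSLAM implementation; all function names are my own.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qlog(q):
    """Logarithm of a unit quaternion -> 3-vector (pure-imaginary part)."""
    v = q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return (np.arctan2(n, q[0]) / n) * v

def qexp(w):
    """Exponential of a 3-vector -> unit quaternion."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(n)], np.sin(n) * w / n])

def cumulative_bspline(quats, u):
    """Evaluate a uniform cubic cumulative B-spline over four control
    quaternions at normalized time u in [0, 1).

    Each relative increment between consecutive control points is scaled
    by the cumulative basis function and applied multiplicatively."""
    # Cumulative basis functions of the uniform cubic B-spline.
    B = np.array([
        (u**3 - 3*u**2 + 3*u + 5) / 6.0,
        (-2*u**3 + 3*u**2 + 3*u + 1) / 6.0,
        u**3 / 6.0,
    ])
    q = quats[0]
    for i in range(3):
        d = qlog(qmul(qconj(quats[i]), quats[i + 1]))  # relative increment
        q = qmul(q, qexp(B[i] * d))
    return q
```

Because the control points here are *uniformly* spaced in time, the knot density cannot adapt to motion volatility, which is precisely the limitation the abstract's non-uniform split interpolation aims to lift.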
Publication status
published
Book title
2020 International Conference on 3D Vision (3DV)
Publisher
IEEE
Notes
Due to the Coronavirus (COVID-19) the conference was conducted virtually.