- Conference Paper
Visual localization is the problem of estimating the pose of a camera within a scene and a key technology for autonomous robots. State-of-the-art approaches for accurate visual localization use scene-specific representations, resulting in the overhead of constructing these models when applying the techniques to new scenes. Recently, learned approaches based on relative pose estimation have been proposed, carrying the promise of easily adapting to new scenes. However, they are currently significantly less accurate than state-of-the-art approaches. In this paper, we are interested in analyzing this behavior. To this end, we propose a novel framework for visual localization from relative poses. Using a classical feature-based approach within this framework, we show state-of-the-art performance. Replacing the classical approach with learned alternatives at various levels, we then identify the reasons why deep learned approaches do not perform well. Based on our analysis, we make recommendations for future work. © 2020 IEEE.
Book title: 2020 IEEE International Conference on Robotics and Automation (ICRA)
Organisational unit: 03766 - Pollefeys, Marc
Notes: Due to the Coronavirus (COVID-19) pandemic, the conference was conducted virtually.