RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration
Date
2024-01-15
Type
Conference Paper
ETH Bibliography
yes
Altmetrics
Abstract
Although point cloud registration has achieved remarkable advances in object-level and indoor scenes, large-scale registration methods are rarely explored. Challenges mainly arise from the huge number of points, complex distribution, and outliers of outdoor LiDAR scans. In addition, most existing registration works adopt a two-stage paradigm: they first find correspondences by extracting discriminative local features and then leverage estimators (e.g., RANSAC) to filter outliers, which makes them highly dependent on well-designed descriptors and post-processing choices. To address these problems, we propose an end-to-end transformer network (RegFormer) for large-scale point cloud alignment without any further post-processing. Specifically, a projection-aware hierarchical transformer is proposed to capture long-range dependencies and filter outliers by extracting point features globally. Our transformer has linear complexity, which guarantees high efficiency even for large-scale scenes. Furthermore, to effectively reduce mismatches, a bijective association transformer is designed for regressing the initial transformation. Extensive experiments on the KITTI and NuScenes datasets demonstrate that RegFormer achieves competitive performance in terms of both accuracy and efficiency. Code is available at https://github.com/IRMVLab/RegFormer.
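The abstract's claim of linear-complexity attention can be illustrated with a generic kernelized ("linear") attention sketch. This is a standard technique (softmax-free attention via an associativity trick), not RegFormer's actual architecture; the feature map `phi` and all shapes below are illustrative assumptions. The key idea: instead of forming the N×N attention matrix, compute `K.T @ V` first, making the cost O(N) in the number of points rather than O(N²).

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized attention with linear cost in sequence length N.

    Q, K: (N, d) query/key features; V: (N, d_v) values.
    Uses phi(x) = elu(x) + 1 as a positive feature map (a common choice).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    # Associativity: (Qp @ Kp.T) @ V == Qp @ (Kp.T @ V)
    KV = Kp.T @ V                 # (d, d_v) summary -- O(N * d * d_v)
    Z = Qp @ Kp.sum(axis=0)       # (N,) row normalizers -- O(N * d)
    return (Qp @ KV) / (Z[:, None] + eps)
```

For small N this agrees (up to floating point) with the quadratic form `(phi(Q) @ phi(K).T) @ V` followed by row normalization, while never materializing the N×N matrix, which is what makes attention tractable for full LiDAR scans.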
Publication status
published
External links
Book title
2023 IEEE/CVF International Conference on Computer Vision (ICCV)
Pages / Article No.
Publisher
IEEE
Event
Organisational unit
02659 - Institut für Visual Computing / Institute for Visual Computing
Notes
Conference lecture held on October 5, 2023.