Abstract
LiDAR upsampling is a challenging task for the perception systems of robots and autonomous vehicles, due to the sparse and irregular structure of large-scale scene contexts. Recent works propose to solve this problem by casting LiDAR upsampling in 3D Euclidean space as an image super-resolution problem in 2D image space. Although their methods can generate high-resolution range images with fine-grained details, the resulting 3D point clouds often blur out details and predict invalid points. In this paper, we propose TULIP, a new method to reconstruct high-resolution LiDAR point clouds from low-resolution LiDAR input. We also follow a range image-based approach but specifically modify the patch and window geometries of a Swin-Transformer-based network to better fit the characteristics of range images. We conducted several experiments on three public real-world and simulated datasets. TULIP outperforms state-of-the-art methods in all relevant metrics and generates robust and more realistic point clouds than prior works. The code is available at https://github.com/ethz-asl/TULIP.git.
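The range-image formulation mentioned in the abstract refers to the standard spherical projection of a LiDAR sweep onto a 2D grid, on which 2D super-resolution networks can then operate. The sketch below illustrates this preprocessing step only; the function name `point_cloud_to_range_image` and all parameters (image size, sensor field of view) are illustrative assumptions and not the values used in TULIP.

```python
import numpy as np

def point_cloud_to_range_image(points, height=64, width=1024,
                               fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto a (height, width) range image
    via spherical projection. The field-of-view defaults are illustrative,
    not taken from the paper."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-8    # range per point

    yaw = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                            # elevation angle

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Map angles to pixel coordinates (columns over azimuth, rows over elevation).
    u = 0.5 * (1.0 - yaw / np.pi) * width
    v = (1.0 - (pitch - fov_down) / fov) * height

    # Clip to the image bounds; a real pipeline would typically discard
    # points outside the vertical field of view instead.
    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    # Keep the closest return when several points fall into the same pixel.
    order = np.argsort(r)[::-1]
    image = np.zeros((height, width), dtype=np.float32)
    image[v[order], u[order]] = r[order]
    return image
```

Because such range images have far fewer rows than columns, the abstract's point about adapting the patch and window geometries of the Swin-Transformer backbone amounts to using non-square partitions that respect this aspect ratio; the exact shapes are specified in the paper and the linked repository.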
Permanent link
https://doi.org/10.3929/ethz-b-000682359
Publication status
published
Book title
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
IEEE
Organisational unit
03737 - Siegwart, Roland Y. / Siegwart, Roland Y.
02284 - NFS Digitale Fabrikation / NCCR Digital Fabrication
ETH Bibliography
yes