Open access
Date
2023-09
Type
Review Article
Abstract
3D point cloud panoptic segmentation is the combined task of (i) assigning each point to a semantic class and (ii) separating the points of each class into object instances. Recently, there has been increased interest in such comprehensive 3D scene understanding, building on the rapid advances in semantic segmentation brought about by deep 3D neural networks. Yet, to date there is very little work on panoptic segmentation of outdoor mobile-mapping data, and no systematic comparisons. The present paper tries to close that gap. It reviews the building blocks needed to assemble a panoptic segmentation pipeline and the related literature. Moreover, a modular pipeline is set up to perform comprehensive, systematic experiments to assess the state of panoptic segmentation in the context of street mapping. As a byproduct, we also provide the first public dataset for that task, by extending the NPM3D dataset to include instance labels. That dataset and our source code are publicly available.[1] We discuss which adaptations are needed to apply current panoptic segmentation methods to outdoor scenes and large objects. Our study finds that for mobile-mapping data, KPConv performs best but is slower, while PointNet++ is fastest but performs significantly worse; sparse CNNs are in between. Regardless of the backbone, instance segmentation by clustering embedding features is better than using shifted coordinates.
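The abstract's finding that clustering embedding features works better than clustering shifted coordinates can be illustrated with a minimal sketch (not the authors' code): per-point instance embeddings are grouped within each predicted "thing" class, and each cluster becomes one instance. Function name, parameters, and the input arrays below are hypothetical placeholders; the alternative strategy from the paper would simply cluster point coordinates plus predicted offsets instead of the embeddings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def panoptic_from_embeddings(semantic_pred, embeddings, thing_classes,
                             eps=0.5, min_points=10):
    """Assign an instance id to every point (sketch only).

    semantic_pred: (N,) int array of per-point semantic class predictions
    embeddings:    (N, D) float array of per-point instance embeddings
    thing_classes: class ids that are split into instances
    Returns an (N,) int array of instance ids (0 = stuff / no instance).
    """
    instance_ids = np.zeros(len(semantic_pred), dtype=np.int64)
    next_id = 1
    for cls in thing_classes:
        mask = semantic_pred == cls
        if not mask.any():
            continue
        # Cluster the embedding vectors of this class; each cluster is one instance.
        labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(embeddings[mask])
        cls_ids = np.zeros(mask.sum(), dtype=np.int64)
        for k in np.unique(labels):
            if k == -1:  # DBSCAN noise points keep instance id 0
                continue
            cls_ids[labels == k] = next_id
            next_id += 1
        instance_ids[mask] = cls_ids
    return instance_ids
```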
Permanent link
https://doi.org/10.3929/ethz-b-000631071
Publication status
published
External links
Journal / series
ISPRS Journal of Photogrammetry and Remote Sensing
Volume
Pages / Article No.
Publisher
Elsevier
Subject
Mobile mapping point clouds; 3D panoptic segmentation; 3D semantic segmentation; 3D instance segmentation; 3D deep learning backbones
Organisational unit
03886 - Schindler, Konrad / Schindler, Konrad