Segment3D: Learning Fine-Grained Class-Agnostic 3D Segmentation Without Manual Labels
Date
2025
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Current 3D scene segmentation methods depend heavily on manually annotated 3D training datasets. Such manual annotations are labor-intensive and often lack fine-grained detail. Furthermore, models trained on this data typically fail to recognize object classes beyond the annotated training classes, i.e., they generalize poorly to unseen domains and require additional domain-specific annotations. In contrast, recent 2D foundation models have demonstrated strong generalization and impressive zero-shot abilities, inspiring us to transfer these characteristics from 2D models to 3D models. We therefore explore the use of image segmentation foundation models to automatically generate high-quality training labels for 3D segmentation models. The resulting model, Segment3D, generalizes significantly better than models trained on costly manual 3D labels, and new training data can easily be added to further boost segmentation performance.
Publication status
published
Book title
Computer Vision – ECCV 2024
Volume
15092
Pages / Article No.
278–295
Publisher
Springer
Event
18th European Conference on Computer Vision (ECCV 2024)
Subject
Class-agnostic 3D segmentation; 3D Scene Understanding
Organisational unit
03766 - Pollefeys, Marc
02154 - Media Technology Center (MTC)