Search Results
NeRFing it: Offline Object Segmentation Through Implicit Modeling
2023 IEEE International Conference on Robotics and Automation (ICRA)
Most recently proposed methods for robotic perception are based on deep learning, which requires very large datasets to perform well. The accuracy of a learned model depends mainly on the data distribution it was trained on, so when deploying such models it is crucial to use training data from the robot's environment. However, collecting and labeling data is a significant bottleneck, necessitating efficient data collection ...
Conference Paper
Neural Implicit Vision-Language Feature Fields
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Recently, groundbreaking results have been presented on open-vocabulary semantic image segmentation. Such methods segment each pixel in an image into arbitrary categories provided at run-time in the form of text prompts, as opposed to a fixed set of classes defined at training time. In this work, we present a zero-shot volumetric open-vocabulary semantic scene segmentation method. Our method builds on the insight that we can fuse image ...
Conference Paper
Baking in the Feature: Accelerating Volumetric Segmentation by Rendering Feature Maps
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Methods have recently been proposed that densely segment 3D volumes into classes using only color images and expert supervision in the form of sparse semantically annotated pixels. While impressive, these methods still require a relatively large amount of supervision, and segmenting an object can take several minutes in practice. Such systems typically only optimize the representation on the scene they are fitting, without leveraging prior ...
Conference Paper
Semi-automatic 3D Object Keypoint Annotation and Detection for the Masses
2022 26th International Conference on Pattern Recognition (ICPR)
Creating computer vision datasets requires careful planning and a lot of time and effort. In robotics research, we often have to use standardized objects, such as the YCB object set, for tasks such as object tracking, pose estimation, grasping, and manipulation, as there are datasets and pre-trained methods available for these objects. This limits the impact of our research, since learning-based computer vision methods can only be used in ...
Conference Paper