Marius Fehr
23 results
Search Results
Publications 1 - 10 of 23
- Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps
  Item type: Working Paper
  arXiv
  Blöchliger, Fabian; Fehr, Marius; Dymczyk, Marcin; et al. (2017)

- Predicting Unobserved Space For Planning via Depth Map Augmentation
  Item type: Working Paper
  arXiv
  Fehr, Marius; Taubner, Tim; Liu, Yang; et al. (2019)
  Safe and efficient path planning is crucial for autonomous mobile robots. A prerequisite for path planning is a comprehensive understanding of the 3D structure of the robot's environment. On MAVs, this is commonly achieved using low-cost sensors such as stereo or RGB-D cameras. These sensors may fail to provide depth measurements in textureless or IR-absorbing areas and have limited effective range. In path planning, this results in inefficient trajectories or failure to recognize a feasible path to the goal, hence significantly impairing the robot's mobility. Recent advances in deep learning enable us to exploit prior experience about the shape of the world and hence to infer complete depth maps from color images and additional sparse depth measurements. In this work, we present an augmented planning system and investigate the effects of employing state-of-the-art depth completion techniques, specifically trained to augment sparse depth maps originating from RGB-D sensors, semi-dense methods, and stereo matchers. We extensively evaluate our approach in online path planning experiments based on simulated data, as well as global path planning experiments based on real-world MAV data. We show that our augmented system, provided with only sparse depth perception, can reach performance on par with ground-truth depth input in simulated online planning experiments. On real-world MAV data, the augmented system demonstrates superior performance compared to a planner based on very dense RGB-D depth maps.

- 3D3L: Deep Learned 3D Keypoint Detection and Description for LiDARs
  Item type: Conference Paper
  2021 IEEE International Conference on Robotics and Automation (ICRA)
  Streiff, Dominic; Bernreiter, Lukas; Tschopp, Florian; et al. (2021)
  With the advent of powerful, lightweight 3D LiDARs, these sensors have become the heart of many navigation and SLAM algorithms on various autonomous systems. Registration methods that work on unstructured point clouds, such as ICP, are often computationally expensive or require a good initial guess. Furthermore, 3D feature-based registration methods have never quite reached the robustness of 2D methods in visual SLAM. With the continuously increasing resolution of LiDAR range images, these 2D methods not only become applicable but should also exploit the illumination-independent modalities that come with them, such as depth and intensity. In visual SLAM, deep-learned 2D features and descriptors perform exceptionally well compared to traditional methods. In this publication, we use a state-of-the-art 2D feature network as a basis for 3D3L, exploiting both intensity and depth of LiDAR range images to extract powerful 3D features. Our results show that keypoints and descriptors extracted from LiDAR scan images outperform the state of the art on different benchmark metrics and allow for robust scan-to-scan alignment as well as global localization.

- Modelify: An approach to incrementally build 3D object models for map completion
  Item type: Journal Article
  The International Journal of Robotics Research
  Furrer, Fadri; Novkovic, Tonci; Fehr, Marius; et al. (2023)
  The capabilities of discovering new knowledge and updating previously acquired knowledge are crucial for deploying autonomous robots in unknown and changing environments. Spatial and objectness concepts are at the basis of several robotic functionalities and are part of our intuitive human understanding of the physical world. In this paper, we propose a method, which we call Modelify, to incrementally map the environment at the level of objects in a consistent manner. We follow an approach where no prior knowledge of the environment is required. The only assumption we make is that objects in the environment are separated by concave boundaries. The approach works on an RGB-D camera stream, from which object-like segments are extracted and stored in an incremental database. Segment description and matching are performed by exploiting 2D and 3D information, allowing us to build a graph of all segments. Finally, a matching score guides a Markov clustering algorithm to merge segments, thus completing object representations. Our approach allows creating single (merged) instances of repeating objects, of objects observed from different viewpoints, and of objects observed in previous mapping sessions. Thanks to our matching and merging strategies, this also works with only partially overlapping segments. We perform evaluations on indoor and outdoor datasets recorded with different RGB-D sensors and show the benefit of using a clustering method to form merge candidates and of detecting keypoints in both 2D and 3D. Our new method shows better results than previous approaches while being significantly faster. A newly recorded dataset and the source code are released with this publication.

- maplab 2.0 - A Modular and Multi-Modal Mapping Framework
  Item type: Journal Article
  IEEE Robotics and Automation Letters
  Cramariuc, Andrei; Bernreiter, Lukas; Tschopp, Florian; et al. (2023)
  The integration of multiple sensor modalities and of deep learning into Simultaneous Localization And Mapping (SLAM) systems is an area of significant interest in current research. Multi-modality is a stepping stone towards robustness in challenging environments and interoperability of heterogeneous multi-robot systems with varying sensor setups. With maplab 2.0, we provide a versatile open-source platform that facilitates developing, testing, and integrating new modules and features into a fully fledged SLAM system. Through extensive experiments, we show that maplab 2.0's accuracy is comparable to the state of the art on the HILTI 2021 benchmark. Additionally, we showcase the flexibility of our system with three use cases: i) large-scale (~10 km) multi-robot multi-session (23 missions) mapping, ii) integration of non-visual landmarks, and iii) incorporation of a semantic object-based loop-closure module into the mapping framework.

- Visual-Inertial Teach and Repeat for Aerial Inspection
  Item type: Conference Paper
  Fehr, Marius; Schneider, Thomas; Dymczyk, Marcin; et al. (2018)
  Industrial facilities often require periodic visual inspections of key installations. Examining these points of interest is time-consuming, potentially hazardous, or requires special equipment to reach them. Micro Air Vehicles (MAVs) are ideal platforms to automate this expensive and tedious task. In this work, we present a novel system that enables a human operator to teach a visual inspection task to an autonomous aerial vehicle by simply demonstrating the task using a handheld device. To enable robust operation in confined, GPS-denied environments, the system employs the Google Tango visual-inertial mapping framework [1] as its only source of pose estimates. In a first step, the operator records the desired inspection path and defines the inspection points. The mapping framework then computes a feature-based localization map, which is shared with the robot. After take-off, the robot estimates its pose based on this map and plans a smooth trajectory through the waypoints defined by the operator. Furthermore, the system is able to track the poses of other robots or of the operator, localized in the same map, and follow them in real time while keeping a safe distance.
- Predicting Unobserved Space For Planning via Depth Map Augmentation
  Item type: Conference Paper
  2019 19th International Conference on Advanced Robotics (ICAR)
  Fehr, Marius; Taubner, Tim; Liu, Yang; et al. (2019)
  Safe and efficient path planning is crucial for autonomous mobile robots. A prerequisite for path planning is a comprehensive understanding of the 3D structure of the robot's environment. On Micro Air Vehicles (MAVs) this is commonly achieved using low-cost sensors such as stereo or RGB-D cameras. These sensors may fail to provide depth measurements in textureless or IR-absorbing areas and have limited effective range. In path planning, this results in inefficient trajectories or failure to recognize a feasible path to the goal, hence significantly impairing the robot's mobility. Recent advances in deep learning enable us to exploit prior experience about the shape of the world and hence to infer complete depth maps from color images and additional sparse depth measurements. In this work, we present an augmented planning system and investigate the effects of employing state-of-the-art depth completion techniques, specifically trained to augment sparse depth maps originating from RGB-D sensors, semi-dense methods, and stereo matchers. We extensively evaluate our approach in online path planning experiments based on simulated data, as well as global path planning experiments based on real-world Micro Air Vehicle (MAV) data. We show that our augmented system, provided with only sparse depth perception, can reach performance on par with ground-truth depth input in simulated online planning experiments. On real-world MAV data, the augmented system demonstrates superior performance compared to a planner based on very dense RGB-D depth maps.

- Dense Multi-Session Mapping for Autonomous Robots
  Item type: Doctoral Thesis
  Fehr, Marius (2020)
- maplab: An Open Framework for Research in Visual-inertial Mapping and Localization
  Item type: Working Paper
  arXiv
  Schneider, Thomas; Dymczyk, Marcin; Fehr, Marius; et al. (2017)

- Maplab: An Open Framework for Research in Visual-Inertial Mapping and Localization
  Item type: Journal Article
  IEEE Robotics and Automation Letters
  Schneider, Thomas; Dymczyk, Marcin; Fehr, Marius; et al. (2018)
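The "Predicting Unobserved Space For Planning" abstracts above describe feeding the planner a completed dense depth map instead of the raw sparse one. The sketch below shows only that interface, with a simple nearest-valid-neighbour fill standing in for the learned depth-completion network; the function names and the fill strategy are illustrative assumptions, not the papers' implementation.

```python
import numpy as np

def augment_depth(sparse_depth, fill_fn):
    """Replace invalid (zero) pixels of a sparse depth map via fill_fn."""
    dense = sparse_depth.copy()
    invalid = dense == 0.0                # pixels with no depth measurement
    if invalid.any():
        dense[invalid] = fill_fn(sparse_depth, invalid)
    return dense

def nearest_valid_fill(depth, invalid):
    # Stand-in "completion network": copy the nearest valid measurement.
    ys, xs = np.nonzero(~invalid)
    vals = depth[ys, xs]
    filled = []
    for y, x in zip(*np.nonzero(invalid)):
        d2 = (ys - y) ** 2 + (xs - x) ** 2
        filled.append(vals[np.argmin(d2)])
    return np.array(filled)

sparse = np.array([[1.0, 0.0, 2.0],
                   [0.0, 0.0, 0.0],
                   [1.0, 0.0, 3.0]])
dense = augment_depth(sparse, nearest_valid_fill)   # every pixel now has depth
```

In the papers the fill function is a depth-completion network trained on RGB plus sparse depth; the point of the sketch is that the planner downstream only ever sees the dense map.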
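The 3D3L abstract relies on LiDAR scans being representable as range images with depth and intensity channels, so that 2D feature networks can run on them. A minimal sketch of such a spherical projection follows, assuming a hypothetical sensor with 64 beams, 1024 columns, and a ±15° vertical field of view; the function name and all parameters are illustrative, not taken from the paper.

```python
import numpy as np

def pointcloud_to_range_image(points, intensities, h=64, w=1024,
                              fov_up=np.deg2rad(15.0),
                              fov_down=np.deg2rad(-15.0)):
    """Project an (N, 3) LiDAR scan into an (h, w, 2) depth+intensity image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-9))  # elevation angle
    fov = fov_up - fov_down
    # Map azimuth to image column and elevation to image row.
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = (((fov_up - pitch) / fov) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w, 2), dtype=np.float32)
    img[v, u, 0] = depth
    img[v, u, 1] = intensities
    return img

# A single return 5 m straight ahead, with some intensity value.
scan = np.array([[5.0, 0.0, 0.0]])
img = pointcloud_to_range_image(scan, np.array([0.7]))
```

Both channels are independent of illumination, which is what makes the 2D feature network applicable to the resulting image.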
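Modelify merges object-like segments by running Markov clustering on a graph whose edge weights are segment matching scores. Below is a toy sketch of that clustering step in plain NumPy, with illustrative parameter choices; the paper's actual scoring and merging pipeline is considerably more involved.

```python
import numpy as np

def markov_cluster(adj, expansion=2, inflation=2.0, iters=50):
    """Minimal Markov clustering over a symmetric segment-match graph."""
    m = adj.astype(float) + np.eye(len(adj))      # self-loops keep isolated segments alive
    m = m / m.sum(axis=0, keepdims=True)          # column-stochastic transition matrix
    for _ in range(iters):
        m = np.linalg.matrix_power(m, expansion)  # expansion: flow spreads along strong matches
        m = m ** inflation                        # inflation: strong flows are reinforced
        m = m / m.sum(axis=0, keepdims=True)
    clusters = set()
    for row in m:
        members = tuple(np.nonzero(row > 1e-6)[0])
        if members:
            clusters.add(members)
    return clusters

# Segments 0 and 1 are two observations of the same object (high match
# score); segment 2 is unrelated.
scores = np.array([[0.0, 0.9, 0.0],
                   [0.9, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
clusters = markov_cluster(scores)   # {0, 1} group together; 2 stays alone
```

Each resulting cluster corresponds to a set of segments that would be merged into a single object model.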
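The teach-and-repeat abstract describes planning a smooth trajectory through the waypoints recorded by the operator. As a very rough stand-in for that planner, the sketch below merely densifies the taught path by linear interpolation at a fixed spatial resolution; the function name and step parameter are illustrative assumptions, not the system's actual trajectory planner.

```python
import numpy as np

def resample_path(waypoints, step):
    """Linearly interpolate an (N, D) waypoint list to ~step spacing."""
    pts = [np.asarray(waypoints[0], dtype=float)]
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        seg = np.asarray(b, dtype=float) - np.asarray(a, dtype=float)
        # Number of sub-steps needed to stay at or below the target spacing.
        n = max(1, int(np.ceil(np.linalg.norm(seg) / step)))
        for i in range(1, n + 1):
            pts.append(np.asarray(a, dtype=float) + seg * (i / n))
    return np.array(pts)

# A taught inspection path of three waypoints, densified to 0.5 m spacing.
taught = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]])
path = resample_path(taught, 0.5)
```

The real system additionally smooths the trajectory and tracks it against the shared localization map; this sketch only illustrates the waypoint-densification idea.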