Lukas Bernreiter
Search Results
Publications 1-10 of 19
- Accurate Mapping and Planning for Autonomous Racing
Item type: Conference Paper
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Andresen, Leiv; Brandemuehl, Adrian; Hönger, Alex; et al. (2020)
This paper presents the perception, mapping, and planning pipeline implemented on an autonomous race car. It was developed by the 2019 AMZ driverless team for the Formula Student Germany (FSG) 2019 driverless competition, where it won 1st place overall. The presented solution combines early fusion of camera and LiDAR data, a layered mapping approach, and a planning approach that uses Bayesian filtering to achieve high-speed driving on unknown race tracks while creating accurate maps. We benchmark the method against our team's previous solution, which won FSG 2018, and show improved accuracy when driving at the same speeds. Furthermore, the new pipeline makes it possible to reliably raise the maximum driving speed in unknown environments from 3 m/s to 12 m/s while still mapping with an acceptable RMSE of 0.29 m.

- Fourier is a Roboticist: Benefits of Spectral Representations for Collaborative Multi-Modal Localization and Mapping
Item type: Doctoral Thesis
Bernreiter, Lukas (2023)
In the last decades, there have been great efforts to bring robotic teams to the autonomy level required for the safe and efficient exploration of unknown environments. In particular, long-term, large-scale, or time-critical missions can profit from deploying multiple self-sustaining robotic systems due to their improved accuracy and robustness. The typical paradigm for deploying multiple robots in these cases is that each robot maps and operates in a distinct region of the environment and then collaborates by sharing the gathered information with the other robots. Accordingly, the collective knowledge of all robots allows collaborative teams to foster a more accurate joint estimation, which can significantly contribute to the success of a mission.
These aspects are essential for rescue robotics in disaster response since multiple robots can explore the environment faster and, when equipped with complementary sensing modalities, can increase the chances of successfully locating human survivors or identifying hazardous areas. However, many of these environments are unstructured and degraded, thus imposing significant challenges on a robot’s perception and locomotion systems. Robotic teams can mitigate these challenges by utilizing various sensor cues, such as visual, thermal, and depth, and by employing different robotic systems, such as ground and aerial robots. Although many perception systems are already designed to rely on several input modalities, the support of various heterogeneous sensors with different characteristics and parameters is often impaired. Consequently, heterogeneous sensor systems might inflict substantial limitations on the system’s capabilities that can inhibit the overall performance of the robots. Nevertheless, a perception system’s ability to generalize well to different sensors ensures long-term applicability with less fine-tuning, which is highly relevant to search and rescue applications. Therefore, resilient mapping and localization systems require a refined orchestration of the input modalities to be robust and suitable for the multitude of robot and sensor deployment scenarios. In this doctoral thesis, we investigated the use of compact spectral representations to analyze the characteristic properties of measurements and trajectory estimations. In particular, we identified several crucial aspects and open problems of collaborative multi-robot mapping and localization where spectral approaches yield useful insights. Assuming an existing centralized multi-robot framework for communication and joint optimization, we initially deal with the multi-modal global localization task to reduce estimation errors by recognizing already visited places at the centralized server. 
Since the measurement representation and sensor fusion strategy constitute a significant aspect of achieving good accuracy and precision, we propose a robust localization pipeline that exploits the spectral domain along with a novel spherical representation to circumvent issues associated with noise and drift. In the second part, we broadcast the optimized multi-robot map to the individual robots to reduce the onboard estimation errors. We can efficiently identify discrepancies between the onboard and the server estimates by analyzing their spectral properties. The novelty of this approach lies in the spectral analysis, which enables our system to compensate for the type of onboard failure by classifying the structural disparities. We show that our approach can even overcome estimation failures and degeneracies in the onboard estimation by exploiting the combined knowledge of the robotic team. In the final part of this thesis, we explore localization and mapping approaches with active consideration of semantic information in the environment. We infer an accurate semantic segmentation of the surroundings by virtue of a spherical spectral analysis. Utilizing the resulting semantic segmentations to build a constellation of semantic objects provides a more distinctive representation of a scene than a description using only appearance or geometric primitives. Therefore, we further investigate the use of semantic objects apparent in structured urban environments to improve the detection of already visited places by conservatively filtering the mapped locations. In summary, this thesis advances the research in robust global localization, collaborative multi-robot mapping, and semantic scene understanding by means of spectral analysis in the Euclidean, graph, and spherical domains. We demonstrate the effectiveness of our systems in multiple experiments in complex underground scenarios as well as in structured urban scenarios.
Finally, the problems addressed in the context of the thesis can also readily help to solve many of the typical robot localization and mapping problems in various other environments.
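The thesis above centers on compact spectral representations, including a spherical one used for multi-modal global localization. As a heavily simplified illustration of why spectral descriptors are attractive for place recognition, the sketch below computes the spherical-harmonic power spectrum of a signal sampled on a sphere, which is invariant to rotations of the underlying signal. The grid resolution, band limit, and quadrature scheme are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def ylm(m, l, theta, phi):
    """Orthonormal complex spherical harmonic Y_l^m for m >= 0
    (theta: polar angle, phi: azimuth)."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                   * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

def sh_power_spectrum(f, theta, phi, l_max):
    """Per-degree power S_l = sum_m |c_lm|^2 of a real function sampled
    on a regular (polar, azimuth) grid. S_l is invariant to rotations of
    the underlying function, so it can serve as a compact descriptor."""
    dt = theta[1, 0] - theta[0, 0]
    dp = phi[0, 1] - phi[0, 0]
    w = np.sin(theta) * dt * dp                  # quadrature weights
    spectrum = np.zeros(l_max + 1)
    for l in range(l_max + 1):
        for m in range(l + 1):
            c = np.sum(f * np.conj(ylm(m, l, theta, phi)) * w)
            # for real f, |c_{l,-m}| == |c_{l,m}|: count m > 0 twice
            spectrum[l] += (2.0 if m > 0 else 1.0) * np.abs(c) ** 2
    return spectrum

# band-limited test signal: the real part of Y_3^1
n_t, n_p = 64, 128
t = (np.arange(n_t) + 0.5) * np.pi / n_t         # polar samples, poles avoided
p = np.arange(n_p) * 2.0 * np.pi / n_p           # azimuth samples
theta, phi = np.meshgrid(t, p, indexing="ij")
f = ylm(1, 3, theta, phi).real

S = sh_power_spectrum(f, theta, phi, l_max=5)
# rotating the scene about the vertical axis shifts the azimuth samples,
# but leaves the power spectrum unchanged
S_rot = sh_power_spectrum(np.roll(f, 7, axis=1), theta, phi, l_max=5)
```

The power concentrates at degree l = 3 and is unchanged by the rotation, which is the property a spectral place descriptor exploits.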
- Team CERBERUS Wins the DARPA Subterranean Challenge: Technical Overview and Lessons Learned
Item type: Journal Article
Field Robotics. Tranzatto, Marco; Dharmadhikari, Mihir; Bernreiter, Lukas; et al. (2024)
This article presents the CERBERUS robotic system-of-systems, which won the DARPA Subterranean Challenge Final Event in 2021. The Subterranean Challenge was organized by DARPA with the vision to facilitate the novel technologies necessary to reliably explore diverse underground environments despite the grueling challenges they present for robotic autonomy. Due to their geometric complexity, degraded perceptual conditions, lack of GNSS support, austere navigation conditions, and denied communications, subterranean settings render autonomous operations particularly demanding. In response to this challenge, we developed the CERBERUS system, which exploits the synergy of legged and flying robots, coupled with robust control especially for overcoming perilous terrain, multi-modal and multi-robot perception for localization and mapping in conditions of sensor degradation, and resilient autonomy through unified exploration path planning and local motion planning that reflects robot-specific limitations. Based on its ability to explore diverse underground environments and its high-level command and control by a single human supervisor, CERBERUS demonstrated efficient exploration, reliable detection of objects of interest, and accurate mapping. In this article, we report results from both the preliminary runs and the final Prize Round of the DARPA Subterranean Challenge, and discuss highlights and challenges faced, alongside lessons learned for the benefit of the community.

- 3D3L: Deep Learned 3D Keypoint Detection and Description for LiDARs
Item type: Conference Paper
2021 IEEE International Conference on Robotics and Automation (ICRA). Streiff, Dominic; Bernreiter, Lukas; Tschopp, Florian; et al. (2021)
With the advent of powerful, lightweight 3D LiDARs, these sensors have become the heart of many navigation and SLAM algorithms on various autonomous systems. Point cloud registration methods that work with unstructured point clouds, such as ICP, are often computationally expensive or require a good initial guess. Furthermore, 3D feature-based registration methods have never quite reached the robustness of 2D methods in visual SLAM. With the continuously increasing resolution of LiDAR range images, these 2D methods not only become applicable but should also exploit the illumination-independent modalities that come with them, such as depth and intensity. In visual SLAM, deep-learned 2D features and descriptors perform exceptionally well compared to traditional methods. In this publication, we use a state-of-the-art 2D feature network as the basis for 3D3L, exploiting both intensity and depth of LiDAR range images to extract powerful 3D features. Our results show that keypoints and descriptors extracted from LiDAR scan images outperform the state of the art on different benchmark metrics and allow for robust scan-to-scan alignment as well as global localization.

- maplab 2.0 - A Modular and Multi-Modal Mapping Framework
Item type: Journal Article
IEEE Robotics and Automation Letters. Cramariuc, Andrei; Bernreiter, Lukas; Tschopp, Florian; et al. (2023)
Integrating multiple sensor modalities and deep learning into Simultaneous Localization And Mapping (SLAM) systems is an area of significant interest in current research. Multi-modality is a stepping stone towards achieving robustness in challenging environments and interoperability of heterogeneous multi-robot systems with varying sensor setups. With maplab 2.0, we provide a versatile open-source platform that facilitates developing, testing, and integrating new modules and features into a fully-fledged SLAM system. Through extensive experiments, we show that maplab 2.0's accuracy is comparable to the state of the art on the HILTI 2021 benchmark. Additionally, we showcase the flexibility of our system with three use cases: i) large-scale (~10 km) multi-robot multi-session (23 missions) mapping, ii) integration of non-visual landmarks, and iii) incorporation of a semantic object-based loop closure module into the mapping framework.

- Flying Robotic Workers: High-precision Aerial Physical Interaction
Item type: Presentation
Pantic, Michael; Girod, Rik; Lanegger, Christian; et al. (2022)
We present our research in aerial robotics and 3D mapping at the Swiss Robotics Day 2022 in Lausanne, Switzerland. This includes a booth open to researchers and the general public, and demos where we show contact-based inspection.
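The 3D3L entry above runs 2D feature networks on LiDAR range images. A minimal sketch of the underlying step, the spherical projection of a point cloud into a (depth, intensity) image, is shown below; the field of view, resolution, and handling of multiple returns are illustrative assumptions, not the paper's sensor parameters.

```python
import numpy as np

def pointcloud_to_range_image(points, intensities, n_rows=64, n_cols=1024,
                              fov_up=np.deg2rad(15.0),
                              fov_down=np.deg2rad(-15.0)):
    """Spherical projection of a LiDAR scan into a 2-channel
    (depth, intensity) image that 2D feature pipelines can consume."""
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])    # azimuth in (-pi, pi]
    pitch = np.arcsin(points[:, 2] / depth)         # elevation
    # map angles to pixel coordinates
    u = (((yaw + np.pi) / (2.0 * np.pi)) * n_cols).astype(int) % n_cols
    v = ((fov_up - pitch) / (fov_up - fov_down)) * n_rows
    v = np.clip(v.astype(int), 0, n_rows - 1)
    img = np.zeros((n_rows, n_cols, 2), dtype=np.float32)
    # write far points first so the closest return per pixel wins
    order = np.argsort(-depth)
    img[v[order], u[order], 0] = depth[order]
    img[v[order], u[order], 1] = intensities[order]
    return img

# two returns along the sensor's x-axis; the nearer one should win
pts = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
img = pointcloud_to_range_image(pts, np.array([0.7, 0.3]))
```

Both points project to the image center (elevation 0, azimuth 0), and the pixel keeps the nearer return's depth and intensity.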
- A framework for collaborative multi-robot mapping using spectral graph wavelets
Item type: Journal Article
The International Journal of Robotics Research. Bernreiter, Lukas; Khattak, Shehryar Masaud Khan; Ott, Lionel; et al. (2024)
The exploration of large-scale unknown environments can benefit from the deployment of multiple robots for collaborative mapping. Each robot explores a section of the environment and communicates onboard pose estimates and maps to a central server to build an optimized global multi-robot map. Naturally, inconsistencies can arise between onboard and server estimates due to onboard odometry drift, failures, or degeneracies. The mapping server can correct and overcome such failure cases using computationally expensive operations such as inter-robot loop closure detection and multi-modal mapping. However, the individual robots do not benefit from the collaborative map if the mapping server provides no feedback. Although server updates from the multi-robot map can strategically benefit the robotic mission, most existing work omits them due to the associated computational and bandwidth-related costs. Motivated by this challenge, this paper proposes a novel collaborative mapping framework that enables global mapping consistency between the robots and the mapping server. In particular, we propose graph spectral analysis, at different spatial scales, to detect structural differences between robot and server graphs and to generate the necessary constraints for the individual robot pose graphs. Our approach specifically finds the nodes that correspond to the drift's origin rather than the nodes where the error becomes largest. We thoroughly analyze and validate our proposed framework using several real-world multi-robot field deployments, where we show improvements of the onboard system of up to 90% and recover the onboard estimation from localization failures and even from degeneracies within its estimation.

- Collaborative Robot Mapping using Spectral Graph Analysis
Item type: Conference Paper
2022 International Conference on Robotics and Automation (ICRA). Bernreiter, Lukas; Khattak, Shehryar; Ott, Lionel; et al. (2022)
In this paper, we deal with the problem of creating globally consistent pose graphs in a centralized multi-robot SLAM framework. For each robot to act autonomously, individual onboard pose estimates and maps are maintained, which are then communicated to a central server to build an optimized global map. However, inconsistencies between onboard and server estimates can occur due to onboard odometry drift or failure. Furthermore, robots do not benefit from the collaborative map if the server provides no feedback in a computationally tractable and bandwidth-efficient manner. Motivated by this challenge, this paper proposes a novel collaborative mapping framework to enable accurate global mapping among robots and the server. In particular, structural differences between robot and server graphs are exploited at different spatial scales using graph spectral analysis to generate the necessary constraints for the individual robot pose graphs. The proposed approach is thoroughly analyzed and validated using several real-world multi-robot field deployments, where we show improvements of the onboard system of up to 90%.

- CERBERUS in the DARPA Subterranean Challenge
Item type: Review Article
Science Robotics. Tranzatto, Marco; Miki, Takahiro; Dharmadhikari, Mihir; et al. (2022)
This article presents the core technologies and deployment strategies of Team CERBERUS that enabled our winning run in the DARPA Subterranean Challenge finals. CERBERUS is a robotic system-of-systems involving walking and flying robots that offers resilient autonomy as well as mapping and navigation capabilities to explore complex underground environments.

- Multiple Hypothesis Semantic Mapping for Robust Data Association
Item type: Journal Article
IEEE Robotics and Automation Letters. Bernreiter, Lukas; Gawel, Abel Roman; Sommer, Hannes; et al. (2019)
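The two spectral-graph papers above detect structural differences between robot and server pose graphs; their actual method operates with graph spectral wavelets at multiple spatial scales. As a much-reduced illustration of the core idea, the sketch below compares plain Laplacian spectra of two hypothetical pose graphs, where a server-side loop closure changes the spectrum. The chain size and edge choice are made up for the example.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Ascending eigenvalues of the graph Laplacian L = D - A; the
    spectrum summarizes connectivity and shifts when edges change."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)

def chain_graph(n):
    """Adjacency matrix of an odometry-style chain of n poses."""
    adj = np.zeros((n, n))
    idx = np.arange(n - 1)
    adj[idx, idx + 1] = adj[idx + 1, idx] = 1.0
    return adj

# robot-side graph: a plain odometry chain; server-side graph: the same
# chain plus one loop-closure edge found by place recognition
robot = chain_graph(10)
server = chain_graph(10)
server[0, 9] = server[9, 0] = 1.0

# a large per-eigenvalue gap flags a structural difference to resolve
gap = np.abs(laplacian_spectrum(server) - laplacian_spectrum(robot))
```

Comparing spectra rather than raw graphs is what makes such a check cheap enough to run as server feedback; the papers refine this with wavelets to also localize where in the graph the discrepancy originates.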