Doctoral Thesis
Rights / license: In Copyright - Non-Commercial Use Permitted
Multi-robot systems offer several advantages over their single-robot counterparts, such as robustness to individual robot failures and faster exploration in time-critical search and rescue missions. To collaborate in these scenarios, the robots need to jointly build a unified map representation in which they can co-localize. The goal of this thesis is to develop a real-time solution to the Simultaneous Localization and Mapping (SLAM) problem for multiple robots equipped with 3D sensors. In particular, we focus on LiDAR sensors, which generate precise reconstructions of the environment and are robust to changes in lighting conditions. Multi-robot SLAM with 3D point clouds poses several challenges. First, a global place recognition technique is often required, as the relative transformations between the robots are not always known. Second, multi-robot systems generate large quantities of data, which must be processed efficiently to achieve real-time performance. Finally, such systems often operate over bandwidth-limited wireless communication channels, so a compact representation that can easily be stored and transmitted is required. This thesis addresses these three challenges. In our work, we perform global localization using a novel segment extraction and matching algorithm: 3D point cloud measurements are segmented, and each segment is compressed into a compact descriptor. Matching descriptors are retrieved from a map and subsequently filtered based on geometric consistency. The output of this algorithm is a 6 Degrees of Freedom (DoF) pose in a global map, obtained without prior position information. Globally recognizing places on the basis of segments can be more efficient than using key-point descriptors, as fewer descriptors are usually required to describe a place.
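The retrieval-and-filtering step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: descriptors are assumed to be fixed-length vectors, retrieval is brute-force nearest-neighbour search, and geometric consistency is checked greedily by requiring that pairwise distances between segment centroids agree between the query scan and the map. All function names, tolerances, and the greedy strategy are illustrative assumptions.

```python
import numpy as np

def match_segments(query_desc, map_desc, k=5):
    """For each query segment descriptor, retrieve the indices of the
    k nearest map descriptors by Euclidean distance (brute force)."""
    d = np.linalg.norm(query_desc[:, None, :] - map_desc[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :k]

def geometric_consistency(query_centroids, map_centroids, candidates, tol=0.5):
    """Greedily grow a set of candidate correspondences (query segment i,
    map segment j) whose pairwise centroid distances agree between the
    query and the map within `tol` metres; inconsistent pairs are dropped."""
    pairs = [(i, j) for i, row in enumerate(candidates) for j in row]
    consistent = []
    for qi, mi in pairs:
        ok = all(
            abs(np.linalg.norm(query_centroids[qi] - query_centroids[qj])
                - np.linalg.norm(map_centroids[mi] - map_centroids[mj])) < tol
            for qj, mj in consistent
        )
        if ok:
            consistent.append((qi, mi))
    return consistent
```

The surviving correspondences can then feed a standard 6-DoF pose estimation step (e.g. via least squares on the matched centroids); that step is omitted here.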
Additionally, we have developed a set of incremental, time-effective algorithms that exploit the inherently sequential nature of 3D LiDAR measurements. To further address the real-time requirement, we present an ego-motion estimator that attains efficiency by non-uniformly sampling knots over a continuous-time trajectory. A compact map representation is achieved with a novel data-driven descriptor for 3D point clouds, extracted by a Convolutional Neural Network (CNN) with an autoencoder-like architecture. The novelty of this approach is that it simultaneously supports robot localization, 3D environment reconstruction, and semantic extraction. These compact point cloud descriptors can easily be transmitted and used, for example, to provide structural feedback to end-users operating in remote locations. We have incorporated all of these functionalities into a complete multi-robot SLAM solution that operates in real time. The effectiveness of our system has been demonstrated in multiple experiments, in both urban driving and search and rescue environments. Specifically, we achieve LiDAR-based global localization at 10 Hz in the largest map of the KITTI dataset. Using our mapping approach, a single computer can process in real time the data generated by five Velodyne LiDAR sensors. When retrieving matching descriptors, our data-driven approach increases the area under the ROC curve by 28.3% over state-of-the-art eigenvalue-based descriptors. Finally, this descriptor allows us to generate dense reconstructions while offering a compression ratio of up to 43.5x.
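The data flow of such an autoencoder-style descriptor can be illustrated with a deliberately simplified sketch: a segment is voxelized into an occupancy grid, an encoder maps it to a short latent code (the descriptor), and a decoder reconstructs the grid from that code. Everything here is an assumption for illustration: the grid resolution, the 64-dimensional latent size, and the untrained random linear layers that stand in for the thesis's trained CNN. The resulting numeric compression ratio is a property of this sketch, not the 43.5x figure reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

VOXEL_GRID = (32, 32, 16)  # assumed input resolution per segment
LATENT_DIM = 64            # assumed descriptor length

n_in = int(np.prod(VOXEL_GRID))
# Random, untrained linear layers standing in for the trained CNN
# encoder/decoder; only shapes and data flow are demonstrated.
W_enc = rng.standard_normal((LATENT_DIM, n_in)) * 0.01
W_dec = rng.standard_normal((n_in, LATENT_DIM)) * 0.01

def encode(voxels):
    """Compress a binary occupancy grid into a compact descriptor."""
    return np.tanh(W_enc @ voxels.ravel())

def decode(code):
    """Reconstruct an occupancy probability grid from the descriptor."""
    logits = W_dec @ code
    return (1.0 / (1.0 + np.exp(-logits))).reshape(VOXEL_GRID)

# A sparse random occupancy grid stands in for a voxelized LiDAR segment.
segment = (rng.random(VOXEL_GRID) < 0.05).astype(np.float64)
descriptor = encode(segment)
reconstruction = decode(descriptor)

# Storing the 64-float code instead of the full grid gives, in this
# sketch, a ratio of n_in / LATENT_DIM.
compression_ratio = n_in / LATENT_DIM
```

Because the same latent code serves as input to both the reconstruction decoder and (in the thesis's design) a retrieval and semantic branch, transmitting only the descriptor suffices for localization and dense map recovery at the receiver.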
Organisational unit: 03737 - Siegwart, Roland Y.