Show simple item record

dc.contributor.author
Dubé, Renaud
dc.contributor.supervisor
Siegwart, Roland
dc.contributor.supervisor
Kaess, Michael
dc.contributor.supervisor
Stachniss, Cyrill
dc.date.accessioned
2019-02-06T17:39:03Z
dc.date.available
2019-02-05T17:29:57Z
dc.date.available
2019-02-06T10:12:32Z
dc.date.available
2019-02-06T17:39:03Z
dc.date.issued
2018
dc.identifier.uri
http://hdl.handle.net/20.500.11850/323152
dc.identifier.doi
10.3929/ethz-b-000323152
dc.description.abstract
Multi-robot systems offer several advantages over their single-robot counterparts, such as robustness to robot failure and faster exploration in time-critical search and rescue missions. To collaborate in these scenarios, the robots need to jointly build a unified map representation in which they can co-localize each other. The goal of this thesis is to develop a real-time solution to the Simultaneous Localization and Mapping (SLAM) problem for multiple robots equipped with 3D sensors. In particular, we focus on LiDAR sensors, which can be used to generate precise reconstructions of the environment and are robust to changes in lighting conditions. Multi-robot SLAM with 3D point clouds poses several challenges. First, a global place recognition technique is often required, as the relative transformations between the robots are not always known. Second, multi-robot systems generate large quantities of data which must be processed efficiently to achieve real-time performance. Finally, such systems often operate over bandwidth-limited wireless communication channels, so a compact representation that can easily be stored and transmitted is required. This thesis addresses these three challenges. In our work, we perform global localization using a novel segment extraction and matching algorithm: 3D point cloud measurements are segmented, and each segment is compressed into a compact descriptor. Matching descriptors are retrieved from a map and subsequently filtered based on geometric consistency. The output of this algorithm is a 6 Degrees of Freedom (DoF) pose in a global map, obtained without prior position information. Globally recognizing places on the basis of segments can be more efficient than using key-point descriptors, as fewer descriptors are usually required to describe a place.
Additionally, we have developed a set of incremental, time-efficient algorithms that exploit the inherently sequential nature of 3D LiDAR measurements. To further address the real-time requirement, we present an ego-motion estimator that attains efficiency by non-uniformly sampling knots over a continuous-time trajectory. A compact map representation is achieved with a novel data-driven descriptor for 3D point clouds, extracted by a Convolutional Neural Network (CNN) with an autoencoder-like architecture. The novelty of this approach is that it simultaneously allows us to perform robot localization, 3D environment reconstruction, and semantic extraction. These compact point cloud descriptors can easily be transmitted and used, for example, to provide structural feedback to end-users operating in remote locations. We have incorporated all of these functionalities into a complete multi-robot SLAM solution that operates in real time. The effectiveness of our system has been demonstrated in multiple experiments, in both urban driving and search and rescue environments. Specifically, we achieve LiDAR-based global localization at 10 Hz in the largest map of the KITTI dataset. Using our mapping approach, a single computer can process in real time the data generated by five Velodyne LiDAR sensors. When retrieving matching descriptors, our data-driven approach yields a 28.3% increase in area under the ROC curve over state-of-the-art eigenvalue-based descriptors. Finally, this descriptor allows us to generate dense reconstructions while offering a compression ratio of up to 43.5x.
en_US
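
The segment-based place recognition pipeline described in the abstract (segment the point cloud, compress each segment into a descriptor, retrieve matching descriptors from a map, filter the matches by geometric consistency) can be sketched as follows. This is a minimal illustrative stand-in, not the thesis implementation: the toy bounding-extent descriptor, the function names, and all thresholds are assumptions made for this example.

```python
# Hedged sketch of segment-based place recognition with a
# geometric-consistency filter. Segments are plain lists of 3D points.
import math

def centroid(points):
    # Mean of the segment's points, axis by axis.
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def descriptor(points):
    # Toy stand-in for a compact per-segment descriptor:
    # maximum extent from the centroid along each axis, plus point count.
    c = centroid(points)
    spread = tuple(max(abs(p[i] - c[i]) for p in points) for i in range(3))
    return spread + (len(points),)

def match_segments(query_segs, map_segs, thresh=0.5):
    # Retrieve candidate (query, map) pairs by descriptor distance.
    matches = []
    for qi, q in enumerate(query_segs):
        dq = descriptor(q)
        for mi, m in enumerate(map_segs):
            if math.dist(dq, descriptor(m)) < thresh:
                matches.append((qi, mi))
    return matches

def geometric_consistency(matches, query_segs, map_segs, tol=0.3):
    # Keep the largest group of matches whose pairwise centroid distances
    # agree between the query and the map -- a simple stand-in for the
    # geometric-consistency filtering named in the abstract.
    qc = [centroid(s) for s in query_segs]
    mc = [centroid(s) for s in map_segs]
    best = []
    for anchor in matches:
        group = [anchor]
        for cand in matches:
            if cand is anchor:
                continue
            dq = math.dist(qc[anchor[0]], qc[cand[0]])
            dm = math.dist(mc[anchor[1]], mc[cand[1]])
            if abs(dq - dm) < tol:
                group.append(cand)
        if len(group) > len(best):
            best = group
    return best
```

Because the toy descriptor is translation-invariant, a spurious match to a distant but similar-looking map segment survives retrieval yet is rejected by the consistency step, which is exactly the role that step plays in the pipeline.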
dc.format
application/pdf
en_US
dc.language.iso
en
en_US
dc.publisher
ETH Zurich
en_US
dc.rights.uri
http://rightsstatements.org/page/InC-NC/1.0/
dc.title
Real-Time Multi-Robot Localization and Mapping with 3D Point Clouds
en_US
dc.type
Doctoral Thesis
dc.rights.license
In Copyright - Non-Commercial Use Permitted
dc.date.published
2019-02-06
ethz.size
145 p.
en_US
ethz.code.ddc
DDC - DDC::6 - Technology, medicine and applied sciences::621.3 - Electric engineering
ethz.code.ddc
DDC - DDC::0 - Computer science, information & general works::004 - Data processing, computer science
ethz.identifier.diss
25582
en_US
ethz.publication.place
Zurich
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::03737 - Siegwart, Roland Y. / Siegwart, Roland Y.
en_US
ethz.date.deposited
2019-02-05T17:30:21Z
ethz.source
FORM
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2019-02-06T17:39:30Z
ethz.rosetta.lastUpdated
2020-02-15T16:57:51Z
ethz.rosetta.exportRequired
true
ethz.rosetta.versionExported
true
ethz.COinS
ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Real-Time%20Multi-Robot%20Localization%20and%20Mapping%20with%203D%20Point%20Clouds&rft.date=2018&rft.au=Dub%C3%A9,%20Renaud&rft.genre=unknown&rft.btitle=Real-Time%20Multi-Robot%20Localization%20and%20Mapping%20with%203D%20Point%20Clouds