Mapping and Localization with Heterogeneous Robots
dc.contributor.author
Gawel, Abel
dc.contributor.supervisor
Siegwart, Roland
dc.contributor.supervisor
Leutenegger, Stefan
dc.date.accessioned
2019-01-11T12:11:43Z
dc.date.available
2019-01-11T10:46:08Z
dc.date.available
2019-01-11T12:11:43Z
dc.date.issued
2018
dc.identifier.uri
http://hdl.handle.net/20.500.11850/315116
dc.identifier.doi
10.3929/ethz-b-000315116
dc.description.abstract
Research on unmanned ground vehicles (UGVs) and micro aerial vehicles (MAVs) has made great progress in recent years. These platforms typically come with different characteristics. Aerial robots can be rapidly deployed and survey large areas. However, their payload and operation times are typically limited, allowing only limited on-board computation and sensor suites. UGVs, on the other hand, offer extended operation times and high payloads, but are considerably slower than aerial robots. In contemporary applications, it is highly desirable to use teams of such heterogeneous robots to exploit these complementary features. One major application is localization between these robots, e.g., rapidly constructing a map using a MAV and then localizing a UGV in this map. However, localization across different viewpoints and potentially different sensors is a difficult task. We therefore identify a need to advance the science of localization for heterogeneous robots.
The overall approach of this thesis is to investigate different abstractions of robotic mapping data that yield invariance to the heterogeneities of ground and aerial robots. We formulate the challenge of localization as a feature matching problem. The first contribution focuses on localizing between vision and LiDAR data. Here, we propose an abstraction towards one of the two modalities. Since localizing in 3D data is more viewpoint invariant than in image data, we choose to represent both modalities in the LiDAR domain, i.e., as 3D point clouds. We therefore propose to either perform a dense reconstruction from the vision data or use sparse direct key-point mapping. The main challenge is then to bridge the different sampling characteristics of the two sources with a suitable data description. We find that 3D descriptors are a promising avenue for localization between the modalities. While established 3D descriptors can give good performance on matching between LiDAR data and densely reconstructed data, they suffer when using sparse 3D key-point data from visual-inertial (VI) mapping. Inspired by successful invariant 2D image descriptors, we thus transfer their working principle to 3D space and propose a novel descriptor based on binary density comparisons of 3D points. Our experiments show that the descriptor works well for the challenge of localizing between vision and LiDAR data.
While our first contribution is applicable to localization between the two most common mapping sensor configurations in robotic applications, we extend our formulation in the second part of this thesis. Instead of using an abstract appearance of an environment, e.g., as 3D points or image features, we take a step further by considering the underlying structure of man-made environments. We observe that the underlying semantic meaning of scenes does not change with viewpoint, appearance, or season. Given recent advances in semantic scene understanding, we therefore find that localization based on semantics is a promising avenue. While preserving the overall formulation of the feature-based localization architecture, we develop a novel map representation and feature extraction method that account for the semantic information and spatial topology of scenes. Here, we propose to represent maps as graphs of connected semantic instances. The localization problem is thus reduced to searching for a sub-graph, i.e., the query, in a potentially large graph, i.e., the database. However, finding a sub-graph of a graph is an NP-complete problem, and its computation is prohibitively expensive for common robotic applications. Motivated by this, we propose graph descriptors that capture the local structure of sub-graphs, can conveniently be matched in a k-nearest-neighbor (kNN) fashion, and can therefore be used in our general abstraction-layer-based localization framework. One concern with the 3D-based methods is scaling, as the computational effort of matching many high-dimensional features is generally high and scales linearly with the size of the environment. Using semantic graphs, by contrast, we can represent the environment very compactly, with a single vertex per semantic object instead of the multiple 3D key-points and features of our structure-based approach. Hence, descriptor matching becomes more lightweight in larger-scale scenarios.
We evaluate the effectiveness of our approach in kilometer-scale scenarios on both simulated and real data. The results show that the proposed approach achieves a much higher degree of viewpoint invariance than state-of-the-art appearance-based algorithms.
en_US
dc.format
application/pdf
en_US
dc.language.iso
en
en_US
dc.publisher
ETH Zurich
en_US
dc.rights.uri
http://rightsstatements.org/page/InC-NC/1.0/
dc.subject
Mapping
en_US
dc.subject
Localization
en_US
dc.subject
Semantic Segmentation
en_US
dc.subject
SLAM
en_US
dc.subject
ROBOT VISION
en_US
dc.subject
Robotics
en_US
dc.subject
ROBOT POSITION + ROBOT ORIENTATION
en_US
dc.title
Mapping and Localization with Heterogeneous Robots
en_US
dc.type
Doctoral Thesis
dc.rights.license
In Copyright - Non-Commercial Use Permitted
ethz.size
156 p.
en_US
ethz.code.ddc
DDC - DDC::6 - Technology, medicine and applied sciences::621.3 - Electric engineering
ethz.grant
Long-Term Human-Robot Teaming for Robot-Assisted Disaster Response
en_US
ethz.identifier.diss
25567
en_US
ethz.publication.place
Zurich
en_US
ethz.publication.status
published
en_US
ethz.leitzahl
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::03737 - Siegwart, Roland Y. / Siegwart, Roland Y.
en_US
ethz.leitzahl.certified
ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::03737 - Siegwart, Roland Y. / Siegwart, Roland Y.
en_US
ethz.grant.agreementno
609763
ethz.grant.fundername
EC
ethz.grant.funderDoi
10.13039/501100000780
ethz.grant.program
FP7
ethz.relation.compiles
20.500.11850/118400
ethz.relation.compiles
10.3929/ethz-a-010819655
ethz.relation.compiles
20.500.11850/125011
ethz.relation.compiles
20.500.11850/253679
ethz.relation.compiles
20.500.11850/236352
ethz.date.deposited
2019-01-11T10:46:10Z
ethz.source
FORM
ethz.eth
yes
en_US
ethz.availability
Open access
en_US
ethz.rosetta.installDate
2019-01-11T12:12:58Z
ethz.rosetta.lastUpdated
2021-02-15T03:17:39Z
ethz.rosetta.versionExported
true
ethz.COinS
ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Mapping%20and%20Localization%20with%20Heterogeneous%20Robots&rft.date=2018&rft.au=Gawel,%20Abel&rft.genre=unknown&rft.btitle=Mapping%20and%20Localization%20with%20Heterogeneous%20Robots