Show simple item record

dc.contributor.author: Fehr, Marius
dc.contributor.author: Taubner, Tim
dc.contributor.author: Liu, Yang
dc.contributor.author: Siegwart, Roland
dc.contributor.author: Cadena, Cesar
dc.date.accessioned: 2020-01-30T06:32:33Z
dc.date.available: 2020-01-29T17:26:00Z
dc.date.available: 2020-01-30T06:32:33Z
dc.date.issued: 2019
dc.identifier.uri: http://hdl.handle.net/20.500.11850/395622
dc.identifier.doi: 10.3929/ethz-b-000395622
dc.description.abstract: Safe and efficient path planning is crucial for autonomous mobile robots. A prerequisite for path planning is a comprehensive understanding of the 3D structure of the robot's environment. On MAVs this is commonly achieved using low-cost sensors, such as stereo or RGB-D cameras. These sensors may fail to provide depth measurements in textureless or IR-absorbing areas and have limited effective range. In path planning, this results in inefficient trajectories or in failure to recognize a feasible path to the goal, significantly impairing the robot's mobility. Recent advances in deep learning enable us to exploit prior experience about the shape of the world and hence to infer complete depth maps from color images and additional sparse depth measurements. In this work, we present an augmented planning system and investigate the effects of employing state-of-the-art depth completion techniques, specifically trained to augment sparse depth maps originating from RGB-D sensors, semi-dense methods, and stereo matchers. We extensively evaluate our approach in online path planning experiments based on simulated data, as well as in global path planning experiments based on real-world MAV data. We show that our augmented system, provided with only sparse depth perception, achieves performance on par with ground-truth depth input in simulated online planning experiments. On real-world MAV data, the augmented system outperforms a planner based on very dense RGB-D depth maps. [en_US]
dc.format: application/pdf
dc.language.iso: en [en_US]
dc.publisher: Cornell University [en_US]
dc.rights.uri: http://rightsstatements.org/page/InC-NC/1.0/
dc.title: Predicting Unobserved Space For Planning via Depth Map Augmentation [en_US]
dc.type: Working Paper
dc.rights.license: In Copyright - Non-Commercial Use Permitted
dc.date.published: 2019-11-13
ethz.journal.title: arXiv
ethz.pages.start: 1911.05761 [en_US]
ethz.size: 7 p. [en_US]
ethz.identifier.arxiv: 1911.05761
ethz.publication.place: Ithaca, NY [en_US]
ethz.publication.status: published [en_US]
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::03737 - Siegwart, Roland Y. / Siegwart, Roland Y. [en_US]
ethz.leitzahl.certified: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::03737 - Siegwart, Roland Y. / Siegwart, Roland Y. [en_US]
ethz.date.deposited: 2020-01-29T17:26:08Z
ethz.source: FORM
ethz.eth: yes [en_US]
ethz.availability: Open access [en_US]
ethz.rosetta.installDate: 2020-01-30T06:32:43Z
ethz.rosetta.lastUpdated: 2022-03-29T00:53:06Z
ethz.rosetta.versionExported: true
ethz.COinS: ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.atitle=Predicting%20Unobserved%20Space%20For%20Planning%20via%20Depth%20Map%20Augmentation&rft.jtitle=arXiv&rft.date=2019&rft.spage=1911.05761&rft.au=Fehr,%20Marius&Taubner,%20Tim&Liu,%20Yang&Siegwart,%20Roland&Cadena,%20Cesar&rft.genre=preprint&