dc.contributor.author: Zhou, Jiawei
dc.contributor.author: Mascaro, Ruben
dc.contributor.author: Cadena, Cesar
dc.contributor.author: Chli, Margarita
dc.contributor.author: Teixeira, Lucas
dc.date.accessioned: 2024-09-16T09:08:35Z
dc.date.available: 2024-09-15T21:55:34Z
dc.date.available: 2024-09-16T07:24:34Z
dc.date.available: 2024-09-16T09:08:35Z
dc.date.issued: 2024
dc.identifier.uri: http://hdl.handle.net/20.500.11850/693951
dc.identifier.doi: 10.3929/ethz-b-000693951
dc.description.abstract: Conducting visual assessments under the canopy using mobile robots is an emerging task in smart farming and forestry. However, registering images across different data-collection days, and especially across seasons, is challenging due to the self-occluding geometry and temporal dynamics of forests and orchards. This paper proposes a new approach for registering under-canopy image sequences, in general and under these challenging cross-season conditions in particular. Our methodology leverages standard GPS data and deep-learning-based perspective-to-bird's-eye-view conversion to provide an initial estimate of the tree positions in the images and of their association across datasets. Furthermore, it introduces an innovative strategy for extracting tree trunks and clean ground surfaces from the noisy and sparse 3D reconstructions created from the image sequences, using these features to achieve precise alignment. Our robust alignment method effectively mitigates the position and scale drift that may arise from GPS inaccuracies and sparse Structure-from-Motion (SfM) limitations. We evaluate our approach on three challenging real-world datasets, demonstrating that it outperforms ICP-based methods by 50% on average and surpasses FGR and TEASER++ by over 90% in alignment accuracy. These results highlight our method's cost efficiency and robustness, even in the presence of severe outliers and sparsity. Code: https://github.com/VIS4ROBlab/bev_undercanopy_registration [en_US]
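
To make the alignment idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' released implementation; see the repository linked above for that): it robustly estimates a 2D similarity transform between two sets of tree-trunk positions, such as the GPS-anchored BEV tree maps the paper uses for initial association, using a closed-form Umeyama fit inside a RANSAC loop to tolerate outliers. All function names, the 0.3 m inlier tolerance, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def umeyama_2d(src, dst):
    """Closed-form 2D similarity transform (Umeyama, 1991) mapping src -> dst.
    src, dst: (N, 2) arrays of matched points. Returns scale s, rotation R, translation t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)              # 2x2 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        S[1, 1] = -1.0
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src        # isotropic scale (handles SfM scale drift)
    t = mu_d - s * (R @ mu_s)
    return s, R, t

def ransac_similarity(src, dst, iters=500, inlier_thresh=0.3, seed=0):
    """RANSAC over minimal 2-point samples; final refit on the best inlier set.
    inlier_thresh is in map units (an assumed tolerance, e.g. 0.3 m between trunks)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(src), size=2, replace=False)
        if np.linalg.norm(src[i] - src[j]) < 1e-6:  # skip degenerate samples
            continue
        s, R, t = umeyama_2d(src[[i, j]], dst[[i, j]])
        resid = np.linalg.norm(s * (R @ src.T).T + t - dst, axis=1)
        inliers = resid < inlier_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return umeyama_2d(src[best], dst[best])

# Synthetic check: recover a known transform despite corrupted associations.
rng = np.random.default_rng(1)
map_a = rng.uniform(0, 50, size=(40, 2))          # session-A trunk positions (metres)
th = np.deg2rad(12.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
map_b = 1.05 * (R_true @ map_a.T).T + np.array([3.0, -1.5])
map_b[:6] += rng.normal(0.0, 8.0, size=(6, 2))    # a few gross outliers
s, R, t = ransac_similarity(map_a, map_b)
print(f"recovered scale ~ {s:.3f} (true 1.05)")
```

A similarity transform (rather than a rigid one) is fitted here because the abstract highlights scale drift from sparse SfM; the per-sample RANSAC fit is one standard way to obtain the robustness to severe outliers that the evaluation claims.
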
dc.format: application/pdf [en_US]
dc.language.iso: en [en_US]
dc.rights.uri: http://rightsstatements.org/page/InC-NC/1.0/
dc.title: Temporal- and Viewpoint-Invariant Registration for Under-Canopy Footage using Deep-Learning-based Bird's-Eye View Prediction [en_US]
dc.type: Conference Paper
dc.rights.license: In Copyright - Non-Commercial Use Permitted
ethz.size: 8 p. [en_US]
ethz.version.deposit: acceptedVersion [en_US]
ethz.event: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) [en_US]
ethz.event.location: Abu Dhabi [en_US]
ethz.event.date: October 14-18, 2024 [en_US]
ethz.notes: This work has been partly funded by the European Research Council (ERC) under the project SkEyes (Grant agreement no. 101089328), by ETH Zurich Research Grant No. 21-1 ETH-27, and by Unity Technologies. [en_US]
ethz.publication.status: accepted [en_US]
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::03737 - Siegwart, Roland Y. / Siegwart, Roland Y. [en_US]
ethz.leitzahl: ETH Zürich::00002 - ETH Zürich::00012 - Lehre und Forschung::00007 - Departemente::02130 - Dep. Maschinenbau und Verfahrenstechnik / Dep. of Mechanical and Process Eng.::02620 - Inst. f. Robotik u. Intelligente Systeme / Inst. Robotics and Intelligent Systems::09570 - Hutter, Marco / Hutter, Marco [en_US]
ethz.date.deposited: 2024-09-15T21:55:34Z
ethz.source: FORM
ethz.eth: yes [en_US]
ethz.availability: Open access [en_US]