Show simple item record

dc.contributor.author: Wei, Yi
dc.contributor.author: Liu, Shaohui
dc.contributor.author: Zhou, Jie
dc.contributor.author: Lu, Jiwen
dc.date.accessioned: 2023-09-07T14:02:35Z
dc.date.available: 2023-08-21T06:57:02Z
dc.date.available: 2023-08-24T06:58:57Z
dc.date.available: 2023-09-07T14:02:35Z
dc.date.issued: 2023-09-01
dc.identifier.issn: 0162-8828
dc.identifier.issn: 1939-3539
dc.identifier.other: 10.1109/TPAMI.2023.3263464
dc.identifier.uri: http://hdl.handle.net/20.500.11850/627408
dc.description.abstract: In this work, we present NerfingMVS, a new multi-view depth estimation method that utilizes both conventional reconstruction and learning-based priors over the recently proposed neural radiance fields (NeRF). Unlike existing neural-network-based optimization methods that rely on estimated correspondences, our method directly optimizes over implicit volumes, eliminating the challenging step of matching pixels in indoor scenes. The key to our approach is to utilize the learning-based priors to guide the optimization process of NeRF. Our system first adapts a monocular depth network to the target scene by fine-tuning on its MVS reconstruction from COLMAP. Then, we show that the shape-radiance ambiguity of NeRF still exists in indoor environments and propose to address the issue by employing the adapted depth priors to monitor the sampling process of volume rendering. Finally, a per-pixel confidence map, acquired by computing the error on the rendered image, can be used to further improve the depth quality. We further present NerfingMVS++, which introduces a coarse-to-fine depth-prior training strategy that directly utilizes sparse SfM points and replaces uniform sampling with Gaussian sampling to boost performance. Experiments show that NerfingMVS and its extension NerfingMVS++ achieve state-of-the-art performance on the indoor datasets ScanNet and NYU Depth V2. In addition, we show that the guided optimization scheme does not sacrifice the original synthesis capability of neural radiance fields, improving rendering quality on both seen and novel views. Code is available at https://github.com/weiyithu/NerfingMVS.
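The depth-guided sampling idea described in the abstract can be sketched in a few lines: instead of sampling ray points uniformly over the full near/far range, the adapted depth prior narrows the sampling interval per ray, and NerfingMVS++ further concentrates samples with a Gaussian around the prior depth. The function below is a minimal illustrative sketch, not the paper's exact formulation; the names (`guided_sample_depths`, `error`) and the linear rule for deriving the interval width from the per-pixel error are assumptions.

```python
import numpy as np

def guided_sample_depths(prior_depth, error, num_samples=64,
                         near=0.1, far=10.0, rng=None):
    """Sketch of depth-prior-guided sampling for volume rendering.

    `prior_depth` is the adapted depth prior for one ray; `error` is a
    per-pixel confidence proxy (larger error -> wider sampling interval).
    Both the names and the interval rule are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Widen the interval for unreliable priors, clamped to [near, far].
    half_width = np.clip(error * prior_depth, 0.05, far - near)
    t_near = max(near, prior_depth - half_width)
    t_far = min(far, prior_depth + half_width)
    # NerfingMVS-style: stratified uniform samples inside the guided interval.
    bins = np.linspace(t_near, t_far, num_samples + 1)
    uniform = bins[:-1] + rng.random(num_samples) * (bins[1:] - bins[:-1])
    # NerfingMVS++-style: Gaussian samples concentrated around the prior depth.
    gaussian = np.clip(rng.normal(prior_depth, half_width / 2, num_samples),
                       t_near, t_far)
    return np.sort(uniform), np.sort(gaussian)
```

In both variants, samples stay inside the prior-derived interval, so the radiance field is evaluated only near plausible surface depths, which is what resolves the shape-radiance ambiguity on textureless indoor surfaces.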
dc.language.iso: en
dc.publisher: IEEE
dc.subject: Depth estimation
dc.subject: 3D reconstruction
dc.subject: Multi-view stereo
dc.subject: Neural radiance fields
dc.title: Depth-Guided Optimization of Neural Radiance Fields for Indoor Multi-View Stereo
dc.type: Journal Article
dc.date.published: 2023-03-31
ethz.journal.title: IEEE Transactions on Pattern Analysis and Machine Intelligence
ethz.journal.volume: 45
ethz.journal.issue: 9
ethz.journal.abbreviated: IEEE Trans. Pattern Anal. Mach. Intell.
ethz.pages.start: 10835
ethz.pages.end: 10849
ethz.identifier.wos:
ethz.identifier.scopus:
ethz.publication.place: New York, NY
ethz.publication.status: published
ethz.date.deposited: 2023-08-21T06:57:13Z
ethz.source: WOS
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2023-09-07T14:02:36Z
ethz.rosetta.lastUpdated: 2023-09-07T14:02:36Z
ethz.rosetta.versionExported: true

Files in this item


There are no files associated with this item.
