
dc.contributor.author: Tewari, Ayush
dc.contributor.author: Thies, Justus
dc.contributor.author: Mildenhall, Ben
dc.contributor.author: Srinivasan, Pratul
dc.contributor.author: Tretschk, Edgar
dc.contributor.author: Wang, Yifan
dc.contributor.author: Lassner, Christoph
dc.contributor.author: Sitzmann, Vincent
dc.contributor.author: Martin-Brualla, Ricardo
dc.contributor.author: Lombardi, Stephen
dc.contributor.author: Simon, Tomas
dc.contributor.author: Theobalt, Christian
dc.contributor.author: Niessner, Matthias
dc.contributor.author: Barron, Jonathan T.
dc.contributor.author: Wetzstein, Gordon
dc.contributor.author: Zollhöfer, Michael
dc.contributor.author: Golyanik, Vladislav
dc.date.accessioned: 2022-08-04T12:55:24Z
dc.date.available: 2022-07-09T11:23:44Z
dc.date.available: 2022-07-27T12:21:47Z
dc.date.available: 2022-07-27T12:24:33Z
dc.date.available: 2022-07-27T12:48:44Z
dc.date.available: 2022-08-04T12:55:24Z
dc.date.issued: 2022-05
dc.identifier.issn: 1467-8659
dc.identifier.issn: 0167-7055
dc.identifier.other: 10.1111/cgf.14507
dc.identifier.uri: http://hdl.handle.net/20.500.11850/557030
dc.description.abstract: Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanying textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects and scene editing and composition. While most of these approaches are scene-specific, we also discuss techniques that generalize across object classes and can be used for generative tasks. In addition to reviewing these state-of-the-art methods, we provide an overview of fundamental concepts and definitions used in the current literature. We conclude with a discussion on open challenges and social implications.
dc.language.iso: en
dc.publisher: Wiley-Blackwell
dc.title: Advances in Neural Rendering
dc.type: Conference Paper
dc.date.published: 2022-05-24
ethz.journal.title: Computer Graphics Forum
ethz.journal.volume: 41
ethz.journal.issue: 2
ethz.journal.abbreviated: Comput. Graph. Forum
ethz.pages.start: 703
ethz.pages.end: 735
ethz.event: 43rd Annual Conference of the European Association for Computer Graphics (EUROGRAPHICS 2022)
ethz.event.location: Reims, France
ethz.event.date: April 25-29, 2022
ethz.identifier.wos:
ethz.identifier.scopus:
ethz.publication.place: Oxford
ethz.publication.status: published
ethz.date.deposited: 2022-07-09T11:24:20Z
ethz.source: WOS
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2022-07-27T12:21:55Z
ethz.rosetta.lastUpdated: 2022-07-27T12:21:55Z
ethz.rosetta.exportRequired: true
ethz.rosetta.versionExported: true

Files in this item


There are no files associated with this item.
