Show simple item record

dc.contributor.author: Granskog, Jonathan
dc.contributor.author: Rousselle, Fabrice
dc.contributor.author: Papas, Marios
dc.contributor.author: Novák, Jan
dc.date.accessioned: 2020-09-10T13:22:28Z
dc.date.available: 2020-09-04T20:11:56Z
dc.date.available: 2020-09-10T13:22:28Z
dc.date.issued: 2020-07
dc.identifier.issn: 0730-0301
dc.identifier.issn: 1557-7368
dc.identifier.other: 10.1145/3386569.3392475
dc.identifier.uri: http://hdl.handle.net/20.500.11850/438589
dc.description.abstract: We present a technique for adaptively partitioning neural scene representations. Our method disentangles lighting, material, and geometric information, yielding a scene representation that preserves the orthogonality of these components, improves the interpretability of the model, and allows compositing new scenes by mixing components of existing ones. The proposed adaptive partitioning respects the uneven entropy of the individual components and permits compressing the scene representation to lower its memory footprint and potentially reduce the evaluation cost of the model. Furthermore, the partitioned representation enables an in-depth analysis of existing image generators. We compare the flow of information through individual partitions, and by contrasting it with the impact of additional inputs (G-buffer), we are able to identify the roots of undesired visual artifacts and propose one possible solution to remedy the poor performance. We also demonstrate the benefits of complementing traditional forward renderers with neural representations and synthesis, e.g., to infer expensive shading effects, and show how these could improve production rendering in the future if developed further. © 2020 Association for Computing Machinery.
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.subject: Rendering
dc.subject: Neural networks
dc.subject: Neural scene representations
dc.subject: Disentanglement
dc.subject: Attribution
dc.title: Compositional neural scene representations for shading inference
dc.type: Journal Article
dc.date.published: 2020-07-08
ethz.journal.title: ACM Transactions on Graphics
ethz.journal.volume: 39
ethz.journal.issue: 4
ethz.journal.abbreviated: ACM Trans. Graph.
ethz.pages.start: 135
ethz.size: 13 p.
ethz.publication.place: New York, NY
ethz.publication.status: published
ethz.date.deposited: 2020-09-04T20:12:06Z
ethz.source: SCOPUS
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2020-09-10T13:22:39Z
ethz.rosetta.lastUpdated: 2021-02-15T17:08:59Z
ethz.rosetta.versionExported: true

Files in this item


There are no files associated with this item.
