Any-Shot GIN: Generalizing Implicit Networks for Reconstructing Novel Classes
Abstract
We address the task of estimating the 3D shapes of novel shape classes from a single RGB image. Prior works are either limited to reconstructing known training classes or are unable to reconstruct high-quality shapes. To address these issues, we propose Generalizing Implicit Networks (GIN), which decomposes 3D reconstruction into (1) front-back depth estimation followed by differentiable depth voxelization, and (2) implicit shape completion with 3D features. The key insight is that the depth estimation network learns local, class-agnostic shape priors, allowing generalization to novel classes, while the implicit shape completion network predicts accurate shapes with rich details by learning implicit surfaces in 3D voxel space. We conduct extensive experiments on a large-scale benchmark using 55 classes of ShapeNet and real images of Pix3D. We show qualitatively and quantitatively that the proposed GIN significantly outperforms the state of the art on both seen and novel shape classes for single-image 3D reconstruction. We also show that GIN can be further improved using only few-shot depth supervision from novel classes.
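The abstract's first stage converts predicted front and back depth maps into a voxel grid. As a rough intuition for what such a voxelization does, the sketch below marks a voxel along each pixel's ray as occupied when its depth lies between the front and back predictions. This is a simplified, hard-thresholded illustration, not the paper's differentiable operator; the function name, normalized-depth convention, and grid resolution are all assumptions for illustration.

```python
import numpy as np

def voxelize_front_back_depth(front, back, num_voxels=32):
    """Hypothetical hard (non-differentiable) sketch of front-back depth
    voxelization: a voxel along a pixel's ray is occupied if its center
    depth lies between the predicted front and back depths. Depths are
    assumed normalized to [0, 1]; the paper's actual operator is
    differentiable and may differ in detail."""
    # Voxel center depths along the ray, shape (D,) -> broadcast to (1, 1, D).
    z = (np.arange(num_voxels) + 0.5) / num_voxels
    z = z[None, None, :]
    # Occupied where front <= z <= back, per pixel; shape (H, W, D).
    occupied = (z >= front[..., None]) & (z <= back[..., None])
    return occupied

# Toy example: one pixel whose shape spans depths 0.25..0.75.
front = np.full((1, 1), 0.25)
back = np.full((1, 1), 0.75)
grid = voxelize_front_back_depth(front, back, num_voxels=8)
# With 8 voxels, centers 0.3125..0.6875 fall inside [0.25, 0.75],
# so 4 voxels along the ray are occupied.
```

The resulting boolean occupancy grid is the kind of 3D input from which the second-stage implicit completion network could extract 3D features.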
Publication status
published
Book title
2022 International Conference on 3D Vision (3DV)
Publisher
IEEE
Subject
single-image 3D reconstruction; implicit neural representation; few-shot learning; zero-shot learning
ETH Bibliography
yes