Metadata only
Date
2021
Type
Conference Paper
ETH Bibliography
yes
Altmetrics
Abstract
Deep generative models can synthesize photorealistic images of human faces with novel identities. However, a key challenge to the wide applicability of such techniques is to provide independent control over semantically meaningful parameters: appearance, head pose, face shape, and facial expressions. In this paper, we propose VariTex, to the best of our knowledge the first method that learns a variational latent feature space of neural face textures, which allows sampling of novel identities. We combine this generative model with a parametric face model and gain explicit control over head pose and facial expressions. To generate complete images of human heads, we propose an additive decoder that adds plausible details such as hair. A novel training scheme enforces a pose-independent latent space and, in consequence, allows learning a one-to-many mapping between latent codes and pose-conditioned exterior regions. The resulting method can generate geometrically consistent images of novel identities under fine-grained control over head pose, face shape, and facial expressions. This facilitates a broad range of downstream tasks, such as sampling novel identities, changing the head pose, expression transfer, and more.
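The abstract describes a three-stage pipeline: sample an identity code from a variational latent space, decode it to a pose-independent neural face texture, render that texture with a parametric face model under a chosen pose and expression, and let an additive decoder complete the head with exterior detail such as hair. The following is a minimal PyTorch sketch of that pipeline under stated assumptions; all module names, tensor shapes, and the rasterization placeholder are hypothetical and do not reproduce the authors' implementation.

# Hypothetical sketch of the pipeline outlined in the abstract (not the VariTex code).
import torch
import torch.nn as nn

class TextureDecoder(nn.Module):
    """Maps a sampled identity latent code to a neural face texture in UV space."""
    def __init__(self, latent_dim=256, tex_channels=16):
        super().__init__()
        self.tex_channels = tex_channels
        self.fc = nn.Linear(latent_dim, tex_channels * 8 * 8)
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=16, mode="bilinear", align_corners=False),
            nn.Conv2d(tex_channels, tex_channels, 3, padding=1),
            nn.ReLU(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, self.tex_channels, 8, 8)
        return self.up(x)  # (B, C, 128, 128) pose-independent neural texture

class AdditiveDecoder(nn.Module):
    """Adds plausible exterior detail (e.g. hair) around the rendered face region."""
    def __init__(self, feat_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, face_features):
        return self.net(face_features)  # (B, 3, H, W) complete head image

def rasterize_neural_texture(texture, pose, expression, image_size=128):
    """Placeholder: a real pipeline would pose a parametric face model with
    `pose` and `expression`, rasterize the mesh, and sample the neural texture
    via its UV coordinates. Here we simply resample the texture to image size."""
    return nn.functional.interpolate(texture, size=(image_size, image_size))

# Usage: sample a novel identity, then render it under explicit pose and expression.
z = torch.randn(1, 256)                       # identity code from the latent space
texture = TextureDecoder()(z)                 # pose-independent neural texture
feats = rasterize_neural_texture(texture, pose=None, expression=None)
image = AdditiveDecoder()(feats)              # RGB head image with exterior detail
print(image.shape)                            # torch.Size([1, 3, 128, 128])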
Publication status
published
External links
Book title
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Pages / Article No.
Publisher
IEEE
Event
Subject
Image and video synthesis; Faces; Neural generative models
Organisational unit
03979 - Hilliges, Otmar / Hilliges, Otmar
Funding
717054 - Optimization-based End-User Design of Interactive Technologies (EC)