Reconstructing Action-Conditioned Human-Object Interactions Using Commonsense Knowledge Priors
Metadata only
Author(s)
Date
2022-09-06
Type
- Working Paper
ETH Bibliography
yes
Altmetrics
Abstract
We present a method for inferring diverse 3D models of human-object interactions from images. Reasoning about how humans interact with objects in complex scenes from a single 2D image is a challenging task, given the ambiguities arising from the loss of information through projection. In addition, modeling 3D interactions requires the ability to generalize towards diverse object categories and interaction types. We propose an action-conditioned modeling of interactions that allows us to infer diverse 3D arrangements of humans and objects without supervision on contact regions or 3D scene geometry. Our method extracts high-level commonsense knowledge from large language models (such as GPT-3) and applies it to perform 3D reasoning about human-object interactions. Our key insight is that priors extracted from large language models can help in reasoning about human-object contacts from textual prompts alone. We quantitatively evaluate the inferred 3D models on a large human-object interaction dataset and show how our method leads to better 3D reconstructions. We further qualitatively evaluate the effectiveness of our method on real images and demonstrate its generalizability across interaction types and object categories.
Publication status
published
External links
Journal / Series
arXiv
Pages / Article number
Publisher
Cornell University
Subject
Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); FOS: Computer and information sciences
Organisational unit
03979 - Hilliges, Otmar / Hilliges, Otmar
Related publications and data
Is part of: https://doi.org/10.3929/ethz-b-000587514