Multi-hypothesis representation learning for transformer-based 3D human pose estimation


Date

2023-09

Publication Type

Journal Article

ETH Bibliography

yes

Abstract

Despite significant progress, estimating 3D human poses from monocular videos remains challenging due to depth ambiguity and self-occlusion. Most existing works attempt to solve both issues by exploiting spatial and temporal relationships, but they overlook the fact that this is an inverse problem in which multiple feasible solutions (i.e., hypotheses) exist. To address this limitation, we propose a Multi-Hypothesis Transformer that learns spatio-temporal representations of multiple plausible pose hypotheses. To effectively model multi-hypothesis dependencies and build strong relationships across hypothesis features, we introduce a one-to-many-to-one three-stage framework: (i) generate multiple initial hypothesis representations; (ii) model self-hypothesis communication, merging the hypotheses into a single converged representation and then partitioning it into several diverged hypotheses; (iii) learn cross-hypothesis communication and aggregate the multi-hypothesis features to synthesize the final 3D pose. These stages progressively enhance the final representation and yield a markedly more accurate synthesized pose. Extensive experiments show that the proposed method achieves state-of-the-art results on two challenging datasets: Human3.6M and MPI-INF-3DHP. The code and models are available at https://github.com/Vegetebird/MHFormer.
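The one-to-many-to-one data flow described in the abstract can be illustrated with a toy NumPy sketch. This is not the actual MHFormer architecture (which uses learned transformer blocks); the random projection weights, the mean-based merge, and the softmax-weighted aggregation below are all illustrative stand-ins chosen only to show the converge/diverge structure of the three stages.

```python
import numpy as np

rng = np.random.default_rng(0)
J, C, H = 17, 32, 3  # joints, feature channels, number of hypotheses

def project(x, w):
    # Stand-in for a learned layer: a nonlinear projection.
    return np.tanh(x @ w)

# Stage (i): one-to-many -- generate H initial hypothesis
# representations from a shared encoded 2D-pose feature.
x2d = rng.standard_normal((J, C))                # encoded 2D pose features
hyps = [project(x2d, rng.standard_normal((C, C))) for _ in range(H)]

# Stage (ii): self-hypothesis communication -- merge the hypotheses
# into one converged representation, then partition it back into
# H diverged hypotheses.
merged = np.mean(hyps, axis=0)                   # converge: (J, C)
hyps = [project(merged, rng.standard_normal((C, C))) for _ in range(H)]

# Stage (iii): cross-hypothesis communication -- aggregate the
# hypothesis features (softmax-weighted sum here, purely illustrative)
# and regress the final 3D pose.
scores = np.array([h.sum() for h in hyps])
weights = np.exp(scores - scores.max())
weights /= weights.sum()
agg = sum(w * h for w, h in zip(weights, hyps))  # (J, C)
pose3d = agg @ rng.standard_normal((C, 3))       # (J, 3): xyz per joint

print(pose3d.shape)  # one 3D coordinate per joint
```

In the paper, each stage is realized with spatio-temporal transformer modules rather than the single projections used here; the sketch only mirrors the shapes flowing through the three stages.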

Publication status

published

Volume

141

Pages / Article No.

109631

Publisher

Elsevier

Subject

3D human pose estimation; Transformer; Multi-hypothesis; Self-hypothesis; Cross-hypothesis
