Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos


Date

2022

Publication Type

Conference Paper

ETH Bibliography

yes

Abstract

We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video. To this end, we model the blurred appearance of a fast-moving object in a generative fashion by parametrizing its 3D position, rotation, velocity, acceleration, bounces, shape, and texture over the duration of a predefined time window spanning multiple frames. Using differentiable rendering, we are able to estimate all parameters by minimizing the pixel-wise reprojection error to the input video via backpropagating through a rendering pipeline that accounts for motion blur by averaging the graphics output over short time intervals. For that purpose, we also estimate the camera exposure gap time within the same optimization. To account for abrupt motion changes like bounces, we model the motion trajectory as a piece-wise polynomial, and we are able to estimate the specific time of the bounce with sub-frame accuracy. Experiments on established benchmark datasets demonstrate that our method outperforms previous methods for fast-moving object deblurring and 3D reconstruction.
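The two core ideas of the abstract, a piece-wise polynomial trajectory with a bounce at sub-frame time and motion blur formed by averaging sharp renders over the exposure interval, can be illustrated with a toy 1D sketch. This is not the paper's implementation: the renderer, polynomial coefficients, and exposure times below are all hypothetical stand-ins chosen only to make the mechanism concrete.

```python
import numpy as np

def trajectory(t, t_bounce, poly_before, poly_after):
    """Piece-wise polynomial position: one polynomial segment before the
    bounce time t_bounce, another after (np.polyval coefficient order,
    highest degree first). The segments are chosen to agree at t_bounce."""
    coeffs = poly_before if t < t_bounce else poly_after
    return np.polyval(coeffs, t)

def render(x, width=64, sigma=1.5):
    """Toy 1D 'renderer': a Gaussian blob centred at position x."""
    px = np.arange(width)
    return np.exp(-0.5 * ((px - x) / sigma) ** 2)

def blurred_frame(t_open, t_close, n_samples=32, **traj_kwargs):
    """Model motion blur the way the abstract describes: average sharp
    renders sampled across the exposure interval [t_open, t_close]."""
    ts = np.linspace(t_open, t_close, n_samples)
    frames = [render(trajectory(t, **traj_kwargs)) for t in ts]
    return np.mean(frames, axis=0)

# Hypothetical motion: rightward at 40 px/s, bounce at t = 0.5 s,
# then leftward at 30 px/s; both segments pass through x(0.5) = 30.
img = blurred_frame(0.3, 0.7,
                    t_bounce=0.5,
                    poly_before=[40.0, 10.0],   # x(t) = 40 t + 10
                    poly_after=[-30.0, 45.0])   # x(t) = -30 t + 45
```

Because the trajectory reverses at the bounce, the averaged frame shows a blur streak over roughly pixels 22 to 30 rather than a single sharp blob; fitting such a model by differentiable rendering then amounts to backpropagating the pixel-wise error through `blurred_frame` into the polynomial coefficients and the bounce time.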

Publication status

published

Book title

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Pages / Article No.

15969 - 15978

Publisher

IEEE

Event

2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022)

Subject

3D from multi-view and sensors; Low-level vision; Motion and tracking; Optimization methods; Physics-based vision and shape-from X; Pose estimation and tracking

Organisational unit

03766 - Pollefeys, Marc
