MoVideo: Motion-Aware Video Generation with Diffusion Model
METADATA ONLY
Date
2025
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
While recent years have witnessed great progress in using diffusion models for video generation, most existing approaches are simple extensions of image generation frameworks that fail to explicitly model one of the key differences between videos and images: motion. In this paper, we propose a novel motion-aware video generation (MoVideo) framework that takes motion into consideration from two aspects: video depth and optical flow. The former regulates motion via per-frame object distances and spatial layouts, while the latter describes motion via cross-frame correspondences that help preserve fine details and improve temporal consistency. More specifically, given a key frame that exists or is generated from a text prompt, we first design a diffusion model with spatio-temporal modules to generate the video depth and the corresponding optical flows. Then, the video is generated in the latent space by another spatio-temporal diffusion model under the guidance of the depth, the optical-flow-warped latent video, and the calculated occlusion mask. Lastly, we use the optical flows again to align and refine different frames for better video decoding from the latent space to the pixel space. In experiments, MoVideo achieves state-of-the-art results in both text-to-video and image-to-video generation, showing promising prompt consistency, frame consistency and visual quality.
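The abstract mentions guiding latent video generation with an optical-flow-warped latent video and a calculated occlusion mask. The record does not include the authors' implementation; the following is a minimal NumPy sketch of the generic technique (backward warping plus a forward-backward consistency check), with all function names, the nearest-neighbor sampling, and the `thresh` value chosen here for illustration only:

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp `frame` with an optical flow field.

    frame: (H, W) array; flow: (H, W, 2) with (dx, dy) displacements that
    map each target pixel back to its source coordinates. Nearest-neighbor
    sampling is used here for brevity (a real pipeline would use bilinear).
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return frame[src_y, src_x]

def occlusion_mask(flow_fwd, flow_bwd, thresh=1.0):
    """Forward-backward consistency check: a pixel is flagged as occluded
    when its round-trip displacement exceeds `thresh` pixels."""
    # Bring the backward flow into the forward flow's frame of reference.
    bwd_at_fwd = np.stack(
        [warp(flow_bwd[..., c], flow_fwd) for c in range(2)], axis=-1
    )
    roundtrip = flow_fwd + bwd_at_fwd  # ~0 wherever the pixel stays visible
    return np.linalg.norm(roundtrip, axis=-1) > thresh
```

With a zero flow field, `warp` returns the frame unchanged and the occlusion mask is empty; regions where the forward and backward flows disagree (typical at occlusion boundaries) are masked out before the warped latent is used as guidance.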
Publication status
published
Book title
Computer Vision – ECCV 2024
Volume
15102
Pages / Article No.
56–74
Publisher
Springer
Event
18th European Conference on Computer Vision (ECCV 2024)
Organisational unit
03514 - Van Gool, Luc (emeritus) / Van Gool, Luc (emeritus)