Open access
Date
2023-08-22
Type
Conference Paper
ETH Bibliography
yes
Abstract
The ability to recognize, localize and track dynamic objects in a scene is fundamental to many real-world applications, such as self-driving and robotic systems. Yet, traditional multiple object tracking (MOT) benchmarks rely only on a few object categories that hardly represent the multitude of possible objects that are encountered in the real world. This leaves contemporary MOT methods limited to a small set of pre-defined object categories. In this paper, we address this limitation by tackling a novel task, open-vocabulary MOT, that aims to evaluate tracking beyond pre-defined training categories. We further develop OVTrack, an open-vocabulary tracker that is capable of tracking arbitrary object classes. Its design is based on two key ingredients: First, leveraging vision-language models for both classification and association via knowledge distillation; second, a data hallucination strategy for robust appearance feature learning from denoising diffusion probabilistic models. The result is an extremely data-efficient open-vocabulary tracker that sets a new state-of-the-art on the large-scale, large-vocabulary TAO benchmark, while being trained solely on static images.
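The two ideas in the abstract — classifying detections by similarity to text embeddings rather than a fixed label head, and associating detections across frames by appearance-embedding similarity — can be illustrated with a minimal NumPy sketch. This is a toy under stated assumptions, not OVTrack's actual implementation: the function names are invented, and the 4-d vectors stand in for real vision-language (e.g. CLIP-style) text and image features.

```python
import numpy as np

def classify_open_vocab(image_emb, text_embs, labels):
    """Predict a class by cosine similarity to per-class text embeddings.

    Because the label set is just a list of text embeddings, new
    categories can be added at test time without retraining.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # one similarity per class
    return labels[int(np.argmax(sims))], sims

def associate(track_embs, det_embs):
    """Match each new detection to the most similar existing track.

    Returns, for each detection, the index of the track with the
    highest cosine similarity (greedy per-detection matching).
    """
    t = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    d = det_embs / np.linalg.norm(det_embs, axis=1, keepdims=True)
    sim = d @ t.T                         # detections x tracks
    return sim.argmax(axis=1)

# Toy 4-d "embeddings": one axis-aligned text vector per class.
labels = ["cat", "skateboard", "trombone"]
text_embs = np.eye(3, 4)
image_emb = np.array([0.9, 0.1, 0.0, 0.1])   # closest to "cat"
pred, _ = classify_open_vocab(image_emb, text_embs, labels)
print(pred)  # -> cat
```

In the real system the text embeddings come from a pretrained vision-language model's text encoder, and the appearance embeddings used for association are distilled so that they live in the same space; the toy above only shows why that shared space makes both classification and association reduce to nearest-neighbor lookups.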
Permanent link
https://doi.org/10.3929/ethz-b-000651808
Publication status
published
Book title
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
IEEE
Subject
Training; Representation learning; Taxonomy; Benchmark testing; Probabilistic logic; Data models; Pattern recognition; Tracking; Low-level analysis; Motion
Organisational unit
09688 - Yu, Fisher / Yu, Fisher