Metadata only
Date
2022
Type
Conference Paper
ETH Bibliography
yes
Abstract
Estimating the target extent poses a fundamental challenge in visual object tracking. Typically, trackers are box-centric and fully rely on a bounding box to define the target in the scene. In practice, objects
often have complex shapes and are not aligned with the image axis. In these cases, bounding boxes do not provide an accurate description of the target and often contain a majority of background pixels.
We propose a segmentation-centric tracking pipeline that not only produces a highly accurate segmentation mask, but also internally works with segmentation masks instead of bounding boxes. Thus, our tracker is able to better learn a target representation that clearly differentiates the target in the scene from background content. In order to achieve the necessary robustness for the challenging tracking scenario, we propose a separate instance localization component that is used to condition the segmentation decoder when producing the output mask. We infer a bounding box from the segmentation mask, validate our tracker on challenging tracking datasets and achieve the new state of the art on LaSOT with a success AUC score of 69.7%. Since most tracking datasets do not contain mask annotations, we cannot use them to evaluate predicted segmentation masks. Instead, we validate our segmentation quality on two popular video object segmentation datasets. The code and trained models are available at https://github.com/visionml/pytracking.
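For illustration, the final step mentioned in the abstract, inferring a bounding box from the predicted segmentation mask, can be sketched as below. This is a minimal sketch in Python assuming a binary NumPy mask; the function name is hypothetical, and the authors' actual implementation is in the linked pytracking repository.

# Minimal sketch (not the authors' code): derive an axis-aligned bounding box
# from a binary segmentation mask, as described in the abstract.
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x, y, w, h) for the tight box around all foreground pixels,
    or None if the mask is empty. `mask` is a 2D array where nonzero = target."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))

# Example: a 5x5 mask with a 2x3 foreground blob.
demo = np.zeros((5, 5), dtype=np.uint8)
demo[1:3, 2:5] = 1
print(mask_to_bbox(demo))  # (2, 1, 3, 2)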
Publication status
published
Book title
Computer Vision – ECCV 2022
Journal / series
Lecture Notes in Computer Science
Publisher
Springer
Subject
Visual tracking; Video Object Segmentation
Organisational unit
03514 - Van Gool, Luc (emeritus) / Van Gool, Luc (emeritus)
Related publications and datasets
Is supplemented by: https://github.com/visionml/pytracking