Open access
Date: 2023
Type: Conference Paper
ETH Bibliography: yes
Abstract
The design of more complex and powerful neural network models has significantly advanced the state-of-the-art in visual object tracking. These advances can be attributed to deeper networks, or the introduction of new building blocks, such as transformers. However, in the pursuit of increased tracking performance, runtime is often hindered. Furthermore, efficient tracking architectures have received surprisingly little attention. In this paper, we introduce the Exemplar Transformer, a transformer module utilizing a single instance-level attention layer for realtime visual object tracking. E.T.Track, our visual tracker that incorporates Exemplar Transformer modules, runs at 47 FPS on a CPU. This is up to 8× faster than other transformer-based models. When compared to lightweight trackers that can operate in realtime on standard CPUs, E.T.Track consistently outperforms all other methods on the LaSOT [16], OTB-100 [52], NFS [27], TrackingNet [36], and VOT-ST2020 [29] datasets. Code and models are available at https://github.com/pblatter/ettrack.
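The abstract's key efficiency idea is attention with a *single* instance-level query: instead of every spatial token attending to every other token (quadratic cost), the search features are pooled into one query that attends over a small bank of exemplar keys and values. The sketch below illustrates that single-query pattern only; it is not the authors' implementation, and all shapes, names, and the residual connection are illustrative assumptions — consult the linked repository for the actual E.T.Track code.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def exemplar_attention(search_feats, exemplar_keys, exemplar_values):
    """Hypothetical single-query exemplar attention sketch.

    search_feats:    (N, C) spatial tokens from the search region
    exemplar_keys:   (E, C) small learned key bank (E << N)
    exemplar_values: (E, C) matching value bank
    """
    # Pool all spatial tokens into ONE instance-level query: (1, C).
    q = search_feats.mean(axis=0, keepdims=True)
    # Scaled dot-product scores against the exemplar bank: (1, E).
    scores = (q @ exemplar_keys.T) / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1)
    # Single attended vector, broadcast back over all N tokens (residual add).
    out = attn @ exemplar_values                  # (1, C)
    return search_feats + out                     # (N, C)

rng = np.random.default_rng(0)
search = rng.normal(size=(16, 8))
keys = rng.normal(size=(4, 8))
vals = rng.normal(size=(4, 8))
updated = exemplar_attention(search, keys, vals)
```

Because there is only one query, the attention cost is O(E·C) per frame rather than O(N²·C), which is the kind of saving that makes CPU-realtime rates plausible.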
Permanent link: https://doi.org/10.3929/ethz-b-000592890
Publication status: published
Book title: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Publisher: IEEE
Organisational unit: 03514 - Van Gool, Luc / Van Gool, Luc