SILC: Improving Vision Language Pretraining with Self-Distillation
OPEN ACCESS
Date
2023-10-20
Publication Type
Working Paper
ETH Bibliography
yes
Abstract
Image-text pretraining on web-scale image-caption datasets has become the default recipe for open-vocabulary classification and retrieval models, thanks to the success of CLIP and its variants. Several works have also used CLIP features for dense prediction tasks and have shown the emergence of open-set abilities. However, the contrastive objective focuses only on image-text alignment and does not incentivise image feature learning for dense prediction tasks. In this work, we propose SILC, which adds local-to-global correspondence learning by self-distillation as an auxiliary objective to contrastive pretraining. We show that distilling local image features from an exponential moving average (EMA) teacher model significantly improves model performance on several computer vision tasks, including classification, retrieval, and especially segmentation. We further show that SILC scales better than the baselines for the same training duration. Our model SILC sets a new state of the art for zero-shot classification, few-shot classification, image and text retrieval, zero-shot segmentation, and open-vocabulary segmentation.
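The abstract describes the objective only at a high level. The following is a minimal PyTorch-style sketch of what such a combined objective could look like, assuming a CLIP-style symmetric contrastive loss on the global image view and a DINO-style self-distillation loss between local student crops and an EMA teacher. All function names, loss weights, and temperatures here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): CLIP-style contrastive loss
# plus DINO-style local-to-global self-distillation from an EMA teacher.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Symmetric InfoNCE over matched image/text pairs in the batch.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def self_distillation_loss(student_logits, teacher_logits,
                           tau_student=0.1, tau_teacher=0.04):
    # Cross-entropy between the (sharpened) teacher distribution and the
    # student distribution; the teacher provides targets only, no gradient.
    targets = F.softmax(teacher_logits / tau_teacher, dim=-1).detach()
    log_probs = F.log_softmax(student_logits / tau_student, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Teacher weights track an exponential moving average of the student.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def training_step(student_enc, teacher_enc, text_enc, head_s, head_t,
                  global_view, local_crops, captions, distill_weight=1.0):
    # Image-text alignment on the full (global) view.
    loss = contrastive_loss(student_enc(global_view), text_enc(captions))
    # Local-to-global correspondence: the student sees local crops and
    # must match the EMA teacher's output on the global view.
    with torch.no_grad():
        teacher_logits = head_t(teacher_enc(global_view))
    for crop in local_crops:
        loss = loss + distill_weight * self_distillation_loss(
            head_s(student_enc(crop)), teacher_logits)
    return loss
```

After each optimizer step, ema_update(teacher_enc, student_enc) would keep the teacher as a smoothed copy of the student, which is what makes the distillation targets stable under this assumed setup.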
Publication status
published
Pages / Article No.
2310.13355
Publisher
Cornell University
Edition / version
v1
Organisational unit
03514 - Van Gool, Luc (emeritus)