SILC: Improving Vision Language Pretraining with Self-distillation
Date
2025
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
Image-text pretraining on web-scale image-caption datasets has become the default recipe for open-vocabulary classification and retrieval models thanks to the success of CLIP and its variants. Several works have also used CLIP features for dense prediction tasks and have shown the emergence of open-set abilities. However, the contrastive objective used by these models only focuses on image-text alignment and does not incentivise image feature learning for dense prediction tasks. In this work, we introduce SILC, a novel framework for vision-language pretraining. SILC improves image-text contrastive learning with the simple addition of local-to-global correspondence learning by self-distillation. We show that distilling local image features from an exponential moving average (EMA) teacher model significantly improves model performance on dense prediction tasks like detection and segmentation, while also providing improvements on image-level tasks such as classification and retrieval. SILC models set a new state of the art for zero-shot classification, few-shot classification, image and text retrieval, zero-shot segmentation, and open-vocabulary segmentation. We further show that SILC features greatly benefit open-vocabulary detection, captioning, and visual question answering.
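The abstract describes pairing a CLIP-style image-text contrastive loss with DINO-style local-to-global self-distillation against an EMA teacher. Below is a minimal PyTorch sketch of one such training step, assuming toy stand-in encoders, a single local crop, and illustrative hyperparameters (temperatures, loss weighting, EMA momentum); the class names, the linear projection head, and all values are placeholders, not the paper's actual configuration.

```python
import copy

import torch
import torch.nn.functional as F
from torch import nn


class ToyImageEncoder(nn.Module):
    """Stand-in for the vision tower (hypothetical; the paper uses a ViT)."""

    def __init__(self, in_dim=3 * 64 * 64, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, dim))

    def forward(self, x):
        return self.net(x)


class ToyTextEncoder(nn.Module):
    """Stand-in for the text tower (hypothetical)."""

    def __init__(self, vocab_size=1000, dim=256):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)

    def forward(self, tokens):
        return self.emb(tokens)


def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE over the batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


def distill_loss(student_logits, teacher_logits, t_s=0.1, t_t=0.04):
    """DINO-style cross-entropy: the student's local view is trained to
    match the teacher's global view (teacher centering omitted for brevity)."""
    teacher_probs = F.softmax(teacher_logits / t_t, dim=-1).detach()
    student_logp = F.log_softmax(student_logits / t_s, dim=-1)
    return -(teacher_probs * student_logp).sum(dim=-1).mean()


dim, n_prototypes, momentum = 256, 4096, 0.996
student_img, student_txt = ToyImageEncoder(dim=dim), ToyTextEncoder(dim=dim)
student_head = nn.Linear(dim, n_prototypes)
teacher_img = copy.deepcopy(student_img)   # EMA teacher: frozen copy
teacher_head = copy.deepcopy(student_head)
for p in (*teacher_img.parameters(), *teacher_head.parameters()):
    p.requires_grad_(False)

opt = torch.optim.AdamW(
    [*student_img.parameters(), *student_txt.parameters(),
     *student_head.parameters()], lr=1e-4)

# One step on dummy data: a global view, a local crop resized to the same
# input resolution, and tokenized captions.
global_view = torch.randn(8, 3, 64, 64)
local_crop = torch.randn(8, 3, 64, 64)
captions = torch.randint(0, 1000, (8, 16))

# (a) image-text contrastive term on the global view
loss_con = contrastive_loss(student_img(global_view), student_txt(captions))

# (b) local-to-global self-distillation term against the EMA teacher
with torch.no_grad():
    teacher_logits = teacher_head(teacher_img(global_view))
loss_dis = distill_loss(student_head(student_img(local_crop)), teacher_logits)

loss = loss_con + loss_dis  # the paper balances the two objectives
opt.zero_grad()
loss.backward()
loss = opt.step()

# EMA update: the teacher slowly tracks the student.
with torch.no_grad():
    for pt, ps in zip(teacher_img.parameters(), student_img.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
    for pt, ps in zip(teacher_head.parameters(), student_head.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
```

In this sketch the teacher receives no gradients and is updated only by the exponential moving average at the end of the step, which is what lets the local crops be supervised by a slowly evolving global target.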
Publication status
published
Book title
Computer Vision – ECCV 2024
Journal / series
Lecture Notes in Computer Science
Volume
15079
Pages / Article No.
38–55
Publisher
Springer
Event
18th European Conference on Computer Vision (ECCV 2024)
Subject
Vision Language Models; Self-supervised Learning; Open-Vocabulary
Organisational unit
03514 - Van Gool, Luc (emeritus)