I2MVFormer: Large Language Model Generated Multi-View Document Supervision for Zero-Shot Image Classification
Metadata only
Author(s)
Date
2023
Type
Conference Paper
ETH Bibliography
yes
Abstract
Recent works have shown that unstructured text (documents) from online sources can serve as useful auxiliary information for zero-shot image classification. However, these methods require access to a high-quality source like Wikipedia and are limited to a single source of information. Large Language Models (LLMs) trained on web-scale text show an impressive ability to repurpose their learned knowledge for a multitude of tasks. In this work, we provide a novel perspective on using an LLM to provide text supervision for a zero-shot image classification model. The LLM is prompted with a few text descriptions from different annotators as examples and, conditioned on these examples, generates multiple text descriptions for each class (referred to as views). Our proposed model, I2MVFormer, learns multi-view semantic embeddings for zero-shot image classification from these class views. We show that each text view of a class provides complementary information, allowing the model to learn a highly discriminative class embedding. Moreover, we show that I2MVFormer is better at consuming the multi-view text supervision from the LLM than baseline models. I2MVFormer establishes a new state of the art on three public benchmark datasets for zero-shot image classification with unsupervised semantic embeddings. Code is available at https://github.com/ferjad/I2DFormer
Publication status
published
Book title
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher
IEEE
Organizational unit
03514 - Van Gool, Luc (emeritus)