Open access
Date
2023
Type
Conference Paper
ETH Bibliography
yes
Abstract
While the ImageNet dataset has been driving computer vision research over the past decade, significant label noise and ambiguity have made top-1 accuracy an insufficient measure of further progress. To address this, new label sets and evaluation protocols have been proposed for ImageNet, showing that state-of-the-art models already achieve over 95% accuracy and shifting the focus to investigating why the remaining errors persist.
Recent work in this direction employed a panel of experts to manually categorize all remaining classification errors for two selected models. However, this process is time-consuming, prone to inconsistencies, and requires trained experts, making it unsuitable for regular model evaluation and thus limiting its utility. To overcome these limitations, we propose the first automated error classification framework, a valuable tool for studying how modeling choices affect error distributions. We use our framework to comprehensively evaluate the error distributions of over 900 models. Perhaps surprisingly, we find that across model architectures, scales, and pre-training corpora, top-1 accuracy is a strong predictor of the portion of each error type. In particular, we observe that the portion of severe errors drops significantly with top-1 accuracy, indicating that, while top-1 accuracy underreports a model's true performance, it remains a valuable performance metric.
We release all our code at https://github.com/eth-sri/automated-error-analysis.
Permanent link
https://doi.org/10.3929/ethz-b-000653416
Publication status
published
Event
Organisational unit
03948 - Vechev, Martin / Vechev, Martin
Funding
MB22.00088 - SafeAI: Certified Safe, Fair and Robust Artificial Intelligence (SBFI)
101070617/22.00164 - European Lighthouse on Secure and Safe AI (SBFI)
Notes
Poster presentation