Ensembles of data-efficient vision transformers as a new paradigm for automated classification in ecology
Abstract
Monitoring biodiversity is paramount to manage and protect natural resources. Collecting images of organisms over large temporal or spatial scales is a promising practice to monitor the biodiversity of natural ecosystems, providing large amounts of data with minimal interference with the environment. Deep learning models are currently used to automate classification of organisms into taxonomic units. However, imprecision in these classifiers introduces a measurement noise that is difficult to control and can significantly hinder the analysis and interpretation of data. We overcome this limitation through ensembles of Data-efficient image Transformers (DeiTs), which not only are easy to train and implement, but also significantly outperform the previous state of the art (SOTA). We validate our results on ten ecological imaging datasets of diverse origin, ranging from plankton to birds. On all the datasets, we achieve a new SOTA, with a reduction of the error with respect to the previous SOTA ranging from 29.35% to 100.00%, and often achieving performances very close to perfect classification. Ensembles of DeiTs perform better not because of superior single-model performances but rather due to smaller overlaps in the predictions by independent models and lower top-1 probabilities. This increases the benefit of ensembling, especially when using geometric averages to combine individual learners. While we only test our approach on biodiversity image datasets, our approach is generic and can be applied to any kind of images.
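The abstract's final point — that ensembling helps more when member predictions overlap less, especially under a geometric average — can be illustrated with a minimal sketch. This is an assumed setup, not the authors' code: the model outputs below are hypothetical stand-ins for the softmax probabilities of independently trained DeiT classifiers.

```python
import numpy as np

def arithmetic_ensemble(probs):
    """Arithmetic mean of class probabilities (rows: models, cols: classes)."""
    return np.mean(probs, axis=0)

def geometric_ensemble(probs, eps=1e-12):
    """Geometric mean of class probabilities, renormalized to sum to 1.

    Computed in log space for numerical stability; eps guards log(0).
    """
    g = np.exp(np.mean(np.log(probs + eps), axis=0))
    return g / g.sum()

# Two hypothetical models that disagree on a 3-class problem:
# low top-1 probabilities and little overlap in their top predictions.
p = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1]])

print(arithmetic_ensemble(p))  # -> [0.4 0.5 0.1]
print(geometric_ensemble(p))
```

The geometric mean penalizes classes on which any member assigns low probability, so it rewards consensus more sharply than the arithmetic mean — one intuition for why it pairs well with diverse, low-confidence ensemble members.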
Permanent link: https://doi.org/10.3929/ethz-b-000587494
Publication status: published
Journal / series: Scientific Reports
Publisher: Nature
Subject: Biodiversity; Computer science; Ecology; Limnology
Organisational unit: 03705 - Jokela, Jukka / Jokela, Jukka
ETH Bibliography: yes