Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring

Yuyan Chen1,2, Nico Lang3, B. Christian Schmidt4, Aditya Jain2, Yves Basset5,6,7, Sara Beery8, Maxim Larrivée9, David Rolnick1,2

1McGill University    2Mila - Quebec Artificial Intelligence Institute    3University of Copenhagen    4Agriculture and Agri-food Canada    5Smithsonian Tropical Research Institute    6Biology Center, Czech Academy of Sciences    7Maestria de Entomologia, University of Panama    8Massachusetts Institute of Technology    9Montréal Insectarium

📄 Paper | 💻 Code | 🤗 Dataset | 🖼 Poster

Abstract

Global biodiversity is declining at an unprecedented rate, yet little is known about most species and how their populations are changing. Indeed, some 90% of Earth's species are estimated to be completely unknown. Machine learning has recently emerged as a promising tool to facilitate long-term, large-scale biodiversity monitoring, including algorithms for fine-grained classification of species from images. However, such algorithms are typically not designed to detect examples from categories unseen during training, the problem of open-set recognition (OSR), which limits their applicability to highly diverse, poorly studied taxa such as insects. To address this gap, we introduce Open-Insect, a large-scale, fine-grained dataset for evaluating unknown species detection across geographic regions of varying difficulty. We benchmark 38 OSR algorithms across three categories: post-hoc methods, training-time regularization, and training with auxiliary data, finding that simple post-hoc approaches remain a strong baseline. We also demonstrate how to leverage auxiliary data to improve species discovery in regions with limited data. Our results provide timely insights to guide the development of computer vision methods for biodiversity monitoring and species discovery.
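As a concrete illustration of the simple post-hoc baselines mentioned above, the sketch below scores images with maximum softmax probability (MSP), a standard post-hoc OSR method: examples whose maximum class probability is low are flagged as potentially novel species. This is a minimal illustration under our own assumptions, not the paper's exact implementation; `model` is a placeholder for any trained closed-set species classifier.

```python
import torch
import torch.nn.functional as F

def msp_scores(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability (MSP): higher means more likely a known species.

    A minimal post-hoc OSR baseline: it only needs the logits of a trained
    closed-set classifier, with no retraining and no auxiliary data.
    """
    return F.softmax(logits, dim=-1).max(dim=-1).values

# Hypothetical usage (`model` and `images` are placeholders):
# logits = model(images)             # shape: (batch, num_known_species)
# known_score = msp_scores(logits)   # threshold this to flag candidate novel species
```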

Overview

Dataset overview of Open-Insect
Figure 1: Open-Insect benchmark results on three geographic regions of varying difficulty. The Open-Insect benchmark includes images of thousands of highly visually similar moth species, along with non-moth arthropods, divided by geographic region. Left: results from 38 OSR methods on three open-set types: i) local moth, ii) non-local moth, and iii) non-moth (see Table 2). Right: visual dissimilarity across taxonomic levels: 1-hop (same genus), 2-hop (different genus, same family), 3-hop (different family within Lepidoptera), and non-moths (different order, ≥4 hops).
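For readers implementing the groupings above, the hop categories can be derived directly from taxonomic labels. The sketch below is a minimal illustration assuming each species carries order/family/genus annotations; the `Taxon` type and its field names are hypothetical and not part of the released dataset schema.

```python
from dataclasses import dataclass

@dataclass
class Taxon:
    order: str
    family: str
    genus: str

def hop_category(known: Taxon, candidate: Taxon) -> str:
    """Bucket a candidate species by taxonomic distance from a known (moth) species,
    mirroring the 1-/2-/3-hop and non-moth groupings in Figure 1."""
    if candidate.order != known.order:
        return "non-moth (different order, >=4 hops)"
    if candidate.family != known.family:
        return "3-hop (different family within Lepidoptera)"
    if candidate.genus != known.genus:
        return "2-hop (same family, different genus)"
    return "1-hop (same genus)"
```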

Benchmark

Benchmark result
Table 2: Benchmarking results on Open-Insect. We evaluate approaches falling into three categories: i) post-hoc methods, ii) training-time regularization, and iii) training with auxiliary data. Results are shown for the three regions in Open-Insect: NE-America, W-Europe, and C-America. For each of the three open-set splits, local (L), non-local (NL), and non-moth (NM), the AUROC is reported along with the closed-set accuracy. The best result within each category is in bold, and the overall best result is in bold and underlined. For post-hoc methods, we report the mean (standard deviation) over three training runs.
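The AUROC in Table 2 measures how well a method's "knownness" score separates closed-set images from a given open-set split. Below is a minimal sketch using scikit-learn; `osr_auroc` and the score-array names are illustrative, and label 1 marks known (closed-set) species.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def osr_auroc(known_scores: np.ndarray, unknown_scores: np.ndarray) -> float:
    """AUROC for known-vs-unknown detection.

    Scores are "knownness" values (e.g., MSP); label 1 = closed-set
    (known species), label 0 = open-set (novel species).
    """
    labels = np.concatenate([np.ones_like(known_scores), np.zeros_like(unknown_scores)])
    scores = np.concatenate([known_scores, unknown_scores])
    return roc_auc_score(labels, scores)

# Hypothetical usage, one call per open-set split:
# auroc_local    = osr_auroc(scores_closed, scores_local_moth)
# auroc_nonlocal = osr_auroc(scores_closed, scores_nonlocal_moth)
# auroc_nonmoth  = osr_auroc(scores_closed, scores_nonmoth)
```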

Citation

@inproceedings{chen2025openinsect,
  title     = {Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring},
  author    = {Chen, Yuyan and Lang, Nico and Schmidt, B. Christian and Jain, Aditya and Basset, Yves and Beery, Sara and Larrivée, Maxim and Rolnick, David},
  booktitle = {NeurIPS 2025 Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://openreview.net/pdf?id=63Tia99ofI}
}

Contact

For questions, please email us at: yuyan.chen2@mail.mcgill.ca.