Technical report
Open access

Automated benchmarking in content-based image retrieval

  • Technical report VISION; 01.01
Publication date: 2001

Benchmarking has always been a crucial problem in content-based image retrieval (CBIR). A key issue is the lack of a common access method to retrieval systems, such as SQL for relational databases. The Multimedia Retrieval Markup Language (MRML) solves this problem by standardizing access to CBIR systems (CBIRSs). Other difficult problems are also briefly addressed, such as obtaining relevance judgments and choosing a database for performance comparison. In this article we present a fully automated benchmark for CBIRSs based on MRML, which can be adapted to any image database and almost any kind of relevance judgment. The test evaluates the performance of positive and negative relevance feedback, which can be generated automatically from the relevance judgments. To illustrate our approach, a freely available, non-copyrighted image collection is used to evaluate our CBIRS, Viper. All scripts described here are also freely available for download.
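The feedback mechanism described above can be illustrated with a minimal sketch: given ground-truth relevance judgments, a benchmark can split a ranked result list into positive and negative feedback examples and score the ranking with a simple precision measure. The function names and data below are illustrative assumptions, not taken from the report or its scripts.

```python
# Hypothetical sketch of automated relevance feedback generation:
# relevance judgments partition ranked results into positive and
# negative feedback, and precision@k scores the ranking.

def make_feedback(results, relevant):
    """Split a ranked result list into positive/negative feedback examples."""
    positive = [img for img in results if img in relevant]
    negative = [img for img in results if img not in relevant]
    return positive, negative

def precision_at_k(results, relevant, k):
    """Fraction of the top-k results that are judged relevant."""
    top_k = results[:k]
    hits = sum(1 for img in top_k if img in relevant)
    return hits / k

# Illustrative data: a ranked retrieval result and ground-truth judgments.
results = ["img3", "img7", "img1", "img9"]
relevant = {"img3", "img1", "img5"}

pos, neg = make_feedback(results, relevant)
print(pos)                                   # ['img3', 'img1']
print(precision_at_k(results, relevant, 4))  # 0.5
```

In a real MRML-based benchmark the positive and negative lists would be sent back to the retrieval server as the next query round; here they are simply printed.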

Citation (ISO format)
MULLER, Henning et al. Automated benchmarking in content-based image retrieval. 2001.
Main files (1)
  • PID : unige:48030

Technical information

Creation: 03/09/2015 11:34:28 AM
First validation: 03/09/2015 11:34:28 AM
Update time: 03/14/2023 11:00:31 PM
Status update: 03/14/2023 11:00:31 PM
Last indexation: 01/16/2024 5:16:37 PM
All rights reserved by Archive ouverte UNIGE and the University of Geneva