Technical report
OA Policy
English

Automated benchmarking in content-based image retrieval

Publisher: Genève
Collection
  • Technical report VISION; 01.01
Publication date: 2001
Abstract

Benchmarking has always been a crucial problem in content-based image retrieval (CBIR). A key issue is the lack of a common access method to retrieval systems, comparable to SQL for relational databases. The Multimedia Retrieval Markup Language (MRML) solves this problem by standardizing access to CBIR systems (CBIRSs). Other difficult problems, such as obtaining relevance judgments and choosing a database for performance comparison, are also briefly addressed. In this article we present a fully automated benchmark for CBIRSs based on MRML, which can be adapted to any image database and almost any kind of relevance judgment. The test evaluates the performance of positive and negative relevance feedback, which can be generated automatically from the relevance judgments. To illustrate our approach, a freely available, copyright-free image collection is used to evaluate our CBIRS, Viper. All scripts described here are also freely available for download.
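
As an illustration of the kind of benchmark loop the abstract describes, the sketch below shows a minimal Python client that sends an MRML query step to a CBIRS, scores the ranked results against relevance judgments, and then automatically turns relevant hits into positive examples and non-relevant hits into negative examples for the next step. This is a hypothetical sketch, not the scripts distributed with the report: the host and port, the MRML element names, and the omission of session-handling messages (open-session and similar handshakes) are assumptions that would have to be adapted to a concrete system such as Viper.

    # Illustrative sketch only: element names and server address are assumptions,
    # and a real MRML session needs an open-session handshake that is omitted here.
    import socket
    import xml.etree.ElementTree as ET

    HOST, PORT = "localhost", 12789   # hypothetical MRML server address
    RESULT_SIZE = 20                  # number of images requested per query step

    def send_mrml(message: str) -> str:
        """Send one MRML message over TCP and return the raw XML reply."""
        with socket.create_connection((HOST, PORT)) as sock:
            sock.sendall(message.encode("utf-8"))
            sock.shutdown(socket.SHUT_WR)
            chunks = []
            while (data := sock.recv(4096)):
                chunks.append(data)
        return b"".join(chunks).decode("utf-8")

    def query_step(feedback: dict[str, int]) -> list[str]:
        """Build a query step from {image_url: +1 or -1} feedback, return ranked URLs."""
        elements = "".join(
            '<user-relevance-element image-location="%s" user-relevance="%d"/>' % (url, rel)
            for url, rel in feedback.items()
        )
        msg = ('<?xml version="1.0"?><mrml>'
               '<query-step result-size="%d" result-cutoff="0">'
               '<user-relevance-element-list>%s</user-relevance-element-list>'
               '</query-step></mrml>' % (RESULT_SIZE, elements))
        reply = ET.fromstring(send_mrml(msg))
        return [e.get("image-location") for e in reply.iter("query-result-element")]

    def precision(results: list[str], relevant: set[str]) -> float:
        """Fraction of retrieved images that the relevance judgments mark as relevant."""
        return sum(1 for r in results if r in relevant) / len(results) if results else 0.0

    def benchmark(query_image: str, relevant: set[str], feedback_steps: int = 2) -> None:
        """Query by example, then generate feedback automatically from the judgments."""
        feedback = {query_image: 1}
        for step in range(feedback_steps + 1):
            results = query_step(feedback)
            print("step %d: P@%d = %.2f" % (step, RESULT_SIZE, precision(results, relevant)))
            # Relevant retrieved images become positive examples, the rest negative.
            feedback = {query_image: 1}
            feedback.update({r: 1 if r in relevant else -1 for r in results})

Running such a loop over every query image in the ground truth and averaging the per-step scores gives an automated measurement of how much positive and negative relevance feedback improves retrieval, which is the effect the benchmark is designed to evaluate.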

Citation (ISO format)
MÜLLER, Henning et al. Automated benchmarking in content-based image retrieval. 2001.
Main files (1)
Report
Access level: Public
Identifiers
  • PID: unige:48030
571 views
399 downloads

Technical information

Creation: 09/03/2015 12:34:28
First validation: 09/03/2015 12:34:28
Update time: 15/03/2023 00:00:31
Status update: 15/03/2023 00:00:31
Last indexation: 31/10/2024 00:26:59
All rights reserved by Archive ouverte UNIGE and the University of Geneva