Proceedings chapter
English

Performance Evaluation in Content-Based Image Retrieval

Presented at INRIA Rocquencourt (Paris, France)
Publication date: 2001
Abstract

Content-based image retrieval (CBIR) has now reached a mature stage. Search techniques are well-categorized, and several research prototypes and commercial products are available. However, the true performance of CBIR systems is still difficult to quantify. Setting up a CBIR benchmark is a substantial undertaking and can only succeed through the collaboration of all parties involved in the research and development of CBIR prototypes and related commercial products. The Benchathlon effort proposes to create such a context, in which CBIR will be evaluated thoroughly and objectively. In this paper, we present the Benchathlon and its objectives in more detail.

The goal of CBIR benchmarking has been divided into several parallel and inter-related sub-tasks. One essential task is the definition of ground-truth data. Since no such data exists, the image collection must be constructed from scratch. Copyright issues must be resolved so that this collection can be freely distributed, extended, and modified. Further, different sub-collections should be available for different specialized applications. It is also acknowledged here that no unique ground truth exists; techniques to account for user subjectivity should therefore be developed. Given the effort involved, tools to ease the task of data annotation also need to be designed. Related to this is the definition of objective, quantitative performance measures. These measures should be both thorough and orthogonal: they should allow for a complete evaluation and highlight the weaknesses and strengths of the CBIR system under evaluation. The goal is both to compare systems and to help system developers profile their techniques. To put this data to practical use, standard test queries and result sets must also be defined. Domain-specific constraints will strongly influence the design of such test cases. Another aspect is the feasibility of CBIR benchmarking itself. This requires a flexible software architecture that enables automated benchmarking with little (ideally no) programming overhead. Again, legal issues concerning the openness of the systems under evaluation must be accounted for.

In our paper, we also briefly present the solutions proposed by the Viper team at the University of Geneva. These realizations are gathered under the umbrella of our GIFT project, whose central feature is the Multimedia Retrieval Markup Language (MRML), an XML-based communication protocol that we consider a necessary tool for enabling CBIR benchmarking. We describe the architecture of our MRML-based benchmark and sketch results for the Viper search engine.
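The objective, quantitative measures the abstract calls for typically translate into figures such as precision and recall over a ranked result list. The following is a minimal illustrative sketch (not code from the paper), assuming a ranked list of image IDs returned by the system and a ground-truth set of relevant IDs:

```python
def precision_recall_at_k(ranked_ids, relevant_ids, k):
    """Precision and recall of the top-k results of a ranked list.

    ranked_ids   -- image IDs as returned by the CBIR system, best first
    relevant_ids -- set of IDs judged relevant for this query (ground truth)
    """
    top_k = ranked_ids[:k]
    hits = sum(1 for image_id in top_k if image_id in relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall

# Hypothetical query outcome: the system ranks 5 images; 3 images
# are relevant in total according to the ground truth.
ranking = ["img042", "img007", "img130", "img001", "img055"]
ground_truth = {"img042", "img001", "img099"}
p, r = precision_recall_at_k(ranking, ground_truth, k=5)
print(f"P@5 = {p:.2f}, R@5 = {r:.2f}")  # P@5 = 0.40, R@5 = 0.67
```

A thorough benchmark would combine several such measures (and, given the user-subjectivity issue raised above, possibly average them over judgments from multiple annotators) so that complementary weaknesses and strengths of a system become visible.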

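MRML itself is an XML protocol spoken between a CBIR client (here, a benchmark driver) and the search engine. The snippet below is a hypothetical sketch of how a benchmarking client could assemble an MRML-like query-by-example message; the element and attribute names are illustrative placeholders, not taken from the MRML specification.

```python
import xml.etree.ElementTree as ET

def build_query_message(session_id, example_image_url, result_size):
    """Assemble an MRML-like query-by-example message.

    NOTE: all element and attribute names below are illustrative
    placeholders; consult the MRML specification for the real vocabulary.
    """
    root = ET.Element("mrml", attrib={"session-id": session_id})
    query = ET.SubElement(root, "query-step",
                          attrib={"result-size": str(result_size)})
    ET.SubElement(query, "user-relevance-element",
                  attrib={"image-location": example_image_url,
                          "user-relevance": "1"})
    return ET.tostring(root, encoding="unicode")

message = build_query_message("bench-01",
                              "http://example.org/collection/img042.jpg", 20)
print(message)
```

Because the protocol is plain XML exchanged over a network connection, a benchmark harness can drive any MRML-speaking engine without engine-specific code, which is precisely the little-to-no programming overhead the abstract argues for.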
Citation (ISO format)
MARCHAND-MAILLET, Stéphane. Performance Evaluation in Content-Based Image Retrieval. In: Multimedia Content-Based Indexing and Retrieval (MMCBIR 2001). INRIA Rocquencourt (Paris, France). [s.l.] : [s.n.], 2001.
Identifiers
  • PID: unige:47822

All rights reserved by Archive ouverte UNIGE and the University of Geneva