Performance Evaluation in Content-Based Image Retrieval
|Published in||Multimedia Content-Based Indexing and Retrieval (MMCBIR 2001). INRIA Rocquencourt (Paris, France). 2001|
|Abstract||Content-based image retrieval (CBIR) has now reached a mature stage. Search techniques are well categorized, and several research prototypes and commercial products are available. However, the true performance of CBIR systems is still difficult to quantify. Setting up a CBIR benchmark is a heavy task and can only be done through the collaboration of all parties involved in the research and development of CBIR prototypes and related commercial products. The Benchathlon effort proposes to create such a context, in which CBIR will be evaluated thoroughly and objectively. In this paper, we present the Benchathlon and its objectives in more detail. The goal of CBIR benchmarking has been divided into several parallel and inter-related sub-tasks. One essential task is the definition of ground-truth data. Since no such data exists, the image collection must be constructed from scratch. Copyright issues should be resolved so that this collection can be freely distributed, extended, and modified. Further, different sub-collections should be available for different specialized applications. We also acknowledge that no unique ground truth exists; techniques to account for user subjectivity should therefore be developed. Given the effort involved, tools for easing the task of data annotation also need to be designed. Related to this is the definition of objective quantitative performance measures. These measures should be both thorough and orthogonal: they should allow for a complete evaluation and highlight the weaknesses and strengths of the CBIR system under evaluation, the goal being both to compare systems and to help system developers profile their techniques. To use this data in practical evaluation, standard test queries and result sets must also be defined. Domain-specific constraints will strongly influence the design of such test cases. Another aspect is the feasibility of CBIR benchmarking itself. This imposes the definition of a flexible software architecture that enables automated benchmarking while incurring little (optimally no) programming overhead. Again, legal issues concerning the openness of the systems under evaluation should be accounted for. In our paper, we also briefly present the solutions proposed by the Viper team at the University of Geneva. These realizations are gathered under the umbrella of our GIFT project, whose central feature is the Multimedia Retrieval Markup Language (MRML), an XML-based communication protocol that we consider a necessary tool for enabling CBIR benchmarking. We describe the architecture of our MRML-based benchmark and sketch results for the Viper search engine.|
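The abstract does not enumerate its performance measures, but precision and recall are the canonical quantitative measures for scoring a retrieval result against ground-truth relevance judgments. A minimal illustrative sketch (not taken from the paper; the image IDs and ground-truth set are hypothetical):

```python
def precision_recall(retrieved, relevant):
    """Score one query result against ground truth.

    retrieved -- ranked list of image IDs returned by the CBIR system
    relevant  -- set of ground-truth relevant image IDs (hypothetical annotation)
    """
    # Count retrieved images that the ground truth marks as relevant.
    hits = sum(1 for img in retrieved if img in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the system returns 4 images, 3 of which are
# relevant, while the ground truth lists 6 relevant images in total.
p, r = precision_recall(["a", "b", "c", "d"], {"a", "b", "c", "e", "f", "g"})
# p = 0.75, r = 0.5
```

Measures such as these are "orthogonal" in the abstract's sense: precision penalizes noise in the result set, while recall penalizes missed relevant images, so reporting both exposes different weaknesses of a system.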
|Research groups||Computer Vision and Multimedia Laboratory|
|MARCHAND-MAILLET, Stéphane. Performance Evaluation in Content-Based Image Retrieval. In: Multimedia Content-Based Indexing and Retrieval (MMCBIR 2001). INRIA Rocquencourt (Paris, France). [s.l.] : [s.n.], 2001. https://archive-ouverte.unige.ch/unige:47822|