Scientific Article

Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals

Published in Pattern Recognition Letters. 2001, vol. 22, no. 5, p. 593-601
Abstract Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as defining a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. (c) Copyright 2001, Elsevier Science, All rights reserved.
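For context, the evaluation measures most commonly borrowed from information retrieval for CBIR are precision and recall computed over a ranked result list against relevance judgments (ground truth). The sketch below is illustrative only and is not taken from the article; the ranking and relevance set are hypothetical examples.

```python
# Illustrative sketch (not from the article): standard IR-style measures
# often applied to CBIR result lists -- precision/recall at a cutoff k and
# average precision for a single query, given hypothetical ground truth.

def precision_recall_at_k(ranked_ids, relevant_ids, k):
    """Precision and recall over the top-k retrieved images."""
    retrieved = ranked_ids[:k]
    hits = sum(1 for image_id in retrieved if image_id in relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall

def average_precision(ranked_ids, relevant_ids):
    """Average of precision values at each rank where a relevant image appears."""
    hits = 0
    precisions = []
    for rank, image_id in enumerate(ranked_ids, start=1):
        if image_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

if __name__ == "__main__":
    ranking = ["img7", "img2", "img9", "img4", "img1"]  # hypothetical system output
    relevant = {"img2", "img4", "img8"}                  # hypothetical ground truth
    p, r = precision_recall_at_k(ranking, relevant, k=5)
    ap = average_precision(ranking, relevant)
    print(f"P@5 = {p:.2f}, R@5 = {r:.2f}, AP = {ap:.2f}")
```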
Keywords Content-based image retrieval; Performance evaluation; Information retrieval
Note Special Issue on Image and Video Indexing
Research groups Viper group
Computer Vision and Multimedia Laboratory
Multimodal Interaction Group
Citation (ISO format)
MULLER, Henning et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. In: Pattern Recognition Letters, 2001, vol. 22, n° 5, p. 593-601. https://archive-ouverte.unige.ch/unige:47475

Deposited on: 2015-03-03
