Proceedings chapter

TagCaptcha: annotating images with CAPTCHAs

Presented at: New York (USA)
Publication date: 2009

Image retrieval has long been hampered by the limitations of automatic methods, which cannot reliably extract semantic information from low-level visual features. As a result, users must formulate awkward and inefficient queries in terms these systems can understand. Humans, on the other hand, can quickly and accurately summarise visual data. This dichotomy, known as the semantic gap, is a fundamental problem in image retrieval. We aim to narrow the semantic gap in a typical retrieval scenario by motivating users to provide semantic image annotations. We propose a system that collects image annotations based on the need for human verification on the web. Similar in principle to work by von Ahn et al. [2, 3], the idea is to exploit the requirement that users pass tests in order to incrementally annotate images.
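The abstract does not specify the verification scheme, but the general pattern it describes — pairing a test the system can grade with one it cannot, in the style of von Ahn et al. — can be sketched as follows. All class and method names here (`TagCaptchaSketch`, `make_challenge`, `submit`) and the agreement-threshold policy are illustrative assumptions, not the authors' implementation:

```python
import random
from collections import Counter, defaultdict

class TagCaptchaSketch:
    """Hypothetical sketch of a CAPTCHA that doubles as an image annotator.

    Each challenge pairs a 'control' image whose tags are already known
    with an 'unknown' image. The user passes only if their answer for the
    control image matches a known tag; the answer for the unknown image is
    then recorded as a candidate annotation. (Illustrative only; the paper
    may use a different verification and aggregation scheme.)
    """

    def __init__(self, known_tags, agreement_threshold=2):
        # known_tags: {image_id: set of accepted tags}
        self.known_tags = known_tags
        self.threshold = agreement_threshold
        self.candidates = defaultdict(Counter)  # image_id -> tag counts

    def make_challenge(self, unknown_image, rng=random):
        # Pick a control image and present both in random order, so the
        # user cannot tell which answer is actually being graded.
        control = rng.choice(list(self.known_tags))
        pair = [control, unknown_image]
        rng.shuffle(pair)
        return control, pair

    def submit(self, control, answers):
        """answers: {image_id: tag}. Returns True if the user passes."""
        if answers.get(control, "").lower() not in self.known_tags[control]:
            return False  # failed the control image: reject, record nothing
        for image_id, tag in answers.items():
            if image_id == control:
                continue
            self.candidates[image_id][tag.lower()] += 1
            # Promote a tag once enough independent users agree on it;
            # the image can then serve as a control image itself.
            if self.candidates[image_id][tag.lower()] >= self.threshold:
                self.known_tags.setdefault(image_id, set()).add(tag.lower())
        return True
```

Under this sketch, annotations accumulate incrementally: each passed test contributes one candidate tag, and agreement across users both filters noise and grows the pool of control images.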

Citation (ISO format)
MORRISON, Donn Alexander, MARCHAND-MAILLET, Stéphane, BRUNO, Eric. TagCaptcha : annotating images with CAPTCHAs. In: Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP ’09. New York (USA). [s.l.] : ACM, 2009. p. 44–45. doi: 10.1145/1600150.1600166
Main files (1)
Proceedings chapter (Published version)

Technical information

Creation: 03/06/2015 5:12:06 PM
First validation: 03/06/2015 5:12:06 PM
Update time: 03/14/2023 10:58:29 PM
Status update: 03/14/2023 10:58:29 PM
Last indexation: 01/16/2024 5:07:55 PM
All rights reserved by Archive ouverte UNIGE and the University of Geneva