Book chapter

See ColOr: Seeing Colours with an Orchestra

Published in: Human Machine Interaction, edited by Lalanne, Denis & Kohlas, Jürg, p. 251–279
Publisher: Berlin: Springer
  • Lecture Notes in Computer Science; 5440
Publication date: 2009

The See ColOr interface transforms a small portion of a coloured video image into sound sources represented by spatialised musical instruments. Basically, the conversion of colours into sounds is achieved by quantisation of the HSL (Hue, Saturation and Luminosity) colour system. Our purpose is to provide visually impaired individuals with the ability to perceive their environment in real time. In this work we present the system's design principles and several experiments carried out by blindfolded participants. The goal of the first experiment was to identify the colours of the main features in static pictures in order to interpret the image scenes. Participants found that colours were helpful in narrowing down the possible image interpretations. Afterwards, two experiments based on a head-mounted camera were performed. The first experiment pertains to object manipulation and is based on the pairing of coloured socks, while the second is related to outdoor navigation, with the goal of following a coloured sinuous path painted on the ground. The socks experiment demonstrated that blindfolded individuals were able to accurately match pairs of coloured socks. The same participants successfully followed a red serpentine path painted on the ground for more than 80 meters. Finally, we propose an original approach for a real-time alerting system, based on the detection of visually salient parts in videos. The particularity of our approach lies in the use of a new feature map constructed from the depth gradient. From the computed feature maps we infer conspicuity maps that indicate areas appreciably different from their surroundings. Then a specific distance function is described, which takes into account both stereoscopic camera limitations and the user's choices.
We also report how we automatically estimate the relative contribution of each conspicuity map, which enables the unsupervised determination of the final saliency map, indicating the visual salience of all points in the image. We demonstrate that this additional depth-based feature map allows the system to detect salient regions with good accuracy in most situations, even in the presence of noisy disparity maps.
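The colour-to-sound principle described above can be sketched in code. The bin count, the instrument list, and the mapping of luminosity to loudness below are illustrative assumptions, not the chapter's actual parameters:

```python
import colorsys

# Hypothetical instrument palette for hue bins (an assumption, not the
# chapter's actual orchestra assignment).
INSTRUMENTS = ["oboe", "viola", "pizzicato", "flute", "trumpet", "piano", "saxophone"]

def hsl_of_rgb(r, g, b):
    """Convert 8-bit RGB to (hue, saturation, luminosity), each in [0, 1]."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h, s, l

def quantise_colour(r, g, b, n_hue_bins=7):
    """Map a pixel colour to an (instrument, loudness) pair.

    Hue is quantised into n_hue_bins bins, each selecting one instrument;
    luminosity is taken as loudness (both choices are assumptions)."""
    h, s, l = hsl_of_rgb(r, g, b)
    hue_bin = min(int(h * n_hue_bins), n_hue_bins - 1)
    return INSTRUMENTS[hue_bin], round(l, 2)
```

For example, pure red (255, 0, 0) has hue 0 and mid luminosity, so it falls into the first hue bin and would be rendered by the first instrument at medium loudness. Spatialisation (placing each sound source at the pixel's direction) is a separate rendering step not shown here.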
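The unsupervised combination of conspicuity maps can likewise be sketched. The weighting heuristic below (favouring maps whose global maximum stands out from the mean) and the derivation of the depth-gradient map are common choices in the saliency literature, used here as assumptions rather than the chapter's exact method:

```python
import numpy as np

def depth_gradient_map(depth):
    """Conspicuity cue from depth: gradient magnitude of the depth image
    (an assumed construction of the depth-based feature map)."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy)

def conspicuity_weight(c):
    """Estimate a map's relative contribution from its own statistics:
    maps with one strong, isolated peak get a large weight."""
    return (c.max() - c.mean()) ** 2

def saliency(feature_maps):
    """Weighted, normalised combination of conspicuity maps into a final
    saliency map over all image points."""
    weights = np.array([conspicuity_weight(c) for c in feature_maps])
    if weights.sum() == 0:
        return np.zeros_like(feature_maps[0])
    weights = weights / weights.sum()
    s = sum(w * c for w, c in zip(weights, feature_maps))
    return s / s.max() if s.max() > 0 else s
```

With this heuristic, a uniformly textured depth-gradient map receives a near-zero weight, so a single bright blob in another channel dominates the final map, which matches the intuition that uniform activity is not salient.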

Keywords
  • Salient Region
  • Depth Gradient
  • Blind User
  • Tactile Display
  • White Cane
Citation (ISO format)
DEVILLE, Benoît et al. See ColOr: Seeing Colours with an Orchestra. In: Human Machine Interaction. Berlin : Springer, 2009. p. 251–279. (Lecture Notes in Computer Science; 5440) doi: 10.1007/978-3-642-00437-7_10
Main files (1)
Book chapter (Published version)

Technical information

Creation: 10/08/2021 9:01:00 AM
First validation: 10/08/2021 9:01:00 AM
Update time: 03/16/2023 1:27:51 AM
Status update: 03/16/2023 1:27:50 AM
Last indexation: 02/12/2024 12:11:22 PM
All rights reserved by Archive ouverte UNIGE and the University of Geneva