Scientific article
Open access

Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping

Published in: Frontiers in artificial intelligence, vol. 4, 744476
Publication date: 2022-01-25
First online date: 2022-01-25

The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although many control approaches have been developed and investigated in recent decades, limited robustness in real-life conditions has often prevented their application in clinical settings and commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data come from the publicly available MeganePro Dataset 1, which includes multimodal recordings from transradial amputees and able-bodied subjects grasping numerous household objects with ten grasp types. A continuous grasp-type classifier based on surface electromyography served as both intent detector and classifier. At the same time, the information provided by eye-hand coordination parameters, gaze data, and object recognition in first-person videos made it possible to identify the object a person aims to grasp. The results show that including visual information significantly increases the average offline classification accuracy, by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to that of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.
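The fusion described in the abstract — an sEMG grasp-type classifier whose output is combined with a grasp prior derived from the gazed-at object — could be sketched as follows. This is a minimal illustration, not the authors' implementation: the fusion rule, the weight `alpha`, and the example probabilities are all hypothetical.

```python
import numpy as np

def fuse_grasp_probabilities(emg_probs, object_grasp_prior, alpha=0.5):
    """Fuse sEMG classifier output with a vision-derived grasp prior.

    emg_probs: per-grasp-type probabilities from the sEMG classifier.
    object_grasp_prior: probability of each grasp type given the object
        that gaze indicates the user intends to grasp (hypothetical).
    alpha: weight on the visual prior (hypothetical fusion rule).
    """
    emg_probs = np.asarray(emg_probs, dtype=float)
    prior = np.asarray(object_grasp_prior, dtype=float)
    # Weighted product fusion, renormalized to a valid distribution.
    fused = emg_probs * prior ** alpha
    return fused / fused.sum()

# Example with 3 grasp types: the EMG signal is ambiguous between
# types 0 and 1, but the recognized object is usually grasped with
# type 1, so the visual prior resolves the ambiguity.
emg = [0.45, 0.45, 0.10]
prior = [0.10, 0.80, 0.10]
fused = fuse_grasp_probabilities(emg, prior)
print(int(fused.argmax()))  # grasp type favored after fusion
```

The point of such a scheme is that the visual prior only reweights the EMG decision; when the two modalities agree, behavior is unchanged, and when the EMG output is ambiguous, the object identity breaks the tie.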

  • Assistive robotics
  • Deep learning
  • Electromyography
  • Eye-hand coordination
  • Eye-tracking
  • Hand prosthetics
  • Manipulators
  • Multi-modal machine learning
Citation (ISO format)
COGNOLATO, Matteo et al. Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping. In: Frontiers in artificial intelligence, 2022, vol. 4, p. 744476. doi: 10.3389/frai.2021.744476
Main files (1)
Article (Published version)
Secondary files (1)
ISSN of the journal2624-8212

Technical information

Creation: 09/20/2022 9:59:00 AM
First validation: 09/20/2022 9:59:00 AM
Update time: 03/16/2023 10:14:17 AM
Status update: 03/16/2023 10:14:16 AM
Last indexation: 02/01/2024 9:19:05 AM
All rights reserved by Archive ouverte UNIGE and the University of Geneva