VMGdB
the CONTACT Visuo-Motor Grasping dataBase
Grasping is one of the most interesting challenges in robotics today, posing problems to the mechanical and electronic engineer, the computer vision researcher, the control theorist and, more recently, the neuroscientist. The study of human grasping has proved beneficial to a better understanding of the problem. Here we present VMGdB, the CONTACT Visuo-Motor Grasping Database, a recording of grasping actions performed by 20 human subjects on 7 objects using 5 grasp types, under variable illumination conditions. The VMGdB consists of 5200 grasping acts organized in 260 data entries, each consisting of 2 video sequences recorded from two colour cameras and motor data recorded from a sensorised glove. Labeled data are available as standard AVI videos and a file of ASCII outputs from the glove.

The intentionally unstructured illumination conditions and the diversity of the objects' shapes, textures, and colors make this database a rather realistic model of the human act of grasping.

The VMGdB provides the community with a reliable and flexible testbed for tackling the problem of grasping from a humanoid/human-oriented perspective, and hopefully beyond.

Acquisition setup
The VMGdB contains recordings of grasping actions performed by human subjects sitting in front of a desk on which an object to grasp is placed.

Each subject was asked to grasp the object in front of him/her with the right hand, wearing a sensorized glove, and then to put it back in its original position on the desk with the left hand, while the right hand returned to its resting position.
 
The scene was illuminated by natural light. Illumination conditions were intentionally not controlled and changed over time, since the acquisition sessions spanned a week.

The main acquisition components are:
  • 2 Watec WAT-202D color cameras operating at 25 Hz for extracting visual representations of the grasping action
  • an Ascension Flock-of-Birds magnetic tracker mounted on the subject's wrist, a 22-sensor Immersion CyberGlove virtual reality glove, and a pressure sensor glued to the subject's thumb, providing the haptic/motor representation of the action (a plausible per-sample layout is sketched below).
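As an illustration, the sketch below shows one plausible way of representing a single motor sample in Python. The field names, and the assumption that the tracker pose is stored as six values, are ours and are not part of the database specification; the actual file content is described later on this page.

    from dataclasses import dataclass
    from typing import Tuple

    # Hypothetical layout of one motor sample, for illustration only.
    @dataclass
    class MotorSample:
        timestamp: float                 # time stamp, used to synchronize with the videos
        glove_joints: Tuple[float, ...]  # 22 joint readings from the CyberGlove
        wrist_pose: Tuple[float, ...]    # wrist position/orientation from the Flock-of-Birds tracker
        thumb_pressure: float            # reading from the pressure sensor on the thumb

    sample = MotorSample(
        timestamp=0.04,
        glove_joints=tuple(0.0 for _ in range(22)),
        wrist_pose=(0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
        thumb_pressure=0.0,
    )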
Experiment design
  • The dataset considers 7 objects and 5 grasp types. The objects were selected so as to represent different materials, colors, and shapes. To each object we associate one or more grasps, in accordance with everyday experience. Table 1 reports the 13 (grasp, object) pairs we considered.
The 7 objects in the VMGdB: ball, pen, duck, pig, hammer, tape, lego brick.


The 5 grasp types (left to right): cylindric power, flat, pinch, spherical, tripodal grip.


Table 1. The 13 (grasp, object) pairs.
ball pen duck pig hammer tape legobrick
cylindric power X
flat X X
pinch X X X
spherical X X
tripodal X X X X
  • The pool of volunteering subjects includes 20 right-handed people, 6 females and 14 males, aged between 24 and 42 years (median age 31).
  • Each of the 260 (subject, object, grasp) triplets is an entry of the database; each subject repeated each (grasp, object) experiment 20 times, giving a total of 5200 grasping acts.

The data

Each of the 260 (subject, object, grasp) entries is associated with the following data:

  • Visual information: two video sequences (384x288, 25 fps AVI, MPEG-4 compression) acquired by the two cameras, with the focus of attention on the object and on the action respectively. The videos cover the whole grasping action, from the hand at rest, to the grasp, and back to the rest position. Each video sequence is associated with an ASCII file (smtp extension) allowing for synchronization with the motor data (see the sketch after this list).
  • Motor information: one ASCII file (dat extension) containing the readings from the glove, the magnetic tracker and the pressure sensor, as well as a time stamp for synchronization with the visual data.
Each file covers a whole experiment, i.e., a set of 20 consecutive grasping actions performed by the same subject on the same (grasp, object) pair.
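As a sketch of how visual and motor data could be aligned, the snippet below loads a motor dat file and a smtp synchronization file and matches each video frame to the motor sample with the nearest time stamp. The file names, the assumption that the smtp file lists one time stamp per frame, and the assumption that the time stamp is the first column of the dat file are ours; the README and the official Matlab scripts document the actual formats.

    import numpy as np

    def sync_frames_to_motor(smtp_path, dat_path):
        # Both files are plain ASCII tables of numbers.
        frame_times = np.loadtxt(smtp_path)   # assumed: one time stamp per video frame
        motor = np.loadtxt(dat_path)          # assumed: one row per glove/tracker/pressure sample
        motor_times = motor[:, 0]             # assumed: time stamp in the first column
        # For every video frame, pick the motor sample with the closest time stamp.
        idx = np.abs(motor_times[None, :] - frame_times[:, None]).argmin(axis=1)
        return motor[idx]

    # Hypothetical usage:
    # aligned = sync_frames_to_motor("subject01_ball_spherical.smtp",
    #                                "subject01_ball_spherical.dat")

Nearest-neighbour matching is a simple choice here because the glove and the cameras are not necessarily sampled at the same rate; for actual use, the Matlab scripts distributed with the database should be preferred.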
Supplemental material
  • Images of the objects: a set of 20 plain PNG images of each object, depicted in the acquisition environment, is included. Crops of these images containing the sole object are also available.
  • Matlab scripts: Matlab scripts for data synchronization, for the extraction of instantaneous visuo-motor information, and for data interpretation are available. See the README file.
Download
Click on the links to download zip files (about 200 MB each) of the grasping actions performed by each volunteer. Click on the link below to download the Matlab scripts.
For hints on how to browse the data and how to use the scripts, see the README file.
Publications
Contacts and acknowledgements

This is a joint work between Università degli Studi di Genova (It), IDIAP (Ch) and IIT (It).

For more information: noceti <at> disi.unige.it (Nicoletta Noceti)
