2013 Looking at People ICMI Challenge - Multimodal Gesture Recognition
Track description
The focus of the challenge is on “multiple instance, user independent learning” of gestures: learning to recognize gestures from several instances of each category, performed by different users and drawn from a gesture vocabulary of 20 categories. A gesture vocabulary is a set of unique gestures, generally related to a particular task. This challenge focuses on recognizing a vocabulary of 20 Italian cultural/anthropological signs.
Challenge stages:
- Development phase: Create a learning system capable of learning a gesture classification problem from several training examples. Practice with the development data (a large database of 7,754 manually labeled gestures) and submit predictions on-line on the validation data (3,362 labeled gestures) to get immediate feedback on the leaderboard. Recommended: towards the end of the development phase, submit your code for verification purposes. Training sequences 223, 225 and 228 and validation sequences 629 to 639 contain too many samples without skeleton information; please do not use them if your proposed method relies on skeleton information (see the sketch after this list).
- Final evaluation phase: Make predictions on the new final evaluation data (2,742 gestures) revealed at the end of the development phase. Participants will have a few days to train their systems and upload their predictions.
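If a method relies on skeleton data, the sequences flagged above can simply be filtered out when assembling the training and validation sets. The sketch below illustrates one way to do this; the integer sequence IDs and the example ID ranges are assumptions for illustration, not part of the official challenge kit.

```python
# Minimal sketch: drop sequences with too many missing-skeleton samples
# before building splits for a skeleton-based method.
# The ID ranges below are hypothetical; only the excluded IDs come from
# the track description.

# Sequences listed in the track description as lacking skeleton data.
SKELETON_EXCLUDE = {223, 225, 228} | set(range(629, 640))  # 629..639 inclusive

def usable_sequences(ids, use_skeleton=True):
    """Return the sequence IDs to keep for the chosen modality."""
    if not use_skeleton:
        return list(ids)
    return [i for i in ids if i not in SKELETON_EXCLUDE]

# Example usage with hypothetical training/validation ID ranges.
train_ids = usable_sequences(range(1, 404))
valid_ids = usable_sequences(range(404, 701))
```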
We highly recommend that participants take advantage of this opportunity and regularly upload updated versions of their code during the development period. The last code submission before the deadline will be used for verification.