Challenge description


ChaLearn Challenges on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake Expressed Emotions @ICCV17

Venice, Italy

October 2017

This contest focuses on three problems in gesture and emotion recognition: large-scale isolated gesture recognition, large-scale continuous gesture recognition from RGB-D data, and fake vs. true emotion recognition from video sequences. We propose the Large Scale Multimodal Gesture Recognition Competition, whose goal is to develop efficient methods for multimodal gesture recognition from isolated or continuous sequences. At CVPR 2016, we released two large-scale multimodal gesture datasets: ChaLearn LAP IsoGD and ChaLearn LAP ConGD. These datasets were used in the ICPR 2016 challenge we organized, in which more than 100 participants joined the competition. Although the final winners greatly improved performance compared with the baseline method (see the results analysis paper [1]), there is still considerable room for improvement. Therefore, given the high interest of the action and gesture communities in large-scale recognition problems, we are launching a second round of this challenge, collocated with a workshop on the topic.
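For context on evaluation, the ICPR 2016 round scored the isolated track by recognition rate (classification accuracy) and the continuous track by the mean Jaccard index between predicted and ground-truth gesture intervals [1]. The Python sketch below computes a simplified, frame-level Jaccard score for one video; the (start, end, label) interval representation is an assumption made for illustration, not the official submission format.

    def mean_jaccard(pred, truth):
        """Simplified frame-level Jaccard score for one video.

        pred and truth are lists of (start_frame, end_frame, label)
        tuples with inclusive frame indices; the score is averaged
        over the labels present in the ground truth.
        """
        labels = {label for _, _, label in truth}
        score = 0.0
        for label in labels:
            p, t = set(), set()
            for s, e, l in pred:
                if l == label:
                    p.update(range(s, e + 1))
            for s, e, l in truth:
                if l == label:
                    t.update(range(s, e + 1))
            score += len(p & t) / len(p | t)
        return score / len(labels)

    # One ground-truth gesture and a slightly shifted prediction:
    # intersection 12..30 (19 frames), union 10..32 (23 frames), ~0.83.
    print(mean_jaccard(pred=[(10, 30, 5)], truth=[(12, 32, 5)]))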

Being able to recognize deceit and the authenticity of emotional displays is notoriously difficult for human observers because of the subtlety or short duration of discriminative facial responses. Applications are numerous, from detecting deceptive behavior in police investigations to improving border control by understanding the incongruity between what is expressed and what is experienced. For this challenge, a new database, the SASE-FE database, consisting of 643 different videos recorded with a high-resolution GoPro Hero camera, has been prepared and labelled. The tasks are recognition of fake vs. true emotion and recognition of fake vs. true displays within a specific emotion, e.g., fake vs. true surprise.

 

Data will be made available to participants in different stages as follows:

  • Development (training) and validation data, with ground truth for all of the considered variables, will be made available at the beginning of the competition. Participants will be able to submit predictions to the CodaLab platform (http://codalab.org/) and receive immediate feedback on their performance on the validation set (there will be a validation leaderboard on the platform); a sketch of how a submission might be packaged follows this list.
  • Final evaluation (test) data will be made available to participants one week before the end of the challenge. Participants will have to submit their predictions on these data to be considered for the final evaluation (no ground truth will be released at this point).
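As an illustration of the submission protocol only (the exact file names and formats are specified on each track's CodaLab page), a submission is typically a zip archive containing a plain-text predictions file. In the Python sketch below, the file name predictions.txt, the video identifiers, and the labels are all hypothetical placeholders:

    import zipfile

    # Hypothetical mapping from video identifier to predicted label.
    predictions = {"video_001": 12, "video_002": 7}

    # Write one "video_id label" pair per line (assumed format; check
    # the competition page for the required format of each track).
    with open("predictions.txt", "w") as f:
        for video_id, label in sorted(predictions.items()):
            f.write("{} {}\n".format(video_id, label))

    # Package the predictions into a zip archive, then upload it through
    # the "Participate > Submit / View Results" tab on CodaLab.
    with zipfile.ZipFile("submission.zip", "w") as z:
        z.write("predictions.txt")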

 

How to enter the competitions and how they are evaluated

The competition will be run on the CodaLab platform (https://competitions.codalab.org), an open-source platform co-developed by the organizers of this challenge together with Stanford and Microsoft Research (https://github.com/codalab/codalab-competitions/wiki/Project_About_CodaLab). Participants will register through the platform, where they will be able to access the data, evaluation scripts, the validation leaderboard (i.e., they can track their performance on the validation data), etc.

Dissemination of results

Participants who obtain the best results in the challenge will be invited to submit a paper to its associated ICCV 2017 workshop.

 

References

[1] Hugo Jair Escalante, Víctor Ponce-López, Jun Wan, Michael A. Riegler, Baiyu Chen, Albert Clapés, Sergio Escalera, Isabelle Guyon, Xavier Baró, Pål Halvorsen, Henning Müller, Martha Larson. "ChaLearn Joint Contest on Multimedia Challenges Beyond Visual Analysis: An Overview," Proc. of ICPR Workshops, 2016.

News


April 20: The ICCV'17 ChaLearn Challenges on Action, Gesture, and Emotion Recognition have started.