Multimodal Gesture Recognition: Montalbano V2 (ECCV '14) - Training Data
Evaluation metrics
In the file ChalearnLAPEvaluation.py there are several methods for evaluation. The first important one allows you to export the labels of a set of samples into a ground-truth folder, which is later used to compute the final overlap value. Assume you use samples 1 to 10 for validation purposes and have a folder valSamples containing the files Sample0001.zip to Sample0010.zip as downloaded from the training data set. We can create a ground-truth folder gtData using:
>> from ChalearnLAPEvaluation import exportGT_Gesture
>> exportGT_Gesture("valSamples", "gtData")
This method exports the label files and data files for each sample in the valSamples folder to the gtData folder. This new ground-truth folder will be used by the evaluation methods.
For each sample, we need to store the gesture predictions in a CSV file in the same format the labels are provided in, that is, one line per gesture with the gestureID, the initial frame, and the final frame. This file must be named Samplexxxx_predictions.csv. To make this easy, the class GestureSample can store this information for a given sample. Following the example from the last section, we can store the predictions for a sample using:
>> from ChalearnLAPSample import GestureSample
>> gestureSample = GestureSample("SampleXXXX.zip")
Now, if our predictions are gesture 1 from frame 102 to 203 and gesture 5 from frame 250 to 325, and we want to store the predictions in a certain folder valPredict, we can use the following code:
>> gestureSample.exportPredictions(([1,102,203], [5,250,325]), "valPredict")
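To make the file format concrete, this is roughly what writing such a predictions file by hand would look like (the filename and output location here are illustrative; exportPredictions takes care of this for you):

```python
import csv

# One line per gesture: gestureID, startFrame, endFrame.
predictions = [(1, 102, 203), (5, 250, 325)]
with open("Sample0001_predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for gesture_id, start, end in predictions:
        writer.writerow([gesture_id, start, end])
```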
Assuming the previously defined paths and objects, to evaluate the overlap for a single labeled sample prediction (that is, a prediction for a sample from a set where labels are provided), we can use:
>> overlap=gestureSample.evaluate(([1,102,203], [5,250,325]))
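The overlap here is commonly described as an intersection-over-union (Jaccard) measure between predicted and ground-truth frame intervals. As a rough illustration under that assumption (this is a sketch, not the library's own code), the overlap between a single predicted interval and a single ground-truth interval could be computed as:

```python
def jaccard_overlap(pred, gt):
    """Frame-level Jaccard index between one predicted and one
    ground-truth interval, each given as (gestureID, start, end).
    Intervals with different gesture IDs do not overlap at all."""
    if pred[0] != gt[0]:
        return 0.0
    # Number of frames shared by the two intervals (inclusive bounds).
    inter = max(0, min(pred[2], gt[2]) - max(pred[1], gt[1]) + 1)
    # Total frames covered by either interval.
    union = (pred[2] - pred[1] + 1) + (gt[2] - gt[1] + 1) - inter
    return inter / union
```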
Finally, to obtain the final score over all the predictions, computed in the same way as on the Codalab platform, we use:
>> from ChalearnLAPEvaluation import evalGesture
>> score=evalGesture("valPredict", "gtData")
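The exact matching and aggregation rules live in ChalearnLAPEvaluation.py; purely as an illustration of how per-interval overlaps might be combined into a sample-level score, here is one simplified aggregation (the helper and the matching rule are assumptions, not the official implementation):

```python
def sample_score(preds, gts):
    """Simplified per-sample score: for each ground-truth gesture,
    take the best frame-level Jaccard overlap over all predictions
    with the same gestureID, then average. A sketch only; the
    official evalGesture may differ in its matching details."""
    def jaccard(a, b):
        inter = max(0, min(a[2], b[2]) - max(a[1], b[1]) + 1)
        union = (a[2] - a[1] + 1) + (b[2] - b[1] + 1) - inter
        return inter / union

    scores = []
    for gt in gts:
        best = max((jaccard(p, gt) for p in preds if p[0] == gt[0]),
                   default=0.0)
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0
```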