Track description


OSLWL (One-Shot Learning and Weak Labels): OSLWL is a realistic variant of the one-shot learning problem adapted to sign language, where it is relatively easy to obtain a couple of examples of a sign from a sign language dictionary, but much harder to find co-articulated versions of that specific sign. When subtitles are available, as in broadcast-based datasets, the typical approach is to use the text to predict a likely interval in which the sign might be performed. This track simulates that setting by providing a set of queries (isolated signs) and a set of video intervals around every co-articulated instance of the queries. Intervals containing no instance of any query are also provided as negative ground truth. Participants must spot the exact location of the sign instances within the provided video intervals. The annotations will be released when the set of test queries and test video intervals is delivered; the annotations of the test set will be released after the challenge has finished.
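To make the task concrete, here is a minimal, hypothetical baseline sketch (not part of the challenge kit): given per-frame feature vectors for a query sign and for a video interval — the feature extractor itself is assumed to exist upstream, e.g. a pose or video backbone — slide a fixed-size window over the interval and return the window whose mean-pooled features are most cosine-similar to the mean-pooled query. All function names here are illustrative assumptions, not the official evaluation code.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def mean_pool(frames):
    # Average a list of per-frame feature vectors into one vector.
    n = len(frames)
    return [sum(f[d] for f in frames) / n for d in range(len(frames[0]))]

def spot_sign(query_frames, interval_frames, window):
    """Return (start, end, score) of the window best matching the query.

    query_frames / interval_frames: lists of per-frame feature vectors
    (hypothetical inputs from some upstream feature extractor).
    """
    q = mean_pool(query_frames)
    best = (0, window, -1.0)
    for start in range(len(interval_frames) - window + 1):
        w = mean_pool(interval_frames[start:start + window])
        s = cosine(q, w)
        if s > best[2]:
            best = (start, start + window, s)
    return best
```

A competitive entry would replace the fixed window and mean pooling with temporal alignment (e.g. DTW) and learned embeddings, but the interface — query in, (start, end) localization out — mirrors what the track asks participants to produce.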

The dataset is available here.

News


ECCV2022 Challenge on Sign Spotting

The ChaLearn ECCV2022 Challenge on Sign Spotting is open and accepting submissions on Codalab. Training data is available for download. Join us to push the boundaries of Continuous Sign Language Recognition.