Challenge description


Overview

The Task: The challenge will use an extension of the UPAR Dataset [1], which consists of images of pedestrians annotated for 40 binary attributes. For deployment and long-term use of machine-learning algorithms in a surveillance context, the algorithms must be robust to domain gaps that occur when the environment changes. This challenge aims to spotlight the problem of domain gaps in a real-world surveillance context and highlight the challenges and limitations of existing methods to provide a direction for future research.

The Dataset: We will use an extension of the UPAR dataset [1]. The challenge dataset harmonizes three public datasets (PA100K [2], PETA [3], and Market1501-Attributes [4]) and adds a private test set. Forty binary attributes have been unified across these datasets, for which we provide additional annotations. This dataset enables the investigation of PAR methods' generalization ability under different attribute distributions, viewpoints, varying illumination, and low resolution.

The Tracks: This challenge is split into two tracks associated with semantic pedestrian attributes, such as gender or clothing information: Pedestrian Attribute Recognition (PAR) and attribute-based person retrieval. Both tracks build on the same data sources but use different evaluation criteria. For both tracks, there are three dataset splits that use different training domains. Each track evaluates how robust a given method is to domain shifts by training on limited data from a specific domain and evaluating on data from unseen domains.

  • Track 1: Pedestrian Attribute Recognition: The task is to train an attribute classifier that accurately predicts persons’ semantic attributes, such as age or clothing information, under domain shifts.
  • Track 2: Attribute-based Person Retrieval: Attribute-based person retrieval aims to find persons that match a specific attribute description in a large database of images, called the gallery. The goal of this track is to develop an approach that takes binary attribute queries and gallery images as input and ranks the images according to their similarity to the query.

The Phases: Each track consists of two phases, the development phase and the test phase. During the development phase, public training data will be released, and participants must submit their predictions on a validation set. During the test (final) phase, participants will need to submit their results for the test data, which will be released just a few days before the end of the challenge. When the test phase starts, validation annotations will become available together with the test images for the final submission. At the end of the challenge, participants will be ranked using the public test data and additional data that is kept private. It is important to note that this competition involves submitting both results and code. Participants will therefore be required to share their code and trained models after the end of the challenge (with detailed instructions) so that the organizers can reproduce the results submitted in the test phase during a code verification stage. Verified code will be applied to a private test dataset for the final ranking: the organizers will evaluate the top submissions on the public leaderboard on the private test set to determine the three top winners of the challenge. Top-ranked methods that pass the code verification stage will be considered valid submissions and compete for any prize that may be offered.

[1] A. Specker, M. Cormier, and J. Beyerer, "UPAR: Unified Pedestrian Attribute Recognition and Person Retrieval," https://arxiv.org/abs/2209.02522, 2022.
[2] X. Liu et al., "HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis," in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 350-359.
[3] Y. Deng, P. Luo, C. C. Loy, and X. Tang, "Pedestrian Attribute Recognition at Far Distance," in Proceedings of the 22nd ACM International Conference on Multimedia (MM), 2014, pp. 789-792.
[4] Y. Lin, L. Zheng, Z. Zheng, Y. Wu, Z. Hu, C. Yan, and Y. Yang, "Improving Person Re-identification by Attribute and Identity Learning," Pattern Recognition, vol. 95, pp. 151-161, 2019.

Important Dates

The tentative schedule is already available.

Dataset

Detailed information about the dataset can be found here.

Baseline

As a baseline, we will use the method described in [1] for both tasks. Details can be found in the respective paper.

How to enter the competition

The competition will be run on the CodaLab platform. First, register on CodaLab using the links below to be able to submit results during the development and test phases of the challenge. Then pick a track (or both tracks) to follow and train on the respective training splits. The validation and test data remain the same across both tracks.

Submissions of result files to the CodaLab challenges that follow the format provided in the starting kit and comply with the challenge rules will be listed on the leaderboard and ranked.

  • Track 1: Pedestrian Attribute Recognition (competition link): Train on predefined data and evaluate generalization properties.
  • Track 2: Attribute-based Person Retrieval (competition link): Train on predefined data and evaluate generalization properties.

The participants will need to register through the platform, where they can access the data and submit their predictions on the validation and test data (i.e., development and test phases) and obtain real-time feedback on the leaderboard. The development and test phases will open/close automatically based on the defined schedule.

Starting kit

The starting kit includes a download script that downloads the sub-datasets, the annotations for the development phase, and submission templates for both tracks. You can find it here; please follow the instructions. The submission templates for both tracks contain random predictions and show the expected submission format. More details can be found in the “Making a submission” section below. Participants are required to make submissions using the provided templates, replacing the random predictions/rankings with the ones obtained by their models. Note that the evaluation script will verify the consistency of the submitted files and may invalidate a submission in case of any inconsistency.
Warning: the maximum number of submissions per participant at the test stage will be set to 3. Participants are not allowed to create multiple accounts to make additional submissions. The organizers may disqualify suspicious submissions that do not follow this rule.

  • Track 1 (Pedestrian Attribute Recognition) Submission template (".csv" file) - (dev phase | test phase).

  • Track 2 (Attribute-based Person Retrieval) Submission template (".csv" file) - (dev phase | test phase).


Making a submission

This challenge follows a cross-validation evaluation protocol using three different splits (train splits are identical for both tasks). The training data consists of image files and associated attribute annotations. There is a separate training file for each of the three splits, named "train_0.csv", "train_1.csv", and "train_2.csv". The files contain one row per training image and 41 columns: the first column specifies the image path, and the remaining columns hold the binary attribute annotations, where 1 indicates the presence of an attribute and 0 its absence. To submit results, train a separate model for each of the three splits. Please note that using training data from another split is strictly forbidden. This includes all kinds of training, calibration, etc.
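For illustration, a minimal sketch of loading one training split with pandas is given below. It assumes the CSV files carry a header row with the attribute names; check the starting kit files for the exact layout.

import pandas as pd

def load_split(csv_path):
    """Return image paths, an (N, 40) binary label matrix, and the attribute names."""
    # Assumes a header row and the column layout described above:
    # image path in column 0, followed by 40 binary attribute columns.
    df = pd.read_csv(csv_path)
    image_paths = df.iloc[:, 0].tolist()   # first column: image path
    labels = df.iloc[:, 1:].to_numpy()     # remaining columns: 0/1 attribute labels
    return image_paths, labels, list(df.columns[1:])

paths, labels, attribute_names = load_split("train_0.csv")
print(len(paths), labels.shape)            # N images, (N, 40)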

  • Track 1 PAR: Extract attribute predictions from the three models for all images in "val_all.csv" ("test_all.csv" during the testing phase) and store the results in three separate files named "predictions_0.csv", "predictions_1.csv", and "predictions_2.csv". Submission files must have the same format as the annotation files (e.g., "train_0.csv"): they contain as many rows as there are images in "val_all.csv"/"test_all.csv", and image paths and the respective attribute predictions are stored in the columns in the same order as in the annotation files. It is important to keep the order of images as specified in the "val_all.csv"/"test_all.csv" files. We recommend simply concatenating your attribute predictions to the "val_all.csv"/"test_all.csv" files to avoid any issues (see the code sketch after the ranking example below). Finally, zip the submission files and upload the zip file to the CodaLab platform.
  • Track 2 Retrieval: We provide two types of evaluation files for the attribute-based retrieval task: files that contain the attribute queries for each split (“val/test_queries_X.csv”) and files that list the gallery images (“val/test_imgs_X.csv”). Query files include one query per row; each row contains a unique combination of the 40 binary attributes representing the attribute set to search for. The gallery files are simply a list of images that should be sorted according to their similarity to the queries. Submission files should contain one row per query with the ranking positions of each gallery image separated by commas. The following table provides an example. Please note that the first row and column are just for visualization purposes and should not be included in the submission files.
            a.png   b.png   c.png   d.png
Query 1       0       1       2       3
Query 2       2       0       1       3
Query 3       3       1       2       0

For each query, the ranking positions are assigned to the four gallery images. It is important to keep the order of gallery images as defined in the "val/test_imgs_X.csv" files. Concerning Query 1, image a.png ranks first based on the algorithm, followed by images b.png, c.png, and d.png. Regarding the second query, the method considers b.png the most similar, followed by c.png, a.png, and d.png. Finally, for submission, create three files, "rankings_0.csv", "rankings_1.csv", and "rankings_2.csv", for the respective splits and zip them.
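Below is a minimal, purely illustrative sketch of producing the submission files for both tracks. The functions predict_attributes and score_gallery are hypothetical placeholders for your own models, "val_all.csv" is assumed to list one image path per row, and higher similarity scores are assumed to mean a better match; verify all file layouts against the starting kit templates.

import zipfile
import numpy as np
import pandas as pd

# --- Track 1: attribute predictions ------------------------------------------
val = pd.read_csv("val_all.csv")                       # images in the required order
preds = predict_attributes(val.iloc[:, 0].tolist())    # placeholder -> (N, 40) array of 0/1
pd.concat([val, pd.DataFrame(preds)], axis=1).to_csv("predictions_0.csv", index=False)
# ... repeat with the models trained on splits 1 and 2 -> predictions_1/2.csv

# --- Track 2: ranking positions per query ------------------------------------
queries = pd.read_csv("val_queries_0.csv")             # one binary attribute query per row
gallery = pd.read_csv("val_imgs_0.csv")                # gallery images in the required order
scores = score_gallery(queries, gallery)               # placeholder -> (n_queries, n_gallery)
# Position 0 = most similar gallery image; a double argsort turns scores into ranks.
positions = np.argsort(np.argsort(-scores, axis=1), axis=1)
np.savetxt("rankings_0.csv", positions, fmt="%d", delimiter=",")
# ... repeat for splits 1 and 2 -> rankings_1/2.csv

# --- Zip the three files of a track and upload the archive to CodaLab --------
with zipfile.ZipFile("track1_submission.zip", "w") as zf:
    for name in ("predictions_0.csv", "predictions_1.csv", "predictions_2.csv"):
        zf.write(name)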

Then,

sign in on CodaLab -> go to our challenge webpage (and associated track) on CodaLab -> open the "Participate" tab -> "Submit / view results" -> "Submit" -> select your "the_filename_you_want.zip" file -> submit.

Warning: the last step ("submit") may take several minutes on Track 2 (e.g., >10 min) with status "Running" due to the amount of computation and the available CodaLab resources (just wait). If everything goes well, you will see the obtained results on the leaderboard ("Results" tab).

Note that CodaLab keeps the last valid submission on the leaderboard. This helps participants receive real-time feedback on their submitted files. Participants are responsible for making sure that the file they believe will rank them best is their last valid submission.

Evaluation Metric

Different evaluation metrics are used for the two tracks:

  • Track 1: Harmonic mean of mA and instance-based F1
  • Track 2: Harmonic mean of mAP and R-1

The final score is computed as follows: the metrics are computed separately for each of the three splits and then averaged across the splits.
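As a rough illustration only (the evaluation script on CodaLab is authoritative), one plausible reading of the description above is to compute the harmonic mean per split and then average across the three splits; the numbers below are made up.

def harmonic_mean(a, b):
    # Harmonic mean of two non-negative scores, guarding against division by zero.
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0

# Track 1: (mA, instance-based F1) per split; Track 2 would use (mAP, R-1).
per_split = [(0.80, 0.75), (0.78, 0.72), (0.82, 0.77)]
final_score = sum(harmonic_mean(x, y) for x, y in per_split) / len(per_split)
print(round(final_score, 4))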

Basic Rules

According to the Terms and Conditions of the Challenge,

  • Participants may only use the training data specified in the "train_X.csv" files for training their models on the respective splits. Using training data from another split is strictly forbidden. This includes all kinds of training, calibration, etc.
  • Models cannot be trained on any additional real or synthetic data except ImageNet. Pre-training with COCO (or any other dataset) is not allowed.
  • Any use of the test images is prohibited and will result in disqualification. The test data may not be used in any way, not even unsupervised, semi-supervised, or for domain adaptation.
  • Validation data and released labels in the testing phase can only be used to validate the method's performance and not for training.
  • For each split, participants may train only one model. This model has to be used to compute predictions for the entire validation/test set. The participants are not allowed to use different approaches/models/hyper-parameter sets/etc. for different subsets of the validation/test data. It is allowed to use different training parameters/hyper-parameters for each of the training splits.
  • The maximum number of submissions per participant at the test stage will be set to 3. Participants are not allowed to create multiple accounts to make additional submissions. The organizers may disqualify suspicious submissions that do not follow this rule.
  • In order to win the challenge, top-ranked participants' scores must improve upon the baseline performance provided by the challenge organizers.
  • The performances on test data will be verified after the end of the challenge during a code verification stage. Only submissions that pass the code verification will be considered in the final list of winning methods.
  • The organizers will evaluate the top submissions on the public leaderboard on the private test set to determine the 3 top winners of the challenge.
  • To be part of the final ranking, the participants will be asked to fill out a survey (fact sheet) where detailed and technical information about the developed approach is provided.

Final Evaluation and Ranking (post-challenge)

Important dates regarding code submission and fact sheets are defined in the schedule.

  • Code verification: After the end of the test phase, participants are required to share with the organizers the source code used to generate the submitted results, with detailed and complete instructions (and requirements) so that the results can be reproduced locally (preferably using Docker). Note that only solutions that pass the code verification stage are eligible to be announced in the final list of winning solutions. Participants are required to share both training and prediction code along with the pre-trained models. Participants are requested to share with the organizers a link to a code repository containing the required instructions; this information must be included in the fact sheets (described next).

Ideally, the instructions to reproduce the code should contain:
1) how to structure the data (at train and test stage).
2) how to run any preprocessing script, if needed.
3) how to extract or load the input features, if needed.
4) how to run the docker used to run the code and to install any required libraries, if possible/needed.
5) how to run the script to perform the training.
6) how to run the script to perform the predictions, which generates outputs in the challenge format. The script must be able to generate predictions for any input images (Task 1) or query and gallery image combinations (Task 2) specified in a text file (formats analogous to those provided for testing); a hypothetical sketch of such an entry point is given below.
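The following is only an illustrative sketch of the kind of prediction entry point requested in item 6; the flags, file names, and the run_model helper are hypothetical and not prescribed by the organizers.

import argparse
import pandas as pd

def main():
    parser = argparse.ArgumentParser(description="Generate challenge-format outputs")
    parser.add_argument("--input-csv", required=True,
                        help="file listing input images (Task 1) or queries/gallery (Task 2)")
    parser.add_argument("--checkpoint", required=True, help="trained model to load")
    parser.add_argument("--output-csv", required=True,
                        help="where to write predictions_X.csv / rankings_X.csv")
    args = parser.parse_args()

    inputs = pd.read_csv(args.input_csv)
    outputs = run_model(args.checkpoint, inputs)    # placeholder for the actual inference
    outputs.to_csv(args.output_csv, index=False)

if __name__ == "__main__":
    main()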

  • Fact sheets: In addition to the source code, participants are required to share with the organizers a detailed scientific and technical description of the proposed approach, using the fact-sheet template provided by the organizers. The LaTeX template of the fact sheets can be downloaded here.

Sharing the requested information with the organizers: Send the compressed project of your fact sheet (in .zip format), i.e., the generated PDF, .tex, .bib, and any additional files, to <upar.challenge@gmail.com>, with the e-mail subject "WACV 2023 UPAR Challenge / Fact Sheets and Code repository".


IMPORTANT NOTE: we encourage participants to provide detailed and complete instructions so that the organizers can easily reproduce the results. If any problems arise during code verification, we may need to contact the authors, which can take time and delay the release of the list of winners.

Challenge Results (test phase)

We are happy to announce the winning solution of the WACV 2023 Pedestrian Attribute Recognition and Attribute-based Person Retrieval Challenge. The team had its code verified in the code verification stage. The associated fact sheets and the link to the code repository are available here (will be available soon). The organizers would like to thank all the participants for making this challenge a success.

Pedestrian Attribute Recognition (PAR) Challenge (Track 1)
1st place: melaeric - Team Leader: Jun Wan (Institute of Automation, Chinese Academy of Sciences). Team members: Hao Tan, Zichang Tan, Dunfang Weng, Ajian Liu, Yang Yang, Jun Wan.

Associated Workshop

Check our associated Real-World Surveillance: Applications and Challenges Workshop


News


WACV'23 Pedestrian Attribute Recognition and Attribute-based Person Retrieval Challenge

The ChaLearn WACV'23 Pedestrian Attribute Recognition and Attribute-based Person Retrieval Challenge has just opened on CodaLab. Join us to push the boundaries of pedestrian attribute recognition and attribute-based person retrieval under concept drift, on an extension of the UPAR dataset.