Datasets

Download:

The dataset for Track 1 is released on Baidu Drive and OneDrive.

The dataset for Track 2 is released on Baidu Drive and OneDrive.

Seven public datasets (SIRST-V2, IRSTD-1K, IRDST, NUDT-SIRST, NUDT-SIRST-Sea, NUDT-MIRSDT, Anti-UAV) and a dataset developed by the team of the National University of Defense Technology (NUDT) are used in this challenge. The datasets contain images with various target shapes (e.g., point targets, spotted targets, extended targets), wavelengths (e.g., near-infrared, shortwave infrared, and thermal), and image resolutions (e.g., 256, 512, 1024, 3200), acquired by different imaging systems (e.g., land-based, aerial-based, and space-based systems). Figure 1 shows some example images of the training sets.
Figure 1. Example images of the training sets.
All datasets used in the challenge have been licensed by their authors, and the copyrights of all datasets belong to their authors. Note that the provided datasets are not allowed to be used beyond this challenge.
For Track 1, 6000 images with coarse point annotations (i.e., each GT point is located around the centroid of the GT mask, following a Gaussian distribution) are released for training and validation. 500 images are reserved for testing and will not be released.
For Track 2, 9000 images with ground-truth (GT) mask annotations are released for training and validation. 2000 images are reserved for testing and will not be released.
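For reference, below is a minimal sketch of how a coarse point label of the kind described for Track 1 could be derived from a GT mask: the mask centroid is perturbed with Gaussian noise. The function name and the standard deviation `sigma` are illustrative assumptions, not official challenge parameters.

```python
import numpy as np

def coarse_point_from_mask(gt_mask, sigma=2.0, rng=None):
    """Illustrative only: jitter the GT-mask centroid with Gaussian noise
    to mimic a coarse point annotation (sigma is an assumed value)."""
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.nonzero(gt_mask)          # pixel coordinates of the target mask
    cy, cx = ys.mean(), xs.mean()         # centroid of the GT mask
    py = int(round(cy + rng.normal(0.0, sigma)))
    px = int(round(cx + rng.normal(0.0, sigma)))
    h, w = gt_mask.shape
    return (min(max(py, 0), h - 1), min(max(px, 0), w - 1))  # clip to image bounds
```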


Evaluation Metrics

A pixel-level metric (intersection over union, \( IoU \)) and target-level metrics (probability of detection \( P_d \) and false-alarm rate \( F_a \)) are used for performance evaluation in Track 1; their implementations can be found in our public toolbox. Note that the linear weighted summation of \( IoU \) and \( P_d \) is used for ranking, and the corresponding performance score \( S_p \) is defined as: $$ S_p = \alpha \times IoU + (1 - \alpha) \times P_d, \qquad (1) $$ where \( \alpha \) denotes the weight. The false-alarm rate \( F_a \) must be lower than 1e-4 for the score to be valid; otherwise, the score is invalid.
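As a concrete illustration of Eq. (1) and the validity rule, a minimal sketch is given below. It assumes IoU and Pd are expressed on the same percentage scale used in the baseline tables further down; the function name is ours, not part of the official toolbox.

```python
def performance_score(iou, pd, fa, alpha=0.5, fa_max=1e-4):
    """Track 1 performance score of Eq. (1): S_p = alpha*IoU + (1-alpha)*Pd.
    A submission whose false-alarm rate is not below fa_max (1e-4) is invalid."""
    if fa >= fa_max:
        return None  # invalid submission: no score is assigned
    return alpha * iou + (1.0 - alpha) * pd

# Example with the full-supervision baseline numbers on test B (see tables below):
# performance_score(40.773, 68.588, 4.915e-6)  ->  ~54.68
```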
For Track 2, besides the performance score, we also introduce the number of parameters \( P \) and FLOPs \( F \) for performance evaluation. Note that entries are ranked by combining these values, with the parameter and FLOP counts normalized by those of a baseline state-of-the-art model for the task. The corresponding efficiency score \( S_e \) is defined as: $$ S_e = 1 - \frac{P_{sub}/P_{base} + F_{sub}/F_{base}}{2}, \qquad (2) $$ where \( [\cdot]_{sub} \) and \( [\cdot]_{base} \) denote the corresponding values of the submission and the baseline model, respectively. The final score \( S_{pe} \) is the weighted summation of the performance score and the efficiency score, defined as: $$ S_{pe} = \gamma \times S_p + (1 - \gamma) \times S_e. \qquad (3) $$
Note: The use of additional data or pre-trained models is not allowed in the competition. \( \alpha = 0.5 \), \( \gamma = 0.5 \).
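Analogously, Eqs. (2) and (3) can be sketched as below. The function names and the example submission numbers are hypothetical; the baseline parameter and FLOP counts used in the comment are those of the Track 2 UNet baseline listed further down.

```python
def efficiency_score(params_sub, flops_sub, params_base, flops_base):
    """Track 2 efficiency score of Eq. (2), relative to the baseline model."""
    return 1.0 - (params_sub / params_base + flops_sub / flops_base) / 2.0

def final_score(s_p, s_e, gamma=0.5):
    """Final Track 2 score of Eq. (3): weighted sum of performance and efficiency."""
    return gamma * s_p + (1.0 - gamma) * s_e

# Hypothetical submission with half the baseline's parameters and FLOPs
# (UNet baseline: 0.9M parameters, 5.08G FLOPs):
# efficiency_score(0.45e6, 2.54e9, 0.9e6, 5.08e9)  ->  0.5
```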


Baseline Model

Track 1:

- Baseline model results for test A (60% of test results):

Model                                    IoU      Pd       Fa        Score
DNANet_full (full supervision)           37.478   68.637   5.019e-6  51.558
DNANet_LESPS_coarse (weak supervision)   27.766   64.294   2.153e-5  46.030

- Baseline model results for test B (100% of test results):

Model                                    IoU      Pd       Fa        Score
DNANet_full (full supervision)           40.773   68.588   4.915e-6  54.681
DNANet_LESPS_coarse (weak supervision)   29.266   63.636   2.294e-5  46.451

- Baseline model training and testing code: https://github.com/XinyiYing/LESPS

- Baseline model training and test result file: Baidu Drive, OneDrive

Track 2:

- Baseline model training and test results:

Model   IoU   Pd   Fa        Params   FLOPs   Score
UNet    62    59   1.5e-05   0.9M     5.08G   60

- Baseline model training and testing code: https://github.com/YeRen123455/ICPR-Track2-LightWeight

- Baseline model training and test result file: Baidu Drive, OneDrive