Competition Description

LivDet-Iris 2023 is the fifth competition in the LivDet-Iris series, offering (a) an independent assessment of the current state of the art in iris Presentation Attack Detection (PAD) algorithms, and (b) an evaluation protocol, including datasets of spoof and live iris images, that researchers can follow after the competition closes to compare their solutions with the LivDet-Iris winners and baselines. LivDet-Iris 2023 was included in the official IJCB 2023 competition list.

This competition had three parts:

Part 1: "Algorithms-Self-Tested" involved the self-evaluation (that is, done by competitors on the sequestered and never-published-before test dataset) incorporating a large number of ISO-compliant live and fake samples. Samples included irises synthesized by modern Generative Adversarial Networks-based models (StyleGAN2 and StyleGAN3) and near-infrared pictures of various artefacts simulating physical iris presentation attacks.

Part 2: "Algorithms-Independently-Tested" involved the evaluation of the software solutions submitted by the competitors and performed by the organizers on a sequestered dataset similar in terms of attack types to the one used in Part 1.

Part 3: "Systems" involved the systematic testing of submitted iris recognition systems based on physical artifacts presented to the sensors.

Competitors could participate in one, two, or all three parts. A separate winner demonstrating the best performance was to be announced for each part. Metrics recommended by ISO/IEC 30107-1:2016 were used to assess the submissions.

A summary of all previous LivDet-Iris competitions is available as Chapter 7 of the new Handbook of Biometric Anti-Spoofing, and the IJCB 2020 paper presents the most recent edition, LivDet-Iris 2020.

Important Dates

  • March 15, 2023: Data ready to be shared with participants. [DONE]
  • April 23, 2023 (extended from April 15): Deadline for participant submissions: self-evaluation results for Part 1, software submissions for Part 2, or delivery of systems for Part 3.
  • May 5, 2023 (extended from May 1): Winners announced; the best teams invited to co-author the IJCB paper submission.
  • May 15, 2023: Human examination and baseline results ready; paper summarizing the competition submitted to IJCB.

How to Participate

To participate in Part 1 -- "Algorithms-Self-Tested":

1. Obtain the self-evaluation package with the data
  • Execute these two data sharing license agreements and send them to the organizers (livdetiris23@gmail.com)
  • You will receive details on how to download the data package, which includes a small "train" dataset with correct labels (spoof/live) illustrating the format and nature of the test data, and the actual "test" data, without ground-truth labels, to be used in the self-evaluation. The package also includes a short "readme" with instructions.
2. Generate self-evaluation PAD scores
  • Generate iris presentation attack detection (PAD) scores in the range [0,100] for all "test" samples, where "100" is the maximum degree of liveness and "0" means the image is fake.
  • Construct a CSV file with two columns, "filename" and "score", and put your PAD scores in this CSV. If an image cannot be processed, use -1000 as its PAD score (see the sketch after this list).
3. Submit your CSV by April 23 (End of Day, Anywhere on Earth; extended from April 15) to livdetiris23@gmail.com. Provide the following data in the email with your submission:
  • Submitter name: ____
  • Affiliation: ____
  • Email address: ____
  • Phone number: ____
  • Mailing address: ____
  • Acronym or short name of the team / solution: ____ (if you want to stay anonymous in the competition and all resulting publications, say “Anonymous”)
You will receive a confirmation of receipt, or a request to correct or amend the submission if anything is incorrect. You are encouraged to submit early so that we can screen your submission for potential errors.
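
For concreteness, here is a minimal sketch of step 2 in Python. The names write_submission and my_pad_model are illustrative, not part of the competition package; only the two-column "filename"/"score" format, the [0,100] score range, and the -1000 error code come from the instructions above.

    import csv

    def write_submission(scores, out_path="part1_scores.csv"):
        """Write PAD scores to the two-column CSV expected in Part 1.

        `scores` maps each test image filename to a liveness score in
        [0, 100], or to -1000 when the image could not be processed.
        """
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["filename", "score"])  # required header
            for filename, score in sorted(scores.items()):
                writer.writerow([filename, score])

    # Example with a hypothetical detector `my_pad_model`:
    #   from pathlib import Path
    #   scores = {p.name: my_pad_model(p) for p in Path("test").glob("*.png")}
    #   write_submission(scores)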

Note 1: the "test" data CANNOT be used in any way for training your algorithms. We trust in your fair evaluation. You can use any publicly-available datasets for training, for instance the benchmarks from previous LivDet-Iris competitions, or datasets offered by the University of Notre Dame.

Note 2: Metrics recommended by ISO/IEC 30107-1:2016 will be used to assess all submissions: APCER at various levels of BPCER for the research paper, and the average of APCER and BPCER to select the LivDet-Iris 2023 Part 1 winner (a threshold of 50.0 will be used to calculate the APCER and BPCER error rates in this case).
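
To make the scoring rule in Note 2 concrete, here is a minimal sketch of these metrics in Python. It assumes the score convention stated above (scores in [0,100], higher meaning more live) and classifies a sample as live when its score is at or above the threshold; how the organizers handle ties and the -1000 error code is not specified here.

    def apcer_bpcer_acer(live_scores, attack_scores, threshold=50.0):
        """ISO/IEC 30107-style error rates at a fixed decision threshold."""
        # APCER: fraction of attack samples wrongly classified as live
        apcer = sum(s >= threshold for s in attack_scores) / len(attack_scores)
        # BPCER: fraction of bona fide (live) samples wrongly rejected
        bpcer = sum(s < threshold for s in live_scores) / len(live_scores)
        # ACER: the plain average of the two, used to select the winner
        acer = (apcer + bpcer) / 2
        return apcer, bpcer, acer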


To participate in Part 2 -- "Algorithms-Independently-Tested":

Follow these instructions to submit your executables by April 23 (End of Day, Anywhere on Earth; extended from April 15). Provide the following data in the email with your submission:
  • Submitter name: ____
  • Affiliation: ____
  • Email address: ____
  • Phone number: ____
  • Mailing address: ____
  • Acronym or short name of the team / solution: ____ (if you want to stay anonymous in the competition and all resulting publications, say “Anonymous”)
Optionally, if you find it useful, you can request a copy of the "train" and "test" data from Part 1 (see above for how). The LivDet-Iris team will run your executable on sequestered datasets similar in nature (though possibly with different attack types) to the data offered in Part 1. Important: the "test" data offered in Part 1 CANNOT be used in any way for training your algorithms.


To participate in Part 3 -- "Systems":

You will need to ship your iris recognition system to Clarkson University, NY, USA. Please contact us at your earliest convenience at livdetiris23@gmail.com to arrange the logistics of this testing.

Winners and IJCB 2023 Summary Paper

Winners: Each Part (1, 2, and 3) will have its own winner. Metrics recommended by ISO/IEC 30107-1:2016 will be used to assess the submissions. For Parts 1 and 2, a threshold of 50.0 will be used to calculate the APCER and BPCER error rates.

Competition Results - Part 1
Since we chose two different ways of calculating the Attack Presentation Classification Error Rate (APCER) across attack types, either as a weighted average (with weights proportional to the number of samples of each attack type) or as a non-weighted average, we decided to announce two winners, one for each calculation method:

-- The Beijing University of Civil Engineering and Architecture (BUCEA) team wins when APCER is not weighted by the number of samples, with an Average Classification Error Rate (ACER) of 22.15%.

-- When weighted APCER is calculated, the Fraunhofer IGD team wins with ACER = 37.31%.
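
The difference between the two rankings comes down to how per-attack-type APCERs are pooled into one number. Below is a minimal sketch of the two averaging schemes described above, assuming per-type APCERs and sample counts are already available; it is an illustration, not the organizers' evaluation code.

    def pooled_apcer(apcer_by_type, count_by_type, weighted=True):
        """Combine per-attack-type APCERs into a single error rate.

        weighted=True:  weights proportional to the number of samples
                        of each attack type.
        weighted=False: plain mean, each attack type counting equally.
        """
        types = list(apcer_by_type)
        if weighted:
            total = sum(count_by_type[t] for t in types)
            return sum(apcer_by_type[t] * count_by_type[t] for t in types) / total
        return sum(apcer_by_type[t] for t in types) / len(types)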

The full breakdown of the competition results is available in the IJCB 2023 paper (ArXiv preprint: https://arxiv.org/abs/2310.04541).

To obtain a copy of the test data used in the competition, follow the instructions provided at https://cvrl.nd.edu/projects/data/#livdet-iris-2023-part1.

Competition Results - Parts 2 and 3
There were no submissions to Parts 2 and 3 in this edition of LivDet-Iris.

Contact

Submissions and General Questions:
livdetiris23@gmail.com
Technical Contacts:
Patrick Tinsley @ Notre Dame (ptinsley@nd.edu)
Sandip Purnapatra @ Clarkson (purnaps@clarkson.edu)
Lead PIs:
Adam Czajka (aczajka@nd.edu)
Stephanie Schuckers (sschucke@clarkson.edu)