LivDet-Iris 2023 is the fifth competition in the LivDet-Iris series, offering
(a) an independent assessment of the current state of the art in iris Presentation Attack Detection algorithms, and (b) an evaluation protocol,
including datasets of spoof and live iris images, that researchers can follow after the competition has
closed to compare their solutions with the LivDet-Iris winners and baselines. LivDet-Iris 2023 was included in the official IJCB 2023 competition list.
This competition had three parts:
Part 1: "Algorithms-Self-Tested" involved the self-evaluation (that is, done by competitors on the sequestered and never-published-before test dataset)
incorporating a large number of ISO-compliant live and fake samples. Samples included irises synthesized by modern Generative Adversarial Networks-based models
(StyleGAN2 and StyleGAN3) and near-infrared pictures of various artefacts simulating physical iris presentation attacks.
Part 2: "Algorithms-Independently-Tested" involved the evaluation of the software solutions submitted by the competitors and
performed by the organizers on a sequestered dataset similar in terms of attack types to the one used in Part 1.
Part 3: "Systems" involved the systematic testing of submitted iris recognition systems based on physical artifacts presented to the sensors.
Competitors could participate in one, two, or all three parts. A separate winner, demonstrating the best performance, was planned to be announced for each part.
Metrics recommended by ISO/IEC 30107-1:2016 were used to assess the submissions.
A summary of all previous LivDet-Iris competitions is available as
Chapter 7 in the new Handbook of Biometric Anti-Spoofing, and the IJCB 2020 competition paper describes the most recent
previous edition, LivDet-Iris 2020.
Winners: Each part (1, 2, and 3) has its own winner, determined using the ISO/IEC 30107-1:2016 metrics mentioned above.
For Parts 1 and 2, a fixed threshold of 50.0 was used to calculate the Attack Presentation Classification Error Rate (APCER) and the Bona Fide Presentation Classification Error Rate (BPCER).
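To make the scoring concrete, below is a minimal Python sketch of how APCER, BPCER, and the Average Classification Error Rate (ACER) can be computed at the fixed 50.0 threshold. It assumes each algorithm outputs a score in [0, 100] and that scores at or above the threshold are classified as attacks; the score orientation and all function names are illustrative assumptions, not the official evaluation code.

THRESHOLD = 50.0  # fixed decision threshold used for Parts 1 and 2

def apcer(attack_scores):
    # APCER: fraction of attack samples misclassified as bona fide,
    # i.e., scoring below the threshold (score orientation is assumed).
    return sum(s < THRESHOLD for s in attack_scores) / len(attack_scores)

def bpcer(bonafide_scores):
    # BPCER: fraction of bona fide samples misclassified as attacks.
    return sum(s >= THRESHOLD for s in bonafide_scores) / len(bonafide_scores)

def acer(attack_scores, bonafide_scores):
    # ACER: simple average of APCER and BPCER.
    return (apcer(attack_scores) + bpcer(bonafide_scores)) / 2.0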
Competition Results - Part 1
Since we chose two different ways of calculating the APCER across attack types,
either as a weighted average (with weights proportional to the number of samples of each attack type) or as a non-weighted average,
we decided to announce two winners, one for each calculation method (see the sketch after the list):
-- The Beijing University of Civil Engineering and Architecture (BUCEA) team wins for the non-weighted APCER,
with an ACER of 22.15%.
-- For the weighted APCER, the Fraunhofer IGD team wins with an ACER of 37.31%.
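For illustration, the Python sketch below contrasts the two APCER aggregation schemes. The attack-type names, per-type APCER values, and sample counts are made up for the example and are not results from the competition.

# Hypothetical per-attack-type APCERs and sample counts (made-up numbers).
per_type_apcer = {"StyleGAN2": 0.10, "StyleGAN3": 0.30, "printouts": 0.05}
per_type_count = {"StyleGAN2": 2000, "StyleGAN3": 2000, "printouts": 500}

# Non-weighted average: each attack type contributes equally.
unweighted = sum(per_type_apcer.values()) / len(per_type_apcer)

# Weighted average: each attack type contributes in proportion to its sample count.
total = sum(per_type_count.values())
weighted = sum(per_type_apcer[t] * per_type_count[t] for t in per_type_apcer) / total

print(f"non-weighted APCER: {unweighted:.4f}, weighted APCER: {weighted:.4f}")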
The full breakdown of the competition results is available in the IJCB 2023 paper (arXiv preprint: https://arxiv.org/abs/2310.04541).
To obtain a copy of the test data used in the competition, follow the instructions provided at https://cvrl.nd.edu/projects/data/#livdet-iris-2023-part1.
Competition Results - Parts 2 and 3
There were no submissions to Parts 2 and 3 in this edition of LivDet-Iris.
Organizers:
University of Notre Dame, IN, USA:
Dr. Adam Czajka
Patrick Tinsley
Mahsa Mitcheff
Dr. Patrick Flynn
Dr. Kevin Bowyer
Clarkson University, NY, USA:
Dr. Stephanie Schuckers
Dr. Masudul Haider Imtiaz
Sandip Purnapatra
Surendra Singh
Naveenkumar Venkataswamy
Submissions and General Questions:
livdetiris23@gmail.com
Technical Contacts:
Patrick Tinsley @ Notre Dame (ptinsley@nd.edu)
Sandip Purnapatra @ Clarkson (purnaps@clarkson.edu)
Lead PIs:
Adam Czajka (aczajka@nd.edu)
Stephanie Schuckers (sschucke@clarkson.edu)