This paper is concerned with the performance evaluation of fingerprint verification systems. After an initial classification of biometric testing initiatives, we explore both the theoretical and practical issues related to performance evaluation by presenting the outcome of the recent Fingerprint Verification Competition (FVC2004).
FVC2004 was organized by the authors of this work for the purpose of assessing the state-of-the-art in this challenging pattern recognition application and making available a new common benchmark for an unambiguous comparison of fingerprint-based biometric systems. FVC2004 is an independent, strongly supervised evaluation performed at the evaluators’ site on the evaluators’ hardware.
This allowed the test to be completely controlled and the computation times of different algorithms to be fairly compared. The experience and feedback received from previous, similar competitions (FVC2000 and FVC2002) allowed us to improve the organization and methodology of FVC2004 and to capture the attention of a significantly higher number of academic and commercial organizations (67 algorithms were submitted to FVC2004).
A new, “Light” competition category was included to estimate the loss of matching performance caused by imposing computational constraints. This paper discusses data collection and testing protocols, and includes a detailed analysis of the results.
We introduce a simple but effective method for comparing algorithms at the score level, allowing us to isolate difficult cases (images) and to study error correlations and algorithm “fusion.” The huge amount of information obtained, including a structured classification of the submitted algorithms on the basis of their features, makes it possible to better understand how current fingerprint recognition systems work and to delineate useful research directions for the future.
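To make the idea of score-level comparison and fusion concrete, the sketch below shows one common (hypothetical, not the authors’ specific) approach: each matcher’s raw similarity scores are min–max normalized to a common [0, 1] range, then combined with the simple sum rule. The matcher names and score values are illustrative assumptions only.

```python
# Illustrative sketch of score-level fusion (assumed technique, not the
# method evaluated in FVC2004): min-max normalization + sum rule.

def min_max_normalize(scores):
    """Scale a list of raw matcher scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(scores_a, scores_b):
    """Fuse two matchers' scores on the same comparisons (sum rule, averaged)."""
    norm_a = min_max_normalize(scores_a)
    norm_b = min_max_normalize(scores_b)
    return [(a + b) / 2.0 for a, b in zip(norm_a, norm_b)]

# Hypothetical raw similarity scores from two matchers on the same image pairs.
matcher_a = [120, 450, 300, 80]   # matcher A: arbitrary score scale
matcher_b = [0.2, 0.9, 0.7, 0.1]  # matcher B: scores already near [0, 1]
fused = sum_rule_fusion(matcher_a, matcher_b)
```

Because normalization maps both matchers onto a comparable scale, the fused score can separate genuine and impostor pairs better than either matcher alone when their errors are weakly correlated, which is exactly why studying error correlations matters before fusing.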
Source: Michigan State University
Authors: Raffaele Cappelli, Dario Maio, Davide Maltoni, James L. Wayman, Anil K. Jain