
NISTIR 8271 DRAFT SUPPLEMENT

Face Recognition
Vendor Test (FRVT)
Part 2: Identification
This publication is available free of charge from: https://doi.org/10.6028/NIST.IR.8271

Patrick Grother
Mei Ngan
Kayee Hanaoka
Information Access Division
Information Technology Laboratory

This document is a draft supplement of NIST Interagency Report 8271

2022/12/18
NISTIR 8271 DRAFT SUPPLEMENT

Face Recognition
Vendor Test (FRVT)
Part 2: Identification
This publication is available free of charge from: https://doi.org/10.6028/NIST.IR.8271

Patrick Grother
Mei Ngan
Kayee Hanaoka
Information Access Division
Information Technology Laboratory

This document is a draft supplement of NIST Interagency Report 8271

November 2022

U.S. Department of Commerce


Gina M. Raimondo, Secretary

National Institute of Standards and Technology


Laurie E. Locascio, NIST Director and Undersecretary of Commerce for Standards and Technology

RELEASE NOTES

2022-12-15: The 1:N track of the FRVT remains open.

. This document is the nineteenth draft update to NIST Interagency Report 8271. It contains results for
one first-time participant: First Credit Bureau Kazakhstan.
. The document also includes results for algorithms from five returning developers: Gorilla Technology,
Pangiam, Qnap Security, SQIsoft, and Vixvizion (formerly known as Imagus).

2022-11-09: The 1:N track of the FRVT remains open.

. This document is the nineteenth draft update to NIST Interagency Report 8271. It contains results for
four first-time participants: Mukh, Turing Technology VIP, Verijelas, and Verihubs Inteligensia.
. The document also includes results for algorithms from two returning developers: Maxvision and
Samsung S1.

2022-09-23: The 1:N track of the FRVT remains open.

. This document is the eighteenth draft update to NIST Interagency Report 8271. It contains results for
two first-time participants: Intema-LGL Group and T4iSB.
. The document also includes results for algorithms from ten returning developers: Cloudwalk - Moon-
time Smart Technology, Dermalog, Griaule, Hangzhuo AIlu Network Information Technology, Intel-
livision, Line Corporation, NEC, Sensetime Group, Realnetworks Inc, and Vietnam Posts and Telecom-
munications Group.

2022-07-28: The 1:N track of the FRVT remains open.

. This document is the seventeenth draft update to NIST Interagency Report 8271. It contains results for
one first-time participant: Maxvision.
. The document also includes results for algorithms from two returning developers: Rank One Com-
puting, and Viettel Group.
. We have replaced the probe set used in the visa-border benchmark. It was previously comprised of
80 000 images; it now has size 1 212 892 - see amended entries in Table 1. False negative identification
rates have increased.
. We have added images to the probe set used in the visa-kiosk benchmark. It was previously comprised
of 21 016 mates and the same number of non-mates; it now has 31 579 mates and 45 460 non-mates -
see amended entries in Table 1. False negative identification rates are improved (reduced) slightly.

2022-06-08: The 1:N track of the FRVT remains open.

. This document is the seventeenth draft update to NIST Interagency Report 8271. It includes results for
algorithms submitted by three first-time participants: Digidata, DiluSense Technology, and Vietnam
Posts and Telecommunications Group.
. The document also includes results for algorithms from five returning developers: Canon Inc, Imagus
Technology, Neurotechnology, Thales, and Samsung S1.

2022-04-28: The 1:N track of the FRVT remains open.

. This document is the sixteenth draft update to NIST Interagency Report 8271. It includes results for
algorithms submitted by one first-time participant: Hangzhuo AIlu Network Information Technology.
. The document also includes results for algorithms from three returning developers: HyperVerge Inc,
Qnap Security, and Realnetworks Inc.
. The 1:N results page has been updated.


2022-03-30: The 1:N track of the FRVT remains open.

. This document is the sixteenth draft update to NIST Interagency Report 8271. It includes results for
algorithms submitted by two first-time participants: Intellivision, and Pangiam.
. The document also includes results for algorithms from three returning developers: Fujitsu Research
and Development Center, Idemia, and Gorilla Technology.
. The 1:N results page has been updated.

2022-02-23: The 1:N track of the FRVT remains open.

. This document is the fifteenth draft update to NIST Interagency Report 8271. It includes results for al-
gorithms submitted by four first-time participants: Cloudwalk - Moontime Smart Technology, Decatur
Industries Inc, NotionTag Technologies Private Limited, and Reveal Media Ltd.

. The document also includes results for algorithms from three returning developers: Cognitec Systems
GmbH, Sensetime Group, and Viettel Group.
. The 1:N results page has been updated.

2022-01-20: The 1:N track of the FRVT remains open.

. This document is the fourteenth draft update to NIST Interagency Report 8271. It includes results for
algorithms recently submitted by two first-time participants: Daon and SQIsoft.
. The document also includes results for algorithms from five returning developers: Cyberlink Corp,
NEC, Neurotechnology, Paravision, and Rank One Computing.
. The 1:N results page has been updated.

2021-12-16: The 1:N track of the FRVT remains open.

. This document is the thirteenth draft update to NIST Interagency Report 8271. It includes results for
algorithms from six returning developers: Dahua Technology, Imagus Technology, Line Corporation,
N-Tech Lab, Qnap Security, and Realnetworks Inc.
. The 1:N results page has been updated.

2021-11-22: The 1:N track of the FRVT remains open.

. This document is the twelfth draft update to NIST Interagency Report 8271. It includes results for algo-
rithms recently submitted by three first-time participants Clearview AI, Griaule, and Mantra Softech
India.
. This document and the 1:N results page also include results for algorithms from six returning devel-
opers: Acer Incorporated, Canon, Dermalog, Samsung S1, VisionLabs, and Veridas Digital Authenti-
cation.

2021-10-28: The 1:N track of the FRVT remains open.

. This document is the eleventh draft update to NIST Interagency Report 8271. It includes results for
algorithms recently submitted by three first-time participants (20Face, Fujitsu Research and Develop-
ment Center, and Vision-Box), and five returning participants (Alchera, Gorilla Technology, Tevian,
Thales-Cogent, and Visidon).
. Both the main 1:N results page and the small-gallery paperless travel page have been updated.

2021-09-21: The 1:N track of the FRVT remains open. Three news items:

. This document is the tenth draft update to NIST Interagency Report 8271. It includes results for al-
gorithms recently submitted by six first-time developers: Cubox, Fincore, HyperVerge, Qnap Security,
Staqu Technologies, and Tripleize (Aize, 3-ize).


. It includes results also for five returning developers: Cognitec Systems, Incode Technologies, Inno-
vatrics, Neurotechnology, and Rank One Computing.

2021-08-02: The 1:N track of the FRVT remains open. Three news items:

. This document is the ninth draft update to NIST Interagency Report 8271. It includes results for
algorithms recently submitted by eight participants: Cyberlink Corp, NEC Corp, N-Tech Lab, Realnet-
works Inc., Sensetime Group, Veridas Digital, Viettel Group, and Vigilant Solutions.
. Algorithms submitted since July 24 will be included in the next update scheduled for September 9,
2021.
. A new report, NIST Interagency Report 8381 - FRVT Part 7: Identification for Paperless Travel and Im-
migration, has been released [PDF, webpage]. It documents the use of FRVT 1:N algorithms in positive
access control and immigration status update travel applications where the enrolled population size
is as low as 420 people for aircraft boarding, and 42 000 for an airport security line. These population
sizes are much smaller than those used in the main 1:N evaluation. Going forward, we will update the
report and webpage with results for new algorithms.

2021-07-07: The 1:N track of the FRVT remains open. One update:

. This document is the eighth draft update to NIST Interagency Report 8271. It includes results for an
algorithm from one participant: Kakao Enterprises.

2021-06-22: The 1:N track of the FRVT remains open. Three updates:

. This is the seventh draft of the update to NIST Interagency Report 8271. It includes results for algo-
rithms from three new participants: Line Corporation, Rendip, and Samsung S1 Corp.
. We have also added results for algorithms from five returning developers: Imagus Technology, Kneron,
Tevian, Visidon, and Xforward AI Technology.
. The algorithm-specific report cards (examples: 1, 2, and 3) now include figures showing how low
threshold values can be used to reduce candidate list lengths for human review, while (usually) elevat-
ing miss rates (FNIR) only modestly. The reports also feature some minor additions and clarifications.

2021-03-26: The 1:N track of the FRVT remains open. Three updates:

. This is the sixth draft of the update to NIST Interagency Report 8271. It includes results for algorithms
from three returning developers: Neurotechnology, Guangzhou Pixel Solutions, and Tech5 SA.
. We have added results on the webpage and in the report for a new ageing dataset in which border
crossing photos are searched against a gallery of border crossing photos collected between 10 and 15
years prior to the mated search photos. See section 2 for a description of the images. Table 1 has a new
entry describing the experiment.
. We will mostly discontinue running the mugshot ageing test, reserving it for algorithms that show
high accuracy on the new border-crossing set.

2021-03-26: Regarding the fifth draft of the update to NIST Interagency Report 8271:

. We have added results for first algorithms from two new participants: Viettel Group and
Veridas Digital Authentication Solutions.
. We have added results for algorithms from two returning developers: Idemia and Cognitec Systems.
. In addition to the report, the results page and its hyperlinked report cards have been updated.

2021-02-08: Regarding the fourth draft of the update to NIST Interagency Report 8271:


. We have added results for eight algorithms submitted by eight developers: Cyberlink, Dermalog,
Imagus, Paravision, Sensetime, Trueface, Vigilant Solutions, and X-Forward AI. With the exception of
Trueface, all of these developers have participated previously.
. We anticipate updating this report again in the first week of March 2021.
. The main results page has been revised with tabs for the investigative and lights-out identification
tables, and a new tab dedicated to speed and resource consumption.
. The report cards (example here) hyperlinked from the results page have been revised to improve con-
tent and format.
2020-12-14: Regarding the third draft of the update to NIST Interagency Report 8271:
. We have added results for fifteen algorithms submitted by thirteen developers. The four first-time
participants are: Acer, Akurat Satu Indonesia, Canon, and Xforward AI Technology. The ten return-
ing developers are: AllGoVision, Cyberlink Corp, Dahua Technology, Deepglint, Guangzhou Pixel
Solutions, IIT Vision, Innovatrics, Rank One Computing, Scanovate, Sensetime Group, Synesis, and
VisionLabs.
. We have added two new datasets to the evaluation: First a set of “visa-border” photos, representing
search of an airport immigration lane photo against a database of closely ISO-conformant portraits; sec-
ond a “visa-kiosk” set representing search of a photo collected in a registered traveller kiosk against
the same ISO portrait gallery. The images are described in section 2.1.
. As in previous reports, we include results for searching mugshots against a mugshot gallery containing
a single image of each of 12 million people. However, we have suspended running searches against
a gallery in which multiple lifetime photos per person are present, because this is computationally
expensive. We retain an N = 3 million search test dedicated to ageing in which mugshots taken up to 18
years after the first photograph are searched - see Table 7.
. Tables containing computational resource information, Table 2 onwards, now include the duration of the final-
ization step, in which search algorithms can, at their option, build fast-search data structures.
. We have linked revised per-algorithm PDF report cards from the main results page.
. We have regenerated all figures and tables to drop algorithms submitted before June 2018. Results for
prior algorithms appear in archived editions of this report.
. Going forward, we anticipate producing more frequent updates to this report. Developers may submit
one algorithm to this evaluation every four calendar months.
2020-03-24: Regarding the second draft of the update to NIST Interagency Report 8271:
. Adds results for three algorithms from three developers, Dermalog, Innovatrics, and Synesis.
. Adds Table 7 on ageing showing the increase in false negative rates with time elapsed between two
photos. Some of the results were contained in graphs in prior editions of this report, but the table adds
results for some newly submitted algorithms.
. Adjusts frontal mugshot results (for recent and lifetime consolidated galleries) to include the effect
of removing some images that should not have been included in image test sets. These images were
mostly profile views, images of tattoos containing faces, images of faces on tee shirts, and images of
photographs on walls behind the intended subject. This affects many tables and reduces false negative
identification rates for all algorithms. The reduction is larger for “recent” enrollments than for “lifetime
consolidated” ones with the consequence that accuracy on recent images is now superior.
2020-02-26: Regarding the first draft of the update to NIST Interagency Report 8271:
. Adds results for 38 algorithms from 31 different developers, eleven of whom are entirely new to the
1:N track of FRVT. These are Allgovision, Cyberlink, Deepsea Tencent, Farbar F8, Imperial College
London, Intsys MSU, Kedacom, Kneron, Pixelall, and Scanovate.


DISCLAIMER
Specific hardware and software products identified in this report were used in order to perform the evalua-
tions described in this document. In no case does identification of any commercial product, trade name, or
vendor, imply recommendation or endorsement by the National Institute of Standards and Technology, nor
does it imply that the products and equipment identified are necessarily the best available for the purpose.

INSTITUTIONAL REVIEW BOARD


The National Institute of Standards and Technology’s Research Protections Office reviewed the protocol
for this project and determined it is not human subjects research as defined in Department of Commerce
Regulations, 15 CFR 27, also known as the Common Rule for the Protection of Human Subjects (45 CFR 46,
Subpart A).

ACKNOWLEDGMENTS
The authors are grateful for the support and collaboration of the Department of Homeland Security’s
Science & Technology Directorate (S&T), Office of Biometric Identity Management (OBIM), and Customs
and Border Protection (CBP).
Additionally, the authors are grateful to staff in the NIST Biometrics Research Laboratory for infrastructure
supporting rapid evaluation of algorithms.


Executive Summary
This document is a draft revision of the September 2019 report NIST Interagency Report 8271. That report gave ex-
tensive documentation of face recognition applied to mugshots. This report extends that by adding two more
challenging datasets containing images with serious departures from canonical frontal image standards. The report
also adds results for algorithms submitted to NIST in 2019 and 2020. The algorithms, which implement one-to-
many identification of faces appearing in two-dimensional images, are prototypes from the research and development
laboratories of mostly commercial suppliers, and are submitted to NIST as compiled black-box libraries implementing
a NIST-specified C++ test interface. The report therefore does not describe how algorithms operate. The report lists
accuracy results alongside developer names and will therefore be useful for comparison of face recognition algorithms
and assessment of absolute capability. The report is accompanied by a webpage with sortable results.
The evaluation uses six datasets: frontal mugshots, profile view mugshots, desktop webcam photos, visa-like immigra-
tion application photos, immigration lane photos, and registered traveler kiosk photos. These datasets are sequestered
at NIST, meaning that developers do not have access to them for training or testing. This aspect is important because
face recognition algorithms are very often deployed without the developer having access to the customer’s image data.
A possible exception to this would be in a cloud-based application where the operational image data is uploaded to a
cloud operated by a face recognition developer.
The major result in NIST IR 8271 was that massive gains in accuracy have been achieved in the years 2013 to 2018
and these far exceed improvements made in the prior period, 2010 to 2013. While the industry gains were broad - at
least 30 developers’ algorithms outperformed the most accurate algorithm from late 2013 - there remains a wide range of
capability. While this report shows accuracy gains only over the period 2018-2020, the most accurate algorithm reported
here is substantially more accurate than anything reported in NIST IR 8271. This is evidence that face recognition
development continues apace, and that FRVT reports are but a snapshot of contemporary capability.
From discussion with developers, the accuracy gains stem from the adoption of deep convolutional neural networks.
As such, face recognition has undergone an industrial revolution, with algorithms increasingly tolerant of poorly illu-
minated and other low quality images, and poorly posed subjects. One related result is that a few algorithms correctly
match side-view photographs to galleries of frontal photos, with search accuracy approaching that of the best c. 2010
algorithms operating on purely frontal images. The capability to recognize under a 90-degree change in viewpoint -
pose invariance - has been a long-sought milestone in face recognition research.
With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries
containing 12 million individuals, with rank one miss rates approaching 0.1%. The remaining errors are in large
part attributable to long-run ageing, facial injury and poor image quality. Given this impressive achievement - close to
perfect recognition - an advocate might claim that cooperative face recognition is a solved problem, a statement that
can be refuted with the following context and caveats:

. Mugshots vs. less constrained captures: The low error rates reported here are attained using mostly excel-
lent cooperative live-capture mugshot images collected with an attendant present. Recognition in other circum-
stances, particularly those without a dedicated photographic environment and human or automated quality con-
trol checks, will lead to declines in accuracy. This is documented here for side-view images, poorer quality we-
bcam images, and, particularly, for newly introduced ATM-style kiosk photos that were not originally intended
for automated face recognition. In this case, recognition error rates are much higher, often in excess of 20% even
with the more accurate algorithms which variously remain intolerant of face cropping (at image edge) and of
large downward head pitch.

. Algorithm accuracy spectrum: Recognition accuracy is very strongly dependent on the algorithm and, more


[Figure 1 plot omitted from this text rendering. Axes: false negative identification rate, FNIR(T), versus false positive identification rate, FPIR(T), with N = 12 000 000 enrolled. Annotations mark regions associated with the same photo under two IDs, the same person under two IDs, twins, siblings, and lookalikes, and note that identification applications seldom use human review while investigational applications always do. Algorithms shown: cloudwalk_mt_000, dahua_004, deepglint_001, idemia_009, innovatrics_007, microsoft_6, nec_005, neurotechnology_010, ntechlab_010, paravision_009, rankone_012, sensetime_007, visionlabs_011, xforwardai_002, yitu_5.]

Figure 1: Identification miss rates across the false positive range. N = 12 million individuals are enrolled with one recent image.

generally, on the developer of the algorithm. False negative error rates in a particular scenario range from a few
tenths of one percent to beyond fifty percent. This is tabulated exhaustively later: for example, Table 11 shows
accuracy across datasets. Figure 1 here compares algorithms on mugshot searches in a consolidated gallery of
12 million subjects and 12 million photos. Many algorithms do not achieve the low error rates noted above, and
while many of those may still be useful and valuable to end-users, only the most accurate excel on poor quality
images and those collected long after the initial enrollment sample.

. Versioning: While results for up to ten algorithms from each developer are reported here, the intra-provider


accuracy variations are usually smaller than the inter-provider variations. That said, different versions can give an
order of magnitude fewer misses. Some developers demonstrate speed-accuracy tradeoffs1. See Figs. 18, 19.

. Low similarity scores: In thousands of mugshot cases the correct gallery image is returned at rank 1 but its
similarity score is nevertheless low, below some operationally required score threshold. This is not so important
when face recognition is used for “lead generation” in investigational applications because human reviewers are
specifically required to review potentially long candidate lists and the threshold is effectively 0. In applications
where search volumes are higher and labor is not available to review the results from searches, a higher threshold
must be applied. This reduces the length of candidate lists and false positive identification rates at the expense
of increased false negative miss rates. The tradeoff between the two error rates is reported extensively later.

. Population size: As the number of enrolled subjects grows, some mates are displaced from rank one, decreasing
accuracy. As tabulated later for N up to 12 million, false negative rates generally rise slowly with population
size. This enables use of face recognition in very large populations. However in most positive and negative
identification applications2 , a score threshold is set to limit the rate at which non-mate searches produce false
positives. This has the consequence that some mated searches will report the mate below threshold, i.e. a miss,
even if it is at rank 1. The utility of this is that many non-mated searches will return no candidate identities at
all. As the error-tradeoff characteristic shows, investigational miss rates on the right side are very low but then
rise steadily (in the center region) as threshold is increased to support “lights-out” applications, and ultimately
rise quickly (left side) as discussed below. Thus, if we demand that just one in one thousand non-mate searches
produce any false positives, the most accurate algorithms there (Sensetime-004 and NEC-3) would fail on between
3 and 5% of mated searches. Even though the graph shows results for the most accurate algorithms, all but two
would fail to find the mate in more than 8% of mated searches. The two most accurate algorithms produce
a relatively flat error tradeoff until the threshold is raised to limit false positives to about 1 in 400 non-mated
searches3 (the arithmetic behind that figure, given in footnote 3, is sketched in code just after this bullet list).
Thereafter, as the threshold is raised to further reduce false positives, miss rates rise rapidly. This means that low
false positive identification rates are inaccessible with these algorithms, a result that does not apply for ten-finger
identification algorithms. The rapid rise occurs because the lower mate scores are mixed with very high non-mate
scores, the low scores from poor image quality and ageing, the high non-mates from the presence of lookalike
persons (doppelgangers), twins (discussed next) and, ultimately, the presence of a few unconsolidated subjects
i.e. persons present under multiple IDs.

. False negatives from ageing: A large source of error in long-run applications where subjects are not re-enrolled
on a set schedule is ageing. Changes in facial appearance increase with the time elapsed between photographs.
These will depress similarity scores and eventually cause false negatives. All faces age and while this usually
proceeds in a graceful and progressive manner, drug use can accelerate this [28]. Elective surgery may be effective
in delaying it although this has not been formally quantified with face recognition. As ageing is essentially
unavoidable, it can only be mitigated by scheduled re-capture, as in passport re-issuance. To quantify ageing
effects, we used the more accurate algorithms to enroll the earliest image of 3.1 million adults and then search
1 For example, NEC-0 prepares templates much faster than NEC-2 but gives twenty times more misses. Dermalog-5 executes a template search much
more quickly than Dermalog-6 but is also much less accurate.
2 In a positive identification application such as a registered traveler system, a user is making an implicit claim to be enrolled in the system - most
users will be. In a negative application, such as with deportees, the implicit claim is that the subject is not enrolled - most will not be.
3 The gallery size here is 12 million people, one image per person. Given 331 201 non-mated searches, an exhaustive implementation of one-to-many
search would execute almost 4 trillion comparisons. At a false positive identification rate of 0.0025 the number of false positives is, to first order,
828, corresponding to a single-comparison false match rate of 828 / 4 trillion = 2.1 × 10⁻¹⁰, i.e. about 1 in 5 billion. Strictly, this FMR computation is
meaningful only for algorithms that implement 1:N search using N 1:1 comparisons, which is not always the case.


with 10.3 million newer photos taken up to 18 years after the initial enrollment photo. Figure 2 puts ageing
into context by contrasting it with the increase in false negatives that occurs when the number of individuals in
an enrollment database becomes larger and the chance of a false positive increases such that higher thresholds
may become necessary4 .
The Figure shows, from top to bottom, increases in false negative identification rates (FNIR) with the algorithm
being tested. This applies to increases due to N on the left side, and increases due to ageing on the right side.
The relative spacing of the dots shows that for all algorithms the dependency of FNIR on N (up to 12 million) is
considerably less than on ∆T (up to 18 years).
In the inset table, accuracy is seen to degrade progressively with time, as mate scores decline and non-mates dis-
place mates from rank 1 position. More accurate algorithms tend to be less sensitive to ageing. The more accurate
algorithms give fewer errors after 18 years of ageing than middle tier algorithms give after four. Note also we do
not quantify an ageing rate - more formal methods [2] borrowed from the longitudinal analysis literature have
been published for doing so (given suitable repeated measures data). See Figures 60, 88 and 102.

4 Some algorithms implement strategies to automatically adjust scores to account for increased population size. This relieves the system owner of
having to increase thresholds as N increases.


[Figure 2 plot omitted from this text rendering. One row per algorithm (realnetworks_003 through cloudwalk_mt_000); horizontal axis: false negative identification rate (FNIR) at false positive identification rate (FPIR) = 0.01. Extending left of centre: the effect of enrolled population size, N increasing from 640 000 to 12 000 000, with mean time lapse about 4.5 years. Extending right of centre: the effect of time lapse, 1 < ∆T < 18 years in bins from (0,2] to (14,18], with N fixed at 3 068 801 and roughly 154 000 probes.]

Figure 2: Identification miss rates as a function of enrolled population size, N, and time-lapse, ∆T.

[Figure 3 plot omitted from this text rendering. Vertical axis: similarity score (approximately 15.0 to 17.0), with threshold lines TVAL at FPIR = 0.001, 0.003, 0.010 and 0.030. Horizontal axis: type of twin pairing - AA fraternal same-sex, AA identical same-sex, AB fraternal different-sex, AB fraternal same-sex, AB identical same-sex. Gallery: Twin A; Probe: Twin A or B.]

Figure 3: Intra- and inter-twin scores

. False positives from twins: By enrolling 640 000 mugshots, adding photos of one twin, and then searching photos
of those subjects and their twin, the inset figure shows, for one typical algorithm, that the similarity is generally greater
when searching twins against themselves (A) than when searching twins against their sibling (B) but very often
still above even stringent thresholds i.e. those corresponding to one in one thousand searches producing a false
positive. Thus twins will very often produce a high-scoring non-match on a candidate list and a false alarm in
an online identification system. The plot of Fig. 3 shows that fraternal twins are sometimes correctly rejected at
those thresholds - including most different sex twins (at center). Figure ?? shows substantially similar behavior
for all algorithms tested. In an investigative search, a twin would typically appear at rank 1, or rank 2 if their
sibling happened to also be in the gallery. Twins (and triplets etc.) constituted 3.3% of all live births [17] in recent
years5 , and because that number is higher today than when the individuals in current adult databases were born,
the false positives that arise from twins are now, and will increasingly be, an operational problem. Relative to the
United States, twins are born with considerable regional variation. For example they are much less common in
East Asia, and much more common in Sub-Saharan Africa [21].
The presence of twins in the mugshot database is inevitable given its size, around 12.3 million people. As this
is not an insignificant sample of the domestic United States population, people with other familial ties will be
present also. The data was collected over an extended period and because location information is not available,
we are unable to estimate the proportion of the domestic population that is present in the dataset. However, if
we assume twins are neither more nor less disposed to arrest than the general population, we can estimate that
hundreds of thousands of individuals in the dataset are twins (roughly 3.3% of 12.3 million is of the order of 400 000). This will affect false positive rates because we
randomly set aside 331 201 individuals for nonmate searches, and some proportion of those will be twins with
siblings in the gallery.

. Database integrity: An operational error rate should be added to all false negative rates in this report reflecting
the proportion of images in a real database that are un-matchable. Such anomalies arise from images that: do not
contain a face; include multiple persons; cannot be decoded; are rotated by 90◦ or 180◦ ; depict a face on clothing;
and others introduced by a long tail of various clerical errors. While the mugshot trials in this report have been
constructed to minimize such effects, they are a real problem in actual operations.
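
For readers who want to check the back-of-envelope conversion in footnote 3 from a false positive identification rate to an approximate per-comparison false match rate, a minimal sketch follows (Python, using the counts quoted in that footnote; the same caveat applies - the conversion is meaningful only if the 1:N search is implemented as N independent 1:1 comparisons).

# Back-of-envelope FPIR -> FMR conversion using the counts quoted in footnote 3.
# Only meaningful for algorithms that implement 1:N search as N independent
# 1:1 comparisons, which is not always the case.

N_enrolled = 12_000_000       # gallery: one image per person
nonmated_searches = 331_201   # searches with no enrolled mate
fpir = 0.0025                 # false positive identification rate (about 1 in 400)

comparisons = N_enrolled * nonmated_searches   # ~3.97e12, "almost 4 trillion"
false_positives = fpir * nonmated_searches     # ~828 searches raise a false alarm
fmr = false_positives / comparisons            # per-comparison false match rate

print(f"comparisons     ~ {comparisons:.2e}")
print(f"false positives ~ {false_positives:.0f}")
print(f"FMR             ~ {fmr:.1e} (about 1 in {1 / fmr:.1e})")  # ~2.1e-10, ~1 in 5 billion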

This report is being updated continuously as new algorithms are submitted to FRVT, and run on new datasets. Par-
ticipation in the one-to-many identification track is independent of participation in the one-to-one verification track of
FRVT.

5 See the CDC’s National Vital Statistics Report for 2017: https://www.cdc.gov/nchs/data/nvsr/nvsr67/nvsr67_08-508.pdf


Scope and Context


Audience: This report is intended for developers, integrators, end users, policy makers and others who have some
familiarity with biometrics applications. The methods and metrics documented here will be of interest to organizations
engaged in tests of face recognition algorithms. Some of these have been incorporated in the ISO/IEC 19795 Part 1
Biometric Testing and Reporting Framework standard, now nearing publication.
Prior benchmarks: Automated face recognition accuracy has improved massively in the two decades since initial com-
mercialization of the various technologies. NIST has tracked that improvement through its conduct of regular inde-
pendent, free, open, and public evaluations. These have fostered improvements in the state of the art. This report
serves as an update to the NIST Interagency Report 8271 on performance of face identification algorithms, published
in September 2019.

Demographics: In December 2019, NIST published a first report on demographic dependencies in face recognition,
NIST Interagency Report 8280 that documented age, sex and race differentials in one-to-one and one-to-many false
positive and false negative rates.
Scope: NIST IR 8271 documented recognition results for four databases containing in excess of 30.2 million still pho-
tographs of 14.4 million individuals. That constituted the largest public and independent evaluation of face recognition
ever conducted. It includes results for accuracy, speed, investigative vs. identification applications, scalability to large
populations, use of multiple images per person, and images of cooperative and non-cooperative subjects.
The report also includes results for ageing, recognition of twins, and recognition of profile-view images against frontal
galleries. It otherwise does not address causes of recognition failure, neither image-specific problems nor subject-
specific factors including demographics. Separate reports on demographic dependencies in face recognition will be
published in the future. Additionally out of scope are: performance of live human-in-the-loop transactional systems
like automated border control gates; human recognition accuracy as used in forensic applications; and recognition of
persons in video sequences (which NIST evaluated separately [9]). Some of those applications share core matching
technologies that are tested in this report.
Images: Six kinds of images are employed; these are either compared with images of the same kind, or against others
from different capture environments as follows. The primary dataset is a set of law enforcement mugshot images (Fig.
5) which are enrolled and then searched with three kinds of images: other mugshots (i.e. within-domain); profile-
view photographs (90 degree cross-view); and lower quality webcam images (Fig. 6) collected in similar detention
operations (cross-domain). Additionally we compare high quality visa-like photos collected in immigration offices,
with: medium quality border crossing images collected in primary immigration lanes; poor quality images collected in
ATM-like registered traveller kiosks.
Participation and industry coverage: The report includes performance figures for prototype algorithms from the re-
search laboratories of commercial developers and a few universities. This represents a substantial majority of the face
recognition industry, but only a tiny minority of the academic community. Participation was open worldwide. While
there is no charge for participation, developers incur some software engineering expense in implementing their algo-
rithms behind the NIST application programming interface (API). The test is a black-box test where the function of the
algorithm, and the intellectual property associated with it, is hidden inside pre-compiled libraries.
Recent technology development: Most face recognition research with deep convolutional neural networks (CNNs) has
been aimed at achieving invariance to pose, illumination and expression variations that characterize photojournalism
and social media images. The initial research [18, 22] employed large numbers of images of relatively few (∼ 10⁴)
individuals to learn invariance. Inevitably much larger populations (∼ 10⁷) were employed for training [11, 20] but
the benchmark, Labeled Faces in the Wild with (essentially) an equal error rate metric [12], represents an easy task,


one-to-one verification at very high false match rates. While a larger scale identification benchmark duly followed,
Megaface [15], its primary metric, rank one hit rate, contrasts with the high threshold discrimination task required
in most large-population applications of face recognition, namely credential de-duplication, and background checks.
There, identification in galleries containing up to 10⁸ individuals must be performed using a) very few images per
individual and b) stringent thresholds to afford very low false positive identification rates. This track of FRVT was
launched to measure the capability of the new technologies, including in these two cases. FRVT has included open-set
identification tests since 2002, reporting both false negative and positive identification rates [7].

[Schematic figure omitted from this text rendering. It depicts one-to-many search: an enrollment database consisting of images and any biographic data (the algorithm is given the images and a pointer to each record); face detection and localization followed by feature extraction, e.g. a CNN model, applied to both the enrolled images and the search photo; an enrolled database held in an array, tree, index or other data structure; and a search algorithm, e.g. N comparisons. The automated face recognition engine is evaluated as a black box and is the scope of NIST’s evaluation. The output is a candidate list (e.g. Pat, rank 1, score 3.142) whose length is determined by a preset configuration of rank and threshold, set to implement application objectives: in positive identification the correct response is usually a single entry, as most searches are mated; in negative identification the correct response is usually an empty list, as most searches are non-mated.]

Positive identification | Negative identification | Post-event investigation
Example: Access to a gym or cruise ship | Watchlist, e.g. detection of deportee or duplicate drivers license applications | Crime scene photos, or of detainee without ID documents
Claim of identity: Implicit claim to be enrolled | Implicit claim to not be enrolled | No claim: inquiry
Threshold: High, to implement security objective | High, to limit false positives | Zero
Num. candidates: 1 | 0 | L, set by request to algorithm
Human role: Review candidate to assist user in resolution of false negatives, or to detect impostor | Review candidate to determine false positive or correct hit | Review multiple candidates, refer possible hits to examiners, see [26]
Intended human involvement frequency: Rare - approx. the false negative identification rate plus prior probability of impostor | Rare - approx. the false positive identification rate plus prior probability of an actual mate | Always
Performance metric of interest: FNIR at low FPIR. See sec. 3.1, 3.2 and Tables 10, 19 | FNIR at low FPIR. See sec. 3.1, 3.2 and Tables 10, 19 | FNIR at ranks 1...50, say. FPIR = 1. See sec. 3.2 and Tables 12, 14, 16

Performance metrics for applications: This report documents the performance of one-to-many face recognition algo-
rithms. The word ”performance” here refers to recognition accuracy and computational resource usage, as measured


by executing those algorithms on massive sequestered datasets.


This report includes extensive tabulation of recognition error rates germane to the main use-cases for face search tech-
nology. The Figure above, inspired by Figure 1 in [23], differentiates the different applications of the technology. The last
row directs readers to the main tables relevant to those applications, respectively threshold-based and rank-based met-
rics that are special cases of the metrics given in section 3. The terms negative identification and positive identification
are taken from the ISO/IEC 2382-37:2017 standardized biometrics vocabulary.
The algorithms are specifically configured for these applications by setting thresholds and candidate list lengths. Both
rank-based metrics and threshold-based metrics include tradeoffs. In investigation, overall accuracy will be reduced
if labor is only available to review a few candidates from the automated system. Note that when a fixed number of
candidates are returned, the false positive identification rate of the automated face recognition engine will be 100%,
because a probe image of anyone not enrolled will still return candidates. In identification applications where false
positives must be limited to satisfy reviewer labor availability or a security objective, higher false negative rates are
implied. This report includes extensive quantification of this threshold-based tradeoff. See Sec. 3.
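
To make the two configurations concrete, here is a minimal sketch (Python, with made-up candidate data echoing the schematic figure above, not the FRVT test harness) of how the same returned candidate list is consumed in the investigational mode, where the threshold is effectively zero and up to L candidates are kept for human review, versus an identification mode, where a threshold T > 0 is applied and usually at most one candidate survives.

# Illustrative only: filtering a candidate list for the two modes of use described
# above. The entries are invented (they echo the schematic figure), not real results.

candidates = [                  # (enrollee ID, rank, similarity score), rank-ordered
    ("Pat",  1, 3.142),
    ("Bob",  2, 2.998),
    ("Ben",  3, 1.602),
    ("Zeke", 4, 0.707),
]

def investigational(cands, max_rank=50):
    """Threshold effectively zero: keep up to max_rank candidates for human review."""
    return cands[:max_rank]

def identification(cands, threshold):
    """Threshold T > 0: keep only candidates scoring at or above T."""
    return [c for c in cands if c[2] >= threshold]

print(investigational(candidates, max_rank=3))    # top three entries for a reviewer
print(identification(candidates, threshold=3.0))  # only "Pat" survives a high threshold
print(identification(candidates, threshold=5.0))  # empty list: the ideal non-mate outcome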
Template diversity: The FRVT is designed to evaluate black-box technologies with the consequence that the templates
that hold features extracted from face images are entirely proprietary opaque binary data that embed considerable
intellectual property of the developer. Despite migration to CNN-based technologies there is no consensus on the
optimal feature vector dimension. This is evidenced by template sizes ranging from below 100 bytes to more than four
kilobytes. This diversity of approaches suggests there is no prospect of a standard template, something that would
require a common feature set to be extracted from faces. Interoperability in automated face recognition remains solidly
based on images and documentary standards for those, in particular the ICAO portrait [27] specification deriving from
the ISO/IEC 19794-5 Token frontal [24] standard, which are similar to certain ANSI/NIST Type 10 [26] formats.
Training: The algorithms submitted to NIST have been developed using image datasets that developers do not disclose.
The development will often include application of machine learning techniques and will additionally involve iterative
training and testing cycles. NIST itself does not perform any training and does not refine or alter the algorithm in
any way. Thus the model, data files, and libraries that define an algorithm are fixed for the duration of the tests. This
reflects typical operational reality where recognition software, once installed, is fixed and constant until upgraded.
This situation persists because on-site training of algorithms on customer data is atypical, essentially because training
is not a turnkey process.
Automated search and human review: Virtually all applications using automated face search require human review of
the outputs at some frequency: Always for investigational applications; rarely in positive identification applications,
after rejection (false or otherwise); and rarely in negative identification applications, after an alarm (false or otherwise).
The human role is usually to compare a reference image with the query image or the live-subject if present, to render
either a definitive decision on “exclusion” (different subjects), or “identification” (same subject), or a declaration that
one or both images have “no value” and that no decision can be made. Note that automated face recognition algorithms
are not built to do exclusion - low scores from a face comparison arise from different faces and poor quality images of
the same face.
Human reviewers make recognition errors [5, 19, 25] and are sensitive to image acquisition and quality. Accurate
human review is supported by high resolution - as specified in the Type 50, 51 acquisition profiles of the ANSI/NIST
Type 10 record [26], and by multiple non-frontal views as specified in the same standard. These often afford views
of the ear. Organizations involved in image collection should consider supporting human adjudication by collecting
high-resolution frontal and non-frontal views, preparing low resolution versions for automated face recognition [24],
and retaining both for any subsequent resolution of candidate matches. Along these lines, the ISO/IEC Joint Technical


Committee 1 subcommittee 37 on biometrics has just initiated projects on image quality assessment and face-aware
capture.

Release Notes
FRVT Activities: Since February 2017, NIST has been evaluating one-to-one verification algorithms on an ongoing
basis. NIST then restarted FRVT’s one-to-many track in February 2018, inviting participants to send up to prototype
algorithms. Both tracks allow developers to submit updated algorithms to NIST at any time, but no more frequently
than once every four calendar months. This more closely aligns development and evaluation schedules. Results are posted
to the web within a few weeks of submission. Details and the full report are linked from the Ongoing FRVT site.
FRVT Reports: The results of the FRVT appear in the series NIST Interagency Reports tabulated below. The reports
were developed separately and released on different schedules. In prior years NIST has mostly reported FRVT results
as a single report; this had the disadvantage that results from completed sub-studies were not published until all other
studies were complete.

Date         Link   Title                                                                                            No.
2014-03-20   PDF    FRVT Performance of Automated Age Estimation Algorithms                                          7995
2014-05-21   PDF    FRVT Performance of face identification algorithms                                               8009
2015-04-20   PDF    Face Recognition Vendor Test (FRVT) Performance of Automated Gender Classification Algorithms    8052
2017-03-07   PDF    Face In Video Evaluation (FIVE) Face Recognition of Non-Cooperative Subjects                     8173
2017-11-23   PDF    The 2017 IARPA Face Recognition Prize Challenge (FRPC)                                           8197
2018-11-27   PDF    Face Recognition Vendor Test - Part 2: Identification                                            8271
2019-09-11   PDF    Face Recognition Vendor Test - Part 2: Identification                                            8271
2019-12-11   PDF    Face Recognition Vendor Test - Part 3: Demographic Effects                                       8280
2020-01-03   WWW    Face Recognition Vendor Test (FRVT) - Part 1 Verification                                        Draft

Details appear on pages linked from https://www.nist.gov/programs-projects/face-projects.


Appendices: This report is accompanied by appendices which present exhaustive results on a per-algorithm basis.
These are machine-generated and are included because the authors believe that visualization of such data is broadly
informative and vital to understanding the context of the report.
Typesetting: Virtually all of the tabulated content in this report was produced automatically. This involved the use
of scripting tools to generate directly type-settable LaTeX content. This improves timeliness, flexibility, maintainability,
and reduces transcription errors.
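
As a flavour of the scripting described above (illustrative only, not the actual FRVT tooling; the record fields and numbers here are hypothetical), a generator might emit directly type-settable LaTeX table rows like this:

# Hypothetical sketch of machine-generated LaTeX: turn result records into table rows.
# Field names and the example numbers are invented for illustration.

def latex_row(algorithm, fnir, fpir):
    """Format one result as a LaTeX tabular row; escape underscores in algorithm names."""
    name = algorithm.replace("_", r"\_")
    return f"{name} & {fnir:.4f} & {fpir:.3f} \\\\"

records = [("nec_005", 0.0023, 0.001), ("idemia_009", 0.0031, 0.001)]   # placeholder values
body = "\n".join(latex_row(*r) for r in records)
print("\\begin{tabular}{lrr}\nAlgorithm & FNIR & FPIR \\\\\n\\hline\n" + body + "\n\\end{tabular}")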
Graphics: Many of the Figures in this report were produced using the ggplot2 package running under R, the capabilities
of which extend beyond those evident in this document.


Contents
Release Notes 1

Disclaimer 5

Institutional Review Board 5

Acknowledgments 5

Executive Summary 6

Scope and Context 12

Release Notes 16

1 Introduction 18

2 Evaluation datasets 19

3 Performance metrics 25

4 Results 41

Appendices 82

A Accuracy on large-population FRVT 2018 mugshots 82

B Effect of time-lapse: Accuracy after face ageing 127

C Effect of enrolling multiple images 200

D Accuracy with poor quality webcam images 207

E Accuracy for profile-view to frontal recognition 217

F Search duration 221

G Gallery Insertion Timing 231


1 Introduction

One-to-many identification represents the largest market for face recognition technology. Algorithms are used across
the world in a diverse range of biometric applications: detection of duplicates in databases, detection of fraudulent
applications for credentials such as passports and driving licenses, token-less access control, surveillance, social media
tagging, lookalike discovery, criminal investigation, and forensic clustering.

This report contains a breadth of performance measurements relevant to many applications. Performance here refers
to accuracy and resource consumption. In most applications, the core accuracy of a facial recognition algorithm is
the most important performance variable. Resource consumption is also important, as it drives the amount of
hardware, power, and cooling necessary to accommodate high-volume workflows. Algorithms consume processing
time, they require computer memory, and their static template data requires storage space. This report documents
these variables.

1.1 Open-set searches

FRVT tested open-set identification algorithms. Real-world applications are almost always “open-set”, meaning that
some searches have an enrolled mate, but some do not. For example, some subjects have truly not been issued a visa
or driver's license before; some law enforcement searches are from first-time arrestees6. In an “open-set” application,
algorithms make no prior assumption about whether or not to return a high-scoring result. For a mated search, the
ideal behavior is that the search produces the correct mate at high score and first rank. For a non-mate search, the
ideal behavior is that the search produces zero high-scoring candidates.

Many academic benchmarks execute only closed-set searches, where the proportion of mates found in the rank one position
is the default accuracy metric. This hit-rate metric ignores the score with which a mate is found; weak hits count
as much as strong hits. It thereby ignores the real-world imperative that in many applications it is necessary to elevate a
threshold to reduce the number of false positives.

6 Operationally closed-set applications are rare because it is usually not the case that all searches have an enrolled mate. One counter-example,
however, is a cruise ship in which all passengers are enrolled and all searches should produce exactly one identity. Another example is forensic
identification of dental records from an aircraft crash.


2 Evaluation datasets

This report documents accuracy for four kinds of images - mugshots, webcam, profiles and wild - as described in the
following sections.

2.1 Immigration-related images

This report includes benchmark tests sharing a common enrollment of high-quality frontal portrait images collected
while subjects make applications for various immigration benefits. We then search that enrollment with two kinds of
images: webcam images collected during in-bound immigration, and images collected from registered travelers using
an ATM-style kiosk. These are described below and depicted in Figure 4.

Figure 4: Example photos: (a) Application, (b) Immigration lane, (c) Kiosk.

. Application reference photos: The images are collected in an attended interview setting using dedicated capture
equipment and lighting. The images, at size 300x300 pixels, are smaller than normally indicated by ISO. The
images are all high-quality frontal portraits collected in immigration offices against a white background. As
such, potential quality-related drivers of high false match rates (such as blur) can be expected to be absent. The
images are encoded as ISO/IEC 10918-1, i.e. JPEG. Older images had a compression ratio of about 16:1, while
newer images, since 2010, are more lightly compressed at 4:1. When these images are provided as input into the
algorithm, they are labeled with the type “iso”. This report enrolls 1 600 000 application images, one per person.

. Border crossing photos: Most images have width 320 pixels and height 240 pixels. They are JPEG compressed at
16:1, i.e. filesize just below 15KB. The images present challenges for face recognition in that subjects often exhibit
non-zero yaw and pitch (associated with the rotational degrees of freedom of the camera mount), low contrast
(due to varying and intense background lights), and poor spatial resolution (due to inexpensive cameras). There
are often subjects standing in the background, usually at very low resolution (see Figure 4b). In such cases,
algorithms should detect all faces and determine which is the largest and most centered. When these images are
provided as input into the algorithm, they are labeled with the type “wild”.

. Kiosk photos: These photos were collected from subjects whose attention was focused on interaction with an
immigration kiosk. The images were not intended for use with automated face recognition. The camera is situated
above a display which the user touches, and is triggered either without directing the subject to look at it, or
without waiting for the subject to comply. The images are therefore characterized by pitch-down pose, sometimes
exceeding 45 degrees, as in Figure 4c. Yaw-angle variation is mild, with most images close to frontal. The images
have width 320 pixels and height 240 pixels, and therefore tall individuals are sometimes cropped. This is often
just above the eyes and can occur at the nose or mouth. Conversely, short individuals are sometimes cropped
such that only the top part of the face is visible. In a small number of cases, there are other subjects standing
just behind the primary subject, such that algorithms should detect all faces and determine which is the largest and
most centered. Background ceiling lighting is often visible and this sometimes leads to under-exposure of the
face. When these images are provided as input into the algorithm, they are labeled with the type “wild”.

2.2 Law enforcement images

The main mugshot dataset used is referred to as the FRVT 2018 set. This set was collected over the period 2002 to 2017
in routine United States law enforcement operations. This set yields three subsets:

. Mugshots: Mugshots comprise about 86% of the database. They have reasonable compliance with the ANSI/NIST
ITL 1-2011 Type 10 standard's subject acquisition profile levels 10-20 for frontal images [26]. The most common
departure from the standard's requirements is the presence of mild pose variations around frontal - the images
of Figure 5 are typical. The images vary in size, with many being 480x600 pixels, with JPEG compression applied
to produce filesizes of between 18 and 36KB with many images outside this range, implying that about 0.5 bits
are being encoded per pixel. When these images are provided as input into the algorithm, they are labeled with
the type “mugshot”. Example images appear in Fig. 5.
NIST Interagency Report 8238 includes a comparison of this set of mugshots with the smaller and easier sets of
mugshots used in tests run in 2010 and 2014.

. Profile images: Profile-view images have been collected in law enforcement for more than 100 years, as human
identification capability is improved with orthogonal information. The profile images used in this report were collected during
the same session as the frontal mugshot photograph, in the same standardized photographic setup. These would
not therefore be used with automated face recognition. A small subset, 200 000 images, was set aside for testing.
When these images are provided as input into the algorithm, they are labeled with the type “wild”. Example
images appear in Fig. 7.

. Webcam images: The remaining 14% of the images were collected using an inexpensive webcam attached to a
flexible operator-directed mount. These images are all of size 240x240 pixels and are in considerable violation of
most quality-related clauses of all face recognition standards. As evident in the figure, the most common defects
are non-frontal pose (associated with the rotational degrees of freedom of the camera mount), low contrast (due
to varying and intense background lights), and poor spatial resolution (due to inexpensive camera optics) - see
examples in Fig. 6. The images are heavily JPEG compressed, to between 4 and 7KB, implying that only 0.5 to 1 bits
are being encoded per color pixel. When these images are provided as input into the algorithm, they are labeled
with the type “wild”. Example images appear in Fig. 6.

These are drawn from NIST Special Database 32 which may be downloaded here.

These images were partitioned into galleries and probe sets for the various experiments listed in Table 1.


Figure 5: Six mated mugshot pairs representative of the FRVT-2014 (LEO) and FRVT-2018 datasets. The images are collected live,
i.e. not scanned from paper. Image source: NIST Special Database 32 the Multiple Encounter Deceased Subjects dataset.

Figure 6: Twelve webcam images representative of probes against the FRVT-2018 mugshot gallery. The first eight images are four
mated pairs. Such images present challenges to recognition including pose, non-uniform illumination, low contrast, compression,
cropping, and low spatial sampling rate. Image source: NIST Special Database 32 the Multiple Encounter Deceased Subjects dataset.

Figure 7: [Profile views] The three images are a frontal enrollment, subsequent frontal probe, and same-session ninety degree profile
view. While collection of both frontal and profile views has been typical in law enforcement for more than a century, the recognition
of profile against frontal views has essentially been impossible. However, reasonably high accuracy is now possible - see section E.


Image             Encounter 1    ...         Encounter Ki−1    Encounter Ki
Capture time      T1             ...         T(Ki−1)           T(Ki)
Role (RECENT)     Not used       Not used    Enrolled          Search
Role (LIFETIME)   Enrolled       Enrolled    Enrolled          Search

Figure 8: Depiction of the “recent” and “lifetime” enrollment types. Image source: NIST Special Database 32

2.3 Enrollment strategies



Many operational applications include collection and enrollment of biometric data from subjects on more than one
occasion. This might be done on a regular basis, as might occur in credential (re-)issuance, or irregularly, as might
happen in a criminal recidivist situation [4]. The number of images per person will depend on the application area.
In civil identity credentialing (e.g. passports, driver’s licenses), the images will be acquired approximately uniformly
over time (e.g. ten years for a passport). While the distribution of dates for such images of a person might be assumed
uniform, a number of factors might undermine this assumption7 . In criminal applications, the number of images
would depend on the number of arrests. The distribution of dates for arrest records for a person (i.e. the recidivism
distribution) has been modeled using the exponential distribution but is recognized to be more complicated8 .

In any case, the 2010 NIST evaluation of face recognition showed that considerable accuracy benefits accrue with
retention and use of all historical images [6].

To this end, the FRVT API document provides K ≥ 1 images of an individual to the enrollment software. The software
is tasked with producing a single proprietary undocumented “black-box” template9 from the K images. This affords
the algorithm an ability to generate a model of the individual, rather than to simply extract features from each image on
a sequential basis.

As depicted in Figure 8, the i-th individual in the FRVT 2018 dataset has Ki images. These are labelled as xk for
k = 1 . . . Ki in chronological order of capture date. To measure the utility of having multiple enrollment images, this
report evaluates three kinds of enrollment:

. Recent: Only the second most recent image, xKi −1 is enrolled. This strategy of enrollment mimics the operational
policy of retaining the imagery from the most recent encounter. This might be done operationally to ameliorate
the effects of face ageing. Obviously retaining only the most recent image should only be done if the identity
of the person is trusted to be correct. For example, in an access control situation retention of the most recent
successful authentication image would be hazardous if it could be a false positive.

. Lifetime-consolidated: All but the most recent image are enrolled, x1 . . . xKi −1 . This subject-centric strategy
might be adopted if quality variations exist where an older image might be more suitable for matching, despite
the ageing effect.
7 For example, a person might skip applying for a passport for one cycle, letting it expire. In addition, a person might submit identical images (from
the same photography session) to consecutive passport applications at five year intervals.
8 A number of distributions have been considered to model recidivism, see for example [3].
9 There are no formal face template standards. Template standards only exist for fingerprint minutiae - see ISO/IEC 19794-2:2011.


RECENT:
Num. people, N = 6; Num. images, M = 6.
For each of N enrollees, the algorithm is given only the most recent photo.
Operational situation: Typical when old images are not, or cannot be, retained, or (rarely) if prior images are too old to be valuable.
Accuracy computation: False negative unless the enrolled mate is returned within the top R ranks and at or above threshold.

LIFETIME, CONSOLIDATED:
Num. people, N = 6; Num. images, M = 9.
For each enrollee, the algorithm is given all photos from all historical encounters, and is able to fuse information from all images of a person.
Operational situation: Typical when, say, fingerprints are available and precise de-duplication is possible. The result is a consolidated person-centric database.
Accuracy computation: False negative unless the enrolled mate is returned within the top R ranks and at or above threshold.

LIFETIME, UNCONSOLIDATED:
Num. people, N = 6; Num. images, M = 9.
For each of N enrollees, the algorithm is given all photos from all historical encounters, but as separate images, so that the algorithm is not aware that some images are of the same ID.
Operational situation: Typical when ID is not known when an image is collected, or is uncertain. The result is an unconsolidated event-based database.
Accuracy computation: False negative unless any of the enrolled mates are returned within the top R ranks and at or above threshold.
Figure 9: Enrollment strategies. The figure shows the three kinds of enrollment databases examined in this report. Image source:
NIST Special Database 32


     ENROLLMENT                                                       SEARCH: MATE                     SEARCH: NON-MATE
     TYPE (SEC. 2.3)   POPULATION FILTER   N-SUBJECTS    N-IMAGES     N-SUBJECTS    N-IMAGES           N-SUBJECTS    N-IMAGES
Mugshot trials from enrollment of single images
1    RECENT            NATURAL             640 000       640 000      154 549       154 549            331 254       331 254
2    RECENT            NATURAL             1 600 000     1 600 000
3    RECENT            NATURAL             3 000 000     3 000 000
4    RECENT            NATURAL             6 000 000     6 000 000
5    RECENT            NATURAL             12 000 000    12 000 000
Cross-domain
13   MUGSHOTS          AS ON ROW 2                                    82 106        82 106             331 254       331 254
                                                                      (WEBCAM)      (WEBCAM)           (WEBCAM)      (WEBCAM)
Cross-view
14   MUGSHOTS          AS ON ROW 2                                    100 000       100 000            100 000       100 000
                                                                      (PROFILE)     (PROFILE)          (PROFILE)     (PROFILE)
Mugshot ageing
17   OLDEST            NATURAL             3 068 801     3 068 801    2 853 221     10 951 064         0             0
Border crossing ageing
18   OLDEST            NATURAL             1 600 000     1 600 000    903 655       1 922 393          1 393 076     1 680 000
Visa-border
19   PRIOR VISA        NATURAL             1 600 000     1 600 000    577 444       1 212 892          79 769        80 000
                                           (VISA)        (VISA)       (BORDER)      (BORDER)           (BORDER)      (BORDER)
20   VISA              AS ON ROW 18                                   14 004        31 579             42 474        45 460
                                                                      (BORDER)      (BORDER)           (BORDER)      (BORDER)

Table 1: Enrollment and search sets. Each row summarizes one identification trial. Unless stated otherwise, all entries refer to
mugshot images. The term “natural” means that subjects were selected without heed to demographics, i.e. in the distribution native
to this dataset. The probe images were collected in a different calendar year to the enrollment image. Missing values in rows 2-12
are the same as in row 1.

. Lifetime-unconsolidated: Again all but the most recent image are enrolled x1 . . . xKi −1 but now separately, with
different identifiers, such that the algorithm is not aware that the images are from the same face. This kind of
event- or encounter-centric enrollment is very common when operational constraints preclude reliable consolida-
tion of the historical encounters into a single identity. This aspect also prevents the recognition algorithm from a)
building a holistic model of identity (as is common in speaker recognition systems) and b) implementing fusion,
for example template-level fusion of feature vectors, or post-search score-level fusion. The result is that searches
will typically yield more than one image of a person in the top ranks. This has consequences for appropriate
metrics, as detailed in section 3.2.1.
NIST first evaluated this kind of enrollment in mid 2018, and the results tables include some comparison of
accuracy available from all three enrollment styles.

In all cases, the most recent image, xKi, is reserved as the search image. For the 1.6 million subject enrollment partition
of the FRVT 2018 data, 1 ≤ Ki ≤ 33, with Ki = 1 for 80.1% of the individuals, Ki = 2 for 13.4%, Ki = 3 for 3.7%, Ki = 4 for
1.4%, Ki = 5 for 0.6%, Ki = 6 for 0.3%, and Ki > 6 for the remaining 0.2%. This distribution is substantially dependent
on United States recidivism rates.

We did not evaluate the case of retaining only the highest quality image, since automated quality assessment is out
of scope for this report. We do not anticipate that such strategies will prove beneficial when the quality assessment
apparatus is imperfect and unvalidated.
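
To make the three enrollment constructions concrete, the following minimal sketch (Python, with hypothetical variable and function names; this is not the FRVT API) partitions per-subject image lists, ordered by capture date, into the three gallery styles, reserving the most recent image of each person as the probe.

# Minimal sketch, not the FRVT API: partition per-subject, chronologically ordered
# image lists into the three enrollment styles described above.
def build_galleries(encounters):
    # encounters: subject ID -> [x_1, ..., x_Ki], images sorted by capture date
    recent = {}           # RECENT: subject ID -> the single retained image x_{Ki-1}
    consolidated = {}     # LIFETIME consolidated: subject ID -> all of x_1 .. x_{Ki-1}
    unconsolidated = {}   # LIFETIME unconsolidated: synthetic per-image ID -> one image
    probes = {}           # reserved search image x_Ki for each subject
    for sid, images in encounters.items():
        if len(images) < 2:
            continue                      # subjects with a single image are skipped in this sketch
        *history, probe = images          # x_1 .. x_{Ki-1}, with x_Ki held out as the probe
        probes[sid] = probe
        recent[sid] = history[-1]
        consolidated[sid] = history
        for k, img in enumerate(history):
            unconsolidated[f"{sid}#{k}"] = img   # no link back to the subject ID is exposed
    return recent, consolidated, unconsolidated, probes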


3 Performance metrics

This section gives specific definitions for accuracy and timing metrics. Tests of open-set biometric algorithms must
quantify the frequency of two error conditions:

. False positives: Type I errors occur when search data from a person who has never been seen before is incorrectly
associated with one or more enrollees’ data.

. Misses: Type II errors arise when a search of an enrolled person’s biometric does not return the correct identity.

Many practitioners prefer to talk about “hit rates” instead of “miss rates” - the first is simply one minus the other as
detailed below. Sections 3.1 and 3.2 define metrics for the Type I and Type II performance variables.

Additionally, because recognition algorithms sometimes fail to produce a template from an image, or fail to execute a
one-to-many search, the occurrence of such events must be recorded. Further, because algorithms might elect not to
produce a template from, for example, a poor quality image, these failure rates must be combined with the recognition
error rates to support algorithm comparison. This is addressed in section 3.5.

Finally, section 3.7 discusses measurement of computation duration, and section 3.8 addresses the uncertainty associ-
ated with various measurements. Template size measurement is included with the results.

3.1 Quantifying false positives

It is typical for a search to be conducted into an enrolled population of N identities, and for the algorithm to be
configured to return the closest L candidate identities. These candidates are ranked by their score, in descending order,
with all scores required to be greater than or equal to zero. A human analyst might examine either all L candidates, or
just the top R ≤ L identities, or only those with score greater than threshold, T . The workload associated with such
examination is discussed later, in 3.6.

False alarm performance is quantified in two related ways. These express how many searches produce false positives,
and then, how many false positives are produced in a search.

False positive identification rate: The first quantity, FPIR, is the proportion of non-mate searches that produce an
adverse outcome:

FPIR(N, T) = (Num. non-mate searches where one or more enrolled candidates are returned with score at or above threshold) / (Num. non-mate searches attempted)    (1)
Under this definition, FPIR can be computed from the highest non-mate candidate produced in a search - it is not
necessary to consider candidates at rank 2 and above. FPIR is the primary measure of Type I errors in this report.

Selectivity: However, note that in any given search, several non-mates may be returned above threshold. In order to
quantify such events, a second quantity, selectivity (SEL), is defined as the number of non-mates returned on a candidate
list, averaged over all searches:

SEL(N, T) = (Num. non-mate enrolled candidates returned with score at or above threshold) / (Num. non-mate searches attempted)    (2)

where 0 ≤ SEL(N, T) ≤ L. Both of these metrics are useful operationally. FPIR is useful for targeting how often an
adverse false positive outcome can occur, while SEL is related to the workload associated with adjudicating
candidate lists. The relationship between the two quantities is complicated - it depends on whether an algorithm
concentrates the false alarms in the results of a few searches or whether it disperses them across many. This was
detailed in FRVT 2014, NISTIR 8009. It has not yet been detailed in FRVT 2018.
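
As an illustration of equations (1) and (2), the following minimal sketch (Python, with a hypothetical data layout) computes FPIR and SEL from the candidate scores returned by a set of non-mate searches.

# Sketch of FPIR (eq. 1) and selectivity SEL (eq. 2) at a single threshold T.
# nonmate_search_scores: one list of returned candidate scores per non-mate search.
def fpir(nonmate_search_scores, T):
    hits = sum(1 for scores in nonmate_search_scores if any(s >= T for s in scores))
    return hits / len(nonmate_search_scores)

def selectivity(nonmate_search_scores, T):
    total = sum(sum(1 for s in scores if s >= T) for scores in nonmate_search_scores)
    return total / len(nonmate_search_scores)

# Example: three non-mate searches, candidate lists of length L = 3, threshold T = 0.7
searches = [[0.91, 0.75, 0.40], [0.55, 0.30, 0.10], [0.72, 0.69, 0.20]]
print(fpir(searches, 0.7))         # 2/3: two of the three searches return a candidate at or above T
print(selectivity(searches, 0.7))  # (2 + 0 + 1) / 3 = 1.0 above-threshold candidates per search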

3.2 Quantifying hits and misses

If L candidates are returned in a search, a shorter candidate list can be prepared by taking the top R ≤ L candidates for
which the score is above some threshold, T ≥ 0. This reduction of the candidate list is done because thresholds may be
applied, and only short lists might be reviewed (according to policy or labor availability, for example). It is useful then
to state accuracy in terms of R and T, so we define a “miss rate” with the general name false negative identification
rate (FNIR), as follows:

FNIR(N, R, T) = (Num. mate searches with enrolled mate found outside top R ranks or score below threshold) / (Num. mate searches attempted)    (3)

This formulation is simple for evaluation in that it does not distinguish between causes of misses. Thus a mate that is
not reported on a candidate list is treated the same as a miss arising from face finding failure, algorithm intolerance
of poor quality, or software crashes. Thus if the algorithm fails to produce a candidate list, either because the search
failed, or because a search template was not made, the result is regarded as a miss, adding to FNIR.

Hit rates, and true positive identification rates: While FNIR states the “miss rate” as how often the correct candidate is
either not above threshold or not at good rank, many communities prefer to talk of “hit rates”. This is simply the true
positive identification rate(TPIR) which is the complement of FNIR giving a positive statement of how often mated
searches are successful:
TPIR(N, R, T ) = 1 − FNIR(N, R, T ) (4)

This report does not report true positive “hit” rates, preferring false negative miss rates for two reasons. First, costs
rise linearly with error rates. For example, if we double FNIR in an access control system, then we double user incon-
venience and delay. If we express that as decrease of TPIR from, say 98.5% to 97%, then we mentally have to invert the
scale to see a doubling in costs. More subtly, readers don’t perceive differences in numbers near 100% well, becoming
inured to the “high nineties” effect where numbers close to 100 are perceived indifferently.

Reliability is a corresponding term, typically being identical to TPIR, and often cited in automated (fingerprint) iden-
tification system (AFIS) evaluations.

An important special case is the cumulative match characteristic (CMC), which summarizes accuracy of mated searches
only. It ignores similarity scores by relaxing the threshold requirement, and just reports the fraction of mated searches
returning the mate at rank R or better.
CMC(N, R) = 1 − FNIR(N, R, 0) (5)

We primarily cite the complement of this quantity, FNIR(N, R, 0), the fraction of mates not in the top R ranks.

The rank one hit rate is the fraction of mated searches yielding the correct candidate at best rank, i.e. CMC(N, 1). While
this quantity is the most common summary indicator of an algorithm’s efficacy, it is not dependent on similarity scores,
so it does not distinguish between strong (high scoring) and weak hits. It also ignores that an adjudicating reviewer is
often willing to look at many candidates.
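
For concreteness, a minimal sketch of equations (3) and (5) follows (Python, with a hypothetical data layout); each mated search is summarized here by the rank at which the enrolled mate was returned, or None if it was absent, together with its score.

# Sketch of FNIR (eq. 3) and CMC (eq. 5). Each mated search is summarized by the
# rank at which the enrolled mate appears (1 = best, None = not returned, e.g.
# because of a template failure) and the score assigned to it.
def fnir(mate_results, R, T):
    misses = sum(1 for rank, score in mate_results
                 if rank is None or rank > R or score < T)
    return misses / len(mate_results)

def cmc(mate_results, R):
    return 1.0 - fnir(mate_results, R, T=0.0)   # eq. 5: threshold relaxed to zero

# Example: four mated searches; the third search failed to return the mate at all.
results = [(1, 0.92), (3, 0.48), (None, 0.0), (1, 0.15)]
print(fnir(results, R=3, T=0.3))   # 0.5: search 3 (no mate returned) and search 4 (score below T) miss
print(cmc(results, R=5))           # 0.75: three of the four mates are returned within rank 5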


3.2.1 False negative rates for unconsolidated galleries

As detailed in section 2.3, a common type of gallery, here referred to as the lifetime-unconsolidated type, is populated
with all images of an individual without any association between them. That is, the gallery construction algorithm is
not provided with any ID labels that would support processing of a person's images jointly. This contrasts with the
lifetime-consolidated type, where an algorithm may explicitly fuse features from multiple images of a person, or select
a best image. In such cases, where the number of enrolled images is a random variable, we define two false negative
rates as follows.

The first demands that the algorithm place any of the Ki mates in the top R ≥ 1 ranks. The proportion of searches for
which this does not occur forms a false negative identification rate:

FNIRany(N, R, T) = 1 − (Num. mate searches where any enrolled mate is found in the top R ranks and at-or-above threshold) / (Num. mate searches attempted)    (6)

The second demands that the algorithm place all Ki mates in the top R ≥ Ki ranks. The proportion of searches for
which this does not occur forms a false negative identification rate:

FNIRall(N, R, T) = 1 − (Num. mate searches where all enrolled mates are found in the top R ranks and at-or-above threshold) / (Num. mate searches attempted)    (7)

Placing all mates in the top ranks is a more difficult task than correctly retrieving any image, so it holds that: FNIRall ≥
FNIRany . This is evident in the results presented for November 2018 algorithms in Tables starting at ??.

The information retrieval community might prefer to compute and plot precision and recall; this is a valid approach, but
we advance the two metrics above because they relate to our normal definition of consolidated FNIR, and they cover
the two extreme use-cases of wanting any hit vs. all hits.
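
The distinction between equations (6) and (7) can be sketched as follows (Python, with a hypothetical data layout); each unconsolidated mated search is summarized by the ranks and scores of whichever of the subject's Ki enrolled images appear on the candidate list.

# Sketch of FNIR_any (eq. 6) and FNIR_all (eq. 7) for unconsolidated galleries.
# Each mated search is summarized by a list of (rank, score) pairs, one per
# enrolled mate image of the searched subject that appears on the candidate list.
def found(hits, R, T):
    return [(r, s) for r, s in hits if r <= R and s >= T]

def fnir_any(searches, R, T):
    misses = sum(1 for hits in searches if len(found(hits, R, T)) == 0)
    return misses / len(searches)

def fnir_all(searches, mate_counts, R, T):
    # mate_counts[i] is Ki, the number of enrolled images of the searched subject
    misses = sum(1 for hits, k in zip(searches, mate_counts)
                 if len(found(hits, R, T)) < k)
    return misses / len(searches)

# Example: two searches of subjects with Ki = 2 enrolled images each.
searches = [[(1, 0.9), (4, 0.6)],    # both mates returned
            [(2, 0.8)]]              # only one of the two mates returned
print(fnir_any(searches, R=5, T=0.5))           # 0.0
print(fnir_all(searches, [2, 2], R=5, T=0.5))   # 0.5: the second search misses one mate

As the example shows, FNIRall is never smaller than FNIRany, consistent with the statement above.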

3.3 DET interpretation

In biometrics, a false negative occurs when an algorithm fails to match two samples of one person – a Type II error.
Correspondingly, a false positive occurs when samples from two persons are improperly associated – a Type I error.

Matches are declared by a biometric system when the native comparison score from the recognition algorithm meets
some threshold. Comparison scores can be either similarity scores, in which case higher values indicate that the sam-
ples are more likely to come from the same person, or dissimilarity scores, in which case higher values indicate different
people. Similarity scores are traditionally computed by fingerprint and face recognition algorithms, while dissimilari-
ties are used in iris recognition. In some cases, the dissimilarity score is a distance possessing metric properties. In any
case, scores can be either mate scores, coming from a comparison of one person’s samples, or nonmate scores, coming
from comparison of different persons’ samples.

The words ”genuine” or ”authentic” are synonyms for mate, and the word ”impostor” is used as a synonym for non-
mate. The words ”mate” and ”nonmate” are traditionally used in identification applications (such as law enforcement
search, or background checks) while genuine and impostor are used in verification applications (such as access control).

An error tradeoff characteristic represents the tradeoff between Type II and Type I classification errors. For identifica-
tion this plots false negative vs. false positive identification rates i.e. FNIR vs. FPIR parametrically with T. Such plots

2022/12/18 FNIR(N, R, T) = False neg. identification rate N = Num. enrolled subjects T = Threshold T = 0 → Investigation
11:12:06 FPIR(N, T) = False pos. identification rate R = Num. candidates examined T > 0 → Identification
FRVT - FACE RECOGNITION VENDOR TEST - IDENTIFICATION 28

are often called detection error tradeoff (DET) characteristics or receiver operating characteristic (ROC). These serve
the same function − to show error tradeoff − but differ, for example, in plotting the complement of an error rate (e.g.
TPIR = 1 − FNIR) and in transforming the axes, most commonly using logarithms, to show multiple decades of FPIR.
More rarely, the function might be the inverse of the Gaussian cumulative distribution function.

The slides of Figures 10 through 17 discuss presentation and interpretation of the DETs used in this document for reporting
face identification accuracy. Further detail is provided in formal biometrics testing standards; see the various parts of
ISO/IEC 19795, Biometric performance testing and reporting. More terms, including and beyond those to do with accuracy, appear
in ISO/IEC 2382-37, Information technology – Vocabulary – Part 37: Harmonized biometric vocabulary.
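
A DET characteristic of this kind can be traced by sweeping the threshold over observed scores. The following is a minimal sketch (Python, with a hypothetical data layout), here restricted to rank-one candidates for brevity.

# Sketch: trace a rank-one DET, FNIR(T) vs. FPIR(T), by sweeping the threshold
# over the observed scores.
def det_points(mate_scores, nonmate_scores):
    # mate_scores: score of the mate at rank one, or None if it was not returned there
    # nonmate_scores: highest candidate score of each non-mate search
    thresholds = sorted(s for s in set(mate_scores) | set(nonmate_scores) if s is not None)
    points = []
    for T in thresholds:
        fnir = sum(1 for s in mate_scores if s is None or s < T) / len(mate_scores)
        fpir = sum(1 for s in nonmate_scores if s >= T) / len(nonmate_scores)
        points.append((fpir, fnir, T))
    return points

# Each point (FPIR, FNIR) corresponds to one threshold T; plotting FNIR against
# FPIR on logarithmic axes gives DET characteristics of the kind shown in Figures 10-17.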
Figure 10: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 1 - Error rates, metrics, comparison of algorithms. Axes in all slides: FPIR, false positive identification rate, on the x-axis; FNIR, false negative identification rate, on the y-axis; typically log-scaled.]

Figure 11: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 2 - Operational use-cases drive threshold policy.]

Figure 12: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 3 - Algorithm accuracy interpretation.]

Figure 13: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 4 - Drivers of FNIR.]

Figure 14: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 5 - Drivers of FPIR.]

Figure 15: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 6 - Fixed thresholds, change in image properties or demographics.]

Figure 16: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 7 - Effect of enrolled population size.]

Figure 17: DET as the primary performance reporting mechanism. [Slide: DET Properties and Interpretation 8 - Non-ideal tests, datasets or systems.]

3.4 Best practice testing requires execution of searches with and without mates

FRVT embeds 1:N searches of two kinds: Those for which there is an enrolled mate, and those for which there is not.
The respective numbers for these types of searches appear in Table 1. However, it is common to conduct only mated
searches10 . The cumulative match characteristic is computed from candidate lists produced in mated searches. Even if
the CMC is the only metric of interest, the actual trials executed in a test should nevertheless include searches for which
no mate exists. As detailed in Table 1 the FRVT reserved disjoint populations of subjects for executing true non-mate
searches.

3.5 Failure to extract features



During enrollment some algorithms fail to convert a face image to a template. The proportion of failures is the failure-
to-enroll rate, denoted by FTE. Similarly, some search images are not converted to templates. The corresponding
proportion is termed failure-to-extract, denoted by FTX.

We do not report FTX because we assume that the same underlying algorithm is used for template generation for
enrollment and search.

Failure to extract rates are incorporated into FNIR and FPIR measurements as follows.

. Enrollment templates: Any failed enrollment is regarded as producing a zero length template. Algorithms are
required by the API [10] to transparently process zero length templates. The effect of template generation failure
on search accuracy depends on whether subsequent searches are mated, or non-mated: Mated searches will fail
giving elevated FNIR; non-mated searches will not produce false positives so, to first order, FPIR will be reduced
by a factor of 1−FTE.

. Search templates and 1:N search: In cases where the algorithm fails to produce a search template from input
imagery, the result is taken to be a candidate list whose entries have no hypothesized identities and zero score.
The effect of template generation failure on search accuracy depends on whether searches are mated, or non-
mated: Mated searches will fail giving elevated FNIR; Non-mated searches will not produce false positives, so
FPIR will be reduced. Thus given a measurement of false negative and positive rates made over only those
where failures-to-extract did not occur, those rates - call them FNIR† and FPIR† - could be adjusted by an explicit
measurement of FTX as follows
FNIR = FTX + (1 − FTX)FNIR† (8)

FPIR = (1 − FTX)FPIR† (9)

This approach is the correct treatment for positive-identification applications such as access control where cooperative
users are enrolled and make attempts at recognition. This approach is not appropriate to negative identification ap-
plications, such as visa fraud detection, in which hostile individuals may attempt to evade detection by submitting
poor quality samples. In those cases, template generation failures should be investigated as though a false alarm had
occurred.
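
A numeric illustration of equations (8) and (9) follows, using hypothetical rates rather than measured values.

# Numeric illustration of eqs. (8) and (9); the rates below are hypothetical.
FTX = 0.01        # failure-to-extract rate on search images
FNIR_dag = 0.02   # FNIR measured over searches where extraction succeeded
FPIR_dag = 0.001  # FPIR measured over searches where extraction succeeded

FNIR = FTX + (1 - FTX) * FNIR_dag   # eq. (8): 0.01 + 0.99 * 0.02 = 0.0298
FPIR = (1 - FTX) * FPIR_dag         # eq. (9): 0.99 * 0.001      = 0.00099
print(FNIR, FPIR)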
10 For example, the Megaface benchmark. This is bad practice for several reasons: First, if a developer knows, or can reasonably assume, that a mate
always exists, then unrealistic gaming of the test is possible. A second reason is that it does not put FPIR on equal footing with FNIR and that
matters because in most applications, not all searches have mates - not everyone has been previously enrolled in a driving license issuance or a
criminal justice system - so addressing between-class separation becomes necessary.


3.6 Fixed length candidate lists, threshold independent workload

Suppose an automated face identification algorithm returns L candidates, and a human reviewer is retained to examine
up to R candidates, where R ≤ L might be set by policy, preference or labor availability. For now, assume also that
the reviewer is not provided with, or ignores, similarity scores, and thresholds are not applied. Given the algorithm
typically places mates at low (good) ranks, the number of candidates a reviewer can be expected to review can be
derived as follows. Note that the reviewer will:

. Always inspect the first-ranked candidate: fraction reviewed = 1

. Then inspect a further candidate in those searches where the mate was not confirmed at rank 1: fraction reviewed = 1 − CMC(1)

. Then inspect a further candidate in those searches where the mate was not confirmed at rank 1 or 2: fraction reviewed = 1 − CMC(2)

etc. Thus, if the reviewer will stop after a maximum of R candidates, the expected number of candidate reviews is

M(R) = 1 + (1 − CMC(1)) + (1 − CMC(2)) + . . . + (1 − CMC(R − 1))    (10)
     = R − Σ_{r=1}^{R−1} CMC(r)    (11)

A recognition algorithm that front-loads the cumulative match characteristic will offer reduced workload for the
reviewer. This workload is defined only over the searches for which a mate exists. In the cases where there truly is no
mate, the reviewer would review all R candidates. Thus, if the proportion of searches for which a mate does exist is β,
which in the law enforcement context would be the recidivism rate [3], the full expression for workload becomes:

M(R) = β ( R − Σ_{r=1}^{R−1} CMC(r) ) + (1 − β) R    (12)
     = R − β Σ_{r=1}^{R−1} CMC(r)    (13)
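
Equation (13) can be evaluated directly from a measured CMC. A minimal sketch follows (Python), using hypothetical CMC values and a hypothetical mated proportion.

# Sketch of eq. (13): expected number of candidate reviews per search, given the
# maximum review depth R, the mated proportion beta, and CMC(1..R-1).
def expected_reviews(cmc, R, beta):
    # cmc[r-1] = CMC(r), the fraction of mated searches with the mate at rank r or better
    return R - beta * sum(cmc[r - 1] for r in range(1, R))

# Hypothetical CMC values for ranks 1..4 and a mated (recidivism) rate of 0.6:
cmc = [0.95, 0.97, 0.98, 0.985]
print(expected_reviews(cmc, R=5, beta=0.6))   # 5 - 0.6 * (0.95 + 0.97 + 0.98 + 0.985) = 2.669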

3.7 Timing measurement

Algorithms were submitted to NIST as implementations of the application programming interface (API) specified by
NIST in the Evaluation Plan [10]. The API includes functions for initialization, template generation, finalization, search,
gallery insert, and gallery delete. Two template generation functions are required, one for the preparation of an enrollment
template, and one for a search template.

In NIST's test harness, all functions were wrapped by calls to the C++ std::chrono::high_resolution_clock, which on the
dedicated timing machine counts 1 ns clock ticks. Precision is somewhat worse than that, however.


3.8 Uncertainty estimation

3.8.1 Random error

This study leverages operational datasets for measurement of recognition error rates. This affords several advantages.
First, large numbers of searches are conducted (see Table 1) giving precision to the measurements. Moreover, for the
two mugshot datasets, these do not involve reuse of individuals so binomial statistics can be expected to apply to
recognition error counts. In that case, an observed count of a particular recognition outcome (i.e. a false negative or
false positive) in M trials will sustain 95% confidence that the actual error rate is no larger than some value.

As an example, the minimum number of mugshot searches conducted in this report is M = 154 549, and for an observed
FNIR around 0.002, the measurement supports a conclusion that the actual FNIR is no higher than 0.00228 at the 99%
confidence level. On the false positive side, we tabulate FNIR at FPIR values as low as 0.001. Given estimates based
on 331 254 non-mate trials, the actual FPIR values will be below 0.00115 at 99% confidence. In conclusion, large-scale
evaluation, without reuse of subjects, supports tight uncertainty bounds on the measured error rates.
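
The bounds quoted above follow from a one-sided binomial confidence bound on an error rate estimated from M independent trials. A minimal sketch follows (Python), using a normal approximation; the exact interval construction used for the report may differ.

# Sketch: one-sided upper confidence bound on an error rate estimated from M trials,
# using a normal approximation to the binomial (z = 2.326 for 99% confidence).
import math

def upper_bound(errors, trials, z=2.326):
    p = errors / trials
    return p + z * math.sqrt(p * (1.0 - p) / trials)

# Roughly reproduces the FNIR example: about 0.002 observed over 154 549 searches.
print(upper_bound(309, 154549))   # approximately 0.00226
# And the FPIR example: about 0.001 observed over 331 254 non-mate searches.
print(upper_bound(331, 331254))   # approximately 0.00113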

3.8.2 Systematic error

The FRVT 2018 dataset includes anomalies discovered as a result of inspecting images involved in recognition failures
from the most accurate algorithms. Two kinds of failure occur: False negatives (which, for the purpose here, include
failures to make templates) and false positives.

False negative errors: We reviewed 600 false negative pairs for which either or both of the leading two algorithms did
not put the correct mate in the top 50 candidates. Given 154 549 searches, this number represents 0.39% of the total,
resulting in FNIR ∼ 0.0039. Of the 600 pairs:

. A: Poor quality: About 20% of the pairs included images of very low quality, often greyscale, low resolution,
blurred, low contrast, partially cropped, interlaced, or noisy scans of paper images. Additionally, in a few cases,
the face is injured or occluded by bandages or heavy cosmetics.

. B: Ground truth identity label bugs: About 15% of the pairs are not actually mated. We only assigned this
outcome when a pair is clearly not mated.

. C: Profile views: About 35% included an image of a profile (side) view of the face, or, more rarely, an image that
was rotated 90 degrees in-plane (roll).

. D: Tattoos: About 30% included an image of a tattoo that contained a face image. These arise from mis-labelling
in the parent dataset metadata.

. E: Ageing: There is considerable time-lapse between the two captures.

All these estimates are approximate. Of these, the tattoo and mislabelled images can never be matched. These constitute an accuracy floor in the sample, implying that FNIR cannot be below 0.0018 (note 11). The profile-views, low-quality images, and images with considerable ageing can, in principle, be successfully matched - indeed some algorithms do so - so are not part of the accuracy floor.

11 This value is the sum of two partial false negative rates: FNIR_B = 0.15 × 0.0039 plus FNIR_D = 0.30 × 0.0039, i.e. 0.45 × 0.0039 ≈ 0.0018.


For the microsoft-4 algorithm, the lowest miss rate (recent entry in Table 26) is FNIR(640 000, 50, 0) = 0.0018. This is close to the value estimated from the inspection of misses. It is below the 0.0039 figure because the algorithm does match some profile and poor-quality images that the yitu-2 algorithm does not.

For many tables (e.g. Table 26), the FNIR values obtained for the FRVT-2018 mugshots could be corrected by reducing them by 0.0018. The best values would then be indistinguishable from zero. The results in this report were not adjusted to account for this systematic error.

False positive errors: As shown in Figure 1 and discussed in Figure 14 many of the DET characteristics in this report
exhibit a pronounced turn upward at low false positive rates. The shape can be caused by identity labelling errors in
the ground truth of a dataset, specifically persons present in the database under two IDs such that some proportion of
non-mate pairs are actually mated. To look for such possibilities, we merged the highest 1000 non-mate pairs produced
by three different algorithms which resulted in 1839 unique pairs. This constitutes 0.56% of all non-mate searches.
We assert that it is very difficult for human reviewers to assign the pairs into the following three categories: twins;
doppelgangers; or ground-truth errors (instances of the same person under two IDs). Given this difficulty we made no
attempt to correct any possible ground truth errors except by removing 57 pairs in the following categories:

. A: Profile views: Thirteen pairs included one or two profile-view images. As described in Figure 145, these can
cause false positives.

. B: Same-session photographs: For twelve pairs, the images were identical or trivially altered (e.g. cropped)
versions of the same photo. These were present under a different ID likely due to some clerical or procedural
mistake.

. C: Tattoos of faces: There were fourteen instances of tattoo photographs that contained faces causing false
matches.

. D: T-shirt faces: There were six instances of T-shirt photographs (of Bob Marley and Che Guevara) being detected
instead of the face and causing false positives.

. E: Background faces: There were twelve instances of one subject appearing in the background of two otherwise
correct portrait photos.

Note that we did not remove any pairs where there was a chance that the two images actually showed different people.

In any case, the results in this report have not been adjusted for this systematic error.


4 Results

This section gives extensive results for algorithms submitted to FRVT 2018. Three-page “report cards” for each algorithm are contained in a separate supplement. Performance metrics were described in section 3. The main results are
summarized in tabular form with more exhaustive data included as DET, CMC and related graphs in appendices as
follows:

. The three tables 2-4 list algorithms alongside full developer names, acceptance date, size of the provided config-
uration data, template size and generation time, and search duration data.

– The template generation duration is most important to applications that require fast response. For example,
an eGate taking more than two seconds to produce a template might be unacceptable. Note that GPUs may
be of utility in expediting this operation for some algorithms, though at additional expense. Two additional
factors should be considered (see notes 12 and 13 below).
– The search duration is the time taken for a search of a search template into a gallery of N enrollment tem-
plates. This performance variable, together with the volume of searches, is influential on the amount of
hardware needed to sustain an operational deployment. This is measured here with the algorithm run-
ning on a single core of a contemporary CPU. Search is most simply implemented as N computations of a distance metric followed by a sort operation to find the closest enrollments; a minimal sketch of this brute-force approach appears below, after the notes to this list. However, considerable optimization of this process is possible, up to and including fast-search algorithms that, by various means, avoid computation of all N distances.
– The template size is the size of the extracted feature vector (or vectors) and any needed header information.
Large template sizes may be influential on bus or network bandwidth, storage requirements, and on search
duration. While the template itself is an opaque data blob, the feature dimensionality might be estimated by
assuming a four-bytes-per-float encoding; a 256-byte template, for example, would correspond to about 64 single-precision features. There is a wide range of encodings. For the more accurate algorithms, sizes range from 256 bytes to about 2 KB, indicating essentially no consensus on face modeling and template design.
– The template size multiplier column shows how, given k input images, the size of the template grows.
Most implementations internally extract features from each image and concatenate them, and implement
some score-level fusion logic during search. Other implementations, including many of the most accurate
algorithms, produce templates whose size does not grow with k. This could be achieved via selection of
the best quality image - but this is not optimal in handling ageing where the oldest image could be the best
quality. Another mechanism would be feature-level fusion where information is fused from all k inputs. In
any case, as a black-box test, the fusion scheme is proprietary and unknown.
– The size of the configuration data is the total size of all files resident in a vendor-provided directory that
contains arbitrary read-only files such as parameters and recognition models (e.g. Caffe). Generally, a large value
for this quantity may prohibit the use of the algorithm on a resource-constrained device.
12 The FRVT 2018 API prohibited threading, so some gains from parallelism may be available on multiple-cores or multiple processors, if the feature
extraction code could be distributed across them.
13 Note also that factors of two or more may be realizable by exploiting modern vector processing instructions on CPUs. It is not clear in our

measurements whether all developers exploited Intel’s AVX2 instructions, for example. Our machine was so equipped, but we insisted that the
same compiled library should also run on older machines lacking those instructions. The more sophisticated implementations may have detected AVX2 presence and branched accordingly; the less sophisticated may have defaulted to the reduced instruction set. Readers should see the FRVT
2018 API document for the specific chip details.
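As a point of reference for the search-duration discussion above, the following minimal sketch shows the naive exhaustive search just described: N similarity computations followed by a partial sort to retain the top-L candidates. It is illustrative only - inner-product similarity over float feature vectors is an assumption - and is not any of the tested implementations.

// Minimal sketch: brute-force 1:N search, i.e. N similarity computations
// followed by a partial sort to keep the L best candidates. Inner-product
// similarity over float feature vectors is an illustrative assumption.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Candidate { int id; float score; };

std::vector<Candidate> search(const std::vector<std::vector<float>>& gallery,
                              const std::vector<float>& probe, std::size_t L) {
    std::vector<Candidate> all;
    all.reserve(gallery.size());
    for (std::size_t i = 0; i < gallery.size(); ++i) {
        float s = 0.0f;                                   // similarity of probe to enrollment i
        for (std::size_t d = 0; d < probe.size(); ++d)
            s += gallery[i][d] * probe[d];
        all.push_back({static_cast<int>(i), s});
    }
    L = std::min(L, all.size());
    std::partial_sort(all.begin(), all.begin() + L, all.end(),
                      [](const Candidate& a, const Candidate& b) { return a.score > b.score; });
    all.resize(L);
    return all;
}

The cost is of order N·D multiplications plus an O(N log L) selection; the fast-search approaches mentioned above avoid the full scan over all N enrollments.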


. Tables 26-27 report core rank-based accuracy for mugshot images. The population size is limited to N = 1.6 million
identities because this is the largest gallery size on which all algorithms were executed. Notable observations
from these tables are as follows:

– Accuracy gains since 2018: NIST Interagency Report 8238 documented massive gains over those reported in the FRVT 2014 report, NIST Interagency Report 8009. Further gains are documented in this report. Compared with the most accurate algorithm in November 2018, NEC-3, the value of FNIR(N, L, T) is reduced from 0.0031 to 0.0024 for the Sensetime-004 algorithm, with N = 12 million recent images. The tables show broader gains: many developers have made advances since 2018, with between two- and five-fold reductions in errors.
– Wide range in accuracy: The rank-1 miss rates vary from FNIR(N, 1, 0) = 0.0012 for sensetime-004 up to
about 0.5 for the very fast but inaccurate microfocus-x algorithms. Among the developers who are superior
to NEC in 2013, the range is from 0.002 to 0.035 for camvi-3. This large accuracy range is consistent with the
buyer-beware maxim, and indicates that face recognition software is far from being commoditized.

. Tables 31-32 report threshold-based error rates, FNIR(N, L, T), for N = 1.6 million for mugshot-mugshot accuracy
on FRVT 2014, FRVT 2018, and also (in pink) mugshot-webcam accuracy using FRVT 2018 enrollments. Notable
observations from these tables are as follows:

– Order-of-magnitude accuracy gains since 2014: As with rank-based results, the gains in accuracy are substantial, though somewhat reduced. At FPIR = 0.01, the best improvement over NEC in 2014 is a 27-fold reduction in FNIR, achieved by the NEC-2 algorithm. At FPIR = 0.001, the largest gain is a six-fold reduction in FNIR, via the NEC-3 algorithm.
– Broad gains across the industry: About 19 companies realize accuracy better than the NEC benchmark from
2014. This is somewhat lower than the 28 developers who succeeded on the rank-1 metric. This may be due
to the ubiquity of, and emphasis on, the rank-1 metric in many published algorithm development papers.
– Webcam images: Searches of webcam images give FNIR(N, T) values around 2 to 3 times higher than mugshot searches. Notably, the developers leading on mugshots are approximately the same as those leading on the poorer-quality webcam images. But some developers, e.g. Camvi, Megvii, TongYi, and Neurotechnology, do improve their relative rankings on webcams, perhaps indicating their algorithms were tailored to less constrained images.

. Tables 18, 22 and 23 show, respectively, high-threshold, rank 1, and rank 50 FNIR values for all algorithms performing searches into five different gallery sizes, N = 640 000, N = 1 600 000, N = 3 000 000, N = 6 000 000 and N = 12 000 000. The FPIR = 0.001 table is included to inform high-volume duplicate detection applications. The Rank-1 table is included as a primary accuracy indicator. The Rank-50 table is included to inform agencies who routinely produce 50 candidates for human review. The notable results are:

– Slow growth in rank-based miss rates: FNIR(N, R) generally grows as a power law, aN^b. From the straight lines of many graphs of Figure 20 this is clearly a reasonable model for most, but not all, algorithms. The coefficient a can be interpreted as FNIR in a gallery of size 1. The more important coefficient b indicates scalability, and often b ≪ 1, which implies very benign growth in FNIR. The coefficients of the models appear in Tables 22 and 23. A minimal log-log fitting sketch appears after this list of observations.
– Slow growth in threshold-based miss rates: FNIR(N, T) also generally grows as a power law, aN^b, except at the high threshold values corresponding to low FPIR values. This is visible in the plots of Figure 36, which show straight lines except for FPIR = 0.001, which increases more rapidly with N above 3 000 000. Each trace in those figures shows FNIR(N, T) at fixed FPIR with both N and T varying. Thus at large N, it is usually necessary to elevate T to maintain fixed FPIR. This causes increased FNIR. Why that would no longer obey a power law is not known. However, if we expect large galleries to contain individuals with familial relations to the non-mate search images - in the most extreme case, twins - then suppression of false positives becomes more difficult. This is discussed in the Figures starting at Figure 10.
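To make the power-law observation concrete, the following minimal sketch fits the coefficients a and b by ordinary least squares in log-log space; the (N, FNIR) pairs are placeholder values, not results from this report.

// Minimal sketch: fitting FNIR(N) = a * N^b by least squares on log-transformed
// data. The (N, FNIR) pairs are placeholders, not measured results.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> N    = {640000, 1600000, 3000000, 6000000, 12000000};
    std::vector<double> fnir = {0.0040, 0.0046, 0.0051, 0.0057, 0.0064};   // placeholders

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(N.size());
    for (std::size_t i = 0; i < N.size(); ++i) {
        double x = std::log(N[i]), y = std::log(fnir[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // exponent b
    double a = std::exp((sy - b * sx) / n);                 // coefficient a
    std::printf("FNIR(N) ~ %.4g * N^%.3f\n", a, b);
    return 0;
}

With b well below 1, FNIR grows only slowly as the gallery grows, which is the sublinear behavior described above.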

. Figure ?? shows false positives from twins against their enrolled siblings, broken out by type of twin: fraternal or identical. The Figure is based on the enrollment of 104 single images on one of a pair of twins, and then the search of 2354 second images. Note that the dataset is heavily skewed towards identical twins, which is not representative of the true population. There is also a skew towards same-sex fraternal twin pairs compared to different-sex fraternal twin pairs, again not representative of the true population.
The notable results are:

– For all algorithms tested, the 1087 mated searches (Twin A vs. Twin A) produce scores almost always above
typical operational thresholds, with (not shown) matches at rank 1. The images are of good quality, so this
is the result expected from the rest of this report.
– For the 1066 identical twin searches (AB), almost all produce the twin at rank 1, with a few producing the twin further down the candidate list and at a lower score.
– For the 169 fraternal searches (AB) from same-sex pairs, most algorithms give a large number of very high scores, implying false positives at all thresholds. However, there are long tails containing lower scores that are correctly below threshold. In general, scores that are higher in this distribution are all at rank 1, whereas the lower scores have much higher ranks.
– (Not shown) Of the 169, there are 24 fraternal searches (AB) involving different-sex twins. Here most algorithms correctly report scores well below the lowest threshold, and usually the twin does not appear on the candidate list at all.

[Figure 18 plot: scatter of FNIR(N, T) at FPIR(N, T) = 0.003 on the 2018 Mugshot dataset (N = 12 000 000) against mean template extraction time (seconds), one point per algorithm, colored by developer and sized by template size.]

Figure 18: [Mugshot Dataset] Speed-accuracy tradeoff. For developers of the more accurate algorithms the plot shows the tradeoff of high-threshold recognition miss-rates, FNIR(N, T) for FPIR(N, T) = 0.003, and template generation time. Developers are coded by color. Template size is encoded by the size of the circle. Some labels are quite distant from the respective point, to avoid superposing text. Without any other influences, the assumption would be that taking time to localize the face, and extract features, would lead to better accuracy. The most notable result, for NEC, is that their slower algorithms are much more accurate than the version that extracts features in fewer than 90 milliseconds.
[Figure 19 plot: scatter of rank-one miss rate FNIR(N, 1, 0) on the 2018 Mugshot dataset (N = 12 000 000) against mean template extraction time (seconds), one point per algorithm, colored by developer and sized by template size.]

Figure 19: [Mugshot Dataset] Speed-accuracy tradeoff. For developers of the more accurate algorithms the plot shows the tradeoff of rank-one recognition miss-rates, FNIR(N, 1, 0), and template generation time. Developers are coded by color. Template size is encoded by the size of the circle. Some labels are quite distant from the respective point, to avoid superposing text. Without any other influences, the assumption would be that taking time to localize the face, and extract features, would lead to better accuracy. This occurs for NEC, with their slower algorithm being much more accurate than the version that extracts features in fewer than 90 milliseconds.
[Table 2: one row per algorithm (entries 1-52, 20Face through Dermalog) giving developer full name, algorithm short name and sequence number, validation date, configuration data size (MB), library size (MB), template size (B) and multiplier, template generation time (ms), finalization time (s), search durations (ms) at L = 1 and L = 50 for N = 1.6 M and at L = 50 for N = 3 M, 6 M and 12 M, and a fitted power-law model (µs).]

Notes
1 Configuration size does not capture static data present in libraries. Libraries are included but the size also includes any ancillary libraries for image processing (e.g. openCV) or numerical computation (e.g. blas).
2 Finalization is the processing of converting N = 1600000 templates into a searchable data structure, an operation which can be a simple copy, or the building of an index or tree, for example. The duration of the operation may be data dependent, and may not be linear in the number of input templates.
3 This multiplier expresses the increase in template size when k images are passed to the template generation function.
4 All durations are measured on Intel® Xeon® CPU E5-2630 v4 @ 2.20GHz processors. Estimates are made by wrapping the API function call in calls to std::chrono::high_resolution_clock which on the machine in (3) counts 1ns clock ticks. Precision is somewhat worse than that however.
5 Search durations are measured as in the prior note. The power-law model in the final column mostly fits the empirical results in Figure 146. However in certain cases the model is not correct and should not be used numerically.

Table 2: Summary of algorithms and properties included in this report. The blue superscripts give ranking for the quantity in that column. Missing search durations, denoted by “-”, are absent because those runs were not executed, usually because we did not run on the larger galleries. Caution: The power-law model is sometimes an incorrect model. It is included here only to show broad sublinear behavior, which is flagged in green. The models should not be used for prediction.
[Table 3: rows 53-104 (Dermalog through Kedacom International), in the same column layout and with the same notes as Table 2.]

Table 3: Summary of algorithms and properties included in this report. The blue superscripts give ranking for the quantity in that column. Missing search durations, denoted by “-”, are absent because those runs were not executed, usually because we did not run on the larger galleries. Caution: The power-law model is sometimes an incorrect model. It is included here only to show broad sublinear behavior, which is flagged in green. The models should not be used for prediction.
[Table 4: rows 105-156 (Kneron through Qnap Security), in the same column layout and with the same notes as Table 2.]

Table 4: Summary of algorithms and properties included in this report. The blue superscripts give ranking for the quantity in that column. Missing search durations, denoted by “-”, are absent because those runs were not executed, usually because we did not run on the larger galleries. Caution: The power-law model is sometimes an incorrect model. It is included here only to show broad sublinear behavior, which is flagged in green. The models should not be used for prediction.
[Table 5: rows 157-208 (Qnap Security through T4iSB), in the same column layout and with the same notes as Table 2.]

Table 5: Summary of algorithms and properties included in this report. The blue superscripts give ranking for the quantity in that column. Missing search durations, denoted by “-”, are absent because those runs were not executed, usually because we did not run on the larger galleries. Caution: The power-law model is sometimes an incorrect model. It is included here only to show broad sublinear behavior, which is flagged in green. The models should not be used for prediction.
This publication is available free of charge from: https://doi.org/10.6028/NIST.IR.8271

1 1 2 5
DEVELOPER SHORT SEQ . VALIDATION CONFIG LIB TEMPLATE GENERATION FINALIZE SEARCH DURATION MILLISEC
11:12:06
2022/12/18

3 4
FULL NAME NAME NUM . DATE DATA ( MB ) DATA ( MB ) SIZE ( B ) MULT TIME ( MS ) TIME ( S ) L =1 L =50 L =50 L =50 L =50 POWER LAW
N =1.6 M N =1.6 M N =1.6 M N =3 M N =6 M N =12 M (µs)

209 Tech5 SA tech5 001 2019-08-19 1394 116 84


1536 k 220
887 79
10 (81)
383 (132)
766 (147)
2767 (150)
6149 (100)
6178 168
0.12N 1.1
210 Tech5 SA tech5 002 2021-04-07 727 112 39
513 - 242
940 19
4 (223)
4682 (242)
6689 (203)
12541 (199)
25145 (197)
50239 44
4.18N 1.0
211 Tencent Deepsea Lab deepsea 001 2019-07-29 250 323 157
2048 1 164
737 104
12 (151)
1021 (151)
1020 (148)
2774 (146)
5767 (150)
12341 198
0.06N 1.2
212 Tevian tevian 5 2018-10-30 773 15 137
2048 1 58
405 147
15 (87)
405 (88)
408 (70)
854 (73)
1757 (69)
3380 130
0.14N 1.0
213 Tevian tevian 006 2021-04-16 769 19 66
1032 - 104
597 78
10 (70)
295 (68)
295 (52)
578 (51)
1187 (55)
2741 144
0.06N 1.1
214 Tevian tevian 007 2021-10-12 703 19 63
1032 - 176
777 27
4 (71)
297 (69)
298 (53)
579 (50)
1179 (47)
2418 122
0.11N 1.0
215 Thales cogent 2 2018-10-30 681 39 71
1043 k 245
945 195
27 (194)
2017 (201)
2144 (171)
4298 (168)
8472 (168)
16429 83
1.08N 1.0
216 Thales cogent 3 2018-10-30 681 39 70
1043 k 243
940 75
9 (160)
1230 (167)
1311 (145)
2687 (140)
5398 (137)
10184 96
0.62N 1.0
217 Thales cogent 004 2021-02-10 1376 59 199
2053 - 246
947 128
14 (211)
2903 (193)
1911 (161)
3566 (164)
7498 (167)
16370 129
0.64N 1.0
218 Thales cogent 005 2021-09-13 1043 56 72


1062 - 173
769 31
5 (144)
912 (147)
996 (116)
1872 (114)
3845 (109)
7555 98
0.44N 1.0
48 201 45 (117) (138) (110) (109) (113) 151
0.16N 1.1
219 Thales cogent 006 2022-05-14 508 70 550 - 843 8 587 820 1564 3173 8290
220 TigerIT Americas LLC tiger 2 2018-10-29 416 518 193
2052 k 75
461 151
15 (189)
1816 (194)
1921 (168)
3833 (165)
7526 (162)
14820 103
0.83N 1.0
194 74 256 (45) (43)
221 TigerIT Americas LLC tiger 3 2018-10-30 416 518 2052 k 461 37431 191 189 - - -
222 Toshiba toshiba 0 2018-10-30 961 105 89
1548 k 216
876 97
12 (236)
6153 (238)
6236 (200)
12221 (200)
25355 (196)
49448 185
0.36N 1.2

212 215 257 (235) (240)
223 Toshiba toshiba 1 2018-10-30 961 105 2060 k 875 44701 6007 6355 - - -
160 54 66 (214) (217)
224 Tripleize aize 001 2021-08-06 262 150 2048 - 402 9 3087 3080 - - -
225 Trueface.ai trueface 000 2021-01-27 247 119 96
2000 - 41
363 110
13 (58)
271 (76)
327 (56)
614 (54)
1239 (50)
2678 93
0.15N 1.0

117 191 125 (249) (254)

226 TuringTech.vip turingtechvip 001 2022-09-29 151 161 2048 - 817 13 22085 22044 - - -
227 Veridas Digital Authentication Solutions S.L. veridas 001 2021-03-05 347 875 120
2048 - 212
872 111
13 (228)
5493 (232)
5469 (196)
10350 (194)
20655 (190)
41264 48
3.40N 1.0
228 Veridas Digital Authentication Solutions S.L. veridas 002 2021-07-06 347 870 103
2048 - 217
877 88
10 (74)
322 (74)
325 (62)
685 (60)
1365 (54)
2730 137
0.09N 1.1
229 Veridas Digital Authentication Solutions S.L. veridas 003 2021-11-09 346 870 125
2048 - 210
867 59
9 (95)
440 (75)
327 (64)
699 (61)
1401 (74)
3954 188
0.02N 1.2
107 36 130 (9) (9)
230 Verijelas verijelas 000 2022-10-11 248 11 2048 - 334 14 20 27 - - -
231 Vietnam Posts and Telecommunications Group vnpt 001 2022-05-05 361 235 105
2048 - 223
892 181
20 (136)
813 (136)
804 (107)
1514 (106)
3037 (98)
6128 46
0.50N 1.0
232 Vietnam Posts and Telecommunications Group vnpt 002 2022-09-08 547 235 169
2048 - 187
808 165
16 (140)
857 (139)
835 (111)
1576 (110)
3183 (105)
6412 75
0.44N 1.0
233 Viettel Group vts 000 2021-03-12 250 257 158
2048 - 85
492 240
2295 (3)
4 (2)
4 (2)
6 (4)
11 - 14
0.61N 0.6
234 Viettel Group vts 001 2021-07-16 352 600 164
2048 - 222
891 186
21 (203)
2477 (209)
2487 (175)
4644 (174)
9313 (174)
18713 49
1.53N 1.0
235 Viettel Group vts 002 2022-02-08 244 600 101
2048 - 227
903 202
29 (205)
2485 (208)
2485 (178)
4678 (175)
9370 (176)
18833 57
1.49N 1.0
236 Viettel Group vts 003 2022-07-14 493 468 154
2048 - 149
702 212
34 (204)
2482 (207)
2480 (176)
4649 (173)
9302 (173)
18651 50
1.52N 1.0
85 170 176 (186)
237 Vigilant Solutions vigilant 5 2018-10-30 335 122 1544 k 762 19 - 1720 - - -
86 190 184 (185)


238 Vigilant Solutions vigilant 6 2018-10-30 337 122 1544 k 816 21 - 1713 - - -
239 Vigilant Solutions vigilantsolutions 007 2021-01-08 340 51 87
1544 - 111
616 164
16 (172)
1354 (171)
1352 (150)
2911 (148)
5966 (143)
11466 150
0.27N 1.1
240 Vigilant Solutions vigilantsolutions 008 2021-07-23 340 51 88
1544 - 56
403 119
13 (154)
1062 (153)
1061 (132)
2330 (144)
5520 (126)
9499 173
0.11N 1.1
241 Visidon visidon 1 2018-10-30 166 42 185
2052 k 130
667 153
15 (220)
4370 (226)
4472 (191)
8454 (191)
17262 (186)
34288 74
2.40N 1.0
183 142 60 (195) (202)
242 Visidon vd 002 2021-05-18 248 42 2052 - 687 9 2089 2336 - - -
182 145 51 (196) (200)
243 Visidon vd 003 2021-10-12 497 43 2052 - 692 8 2095 2082 - - -

244 Visiob-Box visionbox 000 2021-09-17 252 274 211
2059 - 83
481 163
16 (90)
422 (79)
359 (71)
855 (30)
631 (43)
2096 18
2.46N 0.8
245 VisionLabs visionlabs 6 2018-10-30 360 17 31
512 1 30
289 253
20290 (18)
36 (16)
36 (13)
39 (11)
44 (9)
53 8
3211.93N 0.2
246 VisionLabs visionlabs 7 2018-10-30 360 17 36
512 1 29
289 255
34666 (20)
63 (18)
63 (14)
72 (14)
80 (12)
115 10
2076.32N 0.2
247 VisionLabs visionlabs 008 2019-06-18 348 17 27
512 1 28
272 250
12747 (12)
23 (8)
24 (7)
26 (6)
29 (5)
33 6
2539.61N 0.2
248 VisionLabs visionlabs 009 2020-08-04 689 20 38
512 - 78
467 251
13245 (13)
23 (10)
29 (9)
34 (13)
61 (13)
145 13
8.88N 0.6
249 VisionLabs visionlabs 010 2021-02-05 1042 20 30
512 - 160
731 246
11837 (10)
21 (13)
32 (11)
36 (8)
39 (6)
43 7
3183.79N 0.2
250 VisionLabs visionlabs 011 2021-10-20 1042 20 33
512 - 162
735 249
12255 (11)
21 (7)
23 (8)
26 (7)
34 (8)
51 12
301.26N 0.3
126 232 127 (84) (70) (65) (65) (63) 183
0.02N 1.2
251 Vixvizon vixvizion 009 2022-11-28 580 460 2048 - 907 14 389 312 714 1530 3105
252 Vocord vocord 5 2018-10-30 1035 185 50
768 k 178
780 41
7 (39)
158 (44)
204 (34)
383 (35)
767 (30)
1466 53
0.12N 1.0
257 179 239 (40) (47)
253 Vocord vocord 6 2018-10-30 1035 185 10240 k 785 243 170 216 - - -
254 Xforward AI Technology xforwardai 000 2020-07-24 236 171 159
2048 - 167
753 122
13 (222)
4603 (247)
7647 (209)
15723 (197)
23900 (199)
53729 174
0.56N 1.1
255 Xforward AI Technology xforwardai 001 2021-01-21 332 50 112
2048 - 136
677 161
16 (234)
5887 (225)
4384 (193)
8798 (192)
18553 (194)
48993 181
0.32N 1.1
256 Xforward AI Technology xforwardai 002 2021-05-24 691 50 233
4096 - 236
930 174
18 (241)
6957 (241)
6400 (204)
12659 (205)
31077 (203)
65158 179
0.52N 1.1
257 verihubs-inteligensia verihubs-inteligensia 000 2022-09-29 204 75 124
2048 - 100
575 137
14 (245)
9715 (250)
9670 (212)
18711 (207)
38110 (204)
79675 87
4.77N 1.0
Notes
1 Configuration size does not capture static data present in libraries. Libraries are included, but the size also includes any ancillary libraries for image processing (e.g. OpenCV) or numerical computation (e.g. BLAS).
2 Finalization is the process of converting N = 1600000 templates into a searchable data structure, an operation which can be a simple copy, or the building of an index or tree, for example (a minimal sketch of the simple-copy case follows these notes). The duration of the operation may be data dependent, and may not be linear in the number of input templates.
3 This multiplier expresses the increase in template size when k images are passed to the template generation function.
4 All durations are measured on Intel® Xeon® CPU E5-2630 v4 @ 2.20 GHz processors. Estimates are made by wrapping the API function call in calls to std::chrono::high_resolution_clock, which on the machine in (3) counts 1 ns clock ticks. Precision is somewhat worse than that, however.
5 Search durations are measured as in the prior note. The power-law model in the final column mostly fits the empirical results in Figure 146. However, in certain cases the model is not correct and should not be used numerically.
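Note 2 above allows finalization to be anything from a simple copy to the construction of an index or tree. The sketch below shows only the trivial copy case, under assumed placeholder types (EnrolledTemplate, Gallery); it is not drawn from any submitted implementation.

#include <cstdint>
#include <vector>

struct EnrolledTemplate {
    std::uint64_t id;                       // enrolled subject identifier
    std::vector<std::uint8_t> data;         // feature vector produced at enrolment
};

struct Gallery {
    std::vector<EnrolledTemplate> entries;  // contiguous, searchable layout
};

// "Finalize" N templates. Here this is only a copy into contiguous storage;
// a real implementation might instead build an inverted index, a graph, or a
// tree, which is one reason reported finalization durations differ so widely.
Gallery finalize(const std::vector<EnrolledTemplate>& enrolled)
{
    Gallery gallery;
    gallery.entries.assign(enrolled.begin(), enrolled.end());
    return gallery;
}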

Table 6: Summary of algorithms and properties included in this report. The blue superscripts give ranking for the quantity in that column. Missing search durations, denoted by “-”, are absent because those runs were not executed, usually because we did not run on the larger galleries. Caution: The power-law model is sometimes an incorrect model. It is included here only to show broad sublinear behavior, which is flagged in green. The models should not be used for prediction.
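The power-law column models search duration as a function of gallery size N. The report does not say how the coefficients are estimated; a common approach, assumed here only for illustration, is a least-squares fit in log-log space over the measured gallery sizes. The function name fitPowerLaw is a placeholder.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Fit T(N) ~= a * N^b by least squares on the log-transformed points
// (log T = log a + b * log N). Returns {a, b}; b < 1 indicates broadly
// sublinear growth of search time with gallery size.
std::pair<double, double> fitPowerLaw(const std::vector<double>& gallerySizes,
                                      const std::vector<double>& durations)
{
    const std::size_t n = gallerySizes.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const double x = std::log(gallerySizes[i]);
        const double y = std::log(durations[i]);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    const double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double logA = (sy - b * sx) / n;
    return { std::exp(logA), b };
}

// As the caption warns, such a fit only indicates broad scaling behaviour and
// should not be used to predict durations at other gallery sizes.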

MISS RATES. Investigation: FNIR(N, R=1, T=0). Identification: FNIR(N, R=L, T ≥ 0) for FPIR = 0.001.
Legend: FNIR(N, R, T) = false negative identification rate; FPIR(N, T) = false positive identification rate; N = num. enrolled subjects; R = num. candidates examined; T = threshold (T > 0 → identification, T = 0 → investigation).
# | ALGORITHM | investigation FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18] | identification FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18]
98 97 97 97 97 139 139 140 101 98 98 98 98 137 138 136
1 3 DIVI -005 0.0207 0.0304 0.0415 0.0533 0.0646 0.0735 0.0884 0.1148 0.1580 0.2316 0.3033 0.3740 0.4285 0.4742 0.5329 0.5975
95 95 95 95 95 137 137 136 96 96 96 96 95 134 134 134
2 ANKE -000 0.0162 0.0245 0.0333 0.0428 0.0515 0.0615 0.0780 0.1028 0.1132 0.1761 0.2402 0.3057 0.3640 0.4200 0.4928 0.5680
49 50 50 49 48 88 87 86 54 54 56 57 57 95 96 95
3 ANKE -002 0.0055 0.0074 0.0090 0.0103 0.0116 0.0135 0.0162 0.0202 0.0329 0.0560 0.0843 0.1169 0.1481 0.1820 0.2280 0.2831
106 106 106 104 104 146 146 147 106 107 107 107 108 147 147 148
4 AWARE -005 0.0328 0.0519 0.0712 0.0910 0.1078 0.1235 0.1457 0.1831 0.3605 0.4949 0.5948 0.6783 0.7393 0.7905 0.8408 0.8831
110 110 110 110 110 155 154 154
5 AWARE -006 0.0702 0.1110 0.1502 0.1899 0.2253 0.2614 0.3045 0.3659
113 114 114 114 114 158 158 159 110 111 111 111 111 152 152 152
6 AYONIX -002 0.3360 0.4389 0.5144 0.5814 0.6340 0.6818 0.7297 0.7774 0.8288 0.9013 0.9375 0.9603 0.9744 0.9837 0.9893 0.9927
109 109 109 109 108 152 152 150 91 91 88 88 88 125 124 120
7 CAMVI -004 0.0623 0.0944 0.1243 0.1548 0.1812 0.2056 0.2344 0.2672 0.0810 0.1267 0.1721 0.2203 0.2619 0.3040 0.3543 0.4124
111 111 111 111 111 154 153 153
8 CAMVI -005 0.0849 0.1255 0.1631 0.1989 0.2298 0.2585 0.2915 0.3246
36 35 29 41 41 43
9 CANON -001 0.0052 0.0057 0.0042 0.0491 0.0606 0.0826
49 48 47 39 39 41
10 CANON -002 0.0062 0.0070 0.0070 0.0472 0.0582 0.0792
14 14 15 15 17 45 46 46 25 26 27 28 28 57 58 58
11 CIB -000 0.0022 0.0030 0.0037 0.0044 0.0049 0.0057 0.0069 0.0062 0.0139 0.0240 0.0373 0.0525 0.0689 0.0859 0.1109 0.1454
4 4 4 9 11 27 34 37 16 18 18 19 19 42 46 46
12 CLEARVIEWAI -000 0.0017 0.0023 0.0028 0.0034 0.0039 0.0046 0.0056 0.0047 0.0066 0.0121 0.0194 0.0287 0.0385 0.0493 0.0662 0.0873
8 7 8 6 5 6 7 5 1 1 1 1 2 6 6 6
13 CLOUDWALK - HR -000 0.0019 0.0024 0.0029 0.0032 0.0032 0.0036 0.0041 0.0020 0.0029 0.0041 0.0054 0.0064 0.0073 0.0085 0.0102 0.0112
9 3 2 3 3 3
14 CLOUDWALK - MT-000 0.0037 0.0038 0.0013 0.0065 0.0072 0.0075
7 1 1 2 2 2
15 CLOUDWALK - MT-001 0.0037 0.0037 0.0012 0.0045 0.0051 0.0042
91 90 93 92 92 132 131 129 77 79 77 77 75 113 113 117
16 COGENT-000 0.0128 0.0184 0.0250 0.0327 0.0407 0.0488 0.0611 0.0794 0.0559 0.0923 0.1342 0.1812 0.2243 0.2675 0.3240 0.3992

90 91 92 93 93 131 130 130 78 78 76 76 76 114 112 116
17 COGENT-001 0.0128 0.0184 0.0250 0.0327 0.0407 0.0488 0.0611 0.0794 0.0559 0.0923 0.1342 0.1812 0.2243 0.2675 0.3240 0.3992
69 66 63 64 62 102 100 100 69 68 67 67 67 105 106 107
18 COGENT-002 0.0081 0.0105 0.0123 0.0137 0.0157 0.0175 0.0215 0.0280 0.0499 0.0827 0.1207 0.1639 0.2037 0.2432 0.2972 0.3638
71 67 65 67 66 108 109 106 80 80 80 80 80 120 122 123
19 COGENT-003 0.0082 0.0108 0.0128 0.0145 0.0168 0.0191 0.0239 0.0312 0.0582 0.0971 0.1417 0.1918 0.2380 0.2836 0.3440 0.4207

59 53 45 39 31 66 67 70 63 65 65 65 64 106 109 109


20 COGENT-004 0.0066 0.0080 0.0085 0.0080 0.0083 0.0092 0.0106 0.0130 0.0410 0.0720 0.1099 0.1539 0.1974 0.2443 0.3043 0.3757

23 22 27 33 29 29
21 COGENT-006 0.0045 0.0049 0.0038 0.0370 0.0448 0.0602
105 103 103 103 102 144 144 143 100 99 99 99 99 138 137 135
22 COGNITEC -000 0.0265 0.0423 0.0588 0.0757 0.0894 0.1014 0.1169 0.1381 0.1522 0.2330 0.3051 0.3751 0.4300 0.4779 0.5307 0.5913
93 94 94 94 94 134 133 131 93 93 93 93 93 132 131 130
23 COGNITEC -001 0.0149 0.0228 0.0312 0.0399 0.0479 0.0546 0.0656 0.0806 0.0963 0.1562 0.2157 0.2771 0.3287 0.3771 0.4343 0.4959
77 80 81 81 81 120 118 117 72 71 72 73 71 107 105 105
24 COGNITEC -002 0.0101 0.0138 0.0170 0.0201 0.0237 0.0264 0.0309 0.0389 0.0517 0.0879 0.1269 0.1707 0.2098 0.2463 0.2919 0.3535
78 81 82 82 82 121 119 119 71 70 69 69 68 104 104 103
25 COGNITEC -003 0.0104 0.0140 0.0174 0.0205 0.0238 0.0266 0.0311 0.0401 0.0504 0.0855 0.1235 0.1662 0.2045 0.2403 0.2854 0.3451
64 63 62 59 59 101 96 96 53 53 52 51 50 86 83 82
26 COGNITEC -004 0.0073 0.0099 0.0118 0.0130 0.0147 0.0163 0.0189 0.0239 0.0325 0.0548 0.0798 0.1074 0.1325 0.1591 0.1952 0.2414
60 58 59 54 54 54
27 COGNITEC -006 0.0081 0.0086 0.0090 0.0777 0.0926 0.1274
7 5 5 4 4 8 14 14 6 6 7 8 8 17 18 18
28 CUBOX -000 0.0019 0.0024 0.0028 0.0031 0.0032 0.0037 0.0044 0.0027 0.0039 0.0059 0.0083 0.0111 0.0141 0.0185 0.0252 0.0339
50 45 41 35 32 67 68 66 32 33 33 32 33 66 65 65
29 CYBERLINK -002 0.0055 0.0068 0.0075 0.0078 0.0084 0.0094 0.0107 0.0114 0.0180 0.0302 0.0460 0.0643 0.0837 0.1058 0.1370 0.1787
35 34 27 25 25 56 53 55 19 19 20 21 21 48 50 52
30 CYBERLINK -003 0.0041 0.0052 0.0057 0.0058 0.0061 0.0068 0.0078 0.0078 0.0109 0.0175 0.0259 0.0356 0.0468 0.0594 0.0787 0.1072
30 28 28 27 28 61 64 63 30 32 31 30 30 63 62 61
31 DAHUA -002 0.0035 0.0047 0.0058 0.0067 0.0074 0.0082 0.0100 0.0108 0.0169 0.0294 0.0449 0.0635 0.0817 0.1013 0.1291 0.1638
19 19 19 20 20 48 55 50 29 30 29 29 29 61 61 59
32 DAHUA -003 0.0026 0.0036 0.0043 0.0050 0.0055 0.0062 0.0080 0.0073 0.0160 0.0280 0.0432 0.0615 0.0794 0.0987 0.1270 0.1587
17 16 14 13 13 32 40 39 12 10 11 11 11 22 22 23
33 DEEPGLINT-001 0.0024 0.0032 0.0037 0.0040 0.0043 0.0049 0.0060 0.0052 0.0058 0.0087 0.0119 0.0155 0.0199 0.0249 0.0338 0.0463
70 70 73 76 76 119 121 121 66 66 64 63 63 102 102 101
34 DEEPSEA -001 0.0081 0.0116 0.0149 0.0182 0.0216 0.0260 0.0332 0.0432 0.0458 0.0752 0.1086 0.1460 0.1812 0.2186 0.2663 0.3213
82 82 78 77 74 113 111 109 75 73 73 72 70 109 108 108
35 DERMALOG -006 0.0113 0.0142 0.0163 0.0183 0.0200 0.0218 0.0251 0.0329 0.0545 0.0889 0.1271 0.1697 0.2090 0.2498 0.3028 0.3670
88 88 88 88 87 126 127 127 92 92 92 92 92 131 130 131
36 DERMALOG -007 0.0125 0.0170 0.0214 0.0264 0.0309 0.0356 0.0432 0.0579 0.0910 0.1453 0.2009 0.2602 0.3134 0.3649 0.4289 0.5007
52 52 54 54 53 94 93 94 70 69 70 71 72 110 110 110
37 DERMALOG -008 0.0057 0.0077 0.0095 0.0110 0.0128 0.0148 0.0180 0.0223 0.0501 0.0850 0.1247 0.1692 0.2105 0.2541 0.3102 0.3762
41 38 30 45 45 45
38 DERMALOG -010 0.0056 0.0059 0.0043 0.0519 0.0643 0.0843

79 78 82 94 90 87

39 DILUSENSE -000 0.0123 0.0146 0.0180 0.1814 0.2149 0.2644
43 43 43 21 20 20
40 FIRSTCREDITKZ -001 0.0057 0.0066 0.0055 0.0240 0.0284 0.0368
65 63 65 75 74 72
41 FUJITSULAB -001 0.0089 0.0098 0.0111 0.1403 0.1723 0.2165
100 100 101 102 103 145 145 145 103 104 104 104 104 141 141 141
42 GORILLA -002 0.0213 0.0359 0.0528 0.0716 0.0895 0.1088 0.1367 0.1765 0.1828 0.2787 0.3654 0.4485 0.5168 0.5823 0.6508 0.7180
38 47 58 62 67 111 114 116 79 81 82 81 81 122 121 121
43 GORILLA -005 0.0044 0.0070 0.0102 0.0136 0.0170 0.0204 0.0272 0.0373 0.0566 0.0973 0.1432 0.1937 0.2398 0.2862 0.3437 0.4150
76 73 71 97 95 91
44 GORILLA -007 0.0108 0.0128 0.0145 0.1862 0.2198 0.2716

Table 7: Accuracy for the FRVT 2018 mugshot sets under ageing. The second row shows the time lapse between gallery and subsequent probe images, in years. The first two columns identify the algorithm. The next 8 values give rank-based FNIR with R = 1, T = 0 and FPIR = 1. All these are relevant to investigational uses where candidates from all searches would need human review. The second 8 values give threshold-based FNIR with T ≥ 0, FPIR = 0.001 and no rank criterion. The shaded cells indicate the three most accurate algorithms for that elapsed time. The gallery size is 3068801. The total number of searches is 10951064.
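The two miss rates tabulated here, rank-based FNIR(N, R=1, T=0) and threshold-based FNIR at a fixed FPIR, can be made concrete with a short sketch. It assumes that each mated search records whether the enrolled mate was returned, at what rank, and with what score; the struct and function names are placeholders, not the FRVT API.

#include <cstddef>
#include <vector>

struct MatedSearch {
    bool   mateReturned;   // mate appeared somewhere on the candidate list
    int    mateRank;       // 1 = top candidate (meaningful only if mateReturned)
    double mateScore;      // comparison score of the mate (meaningful only if mateReturned)
};

// Rank-based miss rate, FNIR(N, R=1, T=0): the mate must be the top candidate.
double fnirRankOne(const std::vector<MatedSearch>& searches)
{
    std::size_t misses = 0;
    for (const auto& s : searches)
        if (!s.mateReturned || s.mateRank != 1)
            ++misses;
    return static_cast<double>(misses) / static_cast<double>(searches.size());
}

// Threshold-based miss rate, FNIR(N, R=L, T): the mate may appear anywhere on
// the candidate list but must score at or above T, where T is set to give the
// target FPIR (here 0.001) on non-mated searches.
double fnirAtThreshold(const std::vector<MatedSearch>& searches, double threshold)
{
    std::size_t misses = 0;
    for (const auto& s : searches)
        if (!s.mateReturned || s.mateScore < threshold)
            ++misses;
    return static_cast<double>(misses) / static_cast<double>(searches.size());
}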

MISS RATES. Investigation: FNIR(N, R=1, T=0). Identification: FNIR(N, R=L, T ≥ 0) for FPIR = 0.001.
# | ALGORITHM | investigation FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18] | identification FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18]
58 56 58 84 79 80
45 GORILLA -008 0.0071 0.0081 0.0089 0.1557 0.1847 0.2340
26 23 25 34 32 30
46 GRIAULE -001 0.0046 0.0050 0.0038 0.0402 0.0487 0.0636
46 37 38 46 44 40
47 HZAILU -001 0.0058 0.0059 0.0049 0.0524 0.0630 0.0791
81 86 86 85 84 124 124 123 87 86 86 86 85 124 123 124
48 IDEMIA -003 0.0110 0.0151 0.0196 0.0238 0.0281 0.0313 0.0368 0.0504 0.0717 0.1147 0.1614 0.2113 0.2553 0.2976 0.3537 0.4334
80 84 85 84 83 123 123 124 58 55 54 53 52 85 82 83
49 IDEMIA -004 0.0107 0.0148 0.0192 0.0233 0.0277 0.0312 0.0367 0.0512 0.0373 0.0587 0.0833 0.1100 0.1340 0.1580 0.1911 0.2482
84 87 90 89 88 127 126 126 65 64 60 59 58 92 87 90
50 IDEMIA -005 0.0118 0.0167 0.0218 0.0270 0.0317 0.0357 0.0425 0.0579 0.0440 0.0689 0.0964 0.1254 0.1513 0.1762 0.2113 0.2698
87 89 89 87 86 125 122 122 62 59 57 52 49 78 76 75
51 IDEMIA -006 0.0124 0.0171 0.0218 0.0263 0.0302 0.0321 0.0356 0.0471 0.0409 0.0620 0.0850 0.1097 0.1309 0.1486 0.1738 0.2200
47 48 48 50 51 90 90 92 36 36 34 33 31 62 59 62
52 IDEMIA -007 0.0050 0.0071 0.0089 0.0106 0.0124 0.0142 0.0171 0.0220 0.0202 0.0335 0.0491 0.0663 0.0825 0.0999 0.1240 0.1645
5 6 6 5 7 11 17 19 3 3 5 5 5 11 10 10
53 IDEMIA -008 0.0018 0.0024 0.0029 0.0032 0.0035 0.0039 0.0046 0.0033 0.0034 0.0051 0.0069 0.0087 0.0102 0.0123 0.0146 0.0186
15 8 10 7 7 7
54 IDEMIA -009 0.0040 0.0042 0.0024 0.0094 0.0103 0.0123
33 33 31 29 30 64 65 64 39 39 40 40 38 71 70 68
55 IMAGUS -005 0.0039 0.0052 0.0061 0.0067 0.0077 0.0088 0.0103 0.0109 0.0212 0.0357 0.0539 0.0755 0.0967 0.1183 0.1485 0.1893
149 147 146
56 IMAGUS -008 0.1625 0.1704 0.1823
34 35 36 38 40 77 76 80 49 51 51 54 56 96 99 98
57 IMPERIAL -000 0.0040 0.0054 0.0067 0.0079 0.0093 0.0112 0.0139 0.0178 0.0286 0.0503 0.0779 0.1116 0.1455 0.1844 0.2341 0.2951
94 96 96 96 96 138 138 139 102 102 102 100 100 140 140 140
58 INCODE -003 0.0155 0.0247 0.0348 0.0463 0.0571 0.0674 0.0856 0.1114 0.1627 0.2507 0.3322 0.4122 0.4772 0.5368 0.6059 0.6766
56 59 59 61 64 104 107 105 73 74 75 75 77 115 114 113
59 INCODE -004 0.0061 0.0087 0.0110 0.0136 0.0161 0.0185 0.0236 0.0309 0.0532 0.0908 0.1334 0.1809 0.2245 0.2675 0.3249 0.3932
114 113 113 112 112 156 156 156 107 106 106 106 105 144 142 142
60 INNOVATRICS -004 0.3594 0.3629 0.3688 0.3754 0.3813 0.3870 0.3960 0.4135 0.4234 0.4642 0.5073 0.5522 0.5902 0.6274 0.6736 0.7253
41 41 42 45 45 80 81 81 55 56 58 58 59 99 98 96

61 INNOVATRICS -005 0.0046 0.0063 0.0078 0.0092 0.0106 0.0124 0.0149 0.0178 0.0343 0.0590 0.0886 0.1222 0.1544 0.1881 0.2321 0.2874
135 134 133
62 INTELLIVISION -002 0.0577 0.0694 0.0881
16 11 11 18 17 17
63 INTEMA -000 0.0040 0.0043 0.0024 0.0193 0.0235 0.0294

24 24 25 26 26 59 61 62 52 52 53 56 55 91 94 92
64 IREX -000 0.0031 0.0042 0.0051 0.0060 0.0068 0.0080 0.0095 0.0107 0.0313 0.0539 0.0815 0.1137 0.1442 0.1755 0.2181 0.2718

76 79 80 79 80 117 117 118 90 90 91 90 90 128 128 126
65 ISYSTEMS -002 0.0101 0.0135 0.0169 0.0197 0.0228 0.0256 0.0304 0.0398 0.0779 0.1258 0.1759 0.2299 0.2758 0.3204 0.3763 0.4401
75 69 69 69 70 110 106 103 84 84 84 84 83 118 117 115
66 ISYSTEMS -003 0.0089 0.0115 0.0139 0.0158 0.0177 0.0198 0.0234 0.0303 0.0647 0.1056 0.1502 0.1986 0.2402 0.2819 0.3351 0.3976
12 12 9 15 16 16
67 KAKAO -001 0.0039 0.0043 0.0022 0.0182 0.0220 0.0291
83 75 67 60 57 89 82 79 41 41 39 34 34 67 63 63
68 KEDACOM -001 0.0116 0.0130 0.0135 0.0133 0.0135 0.0141 0.0151 0.0176 0.0241 0.0360 0.0513 0.0689 0.0866 0.1060 0.1327 0.1694
25 19 16 26 26 25
69 LINECLOVA -002 0.0045 0.0049 0.0030 0.0307 0.0374 0.0497
86 83 77 70 71 106 99 99 64 62 61 61 60 98 97 94
70 LOOKMAN -003 0.0123 0.0144 0.0158 0.0168 0.0178 0.0188 0.0212 0.0260 0.0438 0.0687 0.0978 0.1296 0.1581 0.1879 0.2294 0.2756
85 77 70 66 61 99 91 89 51 49 46 46 46 80 81 81
71 LOOKMAN -005 0.0118 0.0134 0.0142 0.0144 0.0150 0.0160 0.0176 0.0213 0.0310 0.0480 0.0698 0.0954 0.1216 0.1491 0.1890 0.2381
82 80 78 90 84 84
72 MAXVISION -000 0.0128 0.0146 0.0169 0.1706 0.2023 0.2550
28 20 20 28 28 27
73 MAXVISION -001 0.0046 0.0049 0.0033 0.0351 0.0435 0.0581
115 115 115 116 116 160 160 161 111 112 112 112 112 151 151 151
74 MICROFOCUS -005 0.4269 0.5527 0.6355 0.7024 0.7503 0.7876 0.8234 0.8601 0.8338 0.9113 0.9468 0.9667 0.9771 0.9836 0.9880 0.9924
28 32 33 36 38 73 75 77 50 50 50 50 54 88 89 88
75 MICROSOFT-003 0.0034 0.0050 0.0064 0.0078 0.0092 0.0107 0.0135 0.0166 0.0288 0.0503 0.0763 0.1067 0.1359 0.1680 0.2116 0.2644
25 27 29 32 35 70 74 75 47 48 49 48 48 87 86 85
76 MICROSOFT-004 0.0032 0.0047 0.0060 0.0075 0.0087 0.0103 0.0131 0.0159 0.0268 0.0470 0.0716 0.1007 0.1291 0.1610 0.2052 0.2590
22 29 35 43 43 85 88 84 43 44 44 44 45 77 80 77
77 MICROSOFT-005 0.0031 0.0047 0.0066 0.0084 0.0103 0.0131 0.0164 0.0185 0.0243 0.0432 0.0658 0.0913 0.1172 0.1476 0.1874 0.2272
26 31 34 42 42 78 77 76 24 24 25 23 22 53 53 53
78 MICROSOFT-006 0.0032 0.0049 0.0065 0.0081 0.0096 0.0117 0.0144 0.0160 0.0134 0.0233 0.0346 0.0462 0.0578 0.0713 0.0903 0.1156
147 148 148 150 150 150
79 MUKH -002 0.1394 0.1754 0.2335 0.9761 0.9840 0.9899
97 99 99 99 98 141 141 141 89 89 89 89 89 127 125 125
80 NEC -000 0.0195 0.0316 0.0445 0.0581 0.0699 0.0817 0.0998 0.1237 0.0759 0.1245 0.1729 0.2240 0.2671 0.3117 0.3639 0.4348
104 102 100 100 101 143 142 142 94 94 94 94 94 133 133 132
81 NEC -001 0.0246 0.0382 0.0524 0.0672 0.0793 0.0904 0.1076 0.1317 0.1019 0.1623 0.2214 0.2834 0.3341 0.3844 0.4440 0.5183
27 22 18 16 15 31 33 28 15 11 10 10 9 14 15 15
82 NEC -002 0.0033 0.0041 0.0043 0.0044 0.0045 0.0049 0.0056 0.0041 0.0066 0.0090 0.0111 0.0131 0.0149 0.0171 0.0207 0.0267

31 26 24 24 24 51 51 53 9 9 9 7 6 13 12 12
83 NEC -003 0.0036 0.0046 0.0051 0.0055 0.0059 0.0067 0.0077 0.0073 0.0056 0.0076 0.0091 0.0105 0.0119 0.0137 0.0162 0.0209

32 25 22 18 14 30 30 23 7 5 2 2 1 4 4 4
84 NEC -004 0.0039 0.0045 0.0047 0.0046 0.0044 0.0046 0.0052 0.0036 0.0046 0.0057 0.0063 0.0066 0.0069 0.0076 0.0090 0.0105
10 6 6 5 5 5
85 NEC -005 0.0037 0.0041 0.0020 0.0080 0.0091 0.0107
13 9 7 1 1 1
86 NEC -006 0.0039 0.0042 0.0021 0.0030 0.0033 0.0012
101 101 102 101 100 140 140 138 109 109 110 110 110 149 149 149
87 NEUROTECHNOLOGY-003 0.0234 0.0379 0.0549 0.0682 0.0720 0.0747 0.0886 0.1066 0.6802 0.8187 0.8920 0.9355 0.9594 0.9738 0.9828 0.9885
79 78 76 73 72 112 110 107 83 82 81 79 78 116 115 114
88 NEUROTECHNOLOGY-004 0.0104 0.0134 0.0156 0.0173 0.0195 0.0212 0.0245 0.0320 0.0642 0.1015 0.1426 0.1881 0.2299 0.2722 0.3269 0.3943

Table 8: Accuracy for the FRVT 2018 mugshot sets under ageing. The second row shows the time lapse between gallery and subsequent probe images, in years. The first two columns identify the algorithm. The next 8 values give rank-based FNIR with R = 1, T = 0 and FPIR = 1. All these are relevant to investigational uses where candidates from all searches would need human review. The second 8 values give threshold-based FNIR with T ≥ 0, FPIR = 0.001 and no rank criterion. The shaded cells indicate the three most accurate algorithms for that elapsed time. The gallery size is 3068801. The total number of searches is 10951064.

MISS RATES. Investigation: FNIR(N, R=1, T=0). Identification: FNIR(N, R=L, T ≥ 0) for FPIR = 0.001.
# | ALGORITHM | investigation FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18] | identification FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18]
74 71 68 68 69 109 105 104 76 76 74 74 74 112 111 111
89 NEUROTECHNOLOGY-005 0.0089 0.0116 0.0136 0.0152 0.0173 0.0196 0.0233 0.0306 0.0556 0.0913 0.1315 0.1766 0.2192 0.2617 0.3174 0.3843
66 65 64 65 63 103 102 101 82 85 85 85 86 126 127 127
90 NEUROTECHNOLOGY-007 0.0078 0.0103 0.0124 0.0140 0.0161 0.0185 0.0225 0.0290 0.0641 0.1069 0.1546 0.2075 0.2572 0.3081 0.3713 0.4421
37 41 41 58 56 56
91 NEUROTECHNOLOGY-010 0.0053 0.0061 0.0053 0.0863 0.1050 0.1333
22 25 24 50 49 50
92 NEUROTECHNOLOGY-012 0.0044 0.0051 0.0038 0.0638 0.0783 0.1027
112 112 112 113 113 157 157 157 113 113 113 113 113 153 156 156
93 NOBLIS -002 0.1520 0.2419 0.3296 0.4114 0.4856 0.5528 0.6061 0.6532 0.9984 0.9996 0.9998 0.9999 0.9999 1.0000 1.0000 1.0000
65 76 87 90 91 133 136 137 68 72 79 83 87 129 129 129
94 NTECHLAB -003 0.0078 0.0131 0.0202 0.0295 0.0405 0.0543 0.0761 0.1035 0.0491 0.0881 0.1384 0.1985 0.2594 0.3270 0.4065 0.4891
62 68 79 86 89 130 132 134 60 63 66 66 73 121 126 128
95 NTECHLAB -004 0.0068 0.0110 0.0167 0.0239 0.0330 0.0447 0.0641 0.0891 0.0379 0.0688 0.1108 0.1629 0.2192 0.2846 0.3657 0.4524
51 62 72 83 85 128 129 132 56 60 63 64 66 111 118 122
96 NTECHLAB -006 0.0056 0.0095 0.0148 0.0218 0.0301 0.0413 0.0591 0.0814 0.0349 0.0636 0.1023 0.1506 0.2024 0.2617 0.3374 0.4185
37 43 49 57 60 107 112 112 45 46 48 49 51 89 91 93
97 NTECHLAB -007 0.0044 0.0066 0.0089 0.0118 0.0150 0.0189 0.0255 0.0342 0.0256 0.0450 0.0705 0.1012 0.1334 0.1692 0.2170 0.2752
18 21 26 31 44 91 108 113 26 28 32 37 40 76 85 89
98 NTECHLAB -008 0.0025 0.0038 0.0052 0.0074 0.0104 0.0146 0.0236 0.0348 0.0143 0.0267 0.0459 0.0733 0.1062 0.1469 0.2044 0.2698
13 15 16 17 19 53 59 61 18 17 17 17 18 36 37 44
99 NTECHLAB -009 0.0022 0.0031 0.0038 0.0045 0.0055 0.0067 0.0088 0.0100 0.0073 0.0117 0.0170 0.0238 0.0319 0.0419 0.0577 0.0833
42 44 51 29 30 35
100 NTECHLAB -011 0.0056 0.0066 0.0073 0.0351 0.0475 0.0724
34 32 33 44 43 42
101 PANGIAM -000 0.0051 0.0055 0.0046 0.0503 0.0617 0.0810
86 85 74
102 PANGIAM -001 0.0132 0.0153 0.0153
53 58 60 63 65 105 104 102
103 PARAVISION -002 0.0058 0.0083 0.0111 0.0137 0.0162 0.0187 0.0229 0.0295
44 44 51 52 54 92 92 91 57 58 59 60 61 100 100 97
104 PARAVISION -003 0.0048 0.0067 0.0090 0.0109 0.0128 0.0148 0.0178 0.0219 0.0354 0.0618 0.0931 0.1290 0.1625 0.1964 0.2408 0.2924
16 17 17 19 18 47 49 49 20 23 24 24 24 55 55 55
105 PARAVISION -004 0.0024 0.0032 0.0040 0.0047 0.0053 0.0061 0.0073 0.0072 0.0118 0.0209 0.0327 0.0465 0.0613 0.0779 0.1008 0.1285

12 13 13 14 16 40 45 48 11 12 12 14 15 31 34 34
106 PARAVISION -005 0.0021 0.0028 0.0035 0.0041 0.0046 0.0054 0.0067 0.0070 0.0057 0.0093 0.0144 0.0207 0.0278 0.0368 0.0508 0.0715
6 8 7 8 8 19 21 17 10 13 14 13 14 30 31 31
107 PARAVISION -007 0.0019 0.0025 0.0029 0.0033 0.0036 0.0042 0.0049 0.0030 0.0057 0.0094 0.0144 0.0206 0.0275 0.0357 0.0485 0.0652
17 15 13 24 25 26

108 PARAVISION -009 0.0041 0.0046 0.0026 0.0283 0.0371 0.0525
72 73 71 72 73 114 113 114 97 97 97 97 97 136 136 137
109 PIXELALL -002 0.0085 0.0119 0.0147 0.0172 0.0198 0.0225 0.0270 0.0349 0.1193 0.1900 0.2601 0.3332 0.3955 0.4565 0.5268 0.6030
46 42 39 34 33 68 69 67 44 43 43 43 43 73 73 73
110 PIXELALL -003 0.0050 0.0063 0.0072 0.0077 0.0085 0.0095 0.0113 0.0119 0.0248 0.0418 0.0622 0.0861 0.1104 0.1364 0.1723 0.2167
45 40 40 37 36 72 72 72 38 40 42 42 39 72 72 76
111 PIXELALL -004 0.0049 0.0063 0.0072 0.0079 0.0089 0.0103 0.0127 0.0146 0.0211 0.0360 0.0553 0.0792 0.1045 0.1317 0.1700 0.2246
54 55 55 53 49 84 79 83 59 57 55 55 53 83 77 79
112 PTAKURATSATU -000 0.0061 0.0082 0.0097 0.0109 0.0120 0.0131 0.0146 0.0180 0.0375 0.0596 0.0842 0.1116 0.1357 0.1553 0.1820 0.2326
99 98 98 98 99 142 143 144 95 95 95 95 96 135 135 138
113 RANKONE -002 0.0212 0.0313 0.0431 0.0562 0.0712 0.0881 0.1130 0.1543 0.1111 0.1707 0.2305 0.2968 0.3646 0.4345 0.5172 0.6110
108 107 107 107 107 148 149 149 104 103 103 101 101 139 139 139
114 RANKONE -004 0.0424 0.0643 0.0875 0.1127 0.1364 0.1579 0.1914 0.2378 0.1855 0.2681 0.3431 0.4155 0.4785 0.5350 0.5980 0.6722
92 93 91 91 90 129 128 128 81 75 71 68 65 103 103 104
115 RANKONE -005 0.0136 0.0192 0.0246 0.0303 0.0362 0.0422 0.0521 0.0694 0.0582 0.0910 0.1260 0.1645 0.2005 0.2353 0.2816 0.3522
67 64 61 58 58 98 98 97 42 42 41 38 37 69 67 67
116 RANKONE -007 0.0078 0.0099 0.0113 0.0123 0.0139 0.0156 0.0191 0.0242 0.0242 0.0376 0.0542 0.0737 0.0935 0.1130 0.1416 0.1811
48 49 46 47 47 83 89 93 37 38 37 36 36 70 71 70
117 RANKONE -009 0.0054 0.0072 0.0085 0.0098 0.0113 0.0130 0.0169 0.0220 0.0208 0.0345 0.0504 0.0706 0.0930 0.1174 0.1504 0.2002
42 38 38 33 34 69 70 68 31 29 26 26 23 52 51 49
118 RANKONE -010 0.0047 0.0061 0.0070 0.0076 0.0087 0.0098 0.0113 0.0120 0.0177 0.0269 0.0368 0.0479 0.0590 0.0688 0.0803 0.0991
23 23 23 23 22 54 50 52 23 20 21 20 20 43 42 38
119 RANKONE -011 0.0031 0.0041 0.0047 0.0053 0.0058 0.0067 0.0077 0.0073 0.0127 0.0194 0.0265 0.0345 0.0422 0.0499 0.0611 0.0756
50 47 40 37 35 32
120 RANKONE -012 0.0065 0.0069 0.0053 0.0460 0.0540 0.0672
35 27 21 25 24 21
121 RANKONE -013 0.0051 0.0051 0.0035 0.0306 0.0355 0.0405
107 108 108 108 109 153 155 155 105 105 105 105 106 145 146 146
122 REALNETWORKS -002 0.0381 0.0687 0.1062 0.1495 0.1963 0.2513 0.3206 0.3927 0.2153 0.3323 0.4444 0.5485 0.6355 0.7132 0.7855 0.8437
103 105 105 106 106 151 151 152 98 100 101 103 103 143 144 144
123 REALNETWORKS -003 0.0245 0.0437 0.0686 0.0975 0.1312 0.1719 0.2294 0.2907 0.1468 0.2370 0.3313 0.4269 0.5142 0.5979 0.6815 0.7567
102 104 104 105 105 150 150 151 99 101 100 102 102 142 143 143
124 REALNETWORKS -004 0.0244 0.0428 0.0663 0.0939 0.1251 0.1634 0.2170 0.2785 0.1484 0.2377 0.3303 0.4249 0.5106 0.5924 0.6758 0.7534
57 52 56 64 60 60
125 REALNETWORKS -006 0.0069 0.0077 0.0080 0.1022 0.1253 0.1622
33 31 36 38 38 37
126 REALNETWORKS -008 0.0049 0.0054 0.0047 0.0462 0.0577 0.0745
29 26 26 40 40 39

127 S 1-002 0.0046 0.0051 0.0038 0.0482 0.0597 0.0788

44 42 44 51 52 51
128 S 1-003 0.0057 0.0063 0.0056 0.0681 0.0839 0.1061
68 72 75 78 78 118 120 120 88 88 87 87 84 123 120 119
129 SCANOVATE -001 0.0079 0.0117 0.0151 0.0185 0.0221 0.0259 0.0321 0.0427 0.0727 0.1169 0.1650 0.2115 0.2528 0.2925 0.3437 0.4084
96 92 84 75 68 87 60 45 40 25 19 18 12 19 14 13
130 SENSETIME -002 0.0186 0.0191 0.0183 0.0179 0.0173 0.0133 0.0089 0.0059 0.0220 0.0236 0.0237 0.0240 0.0245 0.0219 0.0195 0.0222
11 12 11 7 6 14 18 18 8 8 6 4 4 10 11 11
131 SENSETIME -003 0.0021 0.0028 0.0031 0.0033 0.0035 0.0040 0.0047 0.0033 0.0046 0.0064 0.0076 0.0086 0.0101 0.0122 0.0155 0.0196
3 3 3 3 3 5 13 12 4 4 3 3 3 12 13 14
132 SENSETIME -004 0.0016 0.0022 0.0025 0.0028 0.0030 0.0035 0.0043 0.0025 0.0036 0.0052 0.0066 0.0081 0.0099 0.0126 0.0169 0.0230

Table 9: Accuracy for the FRVT 2018 mugshot sets under ageing. The second row shows the time lapse between gallery and subsequent probe images, in years. The first two columns identify the algorithm. The next 8 values give rank-based FNIR with R = 1, T = 0 and FPIR = 1. All these are relevant to investigational uses where candidates from all searches would need human review. The second 8 values give threshold-based FNIR with T ≥ 0, FPIR = 0.001 and no rank criterion. The shaded cells indicate the three most accurate algorithms for that elapsed time. The gallery size is 3068801. The total number of searches is 10951064.

MISS RATES. Investigation: FNIR(N, R=1, T=0). Identification: FNIR(N, R=L, T ≥ 0) for FPIR = 0.001.
# | ALGORITHM | investigation FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18] | identification FNIR by time lapse (years): (0, 2] (2, 4] (4, 6] (6, 8] (8, 10] (10, 12] (12, 14] (14, 18]
2 2 2 2 2 4 10 15 5 7 8 9 10 20 23 24
133 SENSETIME -005 0.0015 0.0020 0.0024 0.0026 0.0029 0.0035 0.0043 0.0028 0.0036 0.0059 0.0089 0.0128 0.0177 0.0240 0.0345 0.0493
1 1 1 1 1 1 5 8 2 2 4 6 7 16 19 19
134 SENSETIME -006 0.0015 0.0019 0.0022 0.0025 0.0027 0.0033 0.0040 0.0021 0.0031 0.0049 0.0068 0.0097 0.0132 0.0184 0.0262 0.0359
3 2 3 9 9 9
135 SENSETIME -007 0.0035 0.0038 0.0015 0.0112 0.0140 0.0176
2 4 4 8 8 8
136 SENSETIME -008 0.0034 0.0039 0.0017 0.0103 0.0127 0.0163
117 117 117 117 117 161 161 160 112 110 109 109 109 148 148 147
137 SIAT-002 0.8309 0.8310 0.8311 0.8306 0.8296 0.8302 0.8300 0.8301 0.8340 0.8368 0.8404 0.8445 0.8480 0.8532 0.8595 0.8691
89 85 83 80 79 115 115 110 85 83 83 82 82 119 119 118
138 SYNESIS -003 0.0125 0.0151 0.0174 0.0199 0.0223 0.0240 0.0279 0.0331 0.0658 0.1052 0.1483 0.1968 0.2399 0.2834 0.3405 0.4046
40 37 37 40 37 71 71 73 46 45 45 45 44 74 75 74
139 SYNESIS -005 0.0044 0.0058 0.0070 0.0080 0.0091 0.0103 0.0125 0.0152 0.0262 0.0444 0.0666 0.0923 0.1156 0.1399 0.1736 0.2185
136 135 135
140 T 4 ISB -000 0.0606 0.0748 0.0970
57 61 66 71 77 122 125 125 86 87 90 91 91 130 132 133
141 TECH 5-001 0.0061 0.0093 0.0128 0.0171 0.0221 0.0289 0.0412 0.0560 0.0660 0.1156 0.1733 0.2385 0.2998 0.3629 0.4424 0.5284
73 74 74 74 75 116 116 115
142 TOSHIBA -001 0.0086 0.0119 0.0150 0.0178 0.0209 0.0241 0.0292 0.0365
36 36 30 28 27 62 62 60 35 37 38 35 35 68 66 66
143 TRUEFACE -000 0.0043 0.0057 0.0061 0.0067 0.0073 0.0084 0.0097 0.0099 0.0200 0.0338 0.0504 0.0705 0.0904 0.1112 0.1401 0.1792
58 56 56 56 56 93 94 90 61 61 62 62 62 101 101 102
144 VERIDAS -001 0.0063 0.0083 0.0099 0.0113 0.0132 0.0148 0.0184 0.0219 0.0403 0.0684 0.1012 0.1386 0.1741 0.2113 0.2611 0.3233
43 46 52 55 55 96 95 98 74 77 78 78 79 117 116 112
145 VISIONLABS -004 0.0048 0.0069 0.0091 0.0111 0.0130 0.0152 0.0187 0.0242 0.0540 0.0916 0.1358 0.1855 0.2303 0.2745 0.3312 0.3913
39 39 43 46 46 81 83 85 67 67 68 70 69 108 107 106


146 VISIONLABS -005 0.0044 0.0063 0.0081 0.0095 0.0109 0.0125 0.0151 0.0187 0.0479 0.0812 0.1212 0.1664 0.2078 0.2473 0.2999 0.3577
29 30 32 30 29 63 66 69 48 47 47 47 47 79 78 78
147 VISIONLABS -006 0.0035 0.0048 0.0061 0.0069 0.0077 0.0087 0.0105 0.0120 0.0273 0.0465 0.0702 0.0970 0.1228 0.1486 0.1847 0.2295
21 20 21 22 23 52 57 57 27 27 28 27 27 56 57 57
148 VISIONLABS -008 0.0028 0.0037 0.0047 0.0053 0.0058 0.0067 0.0081 0.0085 0.0143 0.0241 0.0373 0.0519 0.0677 0.0850 0.1104 0.1444
10 10 10 10 10 21 29 34 14 15 15 15 16 32 33 33

149 VISIONLABS -009 0.0020 0.0026 0.0030 0.0034 0.0038 0.0044 0.0052 0.0046 0.0065 0.0105 0.0156 0.0217 0.0289 0.0368 0.0499 0.0681
9 9 9 11 9 20 24 35 17 16 16 16 17 35 36 36
150 VISIONLABS -010 0.0020 0.0025 0.0030 0.0034 0.0036 0.0043 0.0051 0.0047 0.0069 0.0113 0.0170 0.0238 0.0316 0.0411 0.0557 0.0740
18 16 22 23 21 22
151 VISIONLABS -011 0.0042 0.0046 0.0036 0.0270 0.0337 0.0432
100 97 95 93 88 86

152 VIXVIZION -009 0.0161 0.0190 0.0238 0.1787 0.2116 0.2595

38 39 32 47 47 47
153 VNPT-002 0.0053 0.0059 0.0044 0.0534 0.0670 0.0882
116 116 116 115 115 159 159 158 108 108 108 108 107 146 145 145
154 VTS -000 0.5878 0.6312 0.6602 0.6863 0.7073 0.7246 0.7458 0.7747 0.5929 0.6397 0.6729 0.7034 0.7279 0.7493 0.7739 0.8076
39 36 42 49 48 48
155 VTS -003 0.0054 0.0059 0.0054 0.0597 0.0731 0.0950
20 18 20 21 21 55 54 54 28 31 30 31 32 65 64 64
156 XFORWARDAI -000 0.0027 0.0034 0.0044 0.0052 0.0058 0.0067 0.0079 0.0076 0.0157 0.0281 0.0443 0.0635 0.0834 0.1050 0.1330 0.1714
15 11 12 12 12 24 28 31 13 14 13 12 13 27 27 28
157 XFORWARDAI -001 0.0023 0.0028 0.0034 0.0037 0.0039 0.0045 0.0052 0.0043 0.0060 0.0096 0.0144 0.0200 0.0260 0.0334 0.0435 0.0586
60 57 53 48 50 95 101 108 33 34 35 39 41 81 92 99
158 YITU -002 0.0066 0.0083 0.0094 0.0101 0.0121 0.0150 0.0223 0.0328 0.0189 0.0317 0.0494 0.0750 0.1066 0.1494 0.2171 0.2958
63 60 57 51 52 97 103 111 34 35 36 41 42 82 93 100
159 YITU -003 0.0072 0.0089 0.0100 0.0107 0.0125 0.0153 0.0226 0.0334 0.0194 0.0321 0.0500 0.0756 0.1071 0.1500 0.2177 0.2964
55 51 44 41 39 74 86 88 22 22 23 25 26 60 69 71
160 YITU -004 0.0061 0.0075 0.0081 0.0081 0.0092 0.0107 0.0154 0.0207 0.0125 0.0204 0.0314 0.0469 0.0671 0.0955 0.1421 0.2006
61 54 47 44 41 75 84 87 21 21 22 22 25 59 68 69
161 YITU -005 0.0067 0.0080 0.0087 0.0085 0.0094 0.0108 0.0151 0.0204 0.0124 0.0198 0.0308 0.0462 0.0667 0.0953 0.1418 0.1930

Table 10: Accuracy for the FRVT 2018 mugshot sets under ageing. The second row shows the time lapse between gallery and subsequent probe images, in years. The first two columns identify the algorithm. The next 8 values give rank-based FNIR with R = 1, T = 0 and FPIR = 1. All these are relevant to investigational uses where candidates from all searches would need human review. The second 8 values give threshold-based FNIR with T ≥ 0, FPIR = 0.001 and no rank criterion. The shaded cells indicate the three most accurate algorithms for that elapsed time. The gallery size is 3068801. The total number of searches is 10951064.

# | ALGORITHM | INVESTIGATION MODE: rank one miss rate, FNIR(N, 0, 1), N = 1.6M | IDENTIFICATION MODE: high T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M | FAILURE TO EXTRACT FEATURES
Gallery/probe pairs, investigation and identification modes: mugshot/mugshot, mugshot/webcam, mugshot/profile, visa/border, border/border >10 yr, visa/kiosk. Failure-to-extract pairs: mugshot/mugshot, mugshot/webcam, mugshot/profile, visa/border, border/border >10 yr, kiosk/kiosk.
281 270 181 206 133 201 276 270 255 207 117 203
1 20 FACE -000 0.055 0.085 0.736 0.056 0.239 0.243 0.348 0.450 1.000 0.424 0.772 0.938 0.000 0.000 0.000 0.000
290 287 220 226 285 284 220 184
2 3 DIVI -003 0.083 0.206 0.141 0.474 0.400 0.626 0.605 0.821 0.002 0.005
248 258 198 205 255 260 196 159
3 3 DIVI -004 0.018 0.062 0.035 0.279 0.169 0.343 0.277 0.607 0.002 0.005
249 257 228 239 206 252 258 169 227 158
4 3 DIVI -005 0.018 0.062 0.930 0.821 0.279 0.166 0.339 0.996 0.864 0.597 0.002 0.005 0.442
259 265 202 215 254 259 197 162
5 3 DIVI -006 0.024 0.074 0.047 0.312 0.168 0.342 0.283 0.615 0.002 0.005
225 220 208 185 188 244 237 118 191 142
6 ACER -000 0.011 0.036 0.827 0.025 0.209 0.146 0.246 0.981 0.201 0.490 0.000 0.000 0.042
177 167 125 148 111 89 186 164 205 149 104 141
7 ACER -001 0.005 0.020 0.422 0.008 0.050 0.098 0.056 0.109 0.999 0.068 0.406 0.479 0.001 0.001 0.041 0.000
183 181 171 172 113 171 206 192 147 163 97 120
8 AIZE -001 0.006 0.022 0.683 0.016 0.050 0.165 0.077 0.143 0.994 0.101 0.364 0.387 0.001 0.001 0.047 0.000
244 245 215 201 211 241 221 185 185 180
9 ALCHERA -000 0.016 0.047 0.870 0.046 0.292 0.138 0.216 0.999 0.176 0.803 0.006 0.014 0.328
319 315 241 277 316 319 317 279
10 ALCHERA -001 0.987 1.000 1.000 1.000 0.999 1.000 1.000 1.000 0.006 0.013 0.324
292 284 242 236 224 292 281 214 226 181
11 ALCHERA -002 0.095 0.166 0.954 0.668 0.446 0.486 0.591 1.000 0.827 0.811 0.001 0.002 0.106
222 218 182 173 186 246 233 198 184 136
12 ALCHERA -003 0.010 0.035 0.741 0.016 0.206 0.155 0.239 0.999 0.172 0.464 0.001 0.002 0.106
228 224 117 174 124 160 284 276 139 208 111 153
13 ALCHERA -004 0.011 0.038 0.345 0.017 0.088 0.144 0.394 0.529 0.991 0.424 0.708 0.546 0.001 0.001 0.046 0.000
231 214 218 180 208 218 208 136 166 150

14 ALLGOVISION -000 0.011 0.033 0.894 0.021 0.282 0.088 0.166 0.990 0.117 0.526 0.002 0.003 0.122
211 230 167 179 199 224 225 125 178 143
15 ALLGOVISION -001 0.009 0.038 0.661 0.021 0.241 0.102 0.221 0.986 0.150 0.491 0.001 0.001 0.042
239 225 231 253 312 228 224 146 279 226
16 ANKE -000 0.013 0.038 0.931 1.000 1.000 0.117 0.220 0.994 1.000 1.000 0.000 0.001 0.080
240 226 237 244 315 232 223 153 268 235

17 ANKE -001 0.013 0.038 0.946 1.000 1.000 0.119 0.220 0.994 1.000 1.000 0.000 0.001 0.080

138 140 144 110 132 147 128 88 108 86
18 ANKE -002 0.003 0.016 0.522 0.005 0.119 0.032 0.079 0.948 0.034 0.245 0.001 0.001 0.049
267 271 254 226 210 236 252 122 209 151
19 AWARE -003 0.031 0.090 0.966 0.316 0.290 0.128 0.298 0.984 0.428 0.530 0.004 0.003 0.874
285 286 263 218 222 268 275 216 204 182
20 AWARE -004 0.068 0.176 0.976 0.122 0.414 0.269 0.509 1.000 0.397 0.816 0.003 0.003 0.776
268 259 264 204 214 279 239 222 195 197
21 AWARE -005 0.031 0.067 0.978 0.048 0.308 0.364 0.253 1.000 0.255 0.916 0.001 0.002 0.189
287 280 266 217 223 269 263 209 202 174
22 AWARE -006 0.070 0.128 0.983 0.111 0.421 0.276 0.398 0.999 0.368 0.749 0.001 0.002 0.189
312 310 275 234 238 304 303 175 231 212
23 AYONIX -000 0.450 0.685 0.996 0.607 0.867 0.811 0.939 0.998 0.954 0.982 0.010 0.031 0.939
307 302 271 240 235 306 298 204 236 208
24 AYONIX -001 0.341 0.527 0.993 0.994 0.778 0.824 0.920 0.999 0.999 0.969 0.010 0.031 0.939
306 303 270 230 234 305 299 207 228 209
25 AYONIX -002 0.341 0.527 0.993 0.464 0.778 0.824 0.920 0.999 0.915 0.969 0.010 0.031 0.939
280 272 221 214 219 201 184 99 165 124
26 CAMVI -003 0.052 0.090 0.911 0.093 0.360 0.071 0.132 0.970 0.114 0.402 0.006 0.013 0.675
278 266 184 212 213 202 186 203 162 178
27 CAMVI -004 0.047 0.077 0.744 0.072 0.296 0.072 0.136 0.999 0.100 0.787 0.000 0.000 0.000
284 278 186 215 218 222 215 213 179 220
28 CAMVI -005 0.065 0.103 0.746 0.098 0.341 0.099 0.179 1.000 0.156 0.999 0.000 0.000 0.000
15 5 42 27 20 24 43 30 22 30 29 40
29 CANON -001 0.001 0.006 0.088 0.001 0.007 0.062 0.005 0.023 0.365 0.008 0.068 0.139 0.001 0.000 0.042 0.000
23 6 52 16 24 20 36 27 26 54 34 71
30 CANON -002 0.001 0.006 0.106 0.001 0.007 0.059 0.005 0.020 0.407 0.013 0.075 0.188 0.001 0.000 0.042 0.000
58 31 50 46 51 37 76 71 230 65 51 194
31 CIB -000 0.002 0.008 0.100 0.002 0.011 0.069 0.012 0.045 1.000 0.017 0.141 0.894 0.000 0.000 0.000 0.000
16 14 11 25 15 13 45 35 104 31 23 98
32 CLEARVIEWAI -000 0.001 0.007 0.062 0.001 0.006 0.056 0.006 0.025 0.974 0.008 0.057 0.268 0.000 0.000 0.037 0.000
54 55 15 41 18 14 13 12 3 15 10 20
33 CLOUDWALK - HR -000 0.001 0.010 0.064 0.002 0.006 0.057 0.002 0.013 0.133 0.005 0.033 0.099 0.001 0.000 0.042 0.000
76 74 5 9 5 5 12 11 2 3 2 2
34 CLOUDWALK - MT-000 0.002 0.011 0.057 0.001 0.004 0.051 0.002 0.013 0.109 0.002 0.018 0.072 0.001 0.000 0.042 0.000
75 75 2 1 1 1 10 4 1 1 1 1
35 CLOUDWALK - MT-001 0.002 0.011 0.053 0.001 0.003 0.042 0.002 0.012 0.070 0.001 0.015 0.056 0.001 0.000 0.042 0.000

224 242 253 176 188 158
36 COGENT-000 0.010 0.046 0.965 0.053 0.140 0.995 0.000 0.000 0.000

223 243 252 177 189 159
37 COGENT-001 0.010 0.046 0.965 0.053 0.140 0.995 0.000 0.000 0.000
153 169 226 163 154 174
38 COGENT-002 0.004 0.020 0.925 0.044 0.098 0.998 0.000 0.000 0.000
155 174 235 168 148 177
39 COGENT-003 0.004 0.021 0.939 0.046 0.095 0.998 0.000 0.000 0.000
97 106 225 96 76 125 148 77 172 79 47 133
40 COGENT-004 0.002 0.013 0.922 0.004 0.019 0.113 0.033 0.051 0.997 0.022 0.126 0.456 0.000 0.000 0.000 0.000
69 63 59 47 48 134 60 60 132 49 37 195
41 COGENT-005 0.002 0.010 0.126 0.002 0.010 0.120 0.009 0.037 0.989 0.011 0.082 0.905 0.000 0.000 0.000 0.000
33 19 25 18 29 99 32 31 15 20 105 36
42 COGENT-006 0.001 0.007 0.067 0.001 0.007 0.101 0.004 0.023 0.238 0.006 0.422 0.130 0.000 0.000 0.041 0.000
261 253 250 250 253 140
43 COGNITEC -000 0.025 0.059 0.964 0.161 0.303 0.992 0.003 0.002 0.924
232 216 244 223 229 309
44 COGNITEC -001 0.012 0.034 0.958 0.102 0.230 1.000 0.003 0.002 0.924
184 198 239 179 214 229


45 COGNITEC -002 0.006 0.025 0.949 0.053 0.178 1.000 0.003 0.002 0.924
188 197 230 175 206 232
46 COGNITEC -003 0.006 0.025 0.930 0.053 0.162 1.000 0.004 0.002 0.878

Table 11: Miss rates by dataset. At left, rank 1 miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.
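The identification-mode columns depend on a threshold chosen so that non-mated searches only rarely return candidates. The report does not give its calibration procedure; the sketch below shows one straightforward way such a threshold could be derived, assuming only the top candidate score of each non-mated search is available. All names are illustrative.

#include <algorithm>
#include <cstddef>
#include <vector>

// topNonmateScores holds, for each non-mated search, its highest candidate score.
// Returns an approximate threshold T such that the fraction of non-mated
// searches scoring at or above T, i.e. FPIR(N, T), is close to targetFpir.
double thresholdForTargetFpir(std::vector<double> topNonmateScores, double targetFpir)
{
    std::sort(topNonmateScores.begin(), topNonmateScores.end());
    const std::size_t n = topNonmateScores.size();
    std::size_t index = static_cast<std::size_t>((1.0 - targetFpir) * static_cast<double>(n));
    if (index >= n)
        index = n - 1;
    return topNonmateScores[index];
}

// Example: for a target FPIR of 0.001 over one million non-mated searches,
// the returned value is roughly the 999000-th smallest top non-mate score.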

# | ALGORITHM | INVESTIGATION MODE: rank one miss rate, FNIR(N, 0, 1), N = 1.6M | IDENTIFICATION MODE: high T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M | FAILURE TO EXTRACT FEATURES
Gallery/probe pairs, investigation and identification modes: mugshot/mugshot, mugshot/webcam, mugshot/profile, visa/border, border/border >10 yr, visa/kiosk. Failure-to-extract pairs: mugshot/mugshot, mugshot/webcam, mugshot/profile, visa/border, border/border >10 yr, kiosk/kiosk.
146 139 205 164 117 159 146 152 133 148 94 100
47 COGNITEC -004 0.003 0.016 0.813 0.013 0.057 0.143 0.031 0.097 0.990 0.068 0.316 0.288 0.002 0.001 0.635 0.006
66 61 178 181 108 128 62 67 301 120 61 66
48 COGNITEC -005 0.002 0.010 0.713 0.021 0.037 0.115 0.010 0.041 1.000 0.041 0.157 0.179 0.002 0.001 0.614 0.017
62 50 175 138 87 120 55 63 283 96 67 168
49 COGNITEC -006 0.002 0.010 0.703 0.007 0.024 0.111 0.008 0.040 1.000 0.030 0.171 0.681 0.002 0.001 0.568 0.003
47 58 8 30 7 3 21 22 6 11 9 3
50 CUBOX -000 0.001 0.010 0.058 0.002 0.004 0.049 0.003 0.019 0.168 0.004 0.028 0.073 0.001 0.000 0.042 0.000
157 166 179 141 152 187 168 162 146 114
51 CYBERLINK -000 0.004 0.020 0.717 0.007 0.134 0.056 0.116 0.995 0.063 0.339 0.001 0.001 0.063
151 154 180 134 151 180 165 157 143 164
52 CYBERLINK -001 0.004 0.018 0.731 0.007 0.133 0.054 0.109 0.995 0.062 0.652 0.000 0.000 0.040
130 89 158 90 112 86 87 131 83 101
53 CYBERLINK -002 0.003 0.012 0.577 0.004 0.107 0.015 0.053 0.988 0.024 0.288 0.001 0.000 0.042
63 40 135 71 52 65 56 56 102 51 42 117
54 CYBERLINK -003 0.002 0.009 0.474 0.003 0.012 0.082 0.008 0.035 0.972 0.012 0.100 0.368 0.000 0.000 0.039 0.000
68 85 126 69 49 103 52 57 249 53 43 207
55 CYBERLINK -004 0.002 0.011 0.423 0.003 0.011 0.104 0.007 0.036 1.000 0.013 0.109 0.954 0.000 0.000 0.011 0.000
80 68 80 51 43 90 66 64 218 56 38 202
56 CYBERLINK -005 0.002 0.011 0.209 0.002 0.010 0.098 0.010 0.041 1.000 0.014 0.089 0.926 0.000 0.000 0.034 0.000
215 201 214 185


57 DAHUA -000 0.009 0.026 0.086 0.135 0.004 0.003
192 192 176 204 176 114
58 DAHUA -001 0.007 0.024 0.703 0.073 0.122 0.980 0.002 0.002 0.346
85 88 105 66 68 87 73 48 62 54
59 DAHUA -002 0.002 0.012 0.304 0.003 0.084 0.015 0.046 0.638 0.017 0.159 0.001 0.000 0.099
29 16 78 43 40 43 81 66 43 52 36 39
60 DAHUA -003 0.001 0.007 0.206 0.002 0.009 0.073 0.014 0.041 0.579 0.013 0.081 0.134 0.000 0.000 0.000 0.000

14 21 64 31 23 35 51 37 36 37 19 31
61 DAHUA -004 0.001 0.008 0.144 0.002 0.007 0.069 0.007 0.026 0.485 0.008 0.051 0.113 0.000 0.000 0.000 0.000
160 147 147 111 78 139 117 101 219 84 69 189
62 DAON -000 0.004 0.017 0.530 0.005 0.020 0.125 0.023 0.061 1.000 0.025 0.173 0.846 0.002 0.002 0.108 0.001
104 87 88 100 75 116 120 107 53 88 68 84
63 DECATUR -000 0.002 0.011 0.229 0.004 0.019 0.109 0.023 0.066 0.675 0.027 0.173 0.239 0.001 0.000 0.044 0.001

51 13 76 59 44 26 13 211 21 53

64 DEEPGLINT-001 0.001 0.007 0.200 0.002 0.073 0.003 0.014 1.000 0.006 0.159 0.000 0.000 0.038
165 137 206 151 158 167 157 123 153 111
65 DEEPSEA -001 0.004 0.016 0.814 0.010 0.140 0.046 0.101 0.985 0.077 0.326 0.000 0.001 0.047
296 289 225 229 291 286 225 192
66 DERMALOG -003 0.126 0.217 0.296 0.560 0.482 0.655 0.677 0.870 0.002 0.002 0.103
295 288 229 219 225 290 287 163 219 190
67 DERMALOG -004 0.125 0.215 0.930 0.135 0.467 0.480 0.657 0.995 0.603 0.856 0.001 0.002 0.107
243 223 174 224 221 217 199 134 198 161
68 DERMALOG -005 0.015 0.037 0.701 0.242 0.384 0.088 0.154 0.990 0.300 0.614 0.001 0.002 0.102
206 196 164 152 167 174 159 117 141 110
69 DERMALOG -006 0.008 0.024 0.619 0.010 0.155 0.052 0.105 0.981 0.059 0.318 0.003 0.006 0.181
214 203 169 168 173 215 197 135 161 156
70 DERMALOG -007 0.009 0.027 0.675 0.014 0.170 0.086 0.152 0.990 0.099 0.557 0.001 0.002 0.102
139 129 142 131 101 157 165 145 237 138 101 204
71 DERMALOG -008 0.003 0.015 0.516 0.007 0.029 0.139 0.045 0.094 1.000 0.057 0.382 0.940 0.000 0.000 0.002 0.000
137 124 72 139 137 106 110 108 234 98 128 187
72 DERMALOG -009 0.003 0.014 0.167 0.007 0.999 0.106 0.021 0.066 1.000 0.031 0.999 0.840 0.001 0.001 0.018 0.003
108 82 21 199 128 126 50 91 194 159 129 148
73 DERMALOG -010 0.002 0.011 0.066 0.038 0.124 0.113 0.007 0.055 0.999 0.089 1.000 0.522 0.001 0.001 0.018 0.003
315 304 219 235 135 237 297 280 151 223 118 185
74 DIGIDATA -000 0.590 0.548 0.895 0.642 0.707 0.813 0.610 0.577 0.994 0.646 0.789 0.824 0.002 0.001 0.070 0.001
110 94 103 144 98 94 144 127 49 115 110 75
75 DILUSENSE -000 0.002 0.012 0.297 0.008 0.028 0.099 0.030 0.078 0.655 0.039 0.664 0.203 0.001 0.001 0.219 0.006
289 282 248 216 220 281 278 154 217 179
76 EYEDEA -003 0.080 0.148 0.960 0.101 0.379 0.388 0.543 0.994 0.570 0.792 0.001 0.003 0.161
236 168 317 246 251 184


77 F 8-001 0.012 0.669 1.000 1.000 0.166 0.998 0.004 1.000 0.158
227 217 192 192 127 182 240 222 223 186 109 134
78 FINCORE -000 0.011 0.034 0.767 0.032 0.117 0.191 0.134 0.217 1.000 0.187 0.598 0.458 0.000 0.001 0.043 0.000
31 33 49 44 44 28 23 23 18 28 26 16
79 FIRSTCREDITKZ -001 0.001 0.008 0.094 0.002 0.010 0.065 0.003 0.019 0.291 0.007 0.061 0.097 0.000 0.001 0.047 0.001
111 117 129 95 81 91 111 94 81 70 85
80 FUJITSULAB -000 0.002 0.014 0.440 0.004 0.023 0.098 0.021 0.056 0.024 0.177 0.240 0.000 0.001 0.016 0.000
87 110 131 97 90 108 100 96 142 82 112 88
81 FUJITSULAB -001 0.002 0.013 0.455 0.004 0.026 0.106 0.018 0.058 0.992 0.024 0.739 0.247 0.000 0.003 0.150 0.002
300 295 274 223 231 280 279 156 211 186

82 GLORY-000 0.178 0.320 0.994 0.228 0.678 0.367 0.547 0.995 0.453 0.839 0.011 0.013 0.985

297 292 269 222 230 271 277 144 206 183
83 GLORY-001 0.127 0.267 0.992 0.178 0.594 0.305 0.537 0.993 0.408 0.819 0.011 0.013 0.988
282 274 233 210 216 286 271 246 212 292
84 GORILLA -001 0.060 0.095 0.936 0.069 0.329 0.406 0.453 1.000 0.468 1.000 0.001 0.001 0.069
255 239 188 186 193 258 246 248 194 225
85 GORILLA -002 0.020 0.044 0.753 0.027 0.214 0.188 0.268 1.000 0.250 1.000 0.001 0.001 0.069
269 261 207 203 203 273 268 302 205 288
86 GORILLA -003 0.036 0.070 0.821 0.048 0.265 0.318 0.434 1.000 0.407 1.000 0.001 0.001 0.069
189 193 173 157 170 220 205 90 172 130
87 GORILLA -004 0.006 0.024 0.697 0.012 0.162 0.089 0.160 0.959 0.135 0.438 0.000 0.001 0.042
145 155 79 124 137 191 191 55 158 108
88 GORILLA -005 0.003 0.018 0.209 0.006 0.124 0.058 0.142 0.700 0.088 0.315 0.000 0.000 0.040
74 91 57 79 70 105 135 139 41 89 65 78
89 GORILLA -006 0.002 0.012 0.122 0.003 0.018 0.105 0.027 0.089 0.531 0.028 0.166 0.218 0.000 0.000 0.041 0.000
70 67 55 56 68 73 133 126 42 85 84 64
90 GORILLA -007 0.002 0.011 0.114 0.002 0.016 0.088 0.027 0.077 0.534 0.026 0.264 0.178 0.000 0.000 0.041 0.000
55 52 40 35 55 66 121 133 31 95 96 62
91 GORILLA -008 0.001 0.010 0.085 0.002 0.012 0.082 0.024 0.083 0.463 0.030 0.319 0.178 0.000 0.000 0.041 0.000
128 111 111 155 105 141 108 104 160 105 74 73
92 GRIAULE -000 0.002 0.014 0.327 0.011 0.031 0.126 0.020 0.063 0.995 0.033 0.185 0.198 0.000 0.002 0.090 0.001

Table 12: Miss rates by dataset. At left, rank 1 miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.

# | ALGORITHM | INVESTIGATION MODE: rank one miss rate, FNIR(N, 0, 1), N = 1.6M | IDENTIFICATION MODE: high T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M | FAILURE TO EXTRACT FEATURES
Gallery/probe pairs, investigation and identification modes: mugshot/mugshot, mugshot/webcam, mugshot/profile, visa/border, border/border >10 yr, visa/kiosk. Failure-to-extract pairs: mugshot/mugshot, mugshot/webcam, mugshot/profile, visa/border, border/border >10 yr, kiosk/kiosk.
28 28 61 7 83 29 37 42 75 25 126 19
93 GRIAULE -001 0.001 0.008 0.132 0.001 0.023 0.065 0.005 0.028 0.865 0.007 0.995 0.099 0.000 0.000 0.000 0.000
233 206 172 160 164 225 201 97 175 132
94 HIK -003 0.012 0.027 0.689 0.012 0.151 0.103 0.158 0.969 0.142 0.445 0.000 0.000 0.048
230 204 183 158 166 221 198 105 173 129
95 HIK -004 0.011 0.027 0.743 0.012 0.152 0.099 0.153 0.976 0.137 0.434 0.000 0.000 0.048
169 142 149 136 119 160 125 208 147 152
96 HIK -005 0.005 0.017 0.535 0.007 0.111 0.044 0.077 0.999 0.068 0.541 0.000 0.000 0.000
170 141 150 169 136 241
97 HIK -006 0.005 0.017 0.535 0.047 0.086 1.000 0.000 0.000 0.000
43 77 24 29 21 22 33 49 12 26 21 24
98 HYPERVERGE -001 0.001 0.011 0.067 0.002 0.007 0.061 0.004 0.031 0.220 0.007 0.053 0.101 0.001 0.000 0.041 0.000
40 73 12 19 17 19 27 40 10 18 18 13
99 HYPERVERGE -002 0.001 0.011 0.063 0.001 0.006 0.058 0.004 0.027 0.210 0.006 0.048 0.093 0.001 0.000 0.041 0.000
109 109 92 72 65 76 107 78 95 72 93 48
100 HZAILU -000 0.002 0.013 0.244 0.003 0.015 0.090 0.020 0.051 0.967 0.020 0.316 0.153 0.001 0.001 0.054 0.001
94 76 53 52 125 79 57 216 128 190 263 167
101 HZAILU -001 0.002 0.011 0.106 0.002 0.113 0.092 0.009 0.183 0.986 0.196 1.000 0.679 0.000 0.000 0.039 0.000
196 215 243 176 190 170 207 168 176


102 IDEMIA -003 0.007 0.034 0.958 0.018 0.210 0.047 0.165 0.123 0.766 0.000 0.000 0.041
191 213 238 177 189 156 172 103 167 177
103 IDEMIA -004 0.007 0.032 0.947 0.018 0.210 0.037 0.118 0.973 0.123 0.766 0.000 0.000 0.041
205 231 241 183 194 162 196 109 169 193
104 IDEMIA -005 0.008 0.039 0.954 0.021 0.217 0.044 0.150 0.978 0.130 0.879 0.000 0.000 0.041
219 263 257 188 202 159 227 119 176 172
105 IDEMIA -006 0.010 0.072 0.969 0.030 0.253 0.043 0.226 0.982 0.144 0.733 0.000 0.000 0.041
129 134 303 125 107 148 99 90 279 132 72 254
106 IDEMIA -007 0.003 0.015 1.000 0.006 0.036 0.131 0.018 0.055 1.000 0.052 0.182 1.000 0.000 0.000 0.040 0.000

12 8 37 28 25 49 9 10 9 14 14 27
107 IDEMIA -008 0.001 0.007 0.079 0.001 0.007 0.075 0.002 0.013 0.204 0.005 0.036 0.106 0.000 0.000 0.040 0.000
5 7 18 8 11 6 3 3 4 5 8 6
108 IDEMIA -009 0.001 0.006 0.065 0.001 0.005 0.051 0.002 0.011 0.141 0.003 0.027 0.074 0.000 0.000 0.040 0.000
303 293 268 301 293 239
109 IMAGUS -002 0.220 0.301 0.988 0.749 0.816 1.000 0.004 0.008 0.550

309 300 272 303 297 231

110 IMAGUS -003 0.356 0.513 0.993 0.807 0.909 1.000 0.004 0.008 0.550
93 90 108 123 79 150 103 106 73 91 64 81
111 IMAGUS -005 0.002 0.012 0.319 0.006 0.022 0.132 0.018 0.066 0.838 0.029 0.161 0.231 0.000 0.000 0.000 0.000
98 114 102 98 77 122 106 112 82 90 63 92
112 IMAGUS -006 0.002 0.014 0.293 0.004 0.019 0.112 0.019 0.069 0.897 0.028 0.161 0.260 0.000 0.000 0.000 0.000
101 107 109 91 80 130 119 118 80 99 66 97
113 IMAGUS -007 0.002 0.013 0.321 0.004 0.022 0.117 0.023 0.073 0.893 0.031 0.169 0.265 0.000 0.000 0.000 0.000
291 273 106 182 123 133 311 290 165 215 130 147
114 IMAGUS -008 0.086 0.093 0.305 0.021 0.081 0.119 0.974 0.774 0.996 0.520 1.000 0.518 0.000 0.000 0.000 0.000
125 131 99 106 87 127 110 189 121 87
115 IMPERIAL -000 0.002 0.015 0.280 0.004 0.097 0.026 0.068 0.999 0.042 0.245 0.000 0.000 0.000
279 276 240 272 266 181
116 INCODE -000 0.049 0.100 0.951 0.310 0.420 0.998 0.001 0.004 0.173
246 244 189 261 249 253
117 INCODE -001 0.017 0.046 0.762 0.212 0.296 1.000 0.001 0.004 0.173
250 246 210 257 247 145
118 INCODE -002 0.018 0.048 0.843 0.184 0.269 0.993 0.000 0.001 0.066
238 233 190 253 243 206
119 INCODE -003 0.013 0.040 0.764 0.167 0.264 0.999 0.000 0.001 0.066
152 152 136 149 154 183 175 155 145 106
120 INCODE -004 0.004 0.017 0.475 0.008 0.135 0.054 0.120 0.995 0.063 0.313 0.000 0.001 0.066
67 83 67 58 58 59 70 69 39 64 55 51
121 INCODE -005 0.002 0.011 0.147 0.002 0.013 0.079 0.011 0.043 0.528 0.017 0.145 0.155 0.000 0.000 0.042 0.000
277 264 213 266 254 245
122 INNOVATRICS -002 0.045 0.074 0.853 0.234 0.310 1.000 0.000 0.001 0.046
263 249 212 262 250 217


123 INNOVATRICS -003 0.026 0.055 0.845 0.221 0.297 1.000 0.000 0.001 0.046
237 235 245 238 226 112
124 INNOVATRICS -004 0.012 0.040 0.958 0.132 0.222 0.980 0.000 0.001 0.046
126 123 124 109 115 149 140 74 128 89
125 INNOVATRICS -005 0.002 0.014 0.407 0.005 0.109 0.034 0.089 0.846 0.047 0.251 0.000 0.001 0.041
71 81 93 61 60 52 77 79 58 63 40 49
126 INNOVATRICS -007 0.002 0.011 0.248 0.002 0.013 0.077 0.013 0.051 0.743 0.017 0.093 0.154 0.000 0.001 0.041 0.000
99 98 81 104 106 136 122 124 65 136 82 90
127 INTELIGENSIA -000 0.002 0.012 0.210 0.004 0.033 0.124 0.024 0.077 0.786 0.053 0.235 0.255 0.001 0.000 0.046 0.001
270 277 259 207 132 217 270 264 221 200 114 169
128 INTELLIVISION -001 0.036 0.102 0.972 0.057 0.222 0.333 0.279 0.404 1.000 0.328 0.749 0.685 0.001 0.000 0.044 0.000

226 212 236 175 122 184 245 217 192 171 106 135

129 INTELLIVISION -002 0.011 0.031 0.942 0.018 0.080 0.200 0.154 0.196 0.999 0.134 0.437 0.460 0.001 0.000 0.043 0.000
19 35 7 10 14 4 16 18 236 16 90 8
130 INTEMA -000 0.001 0.008 0.058 0.001 0.005 0.051 0.002 0.017 1.000 0.005 0.288 0.081 0.000 0.000 0.040 0.000
298 190 157 211 149 314 308 215 235 221
131 INTSYSMSU -000 0.146 0.023 0.562 0.072 0.132 0.998 1.000 1.000 0.999 0.999 0.000 0.000 0.050
166 46 170 57 53 63 139 100 89 124 92 60
132 IREX -000 0.004 0.010 0.681 0.002 0.012 0.082 0.028 0.060 0.957 0.044 0.302 0.170 0.000 0.000 0.042 0.000
190 200 211 208 179 173
133 ISYSTEMS -002 0.006 0.026 0.844 0.078 0.126 0.998 0.002 0.002 0.142
178 187 195 192 163 220
134 ISYSTEMS -003 0.005 0.023 0.791 0.059 0.107 1.000 0.002 0.002 0.142
53 66 56 60 57 54 89 93 33 69 50 52
135 KAKAO -000 0.001 0.011 0.119 0.002 0.013 0.078 0.015 0.056 0.468 0.019 0.141 0.158 0.000 0.000 0.041 0.000
44 41 6 5 9 2 18 20 5 10 17 5
136 KAKAO -001 0.001 0.009 0.058 0.001 0.004 0.047 0.003 0.017 0.159 0.004 0.042 0.074 0.000 0.000 0.040 0.000
201 219 260 194 196 118 116 127 137 104
137 KEDACOM -001 0.008 0.036 0.972 0.034 0.237 0.023 0.072 0.986 0.055 0.305 0.000 0.000 0.000
185 205 155 187 183
138 KNERON -000 0.006 0.027 0.552 0.028 0.195 0.000 0.000 0.000

Table 13: Miss rates by dataset. At left, rank-one miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.
FNIR(N, R, T) = False neg. identification rate; FPIR(N, T) = False pos. identification rate; N = Num. enrolled subjects; R = Num. candidates examined; T = Threshold; T = 0 → Investigation; T > 0 → Identification.
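The legend above defines the two operating points reported throughout these tables: investigation mode (T = 0, an examiner reviews the top R candidates) and identification mode (T > 0, only candidates at or above the threshold are returned). As a minimal illustration of those definitions, the Python sketch below (hypothetical variable names, not the NIST scoring code) computes a rank-one miss rate and threshold-based FNIR and FPIR from per-search outcomes.

# Minimal sketch (not the NIST scoring code) of the two metrics used in these
# tables, computed from per-search results. Variable names are hypothetical.

def fnir(mated_results, R=1, T=0.0):
    """False negative identification rate: fraction of mated searches whose
    enrolled mate is missing, returned at a rank worse than R, or scored below T.
    Each element is (rank_of_mate, score_of_mate), or None if the mate was not
    returned at all."""
    misses = sum(1 for hit in mated_results
                 if hit is None or hit[0] > R or hit[1] < T)
    return misses / len(mated_results)

def fpir(nonmated_top_scores, T):
    """False positive identification rate: fraction of nonmated searches whose
    highest-scoring candidate is at or above the threshold T."""
    return sum(1 for s in nonmated_top_scores if s >= T) / len(nonmated_top_scores)

# Made-up example data, not taken from the report.
mated = [(1, 0.92), (1, 0.88), (3, 0.75), None, (1, 0.40)]
nonmated_top = [0.31, 0.55, 0.12, 0.47]

print(fnir(mated, R=1, T=0.0))    # rank-one miss rate (T = 0, R = 1): investigation mode
print(fnir(mated, R=50, T=0.60))  # miss rate at an operating threshold: identification mode
print(fpir(nonmated_top, T=0.60)) # FPIR at the same threshold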


# ALGORITHM | INVESTIGATION MODE | IDENTIFICATION MODE | FAILURE TO EXTRACT FEATURES
RANK ONE MISS RATE, FNIR(N, 0, 1), N = 1.6M | HIGH T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M
GALLERY: MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER KIOSK
PROBE: MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK
266 309 91 221 131 207
139 KNERON -001 0.030 0.621 0.237 0.144 0.207 0.280 0.000 0.000 0.000 0.000
112 115 87 116 99 110 145 149 126 87 260
140 LINE -000 0.002 0.014 0.223 0.005 0.029 0.107 0.031 0.095 0.046 0.278 1.000 0.000 0.000 0.000 0.000
18 18 14 39 35 72 38 38 242 44 31 286
141 LINE -001 0.001 0.007 0.063 0.002 0.008 0.085 0.005 0.027 1.000 0.009 0.072 1.000 0.000 0.000 0.000 0.000
38 20 28 33 50 18 28 181 116 118 131 170
142 LINECLOVA -002 0.001 0.008 0.070 0.002 0.011 0.058 0.004 0.130 0.981 0.040 1.000 0.700 0.000 0.001 0.040 0.001
210 229 197 198 161 167 157 115
143 LOOKMAN -003 0.009 0.038 0.035 0.239 0.044 0.112 0.084 0.355 0.000 0.000
212 232 262 164 161 106
144 LOOKMAN -004 0.009 0.039 0.973 0.045 0.105 0.977 0.000 0.000 0.000
204 222 261 196 197 143 135 108 144 105
145 LOOKMAN -005 0.008 0.036 0.972 0.035 0.237 0.030 0.086 0.978 0.062 0.308 0.000 0.000 0.000
73 59 177 132 88 121 67 65 268 92 59 224
146 MANTRA -000 0.002 0.010 0.709 0.007 0.024 0.112 0.010 0.041 1.000 0.029 0.152 1.000 0.002 0.001 0.591 0.003
123 127 112 101 114 96 140 230 62 177 127 155
147 MAXVISION -000 0.002 0.015 0.327 0.004 0.051 0.101 0.028 0.237 0.767 0.149 0.997 0.557 0.000 0.000 0.042 0.000
30 22 17 12 73 17 31 34 11 24 122 23
148 MAXVISION -001 0.001 0.008 0.064 0.001 0.018 0.057 0.004 0.025 0.219 0.007 0.951 0.100 0.000 0.000 0.042 0.000
234 151 286 203 153


149 MEGVII -001 0.012 0.017 1.000 0.072 0.097 0.002 0.000
235 153 130 275 207 151 183
150 MEGVII -002 0.012 0.017 0.450 1.000 0.077 0.096 0.998 0.002 0.000 0.033
317 313 238 240 309 307 234 216
151 MICROFOCUS -003 0.594 0.781 0.708 0.907 0.931 0.979 0.982 0.991 0.001 0.005
314 312 237 239 315 305 232 214
152 MICROFOCUS -004 0.576 0.758 0.701 0.904 0.999 0.975 0.974 0.989 0.001 0.005

310 307 232 233 307 301 230 213
153 MICROFOCUS -005 0.424 0.601 0.494 0.777 0.835 0.928 0.935 0.985 0.001 0.005
311 306 231 236 312 300 229 210
154 MICROFOCUS -006 0.427 0.583 0.490 0.782 0.978 0.923 0.923 0.971 0.001 0.005
64 95 88 117 137 143 111 83
155 MICROSOFT-003 0.002 0.012 0.004 0.109 0.028 0.091 0.036 0.233 0.000 0.001

56 93 82 118 128 137 106 79
156 MICROSOFT-004 0.001 0.012 0.004 0.109 0.026 0.087 0.033 0.222 0.000 0.001
88 72 65 76 92 125 114 44 86 67
157 MICROSOFT-005 0.002 0.011 0.144 0.003 0.099 0.026 0.070 0.587 0.027 0.180 0.000 0.001 0.049
96 86 69 86 95 72 58 24 102 63
158 MICROSOFT-006 0.002 0.011 0.150 0.004 0.100 0.012 0.037 0.386 0.032 0.178 0.000 0.001 0.049
262 221 165 156 121 144 295 236 228 182 113 121
159 MUKH -002 0.026 0.036 0.638 0.012 0.079 0.129 0.594 0.242 1.000 0.170 0.741 0.389 0.000 0.000 0.042 0.000
247 237 247 184 200 210 190 111 140
160 NEC -000 0.017 0.041 0.959 0.025 0.243 0.079 0.140 0.979 0.474 0.001 0.002 0.890
256 250 255 193 204 227 219 126 170 138
161 NEC -001 0.021 0.056 0.967 0.033 0.277 0.106 0.197 0.986 0.133 0.468 0.005 0.003 0.934
11 39 120 81 129 19 26 202 33 166
162 NEC -002 0.001 0.009 0.363 0.003 0.117 0.003 0.020 0.999 0.008 0.676 0.000 0.001 0.041
41 56 119 85 56 135 15 19 70 36 15 165
163 NEC -003 0.001 0.010 0.352 0.004 0.013 0.120 0.002 0.017 0.824 0.008 0.036 0.668 0.000 0.001 0.041 0.001
49 37 151 74 31 48 6 9 47 12 4 22
164 NEC -004 0.001 0.009 0.538 0.003 0.007 0.075 0.002 0.013 0.622 0.004 0.019 0.100 0.000 0.001 0.041 0.001
27 23 39 37 10 45 4 5 52 7 3 18
165 NEC -005 0.001 0.008 0.081 0.002 0.005 0.073 0.002 0.012 0.673 0.003 0.019 0.099 0.000 0.001 0.040 0.001
34 30 19 40 12 31 11 21 32 8 7 14
166 NEC -006 0.001 0.008 0.066 0.002 0.005 0.065 0.002 0.018 0.463 0.004 0.026 0.094 0.000 0.001 0.040 0.001
257 238 249 299 245 289
167 NEUROTECHNOLOGY-003 0.022 0.042 0.961 0.636 0.266 1.000 0.000 0.001 0.131
180 165 258 197 169 150
168 NEUROTECHNOLOGY-004 0.006 0.020 0.970 0.063 0.117 0.994 0.000 0.001 0.131
164 195 217 184 182 176


169 NEUROTECHNOLOGY-005 0.004 0.024 0.893 0.054 0.130 0.998 0.000 0.000 0.030
251 241 162 267 265
170 NEUROTECHNOLOGY-006 0.018 0.045 0.606 0.249 0.418 0.000 0.000
156 173 198 150 179 196 211 227 201 252
171 NEUROTECHNOLOGY-007 0.004 0.021 0.796 0.009 0.180 0.062 0.173 1.000 0.339 1.000 0.001 0.001 0.041
107 122 132 93 84 98 178 130 247 110 91 74
172 NEUROTECHNOLOGY-008 0.002 0.014 0.457 0.004 0.023 0.101 0.053 0.080 1.000 0.035 0.293 0.203 0.000 0.001 0.052 0.001
50 69 75 50 59 58 90 83 45 71 60 57
173 NEUROTECHNOLOGY-009 0.001 0.011 0.179 0.002 0.013 0.079 0.015 0.052 0.588 0.020 0.153 0.165 0.001 0.000 0.046 0.000
32 45 29 22 30 34 65 61 17 47 35 34
174 NEUROTECHNOLOGY-010 0.001 0.009 0.070 0.001 0.007 0.068 0.010 0.037 0.277 0.010 0.075 0.126 0.000 0.000 0.041 0.000

9 26 13 4 13 15 49 53 91 40 25 198

175 NEUROTECHNOLOGY-012 0.001 0.008 0.063 0.001 0.005 0.057 0.007 0.032 0.959 0.008 0.061 0.916 0.000 0.000 0.039 0.000
288 279 232 288 272 195
176 NEWLAND -002 0.079 0.117 0.936 0.438 0.466 0.999 0.007 0.012 0.200
305 301 273 317 321 244
177 NOBLIS -001 0.249 0.522 0.993 1.000 1.000 1.000 0.000 0.000 0.000
301 298 265 313 316 250
178 NOBLIS -002 0.179 0.392 0.982 0.997 1.000 1.000 0.000 0.000 0.000
127 96 77 94 66 84 94 99 46 76 58 61
179 NOTIONTAG -000 0.002 0.012 0.204 0.004 0.016 0.095 0.017 0.059 0.611 0.021 0.150 0.176 0.000 0.000 0.000 0.000
186 184 138 182 170 72
180 NTECHLAB -003 0.006 0.023 0.504 0.054 0.118 0.837 0.000 0.000 0.040
173 160 139 143 145 157 160 71 135 94
181 NTECHLAB -004 0.005 0.019 0.506 0.008 0.129 0.041 0.105 0.833 0.053 0.263 0.000 0.000 0.040
171 156 121 147 131 158 158 63 151 102
182 NTECHLAB -005 0.005 0.018 0.367 0.008 0.118 0.042 0.102 0.771 0.073 0.294 0.000 0.000 0.040
161 146 118 140 127 152 146 61 139 93


183 NTECHLAB -006 0.004 0.017 0.347 0.007 0.113 0.037 0.094 0.754 0.057 0.260 0.000 0.000 0.040
131 99 110 105 109 124 109 60 103 80
184 NTECHLAB -007 0.003 0.012 0.326 0.004 0.107 0.026 0.067 0.750 0.032 0.223 0.000 0.000 0.042

Table 14: Miss rates by dataset. At left, rank-one miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.
FNIR(N, R, T) = False neg. identification rate; FPIR(N, T) = False pos. identification rate; N = Num. enrolled subjects; R = Num. candidates examined; T = Threshold; T = 0 → Investigation; T > 0 → Identification.


# ALGORITHM | INVESTIGATION MODE | IDENTIFICATION MODE | FAILURE TO EXTRACT FEATURES
RANK ONE MISS RATE, FNIR(N, 0, 1), N = 1.6M | HIGH T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M
GALLERY: MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER KIOSK
PROBE: MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK
72 47 71 80 69 83 72 40 107 68
185 NTECHLAB -008 0.002 0.010 0.157 0.003 0.084 0.014 0.045 0.529 0.033 0.183 0.000 0.000 0.044
35 27 62 48 61 47 40 29 28 58 44 43
186 NTECHLAB -009 0.001 0.008 0.138 0.002 0.013 0.074 0.005 0.022 0.430 0.015 0.109 0.142 0.000 0.000 0.041 0.001
17 32 41 38 34 16 17 16 16 22 24 17
187 NTECHLAB -010 0.001 0.008 0.085 0.002 0.008 0.057 0.003 0.015 0.252 0.007 0.059 0.098 0.001 0.001 0.043 0.000
10 11 30 24 33 7 22 15 13 43 33 11
188 NTECHLAB -011 0.001 0.007 0.072 0.001 0.007 0.051 0.003 0.015 0.228 0.009 0.074 0.091 0.000 0.000 0.040 0.000
25 24 32 42 32 30 46 47 21 46 49 26
189 PANGIAM -000 0.001 0.008 0.074 0.002 0.007 0.065 0.006 0.030 0.318 0.009 0.136 0.105 0.000 0.001 0.044 0.001
195 104 35 13 41 27 71 46 23 42 120 42
190 PANGIAM -001 0.007 0.013 0.078 0.001 0.009 0.064 0.011 0.030 0.383 0.009 0.860 0.141 0.003 0.000 0.040 0.000
252 228 148 229 228 219 209 197 213 201
191 PARAVISION -000 0.019 0.038 0.534 0.423 0.529 0.089 0.170 0.999 0.470 0.926 0.000 0.000 0.000
154 170 113 228 227 171 180 187 210 173
192 PARAVISION -001 0.004 0.020 0.329 0.414 0.484 0.049 0.128 0.999 0.444 0.739 0.000 0.000 0.000
159 177 115 170 175 172 173 120 154 144
193 PARAVISION -002 0.004 0.022 0.335 0.015 0.175 0.050 0.119 0.983 0.080 0.497 0.000 0.000 0.032
144 162 94 171 172 150 150 152 140 103


194 PARAVISION -003 0.003 0.019 0.252 0.015 0.167 0.035 0.096 0.994 0.058 0.296 0.000 0.000 0.032
65 62 51 121 123 69 62 251 66 196


195 PARAVISION -004 0.002 0.010 0.104 0.006 0.112 0.010 0.038 1.000 0.018 0.908 0.000 0.000 0.032
60 51 36 135 107 30 32 113 48 37
196 PARAVISION -005 0.002 0.010 0.079 0.007 0.106 0.004 0.024 0.980 0.011 0.132 0.000 0.000 0.038
24 29 20 114 45 97 29 33 238 41 45 242
197 PARAVISION -007 0.001 0.008 0.066 0.005 0.010 0.101 0.004 0.025 1.000 0.009 0.113 1.000 0.000 0.000 0.000 0.000
8 17 22 21 6 10 20 24 57 6 11 4
198 PARAVISION -009 0.001 0.007 0.067 0.001 0.004 0.054 0.003 0.019 0.735 0.003 0.033 0.073 0.000 0.001 0.025 0.000

168 180 203 154 180 226 262 252 218 320
199 PIXELALL -002 0.005 0.022 0.810 0.011 0.187 0.105 0.388 1.000 0.602 1.000 0.000 0.000 0.000
105 121 141 130 163 114 117 212 114 154
200 PIXELALL -003 0.002 0.014 0.515 0.006 0.151 0.022 0.073 1.000 0.037 0.554 0.000 0.000 0.000
102 128 145 118 165 102 129 233 130 217
201 PIXELALL -004 0.002 0.015 0.523 0.005 0.152 0.018 0.079 1.000 0.051 0.994 0.000 0.000 0.000

90 71 96 159 95 161 74 76 240 87 76 223


202 PIXELALL -005 0.002 0.011 0.264 0.012 0.028 0.146 0.012 0.050 1.000 0.027 0.203 1.000 0.000 0.000 0.000 0.000

142 145 161 117 93 104 151 178 85 127 78 82
203 PTAKURATSATU -000 0.003 0.017 0.605 0.005 0.027 0.105 0.037 0.124 0.924 0.046 0.206 0.232 0.000 0.001 0.039 0.000
202 208 143 166 115 168 237 232 257 187 108 219
204 QNAP -000 0.008 0.027 0.522 0.013 0.054 0.158 0.129 0.238 1.000 0.191 0.539 0.998 0.001 0.000 0.054 0.000
162 178 137 128 110 124 181 187 86 155 99 113
205 QNAP -001 0.004 0.022 0.498 0.006 0.041 0.112 0.054 0.137 0.928 0.081 0.368 0.331 0.000 0.000 0.004 0.000
174 172 74 99 104 140 129 162 64 134 88 99
206 QNAP -002 0.005 0.021 0.172 0.004 0.031 0.125 0.026 0.106 0.772 0.052 0.281 0.272 0.001 0.004 0.057 0.001
136 149 70 145 118 82 105 295 141 214 160 191
207 QNAP -003 0.003 0.017 0.152 0.008 0.061 0.093 0.019 0.835 0.992 0.502 1.000 0.865 0.000 0.001 0.002 0.001
302 311 300
208 QUANTASOFT-001 0.218 0.727 0.639 0.000 0.000
254 262 230 241
209 RANKONE -002 0.019 0.071 0.118 0.261 0.000 0.000
253 260 231 240
210 RANKONE -003 0.019 0.068 0.118 0.255 0.000 0.000
276 281 259 267
211 RANKONE -004 0.041 0.141 0.193 0.426 0.000 0.000
216 236 267 193 212 178
212 RANKONE -005 0.009 0.041 0.986 0.059 0.173 0.998 0.000 0.000 0.489
176 200 153 107
213 RANKONE -006 0.005 0.797 0.037 0.977 0.002 0.167
148 159 197 116 147 94
214 RANKONE -007 0.003 0.019 0.796 0.022 0.095 0.967 0.001 0.001 0.102
120 101 152 120 153 97 121 98 142 112


215 RANKONE -009 0.002 0.013 0.549 0.006 0.134 0.018 0.076 0.969 0.062 0.328 0.000 0.000 0.000
113 53 122 112 91 142 80 97 68 133 79 91
216 RANKONE -010 0.002 0.010 0.374 0.005 0.027 0.126 0.014 0.058 0.802 0.052 0.208 0.259 0.000 0.000 0.000 0.000
57 84 86 84 74 67 58 74 113 71 211
217 RANKONE -011 0.002 0.011 0.223 0.004 0.019 0.082 0.009 0.048 0.037 0.182 0.977 0.000 0.000 0.000 0.000
42 65 60 75 63 36 54 88 93 54 137
218 RANKONE -012 0.001 0.010 0.127 0.003 0.014 0.069 0.008 0.053 0.029 0.144 0.465 0.000 0.000 0.000 0.000
13 10 34 15 36 9 34 55 168 67 52 44
219 RANKONE -013 0.001 0.007 0.076 0.001 0.008 0.054 0.005 0.034 0.996 0.018 0.141 0.142 0.000 0.000 0.033 0.000
275 268 264 256
220 REALNETWORKS -000 0.040 0.078 0.234 0.319 0.001 0.000

274 269 265 257

221 REALNETWORKS -001 0.040 0.078 0.234 0.319 0.001 0.000
271 267 263 255
222 REALNETWORKS -002 0.039 0.078 0.231 0.315 0.001 0.000
260 256 193 191 187 249 244 182 181 145
223 REALNETWORKS -003 0.024 0.062 0.771 0.031 0.209 0.159 0.266 0.998 0.164 0.500 0.001 0.000 0.009
258 254 199 190 192 248 242 199 183 160
224 REALNETWORKS -004 0.024 0.059 0.797 0.031 0.213 0.158 0.263 0.999 0.170 0.613 0.001 0.000 0.009
116 108 128 102 82 100 136 119 100 112 80 77
225 REALNETWORKS -005 0.002 0.013 0.433 0.004 0.023 0.102 0.028 0.074 0.971 0.037 0.223 0.215 0.000 0.000 0.006 0.000
46 54 101 64 46 55 84 85 115 59 46 50
226 REALNETWORKS -006 0.001 0.010 0.287 0.002 0.010 0.078 0.015 0.053 0.980 0.016 0.120 0.154 0.000 0.000 0.009 0.000
39 44 97 34 37 42 63 70 110 50 107 41
227 REALNETWORKS -007 0.001 0.009 0.267 0.002 0.009 0.072 0.010 0.043 0.979 0.012 0.463 0.140 0.000 0.000 0.009 0.000
22 25 43 36 28 77 47 45 96 38 30 35
228 REALNETWORKS -008 0.001 0.008 0.089 0.002 0.007 0.091 0.006 0.029 0.968 0.008 0.068 0.129 0.000 0.000 0.042 0.000
150 157 166 142 162 185 174 196 150 171
229 REMARKAI -000 0.003 0.018 0.660 0.008 0.148 0.055 0.120 0.999 0.069 0.717 0.000 0.000 0.000
209 211 235 220
230 REMARKAI -000 0.009 0.030 0.128 0.203 0.000 0.001

Table 15: Miss rates by dataset. At left, rank-one miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.
FNIR(N, R, T) = False neg. identification rate; FPIR(N, T) = False pos. identification rate; N = Num. enrolled subjects; R = Num. candidates examined; T = Threshold; T = 0 → Investigation; T > 0 → Identification.


# ALGORITHM | INVESTIGATION MODE | IDENTIFICATION MODE | FAILURE TO EXTRACT FEATURES
RANK ONE MISS RATE, FNIR(N, 0, 1), N = 1.6M | HIGH T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M
GALLERY: MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER KIOSK
PROBE: MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK
207 210 201 234 218 138
231 REMARKAI -002 0.008 0.029 0.802 0.124 0.196 0.991 0.000 0.001 0.017
61 130 127 126 94 70 73 98 81 77 73 58
232 RENDIP -000 0.002 0.015 0.424 0.006 0.028 0.084 0.012 0.059 0.894 0.022 0.185 0.167 0.000 0.000 0.041 0.000
86 49 98 49 54 46 75 68 54 75 41 45
233 REVEALMEDIA -000 0.002 0.010 0.275 0.002 0.012 0.074 0.012 0.042 0.680 0.021 0.093 0.143 0.000 0.000 0.041 0.000
122 144 95 119 89 75 138 134 256 129 314 276
234 S 1-000 0.002 0.017 0.258 0.005 0.025 0.090 0.028 0.085 1.000 0.047 1.000 1.000 0.000 0.000 0.040 0.000
143 118 84 70 71 50 91 82 124 68 48 46
235 S 1-001 0.003 0.014 0.215 0.003 0.018 0.077 0.016 0.052 0.985 0.019 0.136 0.148 0.001 0.000 0.035 0.000
48 42 48 11 47 12 44 48 8 27 119 188
236 S 1-002 0.001 0.009 0.093 0.001 0.010 0.055 0.006 0.031 0.196 0.007 0.792 0.841 0.000 0.000 0.028 0.000
52 48 54 26 27 21 59 59 225 57 102 271
237 S 1-003 0.001 0.010 0.114 0.001 0.007 0.060 0.009 0.037 1.000 0.014 0.396 1.000 0.000 0.000 0.033 0.000
175 240 156 195 191 200 235 79 192 123
238 SCANOVATE -000 0.005 0.045 0.560 0.035 0.211 0.067 0.240 0.893 0.215 0.400 0.000 0.001 0.057
179 234 160 189 178 211 228 84 188 126
239 SCANOVATE -001 0.005 0.040 0.585 0.031 0.178 0.081 0.227 0.911 0.192 0.404 0.000 0.001 0.044
118 136 146 112 103 282


240 SENSETIME -000 0.002 0.016 0.528 0.021 0.063 1.000 0.004 0.000 0.042
119 135 115 105


241 SENSETIME -001 0.002 0.016 0.022 0.064 0.004 0.000
241 163 123 153 102 85 43 148 101 149
242 SENSETIME -002 0.014 0.020 0.384 0.011 0.104 0.015 0.028 0.994 0.032 0.523 0.009 0.000 0.040
7 9 68 67 78 8 6 34 34 38
243 SENSETIME -003 0.001 0.007 0.150 0.003 0.091 0.002 0.012 0.477 0.008 0.133 0.000 0.000 0.041
6 12 31 55 71 5 8 14 19 30
244 SENSETIME -004 0.001 0.007 0.072 0.002 0.084 0.002 0.013 0.229 0.006 0.113 0.000 0.000 0.041

4 4 10 54 26 62 14 14 7 23 20 25
245 SENSETIME -005 0.001 0.006 0.059 0.002 0.007 0.082 0.002 0.014 0.173 0.007 0.051 0.104 0.000 0.000 0.041 0.000
3 3 4 6 4 26 7 7 180 9 12 12
246 SENSETIME -006 0.001 0.006 0.055 0.001 0.004 0.064 0.002 0.012 0.998 0.004 0.034 0.093 0.000 0.000 0.025 0.000
2 2 1 3 3 23 2 2 201 4 6 9
247 SENSETIME -007 0.001 0.006 0.052 0.001 0.003 0.062 0.001 0.009 0.999 0.003 0.024 0.085 0.000 0.000 0.025 0.000

1 1 3 2 2 32 1 1 25 2 5 7
248 SENSETIME -008 0.001 0.006 0.054 0.001 0.003 0.067 0.001 0.009 0.405 0.002 0.021 0.080 0.000 0.000 0.039 0.000

293 285 289 282
249 SHAMAN -003 0.124 0.172 0.451 0.597 0.020 0.011
304 294 298 289
250 SHAMAN -004 0.222 0.319 0.615 0.754 0.020 0.011
273 252 234 242 231 101
251 SHAMAN -006 0.040 0.058 0.938 0.141 0.237 0.972 0.020 0.011 0.869
272 251 243 234
252 SHAMAN -007 0.040 0.057 0.141 0.240 0.020 0.010
79 296 103 93 95 261 97
253 SIAT-001 0.002 0.333 0.004 0.099 0.018 0.365 0.031 0.000 0.000
82 299 227 101 113 273 203 200
254 SIAT-002 0.002 0.446 0.348 0.102 0.022 0.478 0.372 0.923 0.000 0.000
318 314 310 306
255 SMILART-004 0.965 0.974 0.968 0.976 0.011 0.013
256 SMILART-005 0.011 0.013
163 161 100 115 92 88 239 238 66 117 95 128
257 SQISOFT-001 0.004 0.019 0.282 0.005 0.027 0.097 0.132 0.252 0.797 0.040 0.317 0.420 0.000 0.000 0.039 0.000
37 199 45 113 134 57 142 83 222 206
258 SQISOFT-002 0.001 0.026 0.090 0.005 0.282 0.078 0.029 0.904 0.621 0.953 0.000 0.000 0.039 0.000
198 168 163 178 116 169 194 269 224 216 124 297
259 STAQU -000 0.007 0.020 0.613 0.020 0.055 0.159 0.062 0.443 1.000 0.535 0.961 1.000 0.000 0.000 0.000 0.000
299 290 294 285
260 SYNESIS -003 0.170 0.235 0.582 0.646 0.006 0.015
245 188 209 162 155 198 177 92 152 107


261 SYNESIS -003 0.016 0.023 0.827 0.013 0.136 0.065 0.123 0.960 0.075 0.314 0.000 0.001 0.063
208 103 185 78 80 123 115 121 104 76
262 SYNESIS -005 0.009 0.013 0.744 0.003 0.092 0.025 0.072 0.984 0.032 0.214 0.001 0.000 0.135
221 191 133 65 126 61 92 86 37 73 115 55
263 T 4 ISB -000 0.010 0.023 0.462 0.003 0.115 0.081 0.016 0.053 0.510 0.021 0.759 0.161 0.000 0.000 0.000 0.000
158 143 159 133 111 188 302 258 193 218
264 TECH 5-001 0.004 0.017 0.584 0.007 0.107 0.057 0.935 1.000 0.244 0.994 0.000 0.000 0.006
132 70 107 77 100 74 132 113 69 116 77 131
265 TECH 5-002 0.003 0.011 0.312 0.003 0.029 0.089 0.027 0.070 0.805 0.039 0.205 0.440 0.001 0.000 0.041 0.000
242 247 256 251
266 TEVIAN -003 0.015 0.052 0.177 0.298 0.001 0.002

229 227 229 213

267 TEVIAN -004 0.011 0.038 0.117 0.176 0.001 0.002
199 209 134 216 193 93
268 TEVIAN -005 0.007 0.028 0.467 0.087 0.144 0.962 0.001 0.002 0.116
124 79 58 68 62 40 64 51 27 60 39 205
269 TEVIAN -006 0.002 0.011 0.123 0.003 0.013 0.071 0.010 0.032 0.425 0.016 0.093 0.951 0.001 0.000 0.062 0.000
78 43 47 45 42 33 42 28 20 45 28 32
270 TEVIAN -007 0.002 0.009 0.093 0.002 0.009 0.067 0.005 0.022 0.301 0.009 0.065 0.122 0.000 0.000 0.062 0.000
283 275 283 274
271 TIGER -000 0.062 0.095 0.390 0.500 0.000 0.000
182 186 140 213 203 191
272 TIGER -002 0.006 0.023 0.514 0.086 0.158 0.999 0.000 0.000 0.056
181 185 212 202
273 TIGER -003 0.006 0.023 0.086 0.158 0.000 0.000
193 182 205 166
274 TONGYITRANS -000 0.007 0.022 0.074 0.112 0.003 0.001
194 183 199 156


275 TONGYITRANS -001 0.007 0.022 0.066 0.101 0.003 0.001
167 175 191 195 171 161
276 TOSHIBA -000 0.004 0.022 0.766 0.062 0.118 0.995 0.000 0.000 0.070

Table 16: Miss rates by dataset. At left, rank-one miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.
FNIR(N, R, T) = False neg. identification rate; FPIR(N, T) = False pos. identification rate; N = Num. enrolled subjects; R = Num. candidates examined; T = Threshold; T = 0 → Investigation; T > 0 → Identification.


# ALGORITHM | INVESTIGATION MODE | IDENTIFICATION MODE | FAILURE TO EXTRACT FEATURES
RANK ONE MISS RATE, FNIR(N, 0, 1), N = 1.6M | HIGH T → FPIR = 0.001, FNIR(N, T, L), N = 1.6M
GALLERY: MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER VISA | MUGSHOT MUGSHOT MUGSHOT VISA BORDER KIOSK
PROBE: MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK | MUGSHOT WEBCAM PROFILE BORDER BOR >10 YR KIOSK
172 179 190 144
277 TOSHIBA -001 0.005 0.022 0.058 0.092 0.000 0.000
147 112 89 137 86 81 101 102 76 94 75 70
278 TRUEFACE -000 0.003 0.014 0.230 0.007 0.024 0.092 0.018 0.062 0.882 0.030 0.194 0.188 0.001 0.001 0.047 0.003
217 202 38 200 130 195 274 296 143 233 147 222
279 TURINGTECHVIP -001 0.009 0.026 0.081 0.045 0.199 0.220 0.345 0.850 0.993 0.978 1.000 0.999 0.001 0.003 0.044 0.000
313 305 308 304
280 VD -000 0.474 0.551 0.917 0.946 0.011 0.013
265 248 260 248
281 VD -001 0.028 0.053 0.201 0.281 0.005 0.001
218 207 216 165 112 176 209 195 164 160 98 118
282 VD -002 0.010 0.027 0.893 0.013 0.050 0.176 0.079 0.148 0.996 0.095 0.367 0.372 0.004 0.003 0.156 0.002
200 176 194 146 102 156 166 155 193 131 83 109
283 VD -003 0.008 0.022 0.773 0.008 0.030 0.137 0.046 0.100 0.999 0.051 0.244 0.315 0.003 0.003 0.144 0.002
135 120 154 127 96 147 155 132 129 123 85 95
284 VERIDAS -001 0.003 0.014 0.550 0.006 0.028 0.131 0.037 0.082 0.987 0.044 0.266 0.264 0.000 0.002 0.093 0.001
134 119 153 129 97 146 154 131 130 122 86 96
285 VERIDAS -002 0.003 0.014 0.550 0.006 0.028 0.131 0.037 0.082 0.987 0.044 0.266 0.264 0.000 0.002 0.093 0.001
81 78 104 92 67 114 93 92 170 70 57 65


286 VERIDAS -003 0.002 0.011 0.297 0.004 0.016 0.108 0.017 0.055 0.997 0.020 0.150 0.178 0.000 0.002 0.093 0.001
308 297 256 213 129 212 302 292 188 199 121 157
287 VERIJELAS -000 0.355 0.369 0.968 0.086 0.191 0.292 0.799 0.813 0.999 0.324 0.933 0.589 0.002 0.001 0.070 0.001
286 283 246 287 288 186
288 VIGILANTSOLUTIONS -003 0.069 0.151 0.958 0.408 0.660 0.999 0.000 0.001 0.127
294 291 251 293 294 167
289 VIGILANTSOLUTIONS -004 0.125 0.244 0.965 0.549 0.817 0.996 0.000 0.001 0.127
213 223 282 254
290 VIGILANTSOLUTIONS -005 0.009 0.920 0.388 1.000 0.000 0.001 0.127

220 224 277 243
291 VIGILANTSOLUTIONS -006 0.010 0.921 0.353 1.000 0.000 0.001 0.127
149 148 227 163 119 174 141 138 166 156 100 122
292 VIGILANTSOLUTIONS -007 0.003 0.017 0.925 0.013 0.068 0.175 0.028 0.088 0.996 0.081 0.371 0.391 0.000 0.001 0.127 0.001
141 150 222 167 120 177 109 123 190 164 103 146
293 VIGILANTSOLUTIONS -008 0.003 0.017 0.913 0.014 0.072 0.178 0.021 0.077 0.999 0.104 0.398 0.511 0.000 0.001 0.127 0.001

91 80 187 108 69 56 98 95 137 80 56 56

294 VISIONBOX -000 0.002 0.011 0.752 0.005 0.017 0.078 0.018 0.057 0.990 0.023 0.146 0.162 0.000 0.001 0.043 0.001
133 164 116 189 204 78
295 VISIONLABS -004 0.003 0.020 0.343 0.058 0.159 0.890 0.001 0.001 0.046
121 158 114 173 194 77
296 VISIONLABS -005 0.002 0.019 0.334 0.050 0.147 0.888 0.001 0.001 0.046
83 133 83 87 86 131 142 50
297 VISIONLABS -006 0.002 0.015 0.211 0.004 0.096 0.027 0.090 0.672 0.001 0.001 0.051
77 132 82 83 85 130 141 51 100 69
298 VISIONLABS -007 0.002 0.015 0.211 0.004 0.095 0.027 0.090 0.672 0.031 0.185 0.001 0.001 0.051
100 113 63 53 60 78 81 35 61 47
299 VISIONLABS -008 0.002 0.014 0.141 0.002 0.081 0.013 0.051 0.481 0.017 0.151 0.001 0.000 0.075
20 36 46 23 39 35 36 67 39 29
300 VISIONLABS -009 0.001 0.008 0.091 0.001 0.071 0.005 0.025 0.799 0.008 0.113 0.000 0.000 0.060
45 64 27 20 16 38 41 41 32 22 28
301 VISIONLABS -010 0.001 0.010 0.069 0.001 0.006 0.069 0.005 0.027 0.008 0.055 0.109 0.000 0.000 0.040 0.000
26 38 16 17 8 25 25 25 13 13 10
302 VISIONLABS -011 0.001 0.009 0.064 0.001 0.004 0.063 0.003 0.020 0.004 0.034 0.090 0.000 0.000 0.032 0.000
114 102 85 107 103 113 134 122 59 119 89 139
303 VIXVIZION -009 0.002 0.013 0.220 0.005 0.031 0.107 0.027 0.077 0.745 0.041 0.286 0.472 0.000 0.000 0.000 0.000
106 125 66 63 85 41 82 111 56 109 125 116
304 VNPT-001 0.002 0.014 0.145 0.002 0.023 0.071 0.014 0.068 0.718 0.035 0.990 0.362 0.001 0.000 0.042 0.000
89 97 26 14 19 11 48 50 19 29 32 15
305 VNPT-002 0.002 0.012 0.068 0.001 0.006 0.054 0.007 0.032 0.292 0.007 0.072 0.096 0.001 0.000 0.042 0.000
187 194 202 209 181 233 200 179 180 125
306 VOCORD -003 0.006 0.024 0.804 0.061 0.188 0.122 0.155 0.998 0.157 0.404 0.001 0.011 0.425
203 171 196 161 143 278 210 226 189 215
307 VOCORD -004 0.008 0.021 0.792 0.012 0.127 0.355 0.173 1.000 0.193 0.991 0.000 0.000 0.000
197 189 204 205 185 247 183 171 174 119
308 VOCORD -005 0.007 0.023 0.812 0.055 0.206 0.158 0.130 0.997 0.138 0.381 0.001 0.009 0.554
321 317 313 248 319 319 312 263 274 233
309 VOCORD -006 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.001 0.009 0.554
316 308 220 233 136 232 296 283 200 221 116 175
310 VTS -000 0.594 0.608 0.909 0.607 0.724 0.739 0.598 0.619 0.999 0.613 0.760 0.761 0.000 0.001 0.047 0.000
59 57 73 122 72 53 79 80 149 78 53 72
311 VTS -001 0.002 0.010 0.167 0.006 0.018 0.077 0.013 0.051 0.994 0.022 0.141 0.192 0.000 0.000 0.040 0.000
92 105 90 169 109 138 126 120 210 125 81 127
312 VTS -002 0.002 0.013 0.233 0.014 0.038 0.125 0.026 0.075 1.000 0.045 0.231 0.417 0.000 0.000 0.029 0.000

21 15 33 32 38 8 53 54 235 55 123 163
313 VTS -003 0.001 0.007 0.074 0.002 0.009 0.053 0.007 0.033 1.000 0.014 0.954 0.635 0.000 0.001 0.029 0.000
115 116 44 89 64 83 88 89 29 74 62 59
314 XFORWARDAI -000 0.002 0.014 0.089 0.004 0.015 0.094 0.015 0.053 0.440 0.021 0.159 0.169 0.000 0.000 0.000 0.000
103 100 23 73 39 64 39 44 30 35 27 33
315 XFORWARDAI -001 0.002 0.013 0.067 0.003 0.009 0.082 0.005 0.028 0.448 0.008 0.062 0.123 0.000 0.000 0.000 0.000
95 92 9 62 22 51 24 17 38 17 16 21
316 XFORWARDAI -002 0.002 0.012 0.059 0.002 0.007 0.077 0.003 0.016 0.525 0.005 0.041 0.099 0.000 0.000 0.000 0.000
264 255 208 209 275 291 224 199
317 YISHENG -001 0.027 0.060 0.058 0.287 0.346 0.808 0.666 0.919 0.002 0.005
84 60 96 75
318 YITU -002 0.002 0.010 0.018 0.049 0.000 0.000
140 138 104 84
319 YITU -003 0.003 0.016 0.019 0.052 0.003 0.001
36 34 214 61 39 87
320 YITU -004 0.001 0.008 0.866 0.010 0.027 0.936 0.000 0.000 0.000
117 126 68 52
321 YITU -005 0.002 0.014 0.010 0.032 0.003 0.001

Table 17: Miss rates by dataset. At left, rank-one miss rates relevant to investigations; at right, miss rates with the threshold set to target FPIR = 0.01 for higher-volume, low-prior uses. Yellow indicates the most accurate algorithm. Throughout, blue superscripts indicate the rank of the algorithm for that column.
FNIR(N, R, T) = False neg. identification rate; FPIR(N, T) = False pos. identification rate; N = Num. enrolled subjects; R = Num. candidates examined; T = Threshold; T = 0 → Investigation; T > 0 → Identification.


MISSES BELOW THRESHOLD, T | ENROL MOST RECENT
FNIR(N, T > 0, R > L) | DATASET: FRVT 2018 MUGSHOTS
# ALGORITHM N = 0.64M N = 1.6M N = 3.0M N = 6.0M N = 12.0M
252 252 222 213 204
1 3 DIVI -005 0.1358 0.1664 0.1915 0.2370 0.3054
245 244 216 206 197
2 ACER -000 0.1185 0.1455 0.1714 0.2074 0.2537
243 246 218 214 212
3 ALCHERA -003 0.1176 0.1553 0.1853 0.2409 0.3553
217 218 199 191 177
4 ALLGOVISION -000 0.0688 0.0881 0.1084 0.1389 0.2129
223 224 206 198 184
5 ALLGOVISION -001 0.0785 0.1017 0.1218 0.1584 0.2273
230 228 211 203 198
6 ANKE -000 0.0942 0.1169 0.1404 0.1776 0.2559
147 147 145 139 125
7 ANKE -002 0.0229 0.0318 0.0406 0.0605 0.1466
240 236 212 201 189
8 AWARE -003 0.1098 0.1283 0.1447 0.1768 0.2364
281 279 229 222 196
9 AWARE -005 0.3389 0.3643 0.3993 0.4526 0.2531
305 305 236 228 221
10 AYONIX -002 0.7862 0.8242 0.8508 0.8704 0.8939
173 202 194 216 201
11 CAMVI -004 0.0367 0.0716 0.0983 0.2508 0.2701
43 43 43 41 49
12 CANON -001 0.0039 0.0054 0.0074 0.0158 0.0924
38 36 37 32 30
13 CANON -002 0.0036 0.0047 0.0061 0.0124 0.0808
72 76 75 79 96
14 CIB -000 0.0086 0.0125 0.0160 0.0303 0.1251
45 45 45 42 52
15 CLEARVIEWAI -000 0.0040 0.0058 0.0078 0.0159 0.0971
13 13 10 15 21
16 CLOUDWALK - HR -000 0.0019 0.0020 0.0023 0.0072 0.0701
14 12 9 6 9
17 CLOUDWALK - MT-000 0.0019 0.0020 0.0022 0.0049 0.0466
12 10 6 7 15
18 CLOUDWALK - MT-001 0.0018 0.0019 0.0020 0.0052 0.0555
191 176 175 177 166
19 COGENT-000 0.0430 0.0527 0.0695 0.1133 0.1960
192 177 176 176 165


20 COGENT-001 0.0430 0.0527 0.0695 0.1133 0.1960
159 163 161 174 179
21 COGENT-002 0.0322 0.0444 0.0610 0.1116 0.2180
160 168 173 184 191
22 COGENT-003 0.0328 0.0463 0.0683 0.1294 0.2445
144 148 156 179 176
23 COGENT-004 0.0210 0.0331 0.0527 0.1138 0.2119
60 60 60 80 92
24 COGENT-005 0.0064 0.0091 0.0123 0.0303 0.1233
32 32 32 28 34
25 COGENT-006 0.0032 0.0044 0.0057 0.0120 0.0830
254 250 221 208 203
26 COGNITEC -000 0.1377 0.1606 0.1870 0.2176 0.2831
225 223 205 194 182
27 COGNITEC -001 0.0807 0.1017 0.1214 0.1513 0.2238
184 179 169 162 161
28 COGNITEC -002 0.0406 0.0531 0.0666 0.0935 0.1874
181 175 164 157 154
29 COGNITEC -003 0.0400 0.0526 0.0650 0.0895 0.1772
146 146 141 130 73
30 COGNITEC -004 0.0222 0.0313 0.0388 0.0540 0.1103
59 62 68 73 51
31 COGNITEC -005 0.0063 0.0096 0.0144 0.0287 0.0967
53 55 59 60 45
32 COGNITEC -006 0.0053 0.0077 0.0117 0.0254 0.0919
186 187 180 170 172
33 CYBERLINK -000 0.0414 0.0565 0.0707 0.1031 0.2050
177 180 174 167 155
34 CYBERLINK -001 0.0392 0.0536 0.0695 0.0973 0.1794
83 86 90 102 97
35 CYBERLINK -002 0.0105 0.0148 0.0202 0.0399 0.1255
55 56 53 56 93
36 CYBERLINK -003 0.0056 0.0077 0.0100 0.0235 0.1237
51 52 55 48 100
37 CYBERLINK -004 0.0051 0.0071 0.0102 0.0199 0.1269
63 66 66 99 139
38 CYBERLINK -005 0.0067 0.0099 0.0138 0.0394 0.1566
206 204 188 180 160
39 DAHUA -001 0.0569 0.0727 0.0878 0.1148 0.1867
88 87 85 75 85
40 DAHUA -002 0.0108 0.0151 0.0191 0.0291 0.1153
80 81 81 76 79
41 DAHUA -003 0.0100 0.0139 0.0180 0.0296 0.1130
49 51 50 44 35
42 DAHUA -004 0.0048 0.0069 0.0090 0.0164 0.0853
117 117 117 138 148
43 DAON -000 0.0161 0.0226 0.0293 0.0562 0.1702
120 120 118 112 122
44 DECATUR -000 0.0173 0.0229 0.0305 0.0464 0.1433
26 26 25 30 48
45 DEEPGLINT-001 0.0027 0.0033 0.0043 0.0121 0.0922
169 167 160 155 150
46 DEEPSEA -001 0.0347 0.0462 0.0586 0.0802 0.1708
221 217 201 197 192
47 DERMALOG -005 0.0700 0.0880 0.1144 0.1578 0.2451
178 174 165 166 153
48 DERMALOG -006 0.0395 0.0517 0.0659 0.0973 0.1745
218 215 200 193 187
49 DERMALOG -007 0.0691 0.0863 0.1107 0.1504 0.2299
165 165 163 171 185
50 DERMALOG -008 0.0338 0.0455 0.0626 0.1060 0.2276
110 110 109 106 114
51 DERMALOG -009 0.0148 0.0206 0.0268 0.0416 0.1374
52 50 49 49 53
52 DERMALOG -010 0.0052 0.0069 0.0088 0.0207 0.0971
143 144 139 134 120
53 DILUSENSE -000 0.0208 0.0305 0.0377 0.0543 0.1429
22 23 24 21 27
54 FIRSTCREDITKZ -001 0.0023 0.0030 0.0039 0.0093 0.0760
111 111 113 132 152
55 FUJITSULAB -000 0.0148 0.0206 0.0277 0.0541 0.1739
93 100 104 142 173
56 FUJITSULAB -001 0.0126 0.0182 0.0251 0.0646 0.2079
258 258 225 217 211
57 GORILLA -002 0.1539 0.1880 0.2184 0.2596 0.3398
220 220 197 189 169
58 GORILLA -004 0.0699 0.0892 0.1048 0.1370 0.1969
196 191 179 168 126
59 GORILLA -005 0.0453 0.0583 0.0704 0.0974 0.1474
135 135 125 122 76
60 GORILLA -006 0.0196 0.0275 0.0331 0.0516 0.1113
132 133 131 126 78
61 GORILLA -007 0.0190 0.0271 0.0348 0.0520 0.1129
119 121 119 113 77
62 GORILLA -008 0.0170 0.0238 0.0308 0.0469 0.1120
107 108 105 104 123
63 GRIAULE -000 0.0145 0.0201 0.0253 0.0407 0.1440
34 37 39 39 42
64 GRIAULE -001 0.0033 0.0047 0.0064 0.0153 0.0910
226 225 204 196 194
65 HIK -003 0.0828 0.1028 0.1202 0.1525 0.2480
224 221 202 192 195
66 HIK -004 0.0796 0.0988 0.1147 0.1474 0.2483
157 160 159 159 178
67 HIK -005 0.0312 0.0436 0.0560 0.0911 0.2129
33 33 33 25 37
68 HYPERVERGE -001 0.0033 0.0045 0.0059 0.0117 0.0872
28 27 27 12 12
69 HYPERVERGE -002 0.0028 0.0037 0.0046 0.0064 0.0538
106 107 106 105 88
70 HZAILU -000 0.0143 0.0197 0.0255 0.0411 0.1174
62 57 56 50 66
71 HZAILU -001 0.0066 0.0086 0.0109 0.0207 0.1052
167 170 189 219 215
72 IDEMIA -003 0.0346 0.0471 0.0892 0.2789 0.4311

Table 18: Identification-mode: Effect of N on FNIR at high threshold. Values are threshold-based miss rates, i.e. FNIR at FPIR = 0.001, for five enrollment population sizes, N. The right six columns apply for enrollment of one image. Missing entries usually occur because another algorithm from the same developer was run instead. Some developers are missing because less accurate algorithms were not run on galleries with N ≥ 3 000 000. Throughout, blue superscripts indicate the rank of the algorithm for that column.

FNIR(N, R, T) = False neg. identification rate; FPIR(N, T) = False pos. identification rate; N = Num. enrolled subjects; R = Num. candidates examined; T = Threshold; T = 0 → Investigation; T > 0 → Identification.
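The threshold-based values in the table above are quoted at an operating threshold chosen so that FPIR is close to 0.001 on nonmated searches, for each gallery size N. A minimal sketch of that workflow follows; the synthetic score arrays and the simple quantile-based threshold selection are assumptions for illustration only, and the sketch ignores rank limits and failure-to-extract handling.

# Sketch of the assumed workflow behind the threshold-based columns: choose T so
# that FPIR on nonmated searches is near a target (here 0.001), then report the
# mated miss rate at that T. Scores below are synthetic stand-ins, not report data.
import numpy as np

def threshold_for_fpir(nonmated_top_scores, target_fpir=0.001):
    """Threshold at the (1 - target_fpir) quantile of nonmated top scores, so
    roughly target_fpir of nonmated searches score at or above it."""
    return float(np.quantile(np.asarray(nonmated_top_scores), 1.0 - target_fpir))

def fnir_at_threshold(mated_mate_scores, T):
    """Fraction of mated searches whose mate's score falls below T."""
    return float(np.mean(np.asarray(mated_mate_scores) < T))

rng = np.random.default_rng(0)
nonmated_top = rng.normal(0.2, 0.1, 100_000)  # impostor top scores (synthetic)
mated_scores = rng.normal(0.8, 0.15, 10_000)  # genuine mate scores (synthetic)

T = threshold_for_fpir(nonmated_top, 0.001)
print(f"T = {T:.3f}, FNIR at that threshold = {fnir_at_threshold(mated_scores, T):.4f}")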

MISSES BELOW THRESHOLD, T | ENROL MOST RECENT
FNIR(N, T > 0, R > L) | DATASET: FRVT 2018 MUGSHOTS
# ALGORITHM N = 0.64M N = 1.6M N = 3.0M N = 6.0M N = 12.0M
156 156 148 140 147
73 IDEMIA -004 0.0300 0.0373 0.0447 0.0617 0.1635
172 162 157 154 162
74 IDEMIA -005 0.0360 0.0440 0.0537 0.0764 0.1915
170 159 155 150 180
75 IDEMIA -006 0.0351 0.0433 0.0525 0.0734 0.2201
101 99 92 91 118
76 IDEMIA -007 0.0136 0.0181 0.0228 0.0357 0.1402
9 9 12 8 10
77 IDEMIA -008 0.0016 0.0019 0.0024 0.0053 0.0470
3 3 3 11 14
78 IDEMIA -009 0.0013 0.0016 0.0018 0.0061 0.0550
102 103 98 93 69
79 IMAGUS -005 0.0137 0.0185 0.0237 0.0368 0.1067
103 106 102 100 86
80 IMAGUS -006 0.0137 0.0190 0.0244 0.0396 0.1159
115 119 115 109 89
81 IMAGUS -007 0.0160 0.0228 0.0284 0.0444 0.1179
128 127 137 149 156
82 IMPERIAL -000 0.0187 0.0259 0.0358 0.0733 0.1794
251 253 223 211 206
83 INCODE -003 0.1324 0.1672 0.1961 0.2345 0.3123
182 183 168 161 144
84 INCODE -004 0.0403 0.0538 0.0662 0.0917 0.1619
70 70 69 57 43
85 INCODE -005 0.0083 0.0113 0.0145 0.0247 0.0912
76 77 74 62 70
86 INNOVATRICS -007 0.0093 0.0125 0.0159 0.0259 0.1092
15 16 17 22 25
87 INTEMA -000 0.0019 0.0024 0.0032 0.0098 0.0745
316 314 240 231 225
88 INTSYSMSU -000 0.9982 0.9984 0.9985 0.9987 0.9988
133 139 142 145 129
89 IREX -000 0.0190 0.0280 0.0391 0.0677 0.1479
208 208 193 190 186
90 ISYSTEMS -002 0.0584 0.0783 0.0973 0.1373 0.2295
194 192 186 182 188
91 ISYSTEMS -003 0.0438 0.0590 0.0807 0.1259 0.2357
89 89 87 86 57
92 KAKAO -000 0.0109 0.0151 0.0196 0.0324 0.1010
20 18 18 19 20
93 KAKAO -001 0.0021 0.0026 0.0032 0.0085 0.0693
123 118 107 108 109
94 KEDACOM -001 0.0181 0.0227 0.0265 0.0422 0.1340
27 28 28 29 32
95 LINECLOVA -002 0.0028 0.0040 0.0049 0.0120 0.0824
168 161 153 148 145
96 LOOKMAN -003 0.0346 0.0437 0.0514 0.0724 0.1620
148 143 136 121 108
97 LOOKMAN -005 0.0240 0.0301 0.0356 0.0512 0.1334
61 67 70 81 62
98 MANTRA -000 0.0065 0.0101 0.0151 0.0308 0.1035
142 140 133 123 110
99 MAXVISION -000 0.0206 0.0282 0.0355 0.0517 0.1340
30 31 29 31 41
100 MAXVISION -001 0.0031 0.0043 0.0055 0.0122 0.0895
204 203 187 186 202
101 MEGVII -001 0.0562 0.0722 0.0872 0.1309 0.2713
313 307 237 229 222
102 MICROFOCUS -005 0.9732 0.8354 0.8555 0.8755 0.8954
136 137 135 129 135
103 MICROSOFT-003 0.0198 0.0278 0.0356 0.0538 0.1539
126 128 126 124 133
104 MICROSOFT-004 0.0185 0.0259 0.0333 0.0517 0.1510
124 125 123 120 131
105 MICROSOFT-005 0.0181 0.0256 0.0320 0.0512 0.1491
75 72 76 78 130
106 MICROSOFT-006 0.0091 0.0120 0.0162 0.0301 0.1482
295 295 234 226 220
107 MUKH -002 0.5041 0.5942 0.6674 0.7314 0.8276
212 210 192 181 164
108 NEC -000 0.0637 0.0789 0.0933 0.1163 0.1941
227 227 207 195 183
109 NEC -001 0.0863 0.1055 0.1249 0.1519 0.2253
18 19 19 35 18
110 NEC -002 0.0020 0.0026 0.0033 0.0135 0.0653
19 15 14 10 13
111 NEC -003 0.0021 0.0024 0.0028 0.0059 0.0540
10 6 5 3 4
112 NEC -004 0.0017 0.0018 0.0020 0.0037 0.0329
5 4 4 13 2
113 NEC -005 0.0015 0.0017 0.0019 0.0065 0.0307
11 11 13 24 16
114 NEC -006 0.0018 0.0020 0.0026 0.0103 0.0573
298 299 235 227 219
115 NEUROTECHNOLOGY-003 0.5698 0.6362 0.7035 0.7602 0.8224
198 197 181 178 175
116 NEUROTECHNOLOGY-004 0.0466 0.0629 0.0779 0.1135 0.2102
179 184 171 165 168
117 NEUROTECHNOLOGY-005 0.0396 0.0538 0.0675 0.0950 0.1966
193 196 183 187 190
118 NEUROTECHNOLOGY-007 0.0436 0.0623 0.0802 0.1320 0.2393
166 178 190 202 209
119 NEUROTECHNOLOGY-008 0.0339 0.0530 0.0893 0.1769 0.3288
87 90 88 84 72
120 NEUROTECHNOLOGY-009 0.0108 0.0152 0.0196 0.0324 0.1102
66 65 67 111 151
121 NEUROTECHNOLOGY-010 0.0069 0.0099 0.0138 0.0449 0.1727
48 49 52 66 112
122 NEUROTECHNOLOGY-012 0.0047 0.0068 0.0097 0.0265 0.1343
94 94 93 92 101
123 NOTIONTAG -000 0.0128 0.0175 0.0228 0.0357 0.1270
188 182 170 158 141
124 NTECHLAB -003 0.0421 0.0537 0.0674 0.0907 0.1582
158 157 154 147 132
125 NTECHLAB -004 0.0312 0.0405 0.0519 0.0722 0.1503
162 158 158 153 138
126 NTECHLAB -005 0.0334 0.0424 0.0537 0.0760 0.1543
154 152 151 144 134
127 NTECHLAB -006 0.0288 0.0367 0.0471 0.0670 0.1523
129 124 121 118 107
128 NTECHLAB -007 0.0188 0.0256 0.0317 0.0495 0.1306
85 83 83 72 55
129 NTECHLAB -008 0.0107 0.0145 0.0187 0.0286 0.0995
40 40 38 33 24
130 NTECHLAB -009 0.0037 0.0049 0.0062 0.0125 0.0735
16 17 15 17 23
131 NTECHLAB -010 0.0020 0.0025 0.0030 0.0077 0.0710
21 22 23 16 17
132 NTECHLAB -011 0.0022 0.0030 0.0038 0.0075 0.0625
46 46 46 43 39
133 PANGIAM -000 0.0042 0.0060 0.0080 0.0160 0.0876
150 150 149 143 146
134 PARAVISION -003 0.0260 0.0351 0.0447 0.0657 0.1630
67 69 65 67 98
135 PARAVISION -004 0.0074 0.0101 0.0136 0.0267 0.1256
31 30 31 46 63
136 PARAVISION -005 0.0032 0.0041 0.0057 0.0174 0.1037
29 29 30 51 71
137 PARAVISION -007 0.0030 0.0040 0.0055 0.0211 0.1097
17 20 22 23 36
138 PARAVISION -009 0.0020 0.0026 0.0038 0.0098 0.0857
222 226 214 215 214
139 PIXELALL -002 0.0716 0.1052 0.1475 0.2489 0.3904
114 114 116 114 83
140 PIXELALL -003 0.0158 0.0218 0.0288 0.0474 0.1138
97 102 103 94 115
141 PIXELALL -004 0.0129 0.0183 0.0245 0.0378 0.1375
73 74 78 59 65
142 PIXELALL -005 0.0087 0.0121 0.0171 0.0250 0.1052
151 151 150 127 11
143 PTAKURATSATU -000 0.0275 0.0366 0.0458 0.0523 0.0523
183 181 167 160 142
144 QNAP -001 0.0404 0.0536 0.0661 0.0916 0.1595

Table 19: Identification-mode: Effect of N on FNIR at high threshold. Values are threshold-based miss rates, i.e. FNIR at FPIR = 0.001, for five enrollment population sizes, N. The right six columns apply for enrollment of one image. Missing entries usually occur because another algorithm from the same developer was run instead. Some developers are missing because less accurate algorithms were not run on galleries with N ≥ 3 000 000. Throughout, blue superscripts indicate the rank of the algorithm for that column.


MISSES BELOW THRESHOLD, T | ENROL MOST RECENT
FNIR(N, T > 0, R > L) | DATASET: FRVT 2018 MUGSHOTS
# ALGORITHM N = 0.64M N = 1.6M N = 3.0M N = 6.0M N = 12.0M
137 129 124 116 111
145 QNAP -002 0.0200 0.0265 0.0327 0.0490 0.1341
105 105 100 95 119
146 QNAP -003 0.0139 0.0189 0.0239 0.0379 0.1414
300 300 233 217
147 QUANTASOFT-001 0.6387 0.6387 0.6387 0.6387
235 230 209 200 200
148 RANKONE -002 0.0973 0.1175 0.1359 0.1718 0.2613
236 231 208 199 199
149 RANKONE -003 0.0973 0.1175 0.1359 0.1718 0.2613
199 193 177 163 170
150 RANKONE -005 0.0473 0.0592 0.0700 0.0944 0.1998
118 116 108 97 80
151 RANKONE -007 0.0168 0.0222 0.0266 0.0381 0.1132
98 97 95 88 47
152 RANKONE -009 0.0132 0.0177 0.0230 0.0344 0.0921
84 80 79 65 28
153 RANKONE -010 0.0106 0.0136 0.0174 0.0265 0.0785
58 58 57 68 82
154 RANKONE -011 0.0063 0.0087 0.0115 0.0269 0.1135
56 54 54 52 75
155 RANKONE -012 0.0058 0.0077 0.0100 0.0220 0.1111
36 34 34 34 38
156 RANKONE -013 0.0034 0.0046 0.0059 0.0127 0.0875
264 263 228 221 208
157 REALNETWORKS -002 0.1943 0.2314 0.2656 0.3134 0.3208
250 249 220 209 205
158 REALNETWORKS -003 0.1300 0.1594 0.1858 0.2246 0.3076
249 248 219 210 207
159 REALNETWORKS -004 0.1279 0.1581 0.1857 0.2329 0.3179
138 136 134 137 121
160 REALNETWORKS -005 0.0202 0.0277 0.0355 0.0560 0.1431
78 84 82 82 54
161 REALNETWORKS -006 0.0097 0.0145 0.0182 0.0308 0.0991
65 63 61 54 44
162 REALNETWORKS -007 0.0068 0.0097 0.0125 0.0233 0.0917
47 47 47 36 31
163 REALNETWORKS -008 0.0044 0.0062 0.0082 0.0139 0.0824
185 185 172 169 171
164 REMARKAI -000 0.0406 0.0552 0.0676 0.1028 0.2003


71 73 72 71 90
165 RENDIP -000 0.0085 0.0121 0.0156 0.0277 0.1182
74 75 73 70 59
166 REVEALMEDIA -000 0.0090 0.0122 0.0158 0.0277 0.1019
141 138 140 141 149
167 S 1-000 0.0204 0.0279 0.0382 0.0630 0.1707
90 91 89 98 99
168 S 1-001 0.0115 0.0156 0.0199 0.0392 0.1256
44 44 44 64 103
169 S 1-002 0.0040 0.0056 0.0077 0.0264 0.1285
57 59 58 69 104
170 S 1-003 0.0061 0.0088 0.0116 0.0277 0.1298
200 200 184 173 74
171 SCANOVATE -000 0.0498 0.0667 0.0804 0.1097 0.1109
211 211 195 183 167
172 SCANOVATE -001 0.0630 0.0815 0.0993 0.1292 0.1960
113 112 111 101 91
173 SENSETIME -000 0.0158 0.0208 0.0270 0.0398 0.1232
116 115 114 107 105
174 SENSETIME -001 0.0161 0.0219 0.0277 0.0420 0.1304
108 85 71 55 19
175 SENSETIME -002 0.0146 0.0148 0.0153 0.0234 0.0657
7 8 8 9 7
176 SENSETIME -003 0.0016 0.0018 0.0021 0.0054 0.0451
6 5 7 4 5
177 SENSETIME -004 0.0015 0.0018 0.0021 0.0040 0.0354
8 14 16 20 8
178 SENSETIME -005 0.0016 0.0022 0.0031 0.0089 0.0454
4 7 11 5 6
179 SENSETIME -006 0.0014 0.0018 0.0023 0.0047 0.0372
2 2 2 2 3
180 SENSETIME -007 0.0012 0.0014 0.0016 0.0036 0.0316
1 1 1 1 1
181 SENSETIME -008 0.0011 0.0013 0.0015 0.0031 0.0288
247 243 215 204 193
182 SHAMAN -007 0.1212 0.1413 0.1587 0.1879 0.2460
100 95 96 87 61
183 SIAT-001 0.0136 0.0176 0.0230 0.0344 0.1035
112 113 112 103 102
184 SIAT-002 0.0154 0.0216 0.0273 0.0404 0.1283
228 239 217 212 223
185 SQISOFT-001 0.0921 0.1322 0.1781 0.2348 0.9271
121 142 146 151 113
186 SQISOFT-002 0.0177 0.0290 0.0415 0.0739 0.1351
296 294 232 225 218
187 SYNESIS -003 0.5341 0.5821 0.6113 0.6479 0.6822
201 198 185 172 163
188 SYNESIS -003 0.0499 0.0652 0.0804 0.1095 0.1916
122 123 122 125 140
189 SYNESIS -005 0.0181 0.0248 0.0319 0.0518 0.1580
187 188 191 207 213
190 TECH 5-001 0.0420 0.0574 0.0911 0.2106 0.3725
134 132 130 128 143
191 TECH 5-002 0.0194 0.0269 0.0346 0.0537 0.1607
219 216 198 185 158
192 TEVIAN -005 0.0692 0.0873 0.1066 0.1301 0.1840
69 64 63 63 106
193 TEVIAN -006 0.0078 0.0098 0.0130 0.0261 0.1305
42 42 40 40 50
194 TEVIAN -007 0.0038 0.0052 0.0065 0.0154 0.0957
214 213 196 188 181
195 TIGER -002 0.0647 0.0861 0.1036 0.1332 0.2231
197 195 182 175 174
196 TOSHIBA -000 0.0460 0.0620 0.0780 0.1117 0.2082
99 101 99 96 117
197 TRUEFACE -000 0.0134 0.0182 0.0238 0.0380 0.1385
260 260 227 218 210
198 VD -001 0.1642 0.2015 0.2351 0.2736 0.3293
153 155 152 152 136
199 VERIDAS -001 0.0278 0.0373 0.0491 0.0753 0.1541
152 154 138 117 26
200 VERIDAS -002 0.0278 0.0373 0.0373 0.0491 0.0753
91 93 91 110 137
201 VERIDAS -003 0.0117 0.0166 0.0219 0.0446 0.1543
109 109 110 115 87
202 VIGILANTSOLUTIONS -008 0.0146 0.0205 0.0269 0.0489 0.1164
92 98 101 224
203 VISIONBOX -000 0.0122 0.0177 0.0239 0.9538
190 189 178 164 159
204 VISIONLABS -004 0.0427 0.0578 0.0703 0.0949 0.1853
175 173 162 156 157
205 VISIONLABS -005 0.0369 0.0502 0.0626 0.0847 0.1815
131 131 128 133 127
206 VISIONLABS -006 0.0188 0.0267 0.0336 0.0542 0.1478
130 130 127 131 128
207 VISIONLABS -007 0.0188 0.0266 0.0335 0.0540 0.1479
77 78 77 74 95
208 VISIONLABS -008 0.0096 0.0131 0.0166 0.0291 0.1247
37 35 35 37 40
209 VISIONLABS -009 0.0034 0.0046 0.0060 0.0140 0.0881
41 41 42 38 46
210 VISIONLABS -010 0.0038 0.0051 0.0070 0.0149 0.0920
24 25 26 27 33
211 VISIONLABS -011 0.0025 0.0033 0.0044 0.0120 0.0825
140 134 132 135 116
212 VIXVIZION -009 0.0203 0.0273 0.0348 0.0545 0.1377
82 82 84 77 60
213 VNPT-001 0.0104 0.0143 0.0190 0.0296 0.1028
50 48 48 45 56
214 VNPT-002 0.0051 0.0065 0.0083 0.0172 0.1005
244 247 224 220 216
215 VOCORD -005 0.1179 0.1577 0.2183 0.3122 0.4490
81 79 80 83 94
216 VTS -001 0.0102 0.0133 0.0175 0.0322 0.1243

Table 20: Identification-mode: Effect of N on FNIR at high threshold. Values are threshold-based miss rates, i.e. FNIR at FPIR = 0.001, for five enrollment population sizes, N. The right six columns apply for enrollment of one image. Missing entries usually occur because another algorithm from the same developer was run instead. Some developers are missing because less accurate algorithms were not run on galleries with N ≥ 3 000 000. Throughout, blue superscripts indicate the rank of the algorithm for that column.


MISSES BELOW THRESHOLD, T | ENROL MOST RECENT
FNIR(N, T > 0, R > L) | DATASET: FRVT 2018 MUGSHOTS
# ALGORITHM N = 0.64M N = 1.6M N = 3.0M N = 6.0M N = 12.0M
125 126 129 136 124
217 VTS -002 0.0185 0.0259 0.0344 0.0549 0.1447
54 53 51 47 58
218 VTS -003 0.0053 0.0073 0.0096 0.0188 0.1017
86 88 86 85 67
219 XFORWARDAI -000 0.0107 0.0151 0.0195 0.0324 0.1057
39 39 36 26 29
220 XFORWARDAI -001 0.0037 0.0049 0.0060 0.0120 0.0800
25 24 21 18 22
221 XFORWARDAI -002 0.0026 0.0030 0.0035 0.0078 0.0706
96 96 94 89 81
222 YITU -002 0.0129 0.0177 0.0228 0.0345 0.1133
104 104 97 90 84
223 YITU -003 0.0138 0.0185 0.0236 0.0353 0.1148
64 61 62 53 64
224 YITU -004 0.0067 0.0096 0.0129 0.0232 0.1046
68 68 64 61 68
225 YITU -005 0.0074 0.0101 0.0135 0.0255 0.1057

Table 21: Identification-mode: Effect of N on FNIR at high threshold. Values are threshold-based miss rates, i.e. FNIR at FPIR = 0.001, for five enrollment population sizes, N. The right six columns apply for enrollment of one image. Missing entries usually occur because another algorithm from the same developer was run instead. Some developers are missing because less accurate algorithms were not run on galleries with N ≥ 3 000 000. Throughout, blue superscripts indicate the rank of the algorithm for that column.

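The aN^b entries in the following table summarize how miss rates grow with the enrolled population size N. The sketch below fits such a power law to one row's five miss rates; the least-squares fit in log-log space is an assumption and may not exactly match the fitting procedure used for the report.

# Sketch of a power-law summary of gallery-size dependence, assuming the aN^b
# entries in the next table come from a fit of FNIR against N.
import numpy as np

def fit_power_law(gallery_sizes, miss_rates):
    """Return (a, b) such that miss_rate is approximately a * N**b."""
    logN = np.log(np.asarray(gallery_sizes, dtype=float))
    logF = np.log(np.asarray(miss_rates, dtype=float))
    b, log_a = np.polyfit(logN, logF, 1)  # slope and intercept of the line fit
    return float(np.exp(log_a)), float(b)

# Rank-one miss rates from the next table's first row (3DIVI-005) at the five
# tabulated gallery sizes.
N = [0.64e6, 1.6e6, 3.0e6, 6.0e6, 12.0e6]
fnir = [0.0137, 0.0176, 0.0210, 0.0253, 0.0302]
a, b = fit_power_law(N, fnir)
print(f"FNIR ~ {a:.4f} * N^{b:.3f}")  # close to the tabulated 0.0004 * N^0.271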

MISSES AT GIVEN RANK | ENROL MOST RECENT
FNIR(N, T = 0, R) | RANK 1 | RANK 50
# ALGORITHM | RANK 1: N = 0.64M N = 1.6M N = 3.0M N = 6.0M N = 12.0M aN^b | RANK 50: N = 0.64M N = 1.6M N = 3.0M N = 6.0M N = 12.0M aN^b
251 249 215 209 203 159
1 3 DIVI -005 0.0137 0.0176 0.0210 0.0253 0.0302 0.0004 N0.271 186 229
0.0040 229
0.0049 205
0.0057 201
0.0068 195
0.0081 49
0.0002 N0.240 190
217 225 203 201 195 66
2 ACER -000 0.0081 0.0106 0.0128 0.0157 0.0195 0.0001 N0.299 210 173
0.0020 193
0.0026 182
0.0031 182
0.0037 177
0.0045 19
0.0000 N0.284 204
213 222 201 200 194 98
3 ALCHERA -003 0.0079 0.0104 0.0123 0.0147 0.0180 0.0002 N0.278 197 210
0.0027 208
0.0032 188
0.0035 185
0.0042 178
0.0048 53
0.0002 N0.199 180
233 231 202 199 193 199
4 ALLGOVISION -000 0.0101 0.0114 0.0127 0.0145 0.0166 0.0010 N0.171 117 250
0.0063 245
0.0067 210
0.0071 204
0.0075 194
0.0081 207
0.0020 N0.086 135
205 211 196 194 190 83
5 ALLGOVISION -001 0.0069 0.0090 0.0107 0.0128 0.0157 0.0002 N0.277 195 196
0.0023 197
0.0027 183
0.0031 177
0.0036 173
0.0043 42
0.0001 N0.211 185
236 239 211 206 199 137
6 ANKE -000 0.0102 0.0132 0.0155 0.0188 0.0225 0.0003 N0.270 185 219
0.0032 221
0.0040 201
0.0046 194
0.0056 186
0.0066 40
0.0001 N0.247 192
137 138 138 134 128 76
7 ANKE -002 0.0024 0.0028 0.0032 0.0037 0.0043 0.0002 N0.203 135 143
0.0016 144
0.0017 137
0.0017 129
0.0018 122
0.0019 114
0.0006 N0.067 122
269 267 227 221 217 196
8 AWARE -003 0.0238 0.0306 0.0361 0.0431 0.0506 0.0008 N0.258 180 244
0.0055 253
0.0075 221
0.0092 216
0.0113 214
0.0143 30
0.0001 N0.323 214
270 268 228 222 207 218
9 AWARE -005 0.0245 0.0311 0.0366 0.0434 0.0312 0.0056 N0.118 68 248
0.0062 259
0.0082 223
0.0101 219
0.0128 197
0.0089 141
0.0007 N0.169 175
306 306 238 230 223 223
10 AYONIX -002 0.2935 0.3414 0.3736 0.4101 0.4465 0.0440 N0.143 88 305
0.0950 307
0.1274 238
0.1524 229
0.1828 222
0.2150 209
0.0023 N0.279 201
244 278 232 229 222 3
11 CAMVI -004 0.0124 0.0468 0.0719 0.2363 0.2367 0.0000 N1.055 224 277
0.0117 292
0.0464 234
0.0715 230
0.2361 223
0.2364 3
0.0000 N1.071 224
17 15 17 18 15 115
12 CANON -001 0.0011 0.0011 0.0012 0.0013 0.0014 0.0002 N0.113 61 22
0.0009 22
0.0009 22
0.0009 22
0.0009 22
0.0010 113
0.0006 N0.026 63
21 23 30 31 30 75
13 CANON -002 0.0011 0.0012 0.0013 0.0014 0.0016 0.0002 N0.142 87 23
0.0009 21
0.0009 19
0.0009 17
0.0009 17
0.0009 142
0.0007 N0.015 34
63 58 59 62 184 4
14 CIB -000 0.0014 0.0015 0.0017 0.0019 0.0131 0.0000 N0.635 223 70
0.0012 63
0.0012 62
0.0012 61
0.0012 209
0.0122 4
0.0000 N0.647 223
14 16 16 20 20 88
15 CLEARVIEWAI -000 0.0010 0.0011 0.0012 0.0013 0.0015 0.0002 N0.129 79 25
0.0009 20
0.0009 21
0.0009 23
0.0009 19
0.0010 131
0.0007 N0.019 50
67 54 50 43 37 191
16 CLOUDWALK - HR -000 0.0015 0.0015 0.0015 0.0016 0.0017 0.0007 N0.054 12 126
0.0014 113
0.0014 103
0.0014 96
0.0014 76
0.0014 188
0.0012 N0.012 24
95 76 68 57 49 201
17 CLOUDWALK - MT-000 0.0018 0.0018 0.0018 0.0019 0.0020 0.0011 N0.036 6 159
0.0018 153
0.0018 143
0.0018 124
0.0018 109
0.0018 203
0.0017 N0.002 4
96 75 65 54 47 203
18 CLOUDWALK - MT-001 0.0018 0.0018 0.0018 0.0018 0.0019 0.0012 N0.029 4 158
0.0017 154
0.0018 141
0.0018 125
0.0018 108
0.0018 202
0.0017 N0.003 7
234 224 197 189 181 215


19 COGENT-000 0.0101 0.0105 0.0109 0.0115 0.0125 0.0038 N0.071 23 184
0.0021 185
0.0024 178
0.0028 179
0.0036 201
0.0095 9
0.0000 N0.466 219
235 223 198 190 182 216
20 COGENT-001 0.0101 0.0105 0.0109 0.0115 0.0125 0.0038 N0.071 22 185
0.0021 186
0.0024 177
0.0028 180
0.0036 200
0.0095 8
0.0000 N0.466 220
150 153 151 149 144 45
21 COGENT-002 0.0029 0.0036 0.0041 0.0049 0.0059 0.0001 N0.244 171 124
0.0014 134
0.0015 131
0.0017 135
0.0019 136
0.0021 54
0.0002 N0.144 169
156 155 156 152 148 62
22 COGENT-003 0.0031 0.0038 0.0043 0.0051 0.0060 0.0001 N0.230 159 138
0.0015 147
0.0017 152
0.0018 147
0.0020 141
0.0022 59
0.0002 N0.143 168
97 97 95 95 87 106
23 COGENT-004 0.0018 0.0020 0.0022 0.0025 0.0028 0.0002 N0.159 105 115
0.0013 111
0.0014 108
0.0014 101
0.0015 90
0.0015 123
0.0007 N0.050 101
74 69 69 66 58 158
24 COGENT-005 0.0016 0.0017 0.0018 0.0020 0.0021 0.0004 N0.108 56 118
0.0013 103
0.0013 96
0.0014 85
0.0014 75
0.0014 181
0.0011 N0.017 40
36 33 31 27 24 157
25 COGENT-006 0.0012 0.0012 0.0013 0.0014 0.0015 0.0004 N0.088 39 51
0.0011 51
0.0011 47
0.0011 38
0.0011 34
0.0011 157
0.0008 N0.019 47
262 261 223 218 211 186
26 COGNITEC -000 0.0195 0.0252 0.0297 0.0352 0.0417 0.0006 N0.259 181 239
0.0050 243
0.0065 217
0.0077 214
0.0097 208
0.0122 37
0.0001 N0.305 208
228 232 208 204 197 126
27 COGNITEC -001 0.0090 0.0117 0.0139 0.0166 0.0199 0.0002 N0.271 188 215
0.0030 214
0.0034 197
0.0040 193
0.0046 182
0.0054 52
0.0002 N0.207 184
186 184 176 170 168 108
28 COGNITEC -002 0.0048 0.0057 0.0067 0.0079 0.0094 0.0002 N0.232 162 198
0.0024 194
0.0026 180
0.0028 172
0.0030 161
0.0034 93
0.0005 N0.117 155
189 188 179 176 171 133
29 COGNITEC -003 0.0053 0.0062 0.0072 0.0085 0.0100 0.0003 N0.222 149 212
0.0028 206
0.0030 185
0.0032 175
0.0035 167
0.0037 150
0.0008 N0.097 143
145 146 146 144 141 34
30 COGNITEC -004 0.0027 0.0032 0.0037 0.0045 0.0056 0.0001 N0.253 177 114
0.0013 115
0.0014 117
0.0015 118
0.0017 116
0.0019 63
0.0002 N0.123 160
64 66 72 69 71 67
31 COGNITEC -005 0.0014 0.0016 0.0018 0.0021 0.0024 0.0001 N0.169 114 59
0.0011 58
0.0011 55
0.0012 57
0.0012 48
0.0012 121
0.0007 N0.037 78
60 62 58 61 60 86
32 COGNITEC -006 0.0014 0.0016 0.0017 0.0019 0.0022 0.0002 N0.154 97 60
0.0011 57
0.0011 54
0.0012 52
0.0012 51
0.0012 122
0.0007 N0.036 77
162 157 159 155 150 102
33 CYBERLINK -000 0.0034 0.0040 0.0046 0.0054 0.0063 0.0002 N0.209 142 181
0.0021 177
0.0022 169
0.0023 165
0.0025 155
0.0027 109
0.0006 N0.092 140
153 151 153 151 147 46
34 CYBERLINK -001 0.0030 0.0035 0.0042 0.0050 0.0060 0.0001 N0.243 170 148
0.0016 151
0.0017 145
0.0018 143
0.0020 139
0.0022 79
0.0004 N0.109 150
136 130 128 120 109 177
35 CYBERLINK -002 0.0024 0.0026 0.0028 0.0031 0.0035 0.0005 N0.121 70 176
0.0020 171
0.0021 164
0.0021 157
0.0022 142
0.0022 191
0.0012 N0.036 76
65 63 61 52 50 149
36 CYBERLINK -003 0.0015 0.0016 0.0017 0.0018 0.0020 0.0003 N0.110 58 64
0.0011 61
0.0012 58
0.0012 60
0.0012 55
0.0013 105
0.0006 N0.047 96
78 68 67 58 53 178
37 CYBERLINK -004 0.0016 0.0017 0.0018 0.0019 0.0021 0.0005 N0.085 38 125
0.0014 117
0.0014 104
0.0014 97
0.0014 84
0.0015 180
0.0010 N0.022 54
87 80 74 71 66 169
38 CYBERLINK -005 0.0017 0.0018 0.0019 0.0021 0.0023 0.0004 N0.099 49 130
0.0014 118
0.0014 116
0.0015 104
0.0015 89
0.0015 169
0.0009 N0.032 72
191 192 183 181 175 85
39 DAHUA -001 0.0053 0.0067 0.0079 0.0093 0.0112 0.0002 N0.256 179 209
0.0027 200
0.0029 184
0.0031 174
0.0034 169
0.0038 99
0.0005 N0.121 158
85 85 87 82 80 101
40 DAHUA -002 0.0017 0.0018 0.0021 0.0023 0.0027 0.0002 N0.156 99 104
0.0013 104
0.0013 99
0.0014 91
0.0014 83
0.0015 136
0.0007 N0.043 92
16 29 36 42 44 25
41 DAHUA -003 0.0010 0.0012 0.0014 0.0016 0.0018 0.0001 N0.199 132 16
0.0009 14
0.0009 12
0.0009 16
0.0009 14
0.0009 107
0.0006 N0.027 64
15 14 14 13 16 112
42 DAHUA -004 0.0010 0.0011 0.0012 0.0013 0.0014 0.0002 N0.113 64 20
0.0009 19
0.0009 16
0.0009 20
0.0009 15
0.0009 116
0.0006 N0.023 55
173 160 154 143 138 205
43 DAON -000 0.0039 0.0041 0.0043 0.0045 0.0049 0.0014 N0.077 28 225
0.0036 217
0.0036 193
0.0036 181
0.0037 166
0.0037 211
0.0030 N0.013 28
100 104 107 104 103 53
44 DECATUR -000 0.0018 0.0021 0.0024 0.0028 0.0033 0.0001 N0.201 134 95
0.0013 95
0.0013 95
0.0013 98
0.0014 92
0.0016 90
0.0005 N0.072 125
59 51 46 45 43 163
45 DEEPGLINT-001 0.0014 0.0014 0.0015 0.0016 0.0018 0.0004 N0.089 41 105
0.0013 89
0.0013 88
0.0013 77
0.0013 69
0.0013 178
0.0010 N0.017 38
161 165 166 163 160 19
46 DEEPSEA -001 0.0033 0.0043 0.0052 0.0065 0.0081 0.0001 N0.311 213 92
0.0012 109
0.0014 121
0.0015 121
0.0017 124
0.0020 45
0.0001 N0.157 173
243 243 213 214 215 8
47 DERMALOG -005 0.0114 0.0149 0.0201 0.0289 0.0447 0.0000 N0.470 222 270
0.0094 272
0.0122 229
0.0171 223
0.0254 218
0.0406 10
0.0000 N0.505 221
208 206 191 182 172 210
48 DERMALOG -006 0.0075 0.0081 0.0086 0.0093 0.0104 0.0017 N0.109 57 249
0.0062 242
0.0063 207
0.0064 200
0.0065 187
0.0068 216
0.0043 N0.028 67
215 214 195 192 186 184
49 DERMALOG -007 0.0080 0.0092 0.0102 0.0118 0.0140 0.0006 N0.190 129 241
0.0051 236
0.0054 204
0.0056 195
0.0058 185
0.0063 208
0.0020 N0.068 123
135 139 140 138 136 40
50 DERMALOG -008 0.0024 0.0029 0.0034 0.0040 0.0048 0.0001 N0.239 167 137
0.0015 131
0.0015 128
0.0016 120
0.0017 119
0.0019 85
0.0004 N0.088 136
142 137 132 126 113 179
51 DERMALOG -009 0.0026 0.0028 0.0030 0.0033 0.0037 0.0005 N0.121 71 186
0.0022 178
0.0022 167
0.0022 159
0.0023 145
0.0024 196
0.0015 N0.028 68
123 108 103 90 75 200
52 DERMALOG -010 0.0022 0.0022 0.0023 0.0024 0.0025 0.0010 N0.054 13 177
0.0020 168
0.0020 162
0.0020 151
0.0021 134
0.0021 206
0.0018 N0.008 16
106 110 110 106 105 71
53 DILUSENSE -000 0.0019 0.0022 0.0025 0.0028 0.0033 0.0002 N0.187 126 84
0.0012 78
0.0013 80
0.0013 87
0.0014 78
0.0014 98
0.0005 N0.062 117
35 31 26 22 17 167
54 FIRSTCREDITKZ -001 0.0012 0.0012 0.0013 0.0013 0.0015 0.0004 N0.073 25 75
0.0012 64
0.0012 60
0.0012 50
0.0012 43
0.0012 185
0.0011 N0.004 10
107 111 116 114 104 68
55 FUJITSULAB -000 0.0019 0.0022 0.0025 0.0029 0.0033 0.0001 N0.190 128 116
0.0013 108
0.0014 105
0.0014 100
0.0015 96
0.0016 104
0.0006 N0.059 111
86 87 89 83 78 116
56 FUJITSULAB -001 0.0017 0.0019 0.0021 0.0023 0.0026 0.0002 N0.149 93 117
0.0013 110
0.0014 102
0.0014 99
0.0015 87
0.0015 137
0.0007 N0.045 93
253 255 219 213 208 141
57 GORILLA -002 0.0147 0.0197 0.0238 0.0288 0.0351 0.0003 N0.295 206 218
0.0032 223
0.0041 203
0.0049 198
0.0062 193
0.0080 20
0.0000 N0.315 211
187 189 181 180 177 41
58 GORILLA -004 0.0048 0.0063 0.0075 0.0091 0.0114 0.0001 N0.292 204 142
0.0015 156
0.0018 165
0.0021 163
0.0025 160
0.0029 32
0.0001 N0.222 187
141 145 148 146 142 22
59 GORILLA -005 0.0026 0.0032 0.0038 0.0046 0.0056 0.0001 N0.271 190 89
0.0012 88
0.0013 101
0.0014 112
0.0016 110
0.0018 61
0.0002 N0.127 162
68 74 86 88 90 23
60 GORILLA -006 0.0015 0.0017 0.0020 0.0024 0.0029 0.0001 N0.226 156 49
0.0010 52
0.0011 53
0.0011 48
0.0012 50
0.0012 94
0.0005 N0.056 107
58 70 75 86 88 21
61 GORILLA -007 0.0014 0.0017 0.0019 0.0023 0.0028 0.0001 N0.241 169 47
0.0010 47
0.0011 41
0.0011 41
0.0011 42
0.0012 103
0.0006 N0.043 90
47 55 57 64 61 47
62 GORILLA -008 0.0013 0.0015 0.0017 0.0019 0.0022 0.0001 N0.181 123 46
0.0010 46
0.0011 42
0.0011 37
0.0011 37
0.0011 117
0.0007 N0.033 74
130 128 125 118 110 140
63 GRIAULE -000 0.0022 0.0025 0.0028 0.0030 0.0035 0.0003 N0.152 96 146
0.0016 143
0.0017 146
0.0018 132
0.0018 118
0.0019 128
0.0007 N0.062 115
28 28 24 25 23 134
64 GRIAULE -001 0.0011 0.0012 0.0012 0.0014 0.0015 0.0003 N0.104 54 40
0.0010 42
0.0010 33
0.0010 30
0.0010 30
0.0011 153
0.0008 N0.019 48
229 233 209 205 198 118
65 HIK -003 0.0091 0.0117 0.0140 0.0169 0.0203 0.0002 N0.274 193 197
0.0024 205
0.0030 192
0.0036 190
0.0044 181
0.0053 25
0.0001 N0.278 200
227 230 206 202 196 121
66 HIK -004 0.0088 0.0113 0.0134 0.0163 0.0195 0.0002 N0.271 191 200
0.0024 201
0.0030 189
0.0035 186
0.0042 180
0.0052 31
0.0001 N0.259 197
168 169 168 162 159 36
67 HIK -005 0.0036 0.0046 0.0053 0.0065 0.0081 0.0001 N0.274 192 83
0.0012 126
0.0015 127
0.0016 136
0.0019 138
0.0022 35
0.0001 N0.202 182
48 43 39 36 36 164
68 HYPERVERGE -001 0.0013 0.0014 0.0014 0.0015 0.0017 0.0004 N0.084 34 107
0.0013 94
0.0013 82
0.0013 69
0.0013 56
0.0013 190
0.0012 N0.004 11
49 40 38 32 26 192
69 HYPERVERGE -002 0.0013 0.0014 0.0014 0.0015 0.0016 0.0007 N0.047 10 99
0.0013 83
0.0013 81
0.0013 67
0.0013 57
0.0013 187
0.0011 N0.008 17
110 109 109 107 106 77
70 HZAILU -000 0.0019 0.0022 0.0025 0.0028 0.0033 0.0002 N0.185 125 132
0.0014 119
0.0014 114
0.0015 105
0.0015 94
0.0016 171
0.0009 N0.032 73
101 94 88 77 73 180
71 HZAILU -001 0.0018 0.0020 0.0021 0.0022 0.0025 0.0005 N0.095 46 134
0.0014 130
0.0015 123
0.0016 115
0.0016 102
0.0016 149
0.0008 N0.048 98
192 196 188 187 180 55
72 IDEMIA -003 0.0054 0.0069 0.0084 0.0101 0.0122 0.0001 N0.281 199 192
0.0023 198
0.0027 181
0.0031 178
0.0036 171
0.0041 46
0.0002 N0.201 181

Table 22: Investigation mode: Effect of N on FNIR on recent images, for five enrollment population sizes, N, with T = 0 and FPIR = 1. The left five columns are rank-1 miss rates; the right five columns are rank-50 miss rates. Missing entries usually arise because another algorithm from the same developer was run instead. Some developers are missing because their less accurate algorithms were not run on galleries with N > 1 600 000. Throughout, blue superscripts indicate the rank of the algorithm for that column, and yellow highlighting indicates the most accurate value. Caution: the power-law models are intended mainly to draw attention to the kind of behavior observed, not to serve as models for prediction.
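
The a·N^b entries in these tables summarize how miss rates grow with enrolled population size. As a minimal sketch only, assuming (the report does not state the exact procedure here) that the tabulated pair (a, b) comes from an ordinary least-squares fit of log FNIR against log N, the five tabulated points for one algorithm can be refit as follows; the COGNITEC-004 rank-1 values from the table above serve purely as the worked example.

    import numpy as np

    # The five enrollment sizes used in the investigation-mode tables.
    N = np.array([0.64e6, 1.6e6, 3.0e6, 6.0e6, 12.0e6])

    # Rank-1 FNIR values copied from one row above (COGNITEC-004), as an example.
    fnir = np.array([0.0027, 0.0032, 0.0037, 0.0045, 0.0056])

    # Assume a power-law model FNIR(N) ~ a * N**b and fit it by least squares
    # in log-log space: log(FNIR) = log(a) + b * log(N).
    b, log_a = np.polyfit(np.log(N), np.log(fnir), 1)
    a = np.exp(log_a)
    print(f"FNIR(N) ~ {a:.1e} * N^{b:.3f}")  # close to the tabulated 0.0001 N^0.253

    # Per the caption's caution, such a fit describes the trend over the tested
    # range and should not be extrapolated to much larger galleries.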

Legend: FNIR(N, R, T) = false negative identification rate; FPIR(N, T) = false positive identification rate; N = number of enrolled subjects; R = number of candidates examined; T = threshold. T = 0 → investigation mode; T > 0 → identification mode.
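
For convenience, the two rates in the legend can be written out explicitly. The formulas below paraphrase the definitions used throughout this report rather than quote them; note that with T = 0 every non-mated search returns its candidate list regardless of score, so FPIR = 1, which is why the investigation-mode captions fix FPIR = 1.

    \[ \mathrm{FNIR}(N, R, T) = \frac{\#\{\text{mated searches whose enrolled mate is not returned at rank} \le R \text{ with score} \ge T\}}{\#\{\text{mated searches}\}} \]

    \[ \mathrm{FPIR}(N, T) = \frac{\#\{\text{non-mated searches returning any candidate with score} \ge T\}}{\#\{\text{non-mated searches}\}} \]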

MISSES AT GIVEN RANK. ENROL: MOST RECENT.
# ALGORITHM | FNIR(N, T=0, R) at RANK 1: N=0.64M, N=1.6M, N=3.0M, N=6.0M, N=12.0M, a·N^b | at RANK 50: N=0.64M, N=1.6M, N=3.0M, N=6.0M, N=12.0M, a·N^b
195 191 184 186 179 64
73 IDEMIA -004 0.0054 0.0066 0.0079 0.0097 0.0117 0.0001 N0.270 184 166
0.0018 172
0.0021 174
0.0025 171
0.0030 165
0.0036 29
0.0001 N0.241 191
200 205 193 191 187 72
74 IDEMIA -005 0.0064 0.0081 0.0097 0.0118 0.0143 0.0002 N0.277 196 191
0.0022 203
0.0030 191
0.0036 191
0.0044 183
0.0055 17
0.0000 N0.300 207
211 219 200 196 192 122
75 IDEMIA -006 0.0076 0.0096 0.0113 0.0135 0.0161 0.0002 N0.259 182 211
0.0028 218
0.0037 200
0.0046 196
0.0059 191
0.0076 15
0.0000 N0.341 216
120 129 131 129 129 26
76 IDEMIA -007 0.0021 0.0026 0.0030 0.0036 0.0044 0.0001 N0.250 175 57
0.0011 68
0.0012 71
0.0012 88
0.0014 88
0.0015 64
0.0002 N0.110 152
9 12 12 12 13 95
77 IDEMIA -008 0.0010 0.0011 0.0011 0.0013 0.0014 0.0002 N0.121 72 21
0.0009 18
0.0009 15
0.0009 18
0.0009 13
0.0009 135
0.0007 N0.016 36
5 5 6 5 6 128
78 IDEMIA -009 0.0009 0.0010 0.0010 0.0011 0.0012 0.0002 N0.097 47 14
0.0008 11
0.0009 11
0.0009 10
0.0009 7
0.0009 151
0.0008 N0.007 13
94 93 93 93 85 105
79 IMAGUS -005 0.0018 0.0019 0.0022 0.0025 0.0028 0.0002 N0.158 104 96
0.0013 101
0.0013 97
0.0014 93
0.0014 93
0.0016 97
0.0005 N0.066 121
98 98 94 92 89 110
80 IMAGUS -006 0.0018 0.0020 0.0022 0.0025 0.0029 0.0002 N0.156 98 123
0.0014 121
0.0014 115
0.0015 103
0.0015 98
0.0016 134
0.0007 N0.049 100
91 101 100 100 93 60
81 IMAGUS -007 0.0017 0.0020 0.0022 0.0026 0.0030 0.0001 N0.189 127 79
0.0012 82
0.0013 77
0.0013 75
0.0013 81
0.0015 96
0.0005 N0.064 119
128 125 123 119 111 132
82 IMPERIAL -000 0.0022 0.0024 0.0027 0.0030 0.0035 0.0003 N0.157 100 151
0.0016 146
0.0017 138
0.0017 128
0.0018 114
0.0018 172
0.0009 N0.041 87
232 238 210 207 200 94
83 INCODE -003 0.0098 0.0129 0.0154 0.0191 0.0233 0.0002 N0.296 207 199
0.0024 207
0.0031 195
0.0036 192
0.0046 184
0.0056 23
0.0001 N0.285 205
151 152 152 150 146 44
84 INCODE -004 0.0029 0.0035 0.0041 0.0049 0.0060 0.0001 N0.244 172 164
0.0018 159
0.0019 158
0.0020 153
0.0021 140
0.0022 112
0.0006 N0.077 129
70 67 66 67 67 117
85 INCODE -005 0.0015 0.0017 0.0018 0.0020 0.0023 0.0002 N0.140 85 87
0.0012 80
0.0013 79
0.0013 78
0.0013 73
0.0014 133
0.0007 N0.041 84
76 71 73 70 72 114
86 INNOVATRICS -007 0.0016 0.0017 0.0019 0.0021 0.0024 0.0002 N0.143 90 85
0.0012 77
0.0012 73
0.0013 70
0.0013 66
0.0013 139
0.0007 N0.037 81
20 19 19 19 28 99
87 INTEMA -000 0.0011 0.0011 0.0012 0.0013 0.0016 0.0002 N0.124 74 42
0.0010 43
0.0010 48
0.0011 43
0.0011 54
0.0013 75
0.0003 N0.079 130
298 298 234 226 219 225
88 INTSYSMSU -000 0.1395 0.1457 0.1498 0.1544 0.1591 0.0768 N0.045 8 307
0.1098 305
0.1163 237
0.1206 228
0.1252 221
0.1296 225
0.0519 N0.056 108
177 166 157 145 135 212
89 IREX -000 0.0043 0.0044 0.0044 0.0046 0.0048 0.0028 N0.032 5 234
0.0043 225
0.0043 199
0.0043 188
0.0043 172
0.0043 215
0.0042 N0.002 5
190 190 180 174 169 153
90 ISYSTEMS -002 0.0053 0.0064 0.0072 0.0083 0.0096 0.0003 N0.204 137 221
0.0033 215
0.0034 190
0.0036 183
0.0038 170
0.0041 192
0.0013 N0.071 124
180 178 169 164 157 168
0.0004 N0.174 119 217 211 186 176 168 193
0.0013 N0.063 118
91 ISYSTEMS -003 0.0046 0.0052 0.0057 0.0066 0.0076 0.0031 0.0033 0.0034 0.0035 0.0037
43 53 55 60 63 39
92 KAKAO -000 0.0013 0.0015 0.0016 0.0019 0.0022 0.0001 N0.192 131 30
0.0009 30
0.0010 29
0.0010 31
0.0010 31
0.0011 89
0.0005 N0.050 102
53 44 41 35 32 183
93 KAKAO -001 0.0014 0.0014 0.0015 0.0015 0.0016 0.0006 N0.060 16 103
0.0013 91
0.0013 87
0.0013 74
0.0013 61
0.0013 184
0.0011 N0.012 23
210 201 185 173 162 217
94 KEDACOM -001 0.0076 0.0077 0.0079 0.0083 0.0087 0.0040 N0.047 9 256
0.0071 249
0.0072 214
0.0072 202
0.0073 188
0.0073 220
0.0063 N0.009 19
185 185 177 171 167 119
95 KNERON -000 0.0048 0.0059 0.0067 0.0079 0.0093 0.0002 N0.226 155 236
0.0048 238
0.0059 209
0.0067 207
0.0079 199
0.0093 62
0.0002 N0.226 188
42 38 34 34 31 165
96 LINECLOVA -002 0.0013 0.0013 0.0014 0.0015 0.0016 0.0004 N0.079 31 77
0.0012 67
0.0012 61
0.0012 53
0.0012 47
0.0012 182
0.0011 N0.008 14
221 210 192 184 173 213
97 LOOKMAN -003 0.0083 0.0088 0.0091 0.0096 0.0104 0.0030 N0.076 27 260
0.0072 252
0.0074 216
0.0075 205
0.0076 192
0.0077 218
0.0054 N0.022 53
212 204 187 177 165 214
98 LOOKMAN -005 0.0078 0.0080 0.0083 0.0086 0.0092 0.0038 N0.053 11 257
0.0072 250
0.0072 215
0.0073 203
0.0073 189
0.0074 219
0.0060 N0.013 30
71 73 76 76 76 70
99 MANTRA -000 0.0015 0.0017 0.0019 0.0022 0.0025 0.0002 N0.171 116 78
0.0012 69
0.0012 68
0.0012 64
0.0013 60
0.0013 119
0.0007 N0.042 88
118 123 124 122 121 57
100 MAXVISION -000 0.0021 0.0024 0.0027 0.0032 0.0038 0.0001 N0.206 140 98
0.0013 107
0.0014 107
0.0014 109
0.0015 107
0.0017 71
0.0003 N0.100 145
33 30 29 24 22 151
101 MAXVISION -001 0.0012 0.0012 0.0013 0.0014 0.0015 0.0003 N0.089 40 55
0.0011 50
0.0011 45
0.0011 42
0.0011 35
0.0011 165
0.0009 N0.014 32
239 234 204 198 191 207
102 MEGVII -001 0.0105 0.0118 0.0128 0.0142 0.0161 0.0015 N0.143 89 262
0.0077 258
0.0080 218
0.0082 212
0.0086 198
0.0089 214
0.0040 N0.048 99
310 310 239 231 224 224
103 MICROFOCUS -005 0.3700 0.4242 0.4610 0.5000 0.5391 0.0674 N0.128 78 310
0.1300 310
0.1724 239
0.2046 231
0.2425 224
0.2810 213
0.0040 N0.263 198
39 64 71 80 86 14
104 MICROSOFT-003 0.0013 0.0016 0.0018 0.0022 0.0028 0.0000 N0.271 189 2
0.0006 2
0.0006 4
0.0007 8
0.0008 10
0.0009 28
0.0001 N0.158 174
37 56 64 75 84 13
105 MICROSOFT-004 0.0012 0.0015 0.0018 0.0021 0.0028 0.0000 N0.281 198 1
0.0006 1
0.0006 1
0.0007 1
0.0007 6
0.0009 38
0.0001 N0.139 166
69 88 104 116 116 9
106 MICROSOFT-005 0.0015 0.0019 0.0023 0.0030 0.0037 0.0000 N0.320 215 3
0.0006 3
0.0006 2
0.0007 2
0.0008 8
0.0009 39
0.0001 N0.136 165
73 96 113 117 120 12
107 MICROSOFT-006 0.0016 0.0020 0.0025 0.0030 0.0038 0.0000 N0.305 211 4
0.0006 4
0.0007 3
0.0007 11
0.0009 27
0.0010 22
0.0000 N0.184 177
266 262 224 219 214 189
108 MUKH -002 0.0204 0.0258 0.0305 0.0361 0.0430 0.0007 N0.255 178 243
0.0054 248
0.0070 219
0.0083 215
0.0101 210
0.0124 41
0.0001 N0.280 202
247 247 214 208 202 148
109 NEC -000 0.0131 0.0170 0.0203 0.0244 0.0294 0.0003 N0.276 194 214
0.0029 220
0.0038 202
0.0048 197
0.0059 190
0.0074 16
0.0000 N0.319 212
259 256 218 212 204 209
110 NEC -001 0.0180 0.0209 0.0233 0.0266 0.0304 0.0016 N0.179 121 275
0.0109 268
0.0113 224
0.0116 217
0.0121 211
0.0129 217
0.0051 N0.056 106
6 11 11 11 8 104
111 NEC -002 0.0009 0.0010 0.0011 0.0012 0.0013 0.0002 N0.113 63 5
0.0008 5
0.0008 5
0.0008 5
0.0008 5
0.0008 88
0.0005 N0.038 82
45 41 40 39 34 170
112 NEC -003 0.0013 0.0014 0.0015 0.0016 0.0016 0.0005 N0.079 29 73
0.0012 66
0.0012 64
0.0012 58
0.0012 52
0.0012 168
0.0009 N0.019 49
56 49 43 38 35 185
113 NEC -004 0.0014 0.0014 0.0015 0.0016 0.0017 0.0006 N0.059 15 101
0.0013 90
0.0013 85
0.0013 82
0.0013 64
0.0013 179
0.0010 N0.016 37
31 27 23 15 12 172
114 NEC -005 0.0011 0.0012 0.0012 0.0013 0.0014 0.0005 N0.066 18 54
0.0011 49
0.0011 44
0.0011 39
0.0011 33
0.0011 167
0.0009 N0.013 29
38 34 32 29 21 175
115 NEC -006 0.0012 0.0013 0.0013 0.0014 0.0015 0.0005 N0.070 21 62
0.0011 59
0.0011 56
0.0012 49
0.0012 46
0.0012 156
0.0008 N0.025 60
258 257 220 215 209 195
116 NEUROTECHNOLOGY-003 0.0179 0.0225 0.0263 0.0306 0.0361 0.0007 N0.239 168 233
0.0042 237
0.0057 213
0.0072 213
0.0090 207
0.0112 21
0.0000 N0.334 215
182 180 175 169 163 127
117 NEUROTECHNOLOGY-004 0.0046 0.0056 0.0064 0.0074 0.0088 0.0002 N0.220 148 188
0.0022 187
0.0025 179
0.0028 173
0.0031 163
0.0034 67
0.0003 N0.154 171
166 164 162 158 152 89
118 NEUROTECHNOLOGY-005 0.0035 0.0043 0.0049 0.0057 0.0068 0.0002 N0.223 151 183
0.0021 181
0.0023 172
0.0024 166
0.0025 157
0.0028 111
0.0006 N0.092 141
159 156 158 153 149 79
119 NEUROTECHNOLOGY-007 0.0032 0.0039 0.0044 0.0052 0.0062 0.0002 N0.222 150 178
0.0020 174
0.0022 168
0.0023 160
0.0024 150
0.0026 140
0.0007 N0.076 128
103 107 108 112 107 48
120 NEUROTECHNOLOGY-008 0.0019 0.0022 0.0025 0.0029 0.0034 0.0001 N0.205 139 112
0.0013 97
0.0013 94
0.0013 90
0.0014 82
0.0015 138
0.0007 N0.043 89
46 50 52 53 54 65
121 NEUROTECHNOLOGY-009 0.0013 0.0014 0.0016 0.0018 0.0021 0.0001 N0.162 108 56
0.0011 54
0.0011 51
0.0011 47
0.0012 44
0.0012 145
0.0007 N0.029 69
30 32 33 33 33 107
122 NEUROTECHNOLOGY-010 0.0011 0.0012 0.0013 0.0015 0.0016 0.0002 N0.125 76 44
0.0010 40
0.0010 31
0.0010 29
0.0010 28
0.0011 159
0.0008 N0.014 31
13 9 8 8 10 125
123 NEUROTECHNOLOGY-012 0.0010 0.0010 0.0011 0.0012 0.0013 0.0002 N0.102 52 31
0.0009 28
0.0009 24
0.0009 21
0.0009 21
0.0010 158
0.0008 N0.009 18
134 127 118 111 99 171
124 NOTIONTAG -000 0.0023 0.0024 0.0026 0.0029 0.0032 0.0005 N0.117 66 170
0.0019 166
0.0019 161
0.0020 149
0.0020 135
0.0021 194
0.0013 N0.027 65
183 186 182 183 176 27
125 NTECHLAB -003 0.0046 0.0062 0.0076 0.0094 0.0114 0.0001 N0.310 212 108
0.0013 138
0.0016 150
0.0018 156
0.0022 151
0.0026 24
0.0001 N0.237 189
169 173 170 166 161 29
126 NTECHLAB -004 0.0037 0.0048 0.0058 0.0071 0.0085 0.0001 N0.291 203 66
0.0011 102
0.0013 120
0.0015 119
0.0017 132
0.0021 34
0.0001 N0.198 179
163 171 171 167 166 16
127 NTECHLAB -005 0.0035 0.0047 0.0058 0.0073 0.0092 0.0000 N0.334 218 11
0.0008 45
0.0011 66
0.0012 108
0.0015 120
0.0019 11
0.0000 N0.283 203
152 161 163 160 158 15
128 NTECHLAB -006 0.0030 0.0041 0.0050 0.0062 0.0078 0.0000 N0.326 217 6
0.0008 24
0.0009 46
0.0011 68
0.0013 99
0.0016 13
0.0000 N0.253 193
126 131 133 133 130 31
129 NTECHLAB -007 0.0022 0.0027 0.0031 0.0037 0.0044 0.0001 N0.245 173 63
0.0011 72
0.0012 76
0.0013 92
0.0014 91
0.0015 65
0.0003 N0.109 151
62 72 80 89 81 24
130 NTECHLAB -008 0.0014 0.0017 0.0020 0.0024 0.0027 0.0001 N0.224 153 41
0.0010 41
0.0010 40
0.0011 46
0.0011 45
0.0012 82
0.0004 N0.065 120
34 35 35 37 39 87
131 NTECHLAB -009 0.0012 0.0013 0.0014 0.0015 0.0018 0.0002 N0.140 84 32
0.0009 29
0.0009 28
0.0010 28
0.0010 26
0.0010 101
0.0005 N0.041 85
19 17 15 14 14 143
132 NTECHLAB -010 0.0011 0.0011 0.0012 0.0013 0.0014 0.0003 N0.091 43 45
0.0010 39
0.0010 30
0.0010 27
0.0010 25
0.0010 173
0.0009 N0.005 12
12 10 9 9 9 123
133 NTECHLAB -011 0.0010 0.0010 0.0011 0.0012 0.0013 0.0002 N0.103 53 19
0.0009 17
0.0009 17
0.0009 14
0.0009 12
0.0009 129
0.0007 N0.017 41
29 25 28 26 29 113
134 PANGIAM -000 0.0011 0.0012 0.0013 0.0014 0.0016 0.0002 N0.118 69 39
0.0010 37
0.0010 35
0.0010 33
0.0011 32
0.0011 126
0.0007 N0.027 66
144 144 144 140 137 73
135 PARAVISION -003 0.0026 0.0031 0.0035 0.0042 0.0048 0.0002 N0.210 143 150
0.0016 150
0.0017 149
0.0018 145
0.0020 133
0.0021 91
0.0005 N0.089 137
72 65 63 59 56 150
136 PARAVISION -004 0.0015 0.0016 0.0017 0.0019 0.0021 0.0003 N0.111 59 106
0.0013 96
0.0013 86
0.0013 79
0.0013 71
0.0014 177
0.0010 N0.020 52
66 60 54 50 46 162
137 PARAVISION -005 0.0015 0.0015 0.0016 0.0018 0.0019 0.0004 N0.094 45 111
0.0013 99
0.0013 93
0.0013 81
0.0013 70
0.0014 183
0.0011 N0.015 33
27 24 22 16 18 145
138 PARAVISION -007 0.0011 0.0012 0.0012 0.0013 0.0015 0.0003 N0.091 42 43
0.0010 33
0.0010 32
0.0010 32
0.0010 29
0.0011 154
0.0008 N0.018 45
11 8 10 10 11 97
139 PARAVISION -009 0.0010 0.0010 0.0011 0.0012 0.0014 0.0002 N0.118 67 27
0.0009 26
0.0009 26
0.0009 26
0.0010 23
0.0010 106
0.0006 N0.032 71
171 168 165 161 156 69
140 PIXELALL -002 0.0037 0.0045 0.0052 0.0062 0.0075 0.0002 N0.238 166 155
0.0017 165
0.0019 163
0.0021 161
0.0024 152
0.0027 58
0.0002 N0.154 172
105 105 106 103 101 80
141 PIXELALL -003 0.0019 0.0021 0.0024 0.0028 0.0032 0.0002 N0.182 124 122
0.0014 116
0.0014 109
0.0014 102
0.0015 95
0.0016 147
0.0007 N0.045 94
88 102 102 97 94 58
142 PIXELALL -004 0.0017 0.0020 0.0023 0.0026 0.0030 0.0001 N0.192 130 109
0.0013 105
0.0013 100
0.0014 95
0.0014 85
0.0015 127
0.0007 N0.046 95
92 90 81 74 69 174
143 PIXELALL -005 0.0018 0.0019 0.0020 0.0021 0.0024 0.0005 N0.098 48 141
0.0015 137
0.0016 126
0.0016 113
0.0016 101
0.0016 189
0.0012 N0.018 43
140 142 145 137 123 138
144 PTAKURATSATU -000 0.0025 0.0030 0.0036 0.0040 0.0040 0.0003 N0.167 112 140
0.0015 140
0.0016 153
0.0018 142
0.0020 123
0.0020 83
0.0004 N0.096 142

Table 23: Investigation mode: Effect of N on FNIR on recent images, for five enrollment population sizes, N, with T = 0 and FPIR = 1. The left five columns are rank-1 miss rates; the right five columns are rank-50 miss rates. Missing entries usually arise because another algorithm from the same developer was run instead. Some developers are missing because their less accurate algorithms were not run on galleries with N > 1 600 000. Throughout, blue superscripts indicate the rank of the algorithm for that column, and yellow highlighting indicates the most accurate value. Caution: the power-law models are intended mainly to draw attention to the kind of behavior observed, not to serve as models for prediction.


MISSES AT GIVEN RANK. ENROL: MOST RECENT.
# ALGORITHM | FNIR(N, T=0, R) at RANK 1: N=0.64M, N=1.6M, N=3.0M, N=6.0M, N=12.0M, a·N^b | at RANK 50: N=0.64M, N=1.6M, N=3.0M, N=6.0M, N=12.0M, a·N^b
164 162 160 154 151 124
145 QNAP -001 0.0035 0.0041 0.0047 0.0054 0.0063 0.0002 N0.200 133 189
0.0022 184
0.0023 173
0.0024 167
0.0025 156
0.0028 160
0.0008 N0.072 126
184 174 164 156 145 208
146 QNAP -002 0.0047 0.0049 0.0052 0.0054 0.0059 0.0016 N0.079 30 232
0.0041 224
0.0042 198
0.0042 189
0.0043 175
0.0044 212
0.0032 N0.019 51
139 136 134 128 124 139
147 QNAP -003 0.0025 0.0028 0.0031 0.0035 0.0040 0.0003 N0.161 107 136
0.0014 136
0.0015 130
0.0016 123
0.0018 128
0.0020 76
0.0004 N0.104 146
305 302 237 220 226
148 QUANTASOFT-001 0.2177 0.2177 0.2177 0.2177 0.2177 N0.000 1 308
0.1116 304
0.1116 236
0.1116 220
0.1116 226
0.1116 N-0.000 1
254 254 216 210 205 193
149 RANKONE -002 0.0155 0.0194 0.0224 0.0262 0.0304 0.0007 N0.230 158 237
0.0048 240
0.0060 212
0.0071 210
0.0085 205
0.0102 48
0.0002 N0.254 195
255 253 217 211 206 194
150 RANKONE -003 0.0155 0.0194 0.0224 0.0262 0.0304 0.0007 N0.230 157 238
0.0048 241
0.0060 211
0.0071 211
0.0085 204
0.0102 47
0.0002 N0.254 196
209 216 199 195 189 130
151 RANKONE -005 0.0075 0.0094 0.0110 0.0132 0.0156 0.0003 N0.251 176 208
0.0026 209
0.0032 194
0.0036 187
0.0043 179
0.0050 43
0.0001 N0.221 186
149 148 147 142 140 82
152 RANKONE -007 0.0028 0.0034 0.0038 0.0045 0.0053 0.0002 N0.211 144 139
0.0015 142
0.0017 144
0.0018 139
0.0019 137
0.0021 69
0.0003 N0.123 159
112 120 122 124 119 43
153 RANKONE -009 0.0020 0.0024 0.0027 0.0032 0.0038 0.0001 N0.219 147 119
0.0013 114
0.0014 112
0.0015 106
0.0015 97
0.0016 108
0.0006 N0.059 112
116 113 111 110 102 111
154 RANKONE -010 0.0020 0.0022 0.0025 0.0029 0.0032 0.0002 N0.164 111 131
0.0014 122
0.0015 119
0.0015 111
0.0016 104
0.0017 115
0.0006 N0.058 110
52 57 56 55 57 90
155 RANKONE -011 0.0014 0.0015 0.0017 0.0018 0.0021 0.0002 N0.150 95 68
0.0011 60
0.0012 59
0.0012 54
0.0012 49
0.0012 162
0.0008 N0.023 56
41 42 49 46 48 92
156 RANKONE -012 0.0013 0.0014 0.0015 0.0017 0.0020 0.0002 N0.144 91 58
0.0011 55
0.0011 52
0.0011 45
0.0011 39
0.0012 166
0.0009 N0.016 35
10 13 13 17 19 61
157 RANKONE -013 0.0010 0.0011 0.0012 0.0013 0.0015 0.0001 N0.144 92 17
0.0009 15
0.0009 14
0.0009 13
0.0009 11
0.0009 125
0.0007 N0.017 39
275 271 230 224 218 204
158 REALNETWORKS -002 0.0299 0.0393 0.0470 0.0562 0.0580 0.0013 N0.236 165 242
0.0054 254
0.0076 222
0.0097 218
0.0126 212
0.0132 33
0.0001 N0.320 213
260 260 222 217 213 161
159 REALNETWORKS -003 0.0183 0.0242 0.0291 0.0352 0.0423 0.0004 N0.287 202 231
0.0041 233
0.0054 208
0.0064 208
0.0080 203
0.0101 27
0.0001 N0.307 209
257 258 221 216 210 152
160 REALNETWORKS -004 0.0175 0.0236 0.0284 0.0347 0.0416 0.0003 N0.295 205 228
0.0040 230
0.0050 206
0.0061 206
0.0078 202
0.0099 26
0.0001 N0.315 210
113 116 119 115 114 52
161 REALNETWORKS -005 0.0020 0.0023 0.0026 0.0030 0.0037 0.0001 N0.207 141 71
0.0012 75
0.0012 84
0.0013 89
0.0014 79
0.0015 81
0.0004 N0.081 132
44 46 53 51 51 63
162 REALNETWORKS -006 0.0013 0.0014 0.0016 0.0018 0.0021 0.0001 N0.163 110 36
0.0010 34
0.0010 39
0.0010 40
0.0011 40
0.0012 84
0.0004 N0.060 114
40 39 37 40 41 120
0.0002 N0.124 73 34 31 36 35 36 86
0.0004 N0.057 109
163 REALNETWORKS -007 0.0013 0.0013 0.0014 0.0016 0.0018 0.0010 0.0010 0.0010 0.0011 0.0011
18 22 25 23 27 91
164 REALNETWORKS -008 0.0011 0.0011 0.0013 0.0014 0.0016 0.0002 N0.131 80 15
0.0009 13
0.0009 20
0.0009 19
0.0009 18
0.0009 100
0.0005 N0.037 79
147 150 150 148 143 33
165 REMARKAI -000 0.0027 0.0034 0.0040 0.0048 0.0058 0.0001 N0.260 183 133
0.0014 135
0.0015 129
0.0016 127
0.0018 125
0.0020 72
0.0003 N0.108 148
61 61 62 63 62 81
166 RENDIP -000 0.0014 0.0015 0.0017 0.0019 0.0022 0.0002 N0.158 102 72
0.0012 70
0.0012 65
0.0012 59
0.0012 53
0.0013 161
0.0008 N0.025 59
84 86 79 81 74 135
167 REVEALMEDIA -000 0.0017 0.0019 0.0020 0.0023 0.0025 0.0003 N0.134 81 80
0.0012 76
0.0012 70
0.0012 65
0.0013 59
0.0013 148
0.0007 N0.035 75
119 122 127 125 118 59
168 S 1-000 0.0021 0.0024 0.0028 0.0032 0.0037 0.0001 N0.203 136 135
0.0014 129
0.0015 122
0.0015 116
0.0016 105
0.0017 124
0.0007 N0.055 105
157 143 141 130 125 197
169 S 1-001 0.0031 0.0031 0.0034 0.0036 0.0040 0.0009 N0.092 44 193
0.0023 183
0.0023 170
0.0024 162
0.0024 149
0.0025 201
0.0017 N0.023 57
55 48 44 44 40 166
170 S 1-002 0.0014 0.0014 0.0015 0.0016 0.0018 0.0004 N0.085 35 110
0.0013 100
0.0013 89
0.0013 76
0.0013 65
0.0013 186
0.0011 N0.011 22
54 52 51 47 45 155
171 S 1-003 0.0014 0.0015 0.0015 0.0017 0.0019 0.0003 N0.101 51 88
0.0012 79
0.0013 75
0.0013 66
0.0013 58
0.0013 163
0.0009 N0.024 58
172 175 173 168 155 84
172 SCANOVATE -000 0.0038 0.0050 0.0059 0.0073 0.0073 0.0002 N0.235 164 127
0.0014 133
0.0015 135
0.0017 150
0.0020 130
0.0020 56
0.0002 N0.142 167
175 179 174 172 170 28
173 SCANOVATE -001 0.0041 0.0053 0.0064 0.0079 0.0098 0.0001 N0.299 208 120
0.0013 132
0.0015 136
0.0017 152
0.0021 148
0.0024 36
0.0001 N0.207 183
125 118 117 109 100 154
174 SENSETIME -000 0.0022 0.0023 0.0026 0.0028 0.0032 0.0003 N0.135 82 152
0.0016 149
0.0017 147
0.0018 133
0.0018 127
0.0020 144
0.0007 N0.060 113
124 119 114 113 115 96
175 SENSETIME -001 0.0022 0.0023 0.0025 0.0029 0.0037 0.0002 N0.177 120 149
0.0016 139
0.0016 134
0.0017 130
0.0018 146
0.0024 68
0.0003 N0.125 161
250 241 207 197 185 221
176 SENSETIME -002 0.0136 0.0137 0.0137 0.0138 0.0139 0.0124 N0.007 2 281
0.0136 274
0.0136 226
0.0136 220
0.0136 213
0.0136 222
0.0135 N0.001 3
8 7 7 7 7 142
177 SENSETIME -003 0.0010 0.0010 0.0010 0.0011 0.0012 0.0003 N0.085 37 29
0.0009 27
0.0009 25
0.0009 24
0.0010 20
0.0010 152
0.0008 N0.013 27
7 6 5 6 5 144
178 SENSETIME -004 0.0010 0.0010 0.0010 0.0011 0.0012 0.0003 N0.081 32 12
0.0008 10
0.0009 10
0.0009 9
0.0009 9
0.0009 118
0.0007 N0.018 46
4 4 4 4 4 131
179 SENSETIME -005 0.0008 0.0009 0.0009 0.0010 0.0011 0.0003 N0.085 36 8
0.0008 7
0.0008 7
0.0008 3
0.0008 1
0.0008 155
0.0008 N0.002 6
3 3 3 3 3 146
180 SENSETIME -006 0.0008 0.0009 0.0009 0.0010 0.0010 0.0003 N0.069 20 9
0.0008 9
0.0008 9
0.0008 7
0.0008 4
0.0008 132
0.0007 N0.011 21
2 2 1 1 1 156
181 SENSETIME -007 0.0008 0.0008 0.0009 0.0009 0.0010 0.0004 N0.061 17 10
0.0008 8
0.0008 8
0.0008 6
0.0008 3
0.0008 143
0.0007 N0.008 15
1 1 2 2 2 147
182 SENSETIME -008 0.0008 0.0008 0.0009 0.0009 0.0010 0.0003 N0.067 19 7
0.0008 6
0.0008 6
0.0008 4
0.0008 2
0.0008 120
0.0007 N0.013 26
279 272 229 223 216 220
183 SHAMAN -007 0.0371 0.0396 0.0416 0.0443 0.0473 0.0122 N0.083 33 294
0.0308 288
0.0314 232
0.0319 224
0.0326 217
0.0337 223
0.0207 N0.029 70
81 79 83 84 82 74
184 SIAT-001 0.0017 0.0018 0.0020 0.0023 0.0027 0.0002 N0.173 118 50
0.0010 53
0.0011 57
0.0012 62
0.0013 63
0.0013 73
0.0003 N0.085 134
79 82 85 85 79 78
185 SIAT-002 0.0016 0.0018 0.0020 0.0023 0.0027 0.0002 N0.171 115 67
0.0011 74
0.0012 74
0.0013 73
0.0013 72
0.0014 95
0.0005 N0.062 116
148 163 172 175 225 2
186 SQISOFT-001 0.0028 0.0042 0.0059 0.0084 0.9207 0.0000 N1.674 225 35
0.0010 44
0.0010 49
0.0011 56
0.0012 225
0.9198 2
0.0000 N1.883 225
32 37 45 56 64 18
187 SQISOFT-002 0.0012 0.0013 0.0015 0.0019 0.0023 0.0000 N0.232 161 24
0.0009 23
0.0009 27
0.0009 25
0.0010 24
0.0010 102
0.0005 N0.037 80
299 299 235 227 221 222
188 SYNESIS -003 0.1456 0.1700 0.1876 0.2088 0.2317 0.0177 N0.158 103 301
0.0828 301
0.0869 235
0.0920 227
0.0998 219
0.1104 224
0.0218 N0.098 144
256 245 212 203 201 211
189 SYNESIS -003 0.0161 0.0162 0.0163 0.0165 0.0254 0.0027 N0.127 77 284
0.0160 279
0.0160 228
0.0160 222
0.0160 216
0.0245 174
0.0009 N0.192 178
223 208 190 178 164 219
190 SYNESIS -005 0.0085 0.0085 0.0085 0.0086 0.0088 0.0072 N0.012 3 266
0.0085 260
0.0085 220
0.0085 209
0.0085 196
0.0085 221
0.0085 N0.000 2
158 158 161 157 154 32
191 TECH 5-001 0.0032 0.0040 0.0047 0.0057 0.0071 0.0001 N0.271 187 144
0.0016 145
0.0017 148
0.0018 141
0.0020 143
0.0023 70
0.0003 N0.119 156
115 132 136 132 134 17
192 TECH 5-002 0.0020 0.0027 0.0031 0.0037 0.0047 0.0000 N0.285 201 26
0.0009 35
0.0010 43
0.0011 51
0.0012 62
0.0013 50
0.0002 N0.127 163
196 199 189 188 183 56
193 TEVIAN -005 0.0056 0.0073 0.0084 0.0105 0.0130 0.0001 N0.283 200 174
0.0020 180
0.0023 175
0.0025 169
0.0028 162
0.0034 51
0.0002 N0.178 176
132 124 121 105 97 182
194 TEVIAN -006 0.0023 0.0024 0.0026 0.0028 0.0031 0.0005 N0.106 55 145
0.0016 141
0.0017 133
0.0017 122
0.0017 113
0.0018 170
0.0009 N0.041 86
89 78 70 68 55 187
195 TEVIAN -007 0.0017 0.0018 0.0018 0.0020 0.0021 0.0006 N0.073 26 93
0.0013 85
0.0013 83
0.0013 80
0.0013 67
0.0013 164
0.0009 N0.026 61
178 182 178 179 174 30
196 TIGER -002 0.0044 0.0056 0.0068 0.0086 0.0105 0.0001 N0.299 209 102
0.0013 127
0.0015 142
0.0018 154
0.0021 153
0.0027 18
0.0000 N0.253 194
165 167 167 159 188 6
197 TOSHIBA -000 0.0035 0.0045 0.0052 0.0061 0.0154 0.0000 N0.449 221 147
0.0016 157
0.0018 157
0.0019 155
0.0021 206
0.0105 5
0.0000 N0.539 222
155 147 143 136 127 188
198 TRUEFACE -000 0.0031 0.0033 0.0035 0.0039 0.0043 0.0006 N0.115 65 204
0.0025 190
0.0026 176
0.0026 168
0.0027 158
0.0028 198
0.0015 N0.038 83
268 265 225 220 212 206
199 VD -001 0.0230 0.0276 0.0315 0.0363 0.0418 0.0015 N0.204 138 278
0.0120 273
0.0130 227
0.0140 221
0.0154 215
0.0170 210
0.0024 N0.120 157
133 135 137 135 131 42
200 VERIDAS -001 0.0023 0.0028 0.0032 0.0037 0.0045 0.0001 N0.231 160 129
0.0014 124
0.0015 118
0.0015 117
0.0016 112
0.0018 87
0.0005 N0.083 133
131 134 126 121 117 136
201 VERIDAS -002 0.0023 0.0028 0.0028 0.0032 0.0037 0.0003 N0.158 101 128
0.0014 123
0.0015 111
0.0015 107
0.0015 100
0.0016 146
0.0007 N0.047 97
82 81 78 79 77 109
202 VERIDAS -003 0.0017 0.0018 0.0020 0.0022 0.0026 0.0002 N0.150 94 94
0.0013 92
0.0013 90
0.0013 84
0.0014 77
0.0014 130
0.0007 N0.043 91
138 141 142 139 132 54
203 VIGILANTSOLUTIONS -008 0.0025 0.0029 0.0034 0.0040 0.0047 0.0001 N0.224 152 69
0.0012 86
0.0013 106
0.0014 110
0.0015 106
0.0017 55
0.0002 N0.130 164
90 91 96 239 226 1
204 VISIONBOX -000 0.0017 0.0019 0.0022 1.0000 0.9526 0.0000 N2.570 226 90
0.0012 84
0.0013 92
0.0013 233
1.0000 226
0.9525 1
0.0000 N2.719 226
127 133 139 141 153 7
205 VISIONLABS -004 0.0022 0.0027 0.0032 0.0044 0.0070 0.0000 N0.387 219 91
0.0012 112
0.0014 132
0.0017 164
0.0025 176
0.0045 6
0.0000 N0.435 218
111 121 130 131 139 11
206 VISIONLABS -005 0.0020 0.0024 0.0029 0.0037 0.0051 0.0000 N0.322 216 86
0.0012 93
0.0013 124
0.0016 140
0.0019 159
0.0029 12
0.0000 N0.298 206
80 83 97 108 126 10
207 VISIONLABS -006 0.0016 0.0018 0.0022 0.0028 0.0041 0.0000 N0.314 214 81
0.0012 87
0.0013 113
0.0015 134
0.0019 154
0.0027 14
0.0000 N0.275 199
77 77 82 87 108 20
208 VISIONLABS -007 0.0016 0.0018 0.0020 0.0023 0.0034 0.0001 N0.248 174 74
0.0012 73
0.0012 72
0.0013 72
0.0013 126
0.0020 44
0.0001 N0.152 170
102 100 92 96 95 93
209 VISIONLABS -008 0.0019 0.0020 0.0021 0.0025 0.0030 0.0002 N0.169 113 153
0.0016 148
0.0017 139
0.0017 144
0.0020 144
0.0023 74
0.0003 N0.114 153
24 20 21 28 38 50
210 VISIONLABS -009 0.0011 0.0011 0.0012 0.0014 0.0017 0.0001 N0.160 106 38
0.0010 32
0.0010 34
0.0010 44
0.0011 74
0.0014 60
0.0002 N0.109 149
51 45 48 48 52 103
211 VISIONLABS -010 0.0014 0.0014 0.0015 0.0017 0.0021 0.0002 N0.137 83 97
0.0013 81
0.0013 91
0.0013 94
0.0014 103
0.0017 78
0.0004 N0.090 139
25 26 27 30 42 49
212 VISIONLABS -011 0.0011 0.0012 0.0013 0.0014 0.0018 0.0001 N0.162 109 48
0.0010 48
0.0011 50
0.0011 55
0.0012 86
0.0015 57
0.0002 N0.114 154
108 114 120 123 112 38
213 VIXVIZION -009 0.0019 0.0023 0.0026 0.0032 0.0037 0.0001 N0.226 154 52
0.0011 71
0.0012 78
0.0013 83
0.0013 80
0.0015 66
0.0003 N0.106 147
117 106 101 94 83 181
214 VNPT-001 0.0020 0.0022 0.0023 0.0025 0.0028 0.0005 N0.101 50 160
0.0018 155
0.0018 151
0.0018 131
0.0018 115
0.0019 195
0.0014 N0.018 42
99 89 77 73 65 190
215 VNPT-002 0.0018 0.0019 0.0020 0.0021 0.0023 0.0007 N0.072 24 156
0.0017 152
0.0017 140
0.0018 126
0.0018 111
0.0018 199
0.0015 N0.009 20
197 197 186 185 178 129
216 VOCORD -005 0.0060 0.0070 0.0082 0.0097 0.0117 0.0003 N0.232 163 222
0.0033 216
0.0035 196
0.0037 184
0.0040 174
0.0043 175
0.0010 N0.090 138

Table 24: Investigation mode: Effect of N on FNIR on recent images, for five enrollment population sizes, N, with T = 0 and FPIR = 1. The left five columns are rank-1 miss rates; the right five columns are rank-50 miss rates. Missing entries usually arise because another algorithm from the same developer was run instead. Some developers are missing because their less accurate algorithms were not run on galleries with N > 1 600 000. Throughout, blue superscripts indicate the rank of the algorithm for that column, and yellow highlighting indicates the most accurate value. Caution: the power-law models are intended mainly to draw attention to the kind of behavior observed, not to serve as models for prediction.


MISSES AT GIVEN RANK. ENROL: MOST RECENT.
# ALGORITHM | FNIR(N, T=0, R) at RANK 1: N=0.64M, N=1.6M, N=3.0M, N=6.0M, N=12.0M, a·N^b | at RANK 50: N=0.64M, N=1.6M, N=3.0M, N=6.0M, N=12.0M, a·N^b
50 59 60 65 68 51
217 VTS -001 0.0014 0.0015 0.0017 0.0019 0.0023 0.0001 N0.179 122 37
0.0010 36
0.0010 38
0.0010 34
0.0011 38
0.0011 92
0.0005 N0.051 103
83 92 99 99 98 37
218 VTS -002 0.0017 0.0019 0.0022 0.0026 0.0032 0.0001 N0.215 146 53
0.0011 56
0.0011 63
0.0012 63
0.0013 68
0.0013 80
0.0004 N0.079 131
22 21 20 21 25 100
219 VTS -003 0.0011 0.0011 0.0012 0.0013 0.0015 0.0002 N0.124 75 18
0.0009 16
0.0009 18
0.0009 15
0.0009 16
0.0009 110
0.0006 N0.026 62
122 115 105 101 91 173
220 XFORWARDAI -000 0.0021 0.0023 0.0024 0.0027 0.0029 0.0005 N0.111 60 171
0.0019 164
0.0019 156
0.0019 146
0.0020 129
0.0020 197
0.0015 N0.018 44
114 103 90 78 70 198
221 XFORWARDAI -001 0.0020 0.0020 0.0021 0.0022 0.0024 0.0009 N0.055 14 169
0.0019 163
0.0019 155
0.0019 138
0.0019 121
0.0019 205
0.0018 N0.004 9
109 95 84 72 59 202
222 XFORWARDAI -002 0.0019 0.0020 0.0020 0.0021 0.0022 0.0011 N0.038 7 168
0.0019 161
0.0019 154
0.0019 137
0.0019 117
0.0019 204
0.0018 N0.003 8
75 84 91 91 92 35
223 YITU -002 0.0016 0.0018 0.0021 0.0024 0.0029 0.0001 N0.213 145 33
0.0009 38
0.0010 37
0.0010 36
0.0011 41
0.0012 77
0.0004 N0.073 127
143 140 135 127 122 160
224 YITU -003 0.0026 0.0029 0.0031 0.0035 0.0039 0.0004 N0.141 86 179
0.0020 173
0.0021 166
0.0022 158
0.0023 147
0.0024 176
0.0010 N0.054 104
26 36 47 49 133 5
225 YITU -004 0.0011 0.0013 0.0015 0.0017 0.0047 0.0000 N0.438 220 13
0.0008 12
0.0009 13
0.0009 12
0.0009 164
0.0036 7
0.0000 N0.395 217
129 117 112 102 96 176
226 YITU -005 0.0022 0.0023 0.0025 0.0027 0.0031 0.0005 N0.113 62 172
0.0020 167
0.0020 160
0.0020 148
0.0020 131
0.0020 200
0.0017 N0.012 25

Table 25: Investigation mode: Effect of N on FNIR on recent images, for five enrollment population sizes, N, with T = 0 and FPIR = 1. The left five columns are rank-1 miss rates; the right five columns are rank-50 miss rates. Missing entries usually arise because another algorithm from the same developer was run instead. Some developers are missing because their less accurate algorithms were not run on galleries with N > 1 600 000. Throughout, blue superscripts indicate the rank of the algorithm for that column, and yellow highlighting indicates the most accurate value. Caution: the power-law models are intended mainly to draw attention to the kind of behavior observed, not to serve as models for prediction.


MISSES OUTSIDE RANK R. RESOURCE USAGE. ENROL: MOST RECENT, N = 1.6M. FRVT 2018 MUGSHOTS.
# ALGORITHM | TEMPLATE: BYTES, MSEC | FNIR(N, T=0, R): R=1, R=5, R=10, R=20, R=50 | WORK-10
213 85 281 275 274 271 264 276
1 20 FACE -000 2048 247 0.0552 0.0269 0.0198 0.0146 0.0099 1.275
90 186 290 285 285 281 281 286
2 3 DIVI -003 512 625 0.0833 0.0444 0.0349 0.0270 0.0191 1.447
302 187 248 239 237 233 228 244
3 3 DIVI -004 4096 628 0.0175 0.0091 0.0075 0.0061 0.0049 1.092
295 194 249 240 235 232 229 245
4 3 DIVI -005 4096 653 0.0176 0.0091 0.0074 0.0061 0.0049 1.092
99 195 259 266 270 272 277 265
5 3 DIVI -006 528 653 0.0240 0.0171 0.0160 0.0154 0.0148 1.162
84 75 225 206 202 200 193 207
6 ACER -000 512 201 0.0106 0.0051 0.0041 0.0034 0.0026 1.053
169 66 177 179 178 177 176 178
7 ACER -001 2048 184 0.0051 0.0032 0.0028 0.0025 0.0022 1.031
195 125 183 183 188 189 196 184
8 AIZE -001 2048 403 0.0056 0.0037 0.0033 0.0030 0.0027 1.035
200 89 244 253 259 264 267 252
9 ALCHERA -000 2048 263 0.0161 0.0124 0.0117 0.0111 0.0105 1.116
210 51 319 319 319 319 318 319
10 ALCHERA -001 2048 66 0.9869 0.9782 0.9735 0.9679 0.9590 9.811
230 59 292 290 288 288 283 290
11 ALCHERA -002 2048 115 0.0949 0.0555 0.0443 0.0354 0.0254 1.544
164 170 222 209 210 209 208 211
12 ALCHERA -003 2048 548 0.0104 0.0054 0.0045 0.0038 0.0032 1.055
186 278 228 205 198 192 189 205
13 ALCHERA -004 2048 854 0.0110 0.0049 0.0038 0.0032 0.0025 1.051
194 135 231 234 240 241 245 234
14 ALLGOVISION -000 2048 425 0.0114 0.0084 0.0078 0.0073 0.0067 1.079
199 257 211 203 201 199 197 203
15 ALLGOVISION -001 2048 792 0.0090 0.0048 0.0040 0.0033 0.0027 1.048
271 137 239 225 223 222 221 230
16 ANKE -000 2072 431 0.0132 0.0073 0.0060 0.0050 0.0040 1.072
272 138 240 226 225 223 222 231
17 ANKE -001 2072 433 0.0132 0.0073 0.0061 0.0050 0.0040 1.073
263 190 138 136 134 142 144 138
18 ANKE -002 2056 641 0.0028 0.0020 0.0018 0.0018 0.0017 1.019
273 231 267 264 261 257 253 266
19 AWARE -003 2076 716 0.0306 0.0162 0.0127 0.0100 0.0075 1.163
47 227 285 282 279 279 276 282
20 AWARE -004 92 712 0.0679 0.0348 0.0274 0.0208 0.0145 1.354
285 265 268 265 263 260 259 267
21 AWARE -005 3100 827 0.0311 0.0167 0.0134 0.0107 0.0082 1.167
48 261 287 284 280 280 278 284
22 AWARE -006 124 818 0.0697 0.0369 0.0288 0.0223 0.0158 1.371
127 46 312 313 313 313 312 313
23 AYONIX -000 1036 10 0.4505 0.3540 0.3176 0.2834 0.2381 4.288
128 48 307 306 306 307 306 306
24 AYONIX -001 1036 12 0.3414 0.2338 0.1977 0.1652 0.1274 3.226
130 47 306 307 307 306 307 307
25 AYONIX -002 1036 11 0.3414 0.2338 0.1977 0.1652 0.1274 3.226
112 224 280 289 290 293 293 288
26 CAMVI -003 1024 707 0.0520 0.0517 0.0517 0.0517 0.0517 1.466
113 233 278 287 289 290 292 285
27 CAMVI -004 1024 718 0.0468 0.0465 0.0465 0.0464 0.0464 1.419
116 248 284 291 295 296 298 291
28 CAMVI -005 1024 769 0.0652 0.0648 0.0648 0.0648 0.0647 1.584
292 294 15 25 21 22 22 21
29 CANON -001 4096 893 0.0011 0.0010 0.0010 0.0009 0.0009 1.009
35 45 23 21 19 18 21 18
30 CANON -002 0 6 0.0012 0.0010 0.0009 0.0009 0.0009 1.009
319 203 58 61 60 61 63 62
31 CIB -000 8196 674 0.0015 0.0013 0.0012 0.0012 0.0012 1.012
291 245 16 24 25 21 20 19
32 CLEARVIEWAI -000 4096 765 0.0011 0.0010 0.0010 0.0009 0.0009 1.009
181 301 54 83 93 103 113 78
33 CLOUDWALK - HR -000 2048 908 0.0015 0.0014 0.0014 0.0014 0.0014 1.013
174 285 76 117 127 136 153 111
34 CLOUDWALK - MT-000 2048 870 0.0018 0.0018 0.0018 0.0018 0.0018 1.016
44 27 75 115 126 137 154 110
35 CLOUDWALK - MT-001 0 2 0.0018 0.0018 0.0018 0.0018 0.0018 1.016
95 171 224 246 251 193 185 242
36 COGENT-000 525 551 0.0105 0.0096 0.0095 0.0032 0.0024 1.088
96 172 223 245 250 194 186 241
37 COGENT-001 525 552 0.0105 0.0096 0.0095 0.0032 0.0024 1.088
131 319 153 148 145 140 134 149
38 COGENT-002 1043 987 0.0036 0.0022 0.0020 0.0018 0.0015 1.021
132 316 155 159 155 156 147 157
39 COGENT-003 1043 960 0.0038 0.0024 0.0021 0.0019 0.0017 1.023
257 313 97 99 100 109 111 96
40 COGENT-004 2053 952 0.0020 0.0016 0.0015 0.0015 0.0014 1.015
133 251 69 82 84 95 103 81
41 COGENT-005 1062 774 0.0017 0.0014 0.0014 0.0014 0.0013 1.013
31 6 33 39 38 45 51 38
42 COGENT-006 0 0 0.0012 0.0011 0.0011 0.0011 0.0011 1.010
238 64 261 259 257 254 243 260
43 COGNITEC -000 2052 176 0.0252 0.0136 0.0107 0.0085 0.0065 1.136
255 76 232 217 216 217 214 218
44 COGNITEC -001 2052 202 0.0117 0.0062 0.0051 0.0042 0.0034 1.062
251 81 184 182 183 185 194 183
45 COGNITEC -002 2052 227 0.0057 0.0037 0.0032 0.0029 0.0026 1.035
243 99 188 191 192 198 206 190
46 COGNITEC -003 2052 297 0.0062 0.0040 0.0036 0.0033 0.0030 1.039
247 72 146 139 125 119 115 143
47 COGNITEC -004 2052 192 0.0032 0.0020 0.0018 0.0015 0.0014 1.020
248 113 66 57 56 55 58 57
48 COGNITEC -005 2052 367 0.0016 0.0013 0.0012 0.0012 0.0011 1.012
244 147 62 56 53 53 57 54
49 COGNITEC -006 2052 463 0.0016 0.0013 0.0012 0.0012 0.0011 1.012
171 305 47 74 85 96 106 69
50 CUBOX -000 2048 918 0.0014 0.0014 0.0014 0.0014 0.0014 1.012
249 219 157 169 173 176 177 168
51 CYBERLINK -000 2052 699 0.0040 0.0028 0.0026 0.0024 0.0022 1.027
252 139 151 154 153 147 151 152
52 CYBERLINK -001 2052 433 0.0035 0.0023 0.0021 0.0018 0.0017 1.022
313 240 130 150 160 167 171 147
53 CYBERLINK -002 4140 738 0.0026 0.0023 0.0022 0.0021 0.0021 1.021
317 217 63 62 62 62 61 66
54 CYBERLINK -003 6212 696 0.0016 0.0013 0.0013 0.0012 0.0012 1.012
316 241 68 90 99 105 117 85
55 CYBERLINK -004 6212 738 0.0017 0.0015 0.0015 0.0014 0.0014 1.014
315 242 80 100 106 114 118 93
56 CYBERLINK -005 6212 739 0.0018 0.0016 0.0015 0.0015 0.0014 1.015
191 119 215 220 224 230 234 219
57 DAHUA -000 2048 378 0.0093 0.0066 0.0061 0.0057 0.0054 1.062
209 115 192 192 189 196 200 192
58 DAHUA -001 2048 371 0.0067 0.0040 0.0036 0.0033 0.0029 1.040
192 218 85 87 96 100 104 87
59 DAHUA -002 2048 699 0.0018 0.0015 0.0014 0.0014 0.0013 1.014
187 236 29 15 15 14 14 16
60 DAHUA -003 2048 725 0.0012 0.0010 0.0009 0.0009 0.0009 1.009
212 244 14 13 16 17 19 14
61 DAHUA -004 2048 759 0.0011 0.0010 0.0009 0.0009 0.0009 1.009
266 177 160 184 195 205 217 180
62 DAON -000 2069 584 0.0041 0.0038 0.0037 0.0037 0.0036 1.034
241 288 104 101 104 101 95 101
63 DECATUR -000 2052 874 0.0021 0.0016 0.0015 0.0014 0.0013 1.015
296 208 51 71 75 80 89 68
64 DEEPGLINT-001 4096 687 0.0014 0.0014 0.0013 0.0013 0.0013 1.012
207 254 165 149 132 125 109 153
65 DEEPSEA -001 2048 780 0.0043 0.0022 0.0018 0.0016 0.0014 1.022
50 78 296 295 293 292 291 295
66 DERMALOG -003 128 211 0.1259 0.0744 0.0603 0.0480 0.0347 1.731
49 77 295 294 292 291 290 294
67 DERMALOG -004 128 208 0.1251 0.0739 0.0598 0.0475 0.0343 1.727
52 164 243 256 260 266 272 253
68 DERMALOG -005 128 532 0.0149 0.0129 0.0125 0.0123 0.0122 1.118
65 162 206 224 226 234 242 222
69 DERMALOG -006 256 514 0.0081 0.0069 0.0066 0.0065 0.0063 1.063
51 132 214 221 222 229 236 220
70 DERMALOG -007 128 413 0.0092 0.0066 0.0060 0.0057 0.0054 1.062
79 114 139 135 131 128 131 137
71 DERMALOG -008 512 370 0.0029 0.0020 0.0018 0.0017 0.0015 1.019
89 110 137 157 166 170 178 151
72 DERMALOG -009 512 347 0.0028 0.0024 0.0023 0.0023 0.0022 1.022

Table 26: Rank-based accuracy for the FRVT 2018 mugshot sets. Columns 3 and 4 give template size in bytes and template generation time in milliseconds. The remaining values are rank-based FNIR with T = 0 and FPIR = 1; this is appropriate to investigational uses, but not to higher-volume uses in which candidates from all searches would need review. The last column is a workload statistic; a small value indicates that an algorithm front-loads mates into the first 10 candidates. Throughout, blue superscripts indicate the rank of the algorithm for that column, and the best value is highlighted in yellow.
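
To make the rank-based columns concrete, the short sketch below shows one way a rank-R miss rate of this kind can be computed once the rank of each mated search's enrolled mate is known. It is an illustration under simplifying assumptions, not the FRVT scoring code; the evaluation's handling of candidate lists, thresholds, and failures to enroll is more involved.

    import numpy as np

    def miss_rate_at_rank(mate_ranks, R):
        # Fraction of mated searches whose enrolled mate does not appear among
        # the top R candidates. Use np.inf for searches where the mate was never
        # returned (e.g. a template failure). This mirrors FNIR(N, R, T=0) in
        # spirit only; it is not the report's implementation.
        mate_ranks = np.asarray(mate_ranks, dtype=float)
        return float(np.mean(mate_ranks > R))

    # Toy example values (not FRVT data): rank of the mate in ten searches.
    ranks = [1, 1, 2, 1, 7, 1, 1, 30, 1, np.inf]
    for R in (1, 5, 10, 20, 50):
        print(f"R = {R:>2}: miss rate = {miss_rate_at_rank(ranks, R):.2f}")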


MISSES OUTSIDE RANK R. RESOURCE USAGE. ENROL: MOST RECENT, N = 1.6M. FRVT 2018 MUGSHOTS.
# ALGORITHM | TEMPLATE: BYTES, MSEC | FNIR(N, T=0, R): R=1, R=5, R=10, R=20, R=50 | WORK-10
25 7 108 144 152 165 168 136
73 DERMALOG -010 0 0 0.0022 0.0021 0.0021 0.0020 0.0020 1.019
36 35 315 316 316 316 316 316
74 DIGIDATA -000 0 2 0.5897 0.5892 0.5891 0.5891 0.5891 6.303
41 29 110 95 89 91 78 97
75 DILUSENSE -000 0 2 0.0022 0.0015 0.0014 0.0013 0.0013 1.015
129 120 289 286 286 282 282 287
76 EYEDEA -003 1036 385 0.0800 0.0451 0.0362 0.0289 0.0211 1.448
222 277 236 248 254 258 263 248
77 F 8-001 2048 851 0.0120 0.0105 0.0102 0.0100 0.0099 1.096
205 152 227 208 204 202 195 209
78 FINCORE -000 2048 477 0.0108 0.0052 0.0042 0.0034 0.0026 1.054
43 5 31 44 49 56 64 41
79 FIRSTCREDITKZ -001 0 0 0.0012 0.0012 0.0012 0.0012 0.0012 1.011
126 312 111 106 107 107 108 106
80 FUJITSULAB -000 1032 950 0.0022 0.0016 0.0015 0.0015 0.0014 1.015
33 11 87 93 103 102 110 90
81 FUJITSULAB -001 0 1 0.0019 0.0015 0.0015 0.0014 0.0014 1.014
77 60 300 302 302 302 302 301
82 GLORY-000 418 160 0.1781 0.1391 0.1266 0.1154 0.1007 2.298
156 127 297 297 297 298 299 297
83 GLORY-001 1726 405 0.1268 0.0967 0.0869 0.0778 0.0673 1.903
276 63 282 277 276 276 269 277
84 GORILLA -001 2156 169 0.0603 0.0304 0.0230 0.0174 0.0117 1.309
137 108 255 241 231 225 223 247
85 GORILLA -002 1132 341 0.0197 0.0092 0.0070 0.0054 0.0041 1.096
275 175 269 262 256 248 235 264
86 GORILLA -003 2156 563 0.0361 0.0146 0.0106 0.0078 0.0054 1.158
277 123 189 178 174 171 156 179
87 GORILLA -004 2192 395 0.0063 0.0032 0.0026 0.0023 0.0018 1.033
318 155 145 124 121 111 88 134
88 GORILLA -005 6288 483 0.0032 0.0019 0.0017 0.0015 0.0013 1.018
320 246 74 55 54 51 52 61
89 GORILLA -006 8336 768 0.0017 0.0013 0.0012 0.0012 0.0011 1.012
45 44 70 51 48 43 47 52
90 GORILLA -007 0 6 0.0017 0.0012 0.0012 0.0011 0.0011 1.012
5 43 55 43 43 40 46 44
91 GORILLA -008 0 4 0.0015 0.0012 0.0011 0.0011 0.0011 1.011
254 134 128 133 137 143 143 131
92 GRIAULE -000 2052 419 0.0025 0.0020 0.0019 0.0018 0.0017 1.018
19 15 28 31 34 34 42 31
93 GRIAULE -001 0 2 0.0012 0.0011 0.0011 0.0010 0.0010 1.010
145 188 233 215 214 214 205 216
94 HIK -003 1408 633 0.0117 0.0060 0.0048 0.0039 0.0030 1.061
138 160 230 213 213 207 201 215
95 HIK -004 1152 510 0.0113 0.0059 0.0047 0.0037 0.0030 1.060
144 185 169 161 148 133 126 162
96 HIK -005 1408 619 0.0046 0.0025 0.0020 0.0017 0.0015 1.025
143 181 170 162 149 132 125 163
97 HIK -006 1408 610 0.0046 0.0025 0.0020 0.0017 0.0015 1.025
114 276 43 58 65 76 94 56
98 HYPERVERGE -001 1024 846 0.0014 0.0013 0.0013 0.0013 0.0013 1.012
1 10 40 59 67 75 83 55
99 HYPERVERGE -002 0 1 0.0014 0.0013 0.0013 0.0013 0.0013 1.012
15 9 109 104 109 110 119 104
100 HZAILU -000 0 1 0.0022 0.0016 0.0015 0.0015 0.0014 1.015
39 30 94 112 117 124 130 109
101 HZAILU -001 0 2 0.0020 0.0017 0.0016 0.0016 0.0015 1.016
97 209 196 199 199 201 198 196
102 IDEMIA -003 528 689 0.0069 0.0045 0.0039 0.0034 0.0027 1.043
98 201 191 188 184 183 172 186
103 IDEMIA -004 528 669 0.0066 0.0038 0.0032 0.0027 0.0021 1.038
75 117 205 197 193 195 203 199
104 IDEMIA -005 352 374 0.0081 0.0044 0.0036 0.0032 0.0030 1.044
76 116 219 207 205 213 218 206
105 IDEMIA -006 352 373 0.0096 0.0052 0.0042 0.0039 0.0037 1.052
108 259 129 102 92 74 68 107
106 IDEMIA -007 860 807 0.0026 0.0016 0.0014 0.0013 0.0012 1.015
74 142 12 11 14 19 18 10
107 IDEMIA -008 300 451 0.0011 0.0009 0.0009 0.0009 0.0009 1.009
20 1 5 5 8 11 11 5
108 IDEMIA -009 0 0 0.0010 0.0009 0.0009 0.0009 0.0009 1.008
83 52 303 301 300 299 297 302
109 IMAGUS -002 512 76 0.2203 0.1342 0.1090 0.0871 0.0632 2.308
80 50 309 308 308 308 308 308
110 IMAGUS -003 512 57 0.3559 0.2491 0.2132 0.1791 0.1397 3.363
162 256 93 103 101 99 101 98
111 IMAGUS -005 2048 788 0.0019 0.0016 0.0015 0.0014 0.0013 1.015
217 299 98 108 111 112 121 102
112 IMAGUS -006 2048 905 0.0020 0.0016 0.0015 0.0015 0.0014 1.015
215 179 101 89 87 79 82 88
113 IMAGUS -007 2048 590 0.0020 0.0015 0.0014 0.0013 0.0013 1.014
10 21 291 292 294 294 294 292
114 IMAGUS -008 0 2 0.0860 0.0701 0.0646 0.0590 0.0518 1.648
214 197 125 126 133 141 146 126
115 IMPERIAL -000 2048 654 0.0024 0.0019 0.0018 0.0018 0.0017 1.018
118 70 279 274 275 273 270 274
116 INCODE -000 1024 190 0.0489 0.0261 0.0204 0.0160 0.0117 1.262
220 212 246 235 228 227 226 237
117 INCODE -001 2048 690 0.0166 0.0084 0.0067 0.0055 0.0043 1.086
184 96 250 238 232 228 227 243
118 INCODE -002 2048 291 0.0178 0.0090 0.0070 0.0056 0.0043 1.092
183 220 238 219 217 215 207 224
119 INCODE -003 2048 704 0.0129 0.0064 0.0051 0.0040 0.0031 1.066
168 159 152 155 157 159 159 154
120 INCODE -004 2048 508 0.0035 0.0024 0.0021 0.0020 0.0019 1.023
185 158 67 72 82 78 80 73
121 INCODE -005 2048 500 0.0017 0.0014 0.0014 0.0013 0.0013 1.013
101 86 277 279 282 284 286 281
122 INNOVATRICS -002 530 255 0.0451 0.0342 0.0322 0.0308 0.0297 1.321
100 87 263 254 249 243 232 257
123 INNOVATRICS -003 530 255 0.0263 0.0126 0.0095 0.0074 0.0053 1.129
134 129 237 218 215 216 210 223
124 INNOVATRICS -004 1076 406 0.0123 0.0063 0.0050 0.0040 0.0032 1.064
102 274 126 120 119 122 120 120
125 INNOVATRICS -005 538 842 0.0024 0.0018 0.0017 0.0016 0.0014 1.017
104 255 71 77 73 70 77 76
126 INNOVATRICS -007 538 785 0.0017 0.0014 0.0013 0.0013 0.0012 1.013
3 25 99 86 77 68 65 86
127 INTELIGENSIA -000 0 2 0.0020 0.0015 0.0013 0.0013 0.0012 1.014
30 36 270 272 268 267 262 270
128 INTELLIVISION -001 0 2 0.0365 0.0199 0.0160 0.0126 0.0095 1.199
28 38 226 211 209 206 204 212
129 INTELLIVISION -002 0 2 0.0107 0.0055 0.0044 0.0037 0.0030 1.055
8 3 19 32 33 36 43 28
130 INTEMA -000 0 0 0.0011 0.0011 0.0011 0.0010 0.0010 1.010
167 205 298 300 303 303 305 300
131 INTSYSMSU -000 2048 675 0.1457 0.1320 0.1272 0.1225 0.1163 2.203
284 321 166 193 207 219 225 191
132 IREX -000 3080 2379 0.0044 0.0043 0.0043 0.0043 0.0043 1.039
229 104 190 194 200 204 215 195
133 ISYSTEMS -002 2048 316 0.0064 0.0043 0.0039 0.0037 0.0034 1.041
188 279 178 189 194 203 211 185
134 ISYSTEMS -003 2048 856 0.0052 0.0039 0.0036 0.0034 0.0033 1.037
236 271 53 37 36 29 30 39
135 KAKAO -000 2052 840 0.0015 0.0011 0.0011 0.0010 0.0010 1.010
2 26 44 65 72 84 91 60
136 KAKAO -001 0 2 0.0014 0.0013 0.0013 0.0013 0.0013 1.012
73 166 201 227 233 240 249 225
137 KEDACOM -001 292 537 0.0077 0.0074 0.0073 0.0072 0.0072 1.067
173 163 185 214 221 231 238 208
138 KNERON -000 2048 530 0.0059 0.0059 0.0059 0.0059 0.0059 1.053
228 151 266 276 281 283 285 275
139 KNERON -001 2048 468 0.0295 0.0295 0.0295 0.0295 0.0295 1.266
189 153 112 96 86 67 62 95
140 LINE -000 2048 482 0.0022 0.0015 0.0014 0.0013 0.0012 1.015
211 303 18 23 26 25 25 22
141 LINE -001 2048 910 0.0011 0.0010 0.0010 0.0009 0.0009 1.009
6 24 38 50 51 59 67 47
142 LINECLOVA -002 0 2 0.0013 0.0012 0.0012 0.0012 0.0012 1.011
72 109 210 231 239 245 252 228
143 LOOKMAN -003 292 342 0.0088 0.0078 0.0076 0.0075 0.0074 1.071
106 105 212 232 238 244 251 229
144 LOOKMAN -004 548 325 0.0091 0.0079 0.0076 0.0075 0.0073 1.072

Table 27: Rank-based accuracy for the FRVT 2018 mugshot sets. Columns 3 and 4 give template size in bytes and template generation time in milliseconds. The remaining values are rank-based FNIR with T = 0 and FPIR = 1; this is appropriate to investigational uses, but not to higher-volume uses in which candidates from all searches would need review. The last column is a workload statistic; a small value indicates that an algorithm front-loads mates into the first 10 candidates. Throughout, blue superscripts indicate the rank of the algorithm for that column, and the best value is highlighted in yellow.


MISSES OUTSIDE RANK R. RESOURCE USAGE. ENROL: MOST RECENT, N = 1.6M. FRVT 2018 MUGSHOTS.
# ALGORITHM | TEMPLATE: BYTES, MSEC | FNIR(N, T=0, R): R=1, R=5, R=10, R=20, R=50 | WORK-10
105 161 204 229 236 242 250 226
145 LOOKMAN -005 548 514 0.0080 0.0075 0.0074 0.0073 0.0072 1.068
237 131 73 68 68 66 69 71
146 MANTRA -000 2052 412 0.0017 0.0013 0.0013 0.0012 0.0012 1.013
22 14 123 113 112 113 107 117
147 MAXVISION -000 0 2 0.0024 0.0017 0.0016 0.0015 0.0014 1.016
23 40 30 40 42 47 50 37
148 MAXVISION -001 0 2 0.0012 0.0011 0.0011 0.0011 0.0011 1.010
301 193 234 242 242 252 258 238
149 MEGVII -001 4096 652 0.0118 0.0093 0.0087 0.0084 0.0080 1.086
294 198 235 243 244 251 257 239
150 MEGVII -002 4096 656 0.0118 0.0093 0.0088 0.0084 0.0080 1.087
60 92 317 315 315 315 315 315
151 MICROFOCUS -003 256 269 0.5942 0.4692 0.4204 0.3724 0.3095 5.361
59 93 314 314 314 314 314 314
152 MICROFOCUS -004 256 270 0.5763 0.4519 0.4026 0.3560 0.2957 5.199
66 91 310 310 309 309 310 310
153 MICROFOCUS -005 256 266 0.4242 0.3028 0.2606 0.2209 0.1724 3.861
64 90 311 311 310 311 311 311
154 MICROFOCUS -006 256 265 0.4268 0.3049 0.2623 0.2233 0.1746 3.880
111 126 64 22 9 3 2 25
155 MICROSOFT-003 1024 404 0.0016 0.0010 0.0009 0.0008 0.0006 1.009
223 250 56 10 1 1 1 23
156 MICROSOFT-004 2048 773 0.0015 0.0009 0.0008 0.0007 0.0006 1.009
115 202 88 14 6 2 3 32
157 MICROSOFT-005 1024 673 0.0019 0.0010 0.0008 0.0008 0.0006 1.010
117 216 96 38 20 4 4 48
158 MICROSOFT-006 1024 695 0.0020 0.0011 0.0010 0.0008 0.0007 1.011
14 17 262 260 258 255 248 261
159 MUKH -002 0 2 0.0258 0.0139 0.0112 0.0090 0.0070 1.140
283 53 247 237 227 224 220 240
160 NEC -000 2592 82 0.0170 0.0086 0.0066 0.0052 0.0038 1.087
282 54 256 261 262 265 268 259
161 NEC -001 2592 88 0.0209 0.0141 0.0128 0.0119 0.0113 1.135
154 196 11 6 7 7 5 6
162 NEC -002 1616 653 0.0010 0.0009 0.0008 0.0008 0.0008 1.008
155 211 41 54 57 64 66 50
163 NEC -003 1712 690 0.0014 0.0012 0.0012 0.0012 0.0012 1.011
135 318 49 69 79 81 90 67
164 NEC -004 1104 967 0.0014 0.0013 0.0013 0.0013 0.0013 1.012
136 317 27 36 41 44 49 36
165 NEC -005 1104 964 0.0012 0.0011 0.0011 0.0011 0.0011 1.010
34 12 34 48 52 54 59 45
166 NEC -006 0 1 0.0013 0.0012 0.0012 0.0012 0.0011 1.011
178 169 257 255 252 249 237 256
167 NEUROTECHNOLOGY-003 2048 547 0.0225 0.0126 0.0100 0.0078 0.0057 1.125
218 168 180 181 186 188 187 182
168 NEUROTECHNOLOGY-004 2048 543 0.0056 0.0036 0.0032 0.0029 0.0025 1.035
61 130 164 171 175 175 181 172
169 NEUROTECHNOLOGY-005 256 412 0.0043 0.0029 0.0027 0.0024 0.0023 1.028
62 243 251 233 220 220 212 235
170 NEUROTECHNOLOGY-006 256 746 0.0180 0.0079 0.0059 0.0046 0.0033 1.083
63 62 156 166 171 172 174 164
171 NEUROTECHNOLOGY-007 256 169 0.0039 0.0027 0.0025 0.0023 0.0022 1.026
94 258 107 91 95 97 97 94
172 NEUROTECHNOLOGY-008 514 804 0.0022 0.0015 0.0014 0.0014 0.0013 1.015
92 207 50 49 50 50 54 49
173 NEUROTECHNOLOGY-009 513 686 0.0014 0.0012 0.0012 0.0011 0.0011 1.011
67 200 32 28 31 31 40 30
174 NEUROTECHNOLOGY-010 256 663 0.0012 0.0011 0.0010 0.0010 0.0010 1.010
12 2 9 19 23 24 28 13
175 NEUROTECHNOLOGY-012 0 0 0.0010 0.0010 0.0010 0.0009 0.0009 1.009
176 284 288 288 287 287 284 289
176 NEWLAND -002 2048 868 0.0786 0.0480 0.0397 0.0332 0.0263 1.468
208 79 305 305 305 305 303 305
177 NOBLIS -001 2048 211 0.2492 0.1772 0.1542 0.1339 0.1112 2.679
314 165 301 298 298 297 296 298
178 NOBLIS -002 6144 535 0.1794 0.1108 0.0903 0.0722 0.0535 2.077
274 146 127 145 151 162 166 140
179 NOTIONTAG -000 2120 461 0.0024 0.0021 0.0021 0.0020 0.0019 1.019
288 267 186 174 167 157 138 177
180 NTECHLAB -003 3484 831 0.0062 0.0029 0.0023 0.0019 0.0016 1.030
287 306 173 152 140 127 102 160
181 NTECHLAB -004 3484 929 0.0048 0.0023 0.0019 0.0016 0.0013 1.024
159 232 171 147 124 83 45 155
182 NTECHLAB -005 1940 717 0.0047 0.0022 0.0017 0.0013 0.0011 1.023
160 272 161 125 102 60 24 142
183 NTECHLAB -006 1940 841 0.0041 0.0019 0.0015 0.0012 0.0009 1.019
286 269 131 109 94 90 72 113
184 NTECHLAB -007 3348 834 0.0027 0.0017 0.0014 0.0013 0.0012 1.016
141 173 72 47 47 46 41 53
185 NTECHLAB -008 1300 562 0.0017 0.0012 0.0012 0.0011 0.0010 1.012
142 297 35 30 29 28 29 34
186 NTECHLAB -009 1300 900 0.0013 0.0011 0.0010 0.0010 0.0009 1.010
140 289 17 26 28 30 39 24
187 NTECHLAB -010 1280 875 0.0011 0.0010 0.0010 0.0010 0.0010 1.009
139 282 10 9 13 15 17 9
188 NTECHLAB -011 1280 865 0.0010 0.0009 0.0009 0.0009 0.0009 1.008
32 34 25 33 35 35 37 33
189 PANGIAM -000 0 2 0.0012 0.0011 0.0011 0.0010 0.0010 1.010
11 19 195 222 230 236 246 217
190 PANGIAM -001 0 2 0.0069 0.0068 0.0068 0.0068 0.0068 1.061
198 141 252 267 272 275 280 263
191 PARAVISION -000 2048 438 0.0188 0.0171 0.0167 0.0165 0.0164 1.156
226 178 154 158 158 163 160 158
192 PARAVISION -001 2048 590 0.0038 0.0024 0.0022 0.0020 0.0019 1.023
224 118 159 163 163 166 162 161
193 PARAVISION -002 2048 377 0.0040 0.0025 0.0022 0.0021 0.0019 1.025
206 238 144 146 150 153 150 146
194 PARAVISION -003 2048 735 0.0031 0.0022 0.0020 0.0019 0.0017 1.021
303 235 65 75 78 88 96 75
195 PARAVISION -004 4096 720 0.0016 0.0014 0.0013 0.0013 0.0013 1.013
293 280 60 73 81 89 99 70
196 PARAVISION -005 4096 858 0.0015 0.0014 0.0013 0.0013 0.0013 1.013
290 223 24 34 32 33 33 29
197 PARAVISION -007 4096 706 0.0012 0.0011 0.0010 0.0010 0.0010 1.010
304 189 8 17 22 26 26 12
198 PARAVISION -009 4100 638 0.0010 0.0010 0.0010 0.0009 0.0009 1.009
281 73 168 172 172 169 165 173
199 PIXELALL -002 2560 198 0.0045 0.0029 0.0025 0.0022 0.0019 1.028
279 234 105 105 108 106 116 105
200 PIXELALL -003 2560 719 0.0021 0.0016 0.0015 0.0014 0.0014 1.015
280 143 102 94 98 104 105 92
201 PIXELALL -004 2560 453 0.0020 0.0015 0.0015 0.0014 0.0013 1.014
278 275 90 111 114 126 137 103
202 PIXELALL -005 2560 845 0.0019 0.0017 0.0016 0.0016 0.0016 1.015
103 304 142 143 144 139 140 144
203 PTAKURATSATU -000 538 910 0.0030 0.0021 0.0019 0.0018 0.0016 1.020
202 144 202 195 196 197 199 197
204 QNAP -000 2048 457 0.0078 0.0044 0.0037 0.0033 0.0028 1.043
193 183 162 173 176 178 184 170
205 QNAP -001 2048 615 0.0041 0.0029 0.0027 0.0025 0.0023 1.028
27 37 174 196 208 218 224 193
206 QNAP -002 0 2 0.0049 0.0044 0.0043 0.0043 0.0042 1.040
21 13 136 141 138 129 136 141
207 QNAP -003 0 2 0.0028 0.0021 0.0019 0.0017 0.0015 1.019
219 124 302 304 304 304 304 304
208 QUANTASOFT-001 2048 396 0.2177 0.1643 0.1468 0.1312 0.1116 2.539
53 57 254 250 247 246 240 250
209 RANKONE -002 133 113 0.0194 0.0112 0.0093 0.0077 0.0060 1.111
54 58 253 249 246 247 241 249
210 RANKONE -003 133 114 0.0194 0.0112 0.0093 0.0077 0.0060 1.111
46 49 276 273 273 269 265 273
211 RANKONE -004 85 36 0.0415 0.0226 0.0177 0.0141 0.0102 1.225
55 55 216 210 211 212 209 210
212 RANKONE -005 133 94 0.0094 0.0054 0.0046 0.0039 0.0032 1.054
56 88 176 177 177 173 169 176
213 RANKONE -006 165 261 0.0050 0.0030 0.0027 0.0024 0.0021 1.030
57 95 148 153 154 149 142 150
214 RANKONE -007 165 278 0.0034 0.0023 0.0021 0.0018 0.0017 1.022
68 71 120 107 110 115 114 108
215 RANKONE -009 260 191 0.0024 0.0016 0.0015 0.0015 0.0014 1.015
70 74 113 116 116 120 122 116
216 RANKONE -010 261 200 0.0022 0.0018 0.0016 0.0015 0.0015 1.016

Table 28: Rank-based accuracy for the FRVT 2018 mugshot sets. Columns 3 and 4 give template size in bytes and template generation time in milliseconds. The remaining values are rank-based FNIR with T = 0 and FPIR = 1; this is appropriate to investigational uses, but not to higher-volume uses in which candidates from all searches would need review. The last column is a workload statistic; a small value indicates that an algorithm front-loads mates into the first 10 candidates. Throughout, blue superscripts indicate the rank of the algorithm for that column, and the best value is highlighted in yellow.

FNIR(N, R, T) = False neg. identification rate    N = Num. enrolled subjects    T = Threshold    T = 0 → Investigation
FPIR(N, T) = False pos. identification rate    R = Num. candidates examined    T > 0 → Identification
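To make the tabulated quantity concrete, here is a minimal formalization consistent with the legend above; the report's normative definitions appear in the main text. Over M mated searches, let rank_i denote the position of the enrolled mate in the candidate list returned for search i, taken as infinite when the mate is not returned. Then

    FNIR(N, R, T=0) = | { i : rank_i > R } | / M

so the R=1 column counts searches whose mate is not the top-ranked candidate, and the value cannot increase as R grows. As a worked reading of Table 28: the NTECHLAB-005 row has FNIR = 0.0047 at R=1, meaning roughly 0.5% of mated searches do not return the mate at rank one; examining the top 50 candidates reduces the miss rate to 0.0011.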

MISSES OUTSIDE RANK R    RESOURCE USAGE    ENROL MOST RECENT, N = 1.6 M
FNIR(N, T=0, R)    TEMPLATE    FRVT 2018 MUGSHOTS
# ALGORITHM    BYTES    MSEC    R=1    R=5    R=10    R=20    R=50    WORK-10
71 176 57 53 55 57 60 51
217 RANKONE -011 261 567 0.0015 0.0012 0.0012 0.0012 0.0012 1.011
69 174 42 46 46 49 55 42
218 RANKONE -012 261 563 0.0014 0.0012 0.0011 0.0011 0.0011 1.011
4 4 13 8 11 13 15 8
219 RANKONE -013 0 0 0.0011 0.0009 0.0009 0.0009 0.0009 1.008
305 83 275 271 267 262 255 272
220 REALNETWORKS -000 4100 244 0.0402 0.0195 0.0149 0.0111 0.0077 1.201
310 82 274 270 266 263 256 271
221 REALNETWORKS -001 4104 243 0.0402 0.0195 0.0149 0.0111 0.0077 1.201
306 84 271 269 265 261 254 269
222 REALNETWORKS -002 4104 245 0.0393 0.0189 0.0142 0.0108 0.0076 1.195
157 65 260 252 245 239 233 254
223 REALNETWORKS -003 1848 178 0.0242 0.0117 0.0090 0.0070 0.0054 1.120
158 67 258 251 243 237 230 251
224 REALNETWORKS -004 1848 185 0.0236 0.0112 0.0087 0.0068 0.0050 1.116
262 107 116 98 88 93 75 100
225 REALNETWORKS -005 2056 337 0.0023 0.0016 0.0014 0.0013 0.0012 1.015
260 111 46 45 44 39 34 43
226 REALNETWORKS -006 2056 350 0.0014 0.0012 0.0011 0.0011 0.0010 1.011
42 28 39 41 39 37 31 40
227 REALNETWORKS -007 0 2 0.0013 0.0012 0.0011 0.0011 0.0010 1.011
7 23 22 16 18 20 13 15
228 REALNETWORKS -008 0 2 0.0011 0.0010 0.0009 0.0009 0.0009 1.009
233 213 150 142 136 130 135 145
229 REMARKAI -000 2048 691 0.0034 0.0021 0.0019 0.0017 0.0015 1.020
170 182 209 198 191 191 188 200
230 REMARKAI -000 2048 615 0.0086 0.0044 0.0036 0.0031 0.0025 1.045
177 140 207 190 182 179 170 194
231 REMARKAI -002 2048 434 0.0081 0.0040 0.0031 0.0026 0.0021 1.041
196 295 61 63 58 63 70 63
232 RENDIP -000 2048 894 0.0015 0.0013 0.0012 0.0012 0.0012 1.012
250 121 86 66 70 71 76 74
233 REVEALMEDIA -000 2052 385 0.0019 0.0013 0.0013 0.0013 0.0012 1.013
300 283 122 114 118 123 129 119
234 S 1-000 4096 865 0.0024 0.0018 0.0017 0.0016 0.0015 1.017
221 260 143 160 170 174 183 156
235 S 1-001 2048 814 0.0031 0.0025 0.0024 0.0024 0.0023 1.023
24 42 48 67 74 86 100 65
236 S 1-002 0 2 0.0014 0.0013 0.0013 0.0013 0.0013 1.012
16 16 52 64 69 73 79 64
237 S 1-003 0 2 0.0015 0.0013 0.0013 0.0013 0.0013 1.012
165 228 175 165 161 146 133 167
238 SCANOVATE -000 2048 712 0.0050 0.0026 0.0022 0.0018 0.0015 1.026
190 204 179 167 164 148 132 171
239 SCANOVATE -001 2048 675 0.0053 0.0027 0.0022 0.0018 0.0015 1.028
309 230 118 137 142 145 149 132
240 SENSETIME -000 4104 715 0.0023 0.0020 0.0019 0.0018 0.0017 1.018
307 199 119 134 139 134 139 129
241 SENSETIME -001 4104 656 0.0023 0.0020 0.0019 0.0017 0.0016 1.018
261 192 241 258 264 268 274 255
242 SENSETIME -002 2056 650 0.0137 0.0136 0.0136 0.0136 0.0136 1.122
259 310 7 18 24 23 27 11
243 SENSETIME -003 2056 940 0.0010 0.0010 0.0010 0.0009 0.0009 1.009
122 226 6 7 10 10 10 7
244 SENSETIME -004 1032 710 0.0010 0.0009 0.0009 0.0009 0.0009 1.008
123 320 4 4 3 5 7 3
245 SENSETIME -005 1032 1007 0.0009 0.0008 0.0008 0.0008 0.0008 1.008
124 314 3 3 5 9 9 4
246 SENSETIME -006 1032 956 0.0009 0.0008 0.0008 0.0008 0.0008 1.008
120 315 2 2 4 8 8 2
247 SENSETIME -007 1032 958 0.0008 0.0008 0.0008 0.0008 0.0008 1.007
18 8 1 1 2 6 6 1
248 SENSETIME -008 0 1 0.0008 0.0008 0.0008 0.0008 0.0008 1.007
180 221 293 296 296 295 295 296
249 SHAMAN -003 2048 704 0.1243 0.0823 0.0708 0.0616 0.0518 1.789
232 191 304 303 301 301 300 303
250 SHAMAN -004 2048 642 0.2221 0.1473 0.1241 0.1049 0.0825 2.411
179 222 273 281 284 286 289 280
251 SHAMAN -006 2048 706 0.0398 0.0344 0.0332 0.0323 0.0315 1.316
201 225 272 280 283 285 288 278
252 SHAMAN -007 2048 709 0.0396 0.0342 0.0331 0.0322 0.0314 1.315
242 273 79 78 64 58 53 77
253 SIAT-001 2052 842 0.0018 0.0014 0.0013 0.0012 0.0011 1.013
245 300 82 76 80 77 74 80
254 SIAT-002 2052 906 0.0018 0.0014 0.0013 0.0013 0.0012 1.013
82 61 318 318 318 318 319 318
255 SMILART-004 512 167 0.9648 0.9641 0.9640 0.9639 0.9638 9.678
204 150 320
256 SMILART-005 2048 464 10.000
258 145 163 84 61 52 44 114
257 SQISOFT-001 2056 460 0.0042 0.0014 0.0013 0.0012 0.0010 1.016
9 20 37 27 27 27 23 27
258 SQISOFT-002 0 2 0.0013 0.0010 0.0010 0.0010 0.0009 1.010
299 264 198 216 218 226 231 213
259 STAQU -000 4096 827 0.0071 0.0060 0.0057 0.0055 0.0053 1.056
297 56 299 299 299 300 301 299
260 SYNESIS -003 4096 103 0.1700 0.1172 0.1047 0.0953 0.0869 2.120
163 80 245 263 269 274 279 262
261 SYNESIS -003 2048 215 0.0162 0.0160 0.0160 0.0160 0.0160 1.144
308 249 208 236 241 253 260 233
262 SYNESIS -005 4104 772 0.0085 0.0085 0.0085 0.0085 0.0085 1.076
37 33 221 247 255 259 266 246
263 T 4 ISB -000 0 2 0.0104 0.0103 0.0103 0.0103 0.0103 1.093
146 296 158 156 156 150 145 159
264 TECH 5-001 1536 898 0.0040 0.0024 0.0021 0.0018 0.0017 1.024
93 311 132 81 59 48 35 91
265 TECH 5-002 513 941 0.0027 0.0014 0.0012 0.0011 0.0010 1.014
166 101 242 228 219 221 219 232
266 TEVIAN -003 2048 300 0.0147 0.0074 0.0059 0.0047 0.0037 1.075
227 100 229 212 212 208 202 214
267 TEVIAN -004 2048 299 0.0113 0.0057 0.0047 0.0037 0.0030 1.058
175 133 199 187 181 182 180 189
268 TEVIAN -005 2048 416 0.0073 0.0038 0.0031 0.0027 0.0023 1.038
121 180 124 121 128 131 141 121
269 TEVIAN -006 1032 599 0.0024 0.0018 0.0018 0.0017 0.0017 1.017
125 253 78 70 76 87 85 72
270 TEVIAN -007 1032 779 0.0018 0.0014 0.0013 0.0013 0.0013 1.013
240 136 283 278 277 277 271 279
271 TIGER -000 2052 428 0.0616 0.0310 0.0236 0.0178 0.0120 1.315
235 149 182 175 169 152 127 174
272 TIGER -002 2052 464 0.0056 0.0029 0.0024 0.0019 0.0015 1.030
246 148 181 176 168 151 128 175
273 TIGER -003 2052 464 0.0056 0.0029 0.0024 0.0019 0.0015 1.030
268 69 193 185 187 186 192 187
274 TONGYITRANS -000 2070 190 0.0069 0.0038 0.0032 0.0029 0.0026 1.038
267 68 194 186 185 187 191 188
275 TONGYITRANS -001 2070 189 0.0069 0.0038 0.0032 0.0029 0.0026 1.038
153 307 167 164 162 160 157 165
276 TOSHIBA -000 1548 930 0.0045 0.0026 0.0022 0.0020 0.0018 1.026
265 308 172 168 165 164 158 169
277 TOSHIBA -001 2060 931 0.0048 0.0027 0.0023 0.0020 0.0018 1.027
161 112 147 170 179 181 190 166
278 TRUEFACE -000 2000 365 0.0033 0.0028 0.0028 0.0026 0.0026 1.026
13 22 217 244 248 256 261 236
279 TURINGTECHVIP -001 0 2 0.0095 0.0093 0.0093 0.0093 0.0093 1.084
119 106 313 312 311 310 309 312
280 VD -000 1028 337 0.4737 0.3204 0.2695 0.2215 0.1678 4.058
256 215 265 268 271 270 273 268
281 VD -001 2052 695 0.0276 0.0181 0.0162 0.0146 0.0130 1.174
253 210 218 230 234 238 247 227
282 VD -002 2052 689 0.0095 0.0077 0.0073 0.0070 0.0068 1.071
239 214 200 223 229 235 244 221
283 VD -003 2052 693 0.0076 0.0069 0.0067 0.0066 0.0066 1.063
203 291 135 130 122 121 124 130
284 VERIDAS -001 2048 885 0.0028 0.0019 0.0017 0.0015 0.0015 1.018
225 292 134 129 120 118 123 128
285 VERIDAS -002 2048 888 0.0028 0.0019 0.0017 0.0015 0.0015 1.018
182 290 81 88 91 92 92 84
286 VERIDAS -003 2048 877 0.0018 0.0015 0.0014 0.0013 0.0013 1.014
26 41 308 309 312 312 313 309
287 VERIJELAS -000 0 2 0.3547 0.2975 0.2805 0.2655 0.2489 3.744
147 268 286 283 278 278 275 283
288 VIGILANTSOLUTIONS -003 1544 832 0.0694 0.0349 0.0262 0.0201 0.0140 1.355

Table 29: Rank-based accuracy for the FRVT 2018 mugshot sets. Columns 3 and 4 give template size and template generation duration. The subsequent columns give rank-based FNIR with T = 0, at which FPIR = 1. This is appropriate for investigational use, but not for higher-volume operations in which candidates from all searches would need review. The final column is a workload statistic; a small value indicates that an algorithm front-loads mates into the first 10 candidates. Throughout, blue superscripts indicate the rank of the algorithm for that column, and the best value is highlighted in yellow.


MISSES OUTSIDE RANK R    RESOURCE USAGE    ENROL MOST RECENT, N = 1.6 M
FNIR(N, T=0, R)    TEMPLATE    FRVT 2018 MUGSHOTS
# ALGORITHM    BYTES    MSEC    R=1    R=5    R=10    R=20    R=50    WORK-10
149 266 294 293 291 289 287 293
289 VIGILANTSOLUTIONS -004 1544 830 0.1249 0.0706 0.0557 0.0434 0.0305 1.699
151 252 213 200 190 184 175 201
290 VIGILANTSOLUTIONS -005 1544 778 0.0092 0.0045 0.0036 0.0029 0.0022 1.046
150 270 220 202 197 190 179 204
291 VIGILANTSOLUTIONS -006 1544 834 0.0099 0.0048 0.0038 0.0030 0.0022 1.049
152 184 149 132 123 117 98 139
292 VIGILANTSOLUTIONS -007 1544 618 0.0034 0.0020 0.0017 0.0015 0.0013 1.019
148 128 141 123 113 108 86 127
293 VIGILANTSOLUTIONS -008 1544 405 0.0029 0.0018 0.0016 0.0015 0.0013 1.018
264 154 91 92 97 94 84 89
294 VISIONBOX -000 2059 482 0.0019 0.0015 0.0014 0.0013 0.0013 1.014
58 103 133 119 115 116 112 123
295 VISIONLABS -004 256 315 0.0027 0.0018 0.0016 0.0015 0.0014 1.017
85 102 121 110 105 98 93 112
296 VISIONLABS -005 512 300 0.0024 0.0017 0.0015 0.0014 0.0013 1.016
91 97 83 85 83 85 87 83
297 VISIONLABS -006 512 292 0.0018 0.0015 0.0014 0.0013 0.0013 1.014
81 98 77 80 71 69 73 79
298 VISIONLABS -007 512 293 0.0018 0.0014 0.0013 0.0013 0.0012 1.013
78 94 100 122 130 135 148 118
299 VISIONLABS -008 512 277 0.0020 0.0018 0.0018 0.0018 0.0017 1.017
86 157 20 29 30 32 32 26
300 VISIONLABS -009 512 494 0.0011 0.0011 0.0010 0.0010 0.0010 1.010
88 237 45 60 66 72 81 59
301 VISIONLABS -010 512 732 0.0014 0.0013 0.0013 0.0013 0.0013 1.012
87 239 26 35 40 42 48 35
302 VISIONLABS -011 512 736 0.0012 0.0011 0.0011 0.0011 0.0011 1.010
38 32 114 97 90 82 71 99
303 VIXVIZION -009 0 2 0.0023 0.0016 0.0014 0.0013 0.0012 1.015
17 18 106 127 135 144 155 124
304 VNPT-001 0 2 0.0022 0.0019 0.0018 0.0018 0.0018 1.017
29 39 89 118 129 138 152 115
305 VNPT-002 0 2 0.0019 0.0018 0.0018 0.0018 0.0017 1.016
110 229 187 180 180 180 182 181
306 VOCORD -003 896 714 0.0062 0.0035 0.0030 0.0026 0.0023 1.035
109 167 203 204 206 211 213 202
307 VOCORD -004 896 538 0.0079 0.0049 0.0043 0.0038 0.0034 1.048
107 262 197 201 203 210 216 198
308 VOCORD -005 768 822 0.0070 0.0046 0.0041 0.0038 0.0035 1.044
321 263 321 321 321 320 320 321
309 VOCORD -006 10240 825 1.0000 1.0000 1.0000 1.0000 1.0000 10.000
234 156 316 317 317 317 317 317
310 VTS -000 2048 492 0.5937 0.5936 0.5936 0.5936 0.5936 6.343
216 293 59 42 37 38 36 46
311 VTS -001 2048 891 0.0015 0.0012 0.0011 0.0011 0.0010 1.011
172 298 92 79 63 65 56 82
312 VTS -002 2048 903 0.0019 0.0014 0.0013 0.0012 0.0011 1.013
40 31 21 20 17 16 16 17
313 VTS -003 0 2 0.0011 0.0010 0.0009 0.0009 0.0009 1.009
197 247 115 138 146 158 164 133
314 XFORWARDAI -000 2048 768 0.0023 0.0020 0.0020 0.0019 0.0019 1.018
231 206 103 131 143 155 163 125
315 XFORWARDAI -001 2048 681 0.0020 0.0019 0.0019 0.0019 0.0019 1.018
298 309 95 128 141 154 161 122
316 XFORWARDAI -002 4096 935 0.0020 0.0019 0.0019 0.0019 0.0019 1.017
289 122 264 257 253 250 239 258
317 YISHENG -001 3704 387 0.0265 0.0130 0.0102 0.0080 0.0059 1.134
312 286 84 52 45 41 38 58
318 YITU -002 4138 870 0.0018 0.0012 0.0011 0.0011 0.0010 1.012
311 287 140 151 159 168 173 148
319 YITU -003 4138 871 0.0029 0.0023 0.0022 0.0021 0.0021 1.021
269 302 36 12 12 12 12 20
320 YITU -004 2070 910 0.0013 0.0009 0.0009 0.0009 0.0009 1.009
270 281 117 140 147 161 167 135
321 YITU -005 2070 861 0.0023 0.0021 0.0020 0.0020 0.0020 1.019

Table 30: Rank-based accuracy for the FRVT 2018 mugshot sets. Columns 3 and 4 give template size and template generation duration. The subsequent columns give rank-based FNIR with T = 0, at which FPIR = 1. This is appropriate for investigational use, but not for higher-volume operations in which candidates from all searches would need review. The final column is a workload statistic; a small value indicates that an algorithm front-loads mates into the first 10 candidates. Throughout, blue superscripts indicate the rank of the algorithm for that column, and the best value is highlighted in yellow.


MISSES BELOW THRESHOLD, T    ENROL RECENT MUGSHOT, N = 1.6 M    ENROL APPLICATION PORTRAIT, N = 1.6 M
ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: VISA | ENROL: BORDER | ENROL: VISA
PROBE: MUGSHOT | PROBE: WEBCAM | PROBE: PROFILE | PROBE: BORDER | PROBE: BORDER 10+ YR | PROBE: KIOSK
# ALGORITHM    FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01
266 276 284 277 270 272 221 255 254 207 209 117 125 203 217
1 20 FACE -000 0.462 0.348 0.230 0.763 0.450 0.301 1.000 1.000 1.000 0.424 0.255 0.772 0.599 0.938 0.836
268 285 290 272 284 288 220 221 184 206
2 3 DIVI -003 0.482 0.400 0.282 0.685 0.626 0.497 0.605 0.445 0.821 0.717
238 255 260 239 260 266 196 202 159 187
3 3 DIVI -004 0.256 0.169 0.093 0.400 0.343 0.237 0.277 0.172 0.607 0.485
237 252 259 238 258 265 159 169 186 227 230 158 186
4 3 DIVI -005 0.255 0.166 0.093 0.395 0.339 0.234 0.998 0.996 0.990 0.864 0.846 0.597 0.484
236 254 262 242 259 267 197 203 162 188
5 3 DIVI -006 0.253 0.168 0.096 0.403 0.342 0.238 0.283 0.174 0.615 0.490
222 244 249 222 237 240 105 118 149 191 198 142 169
6 ACER -000 0.208 0.146 0.074 0.300 0.246 0.157 0.987 0.981 0.955 0.201 0.114 0.490 0.363
168 186 191 157 164 171 194 205 233 149 151 104 112 141 116
7 ACER -001 0.109 0.056 0.026 0.136 0.109 0.069 1.000 0.999 0.998 0.068 0.036 0.406 0.250 0.479 0.206
179 206 205 187 192 194 132 147 174 163 167 97 107 120 151
8 AIZE -001 0.127 0.077 0.034 0.187 0.143 0.087 0.995 0.994 0.983 0.101 0.052 0.364 0.216 0.387 0.289
229 241 243 211 221 233 171 185 215 185 196 180 181
9 ALCHERA -000 0.231 0.138 0.070 0.259 0.216 0.146 0.999 0.999 0.996 0.176 0.111 0.803 0.456
316 316 318 316 319 311 317 313 279 266
10 ALCHERA -001 1.000 0.999 0.999 1.000 1.000 1.000 1.000 1.000 1.000 1.000
291 292 293 271 281 283 212 214 238 226 227 181 203
11 ALCHERA -002 0.807 0.486 0.302 0.685 0.591 0.442 1.000 1.000 0.999 0.827 0.770 0.811 0.705
262 246 244 223 233 238 206 198 220 184 189 136 168
12 ALCHERA -003 0.450 0.155 0.070 0.304 0.239 0.152 1.000 0.999 0.997 0.172 0.097 0.464 0.362
273 284 283 268 276 277 133 139 111 208 207 111 120 153 178
13 ALCHERA -004 0.520 0.394 0.211 0.642 0.529 0.327 0.995 0.991 0.813 0.424 0.232 0.708 0.515 0.546 0.398
188 218 225 198 208 218 118 136 173 166 173 150 177
14 ALLGOVISION -000 0.138 0.088 0.045 0.202 0.166 0.106 0.993 0.990 0.982 0.117 0.066 0.526 0.396
197 224 231 216 225 232 123 125 132 178 184 143 176
15 ALLGOVISION -001 0.155 0.102 0.053 0.275 0.221 0.141 0.993 0.986 0.933 0.150 0.081 0.491 0.389
208 228 240 209 224 236 129 146 184 279 277 226 301
16 ANKE -000 0.184 0.117 0.063 0.256 0.220 0.151 0.995 0.994 0.990 1.000 1.000 1.000 1.000
206 232 241 210 223 237 135 153 195 268 275 235 306
17 ANKE -001 0.183 0.119 0.063 0.256 0.220 0.151 0.995 0.994 0.992 1.000 1.000 1.000 1.000
131 147 147 121 128 133 82 88 106 108 112 86 110
18 ANKE -002 0.062 0.032 0.014 0.103 0.079 0.050 0.975 0.948 0.795 0.034 0.018 0.245 0.190
205 236 252 234 252 259 102 122 168 209 215 151 180
19 AWARE -003 0.174 0.128 0.082 0.351 0.298 0.204 0.987 0.984 0.977 0.428 0.378 0.530 0.443
254 268 279 264 275 281 210 216 241 204 211 182 198
20 AWARE -004 0.355 0.269 0.175 0.619 0.509 0.375 1.000 1.000 0.999 0.397 0.279 0.816 0.631
279 279 254 229 239 242 204 222 240 195 200 197 205
21 AWARE -005 0.608 0.364 0.085 0.342 0.253 0.163 1.000 1.000 0.999 0.255 0.122 0.916 0.714
267 269 280 252 263 270 189 209 234 202 208 174 195
22 AWARE -006 0.475 0.276 0.175 0.466 0.398 0.283 1.000 0.999 0.999 0.368 0.254 0.749 0.623
295 304 310 291 303 305 160 175 212 231 232 212 223
23 AYONIX -000 0.846 0.811 0.724 0.956 0.939 0.892 0.998 0.998 0.995 0.954 0.891 0.982 0.959
297 306 308 286 298 301 202 204 217 236 236 208 220
24 AYONIX -001 0.875 0.824 0.701 0.946 0.920 0.845 1.000 0.999 0.996 0.999 0.998 0.969 0.926
298 305 309 287 299 300 201 207 216 228 228 209 219
25 AYONIX -002 0.876 0.824 0.702 0.946 0.920 0.845 1.000 0.999 0.996 0.915 0.821 0.969 0.926
157 201 236 167 184 219 89 99 139 165 190 124 173
26 CAMVI -003 0.094 0.071 0.058 0.152 0.132 0.108 0.979 0.970 0.940 0.114 0.100 0.402 0.377
166 202 234 206 186 208 191 203 224 162 183 178 189
27 CAMVI -004 0.107 0.072 0.054 0.240 0.136 0.100 1.000 0.999 0.998 0.100 0.081 0.787 0.507
189 222 251 249 215 227 197 213 231 179 197 220 229
28 CAMVI -005 0.139 0.099 0.076 0.451 0.179 0.132 1.000 1.000 0.998 0.156 0.112 0.999 0.983
36 43 43 30 30 31 34 22 37 30 34 29 33 40 33
29 CANON -001 0.012 0.005 0.002 0.031 0.023 0.015 0.633 0.365 0.217 0.008 0.004 0.068 0.034 0.139 0.092
24 36 37 26 27 19 19 26 39 54 36 34 43 71 44
30 CANON -002 0.010 0.005 0.002 0.027 0.020 0.013 0.487 0.407 0.253 0.013 0.004 0.075 0.046 0.188 0.106
97 76 70 90 71 70 224 230 250 65 58 51 55 194 190
31 CIB -000 0.044 0.012 0.005 0.077 0.045 0.025 1.000 1.000 1.000 0.017 0.008 0.141 0.068 0.894 0.521
40 45 41 39 35 36 174 104 20 31 26 23 23 98 19
32 CLEARVIEWAI -000 0.013 0.006 0.002 0.036 0.025 0.016 0.999 0.974 0.149 0.008 0.004 0.057 0.027 0.268 0.080
10 13 18 10 12 15 3 3 6 15 18 10 14 20 11
33 CLOUDWALK - HR -000 0.004 0.002 0.002 0.015 0.013 0.012 0.188 0.133 0.095 0.005 0.003 0.033 0.018 0.099 0.075
6 12 23 7 11 17 2 2 2 3 6 2 3 2 5
34 CLOUDWALK - MT-000 0.003 0.002 0.002 0.015 0.013 0.012 0.169 0.109 0.077 0.002 0.002 0.018 0.009 0.072 0.063
3 10 21 4 4 14 1 1 1 1 1 1 1 1 1
35 CLOUDWALK - MT-001 0.003 0.002 0.002 0.013 0.012 0.011 0.104 0.070 0.060 0.001 0.001 0.015 0.006 0.056 0.049
192 176 196 178 188 212 140 158 192
36 COGENT-000 0.143 0.053 0.029 0.175 0.140 0.100 0.996 0.995 0.991
193 177 197 179 189 211 139 159 190
37 COGENT-001 0.143 0.053 0.029 0.175 0.140 0.100 0.996 0.995 0.991
203 163 158 142 154 162 166 174 201
38 COGENT-002 0.159 0.044 0.017 0.124 0.098 0.063 0.998 0.998 0.994
220 168 152 140 148 159 167 177 210
39 COGENT-003 0.203 0.046 0.016 0.121 0.095 0.061 0.999 0.998 0.995
224 148 80 76 77 83 158 172 213 79 83 47 58 133 101
40 COGENT-004 0.209 0.033 0.006 0.067 0.051 0.031 0.998 0.997 0.995 0.022 0.012 0.126 0.072 0.456 0.178
113 60 61 55 60 64 137 132 49 49 52 37 42 195 112
41 COGENT-005 0.050 0.009 0.004 0.050 0.037 0.023 0.996 0.989 0.323 0.011 0.006 0.082 0.043 0.905 0.202
32 32 33 33 31 32 17 15 17 20 21 105 25 36 55
42 COGENT-006 0.010 0.004 0.002 0.033 0.023 0.015 0.383 0.238 0.145 0.006 0.003 0.422 0.028 0.130 0.120
227 250 261 247 253 257 136 140 157
43 COGNITEC -000 0.226 0.161 0.095 0.439 0.303 0.200 0.996 0.992 0.971
218 223 232 304 229 229 263 309 153
44 COGNITEC -001 0.192 0.102 0.053 0.997 0.230 0.135 1.000 1.000 0.965
175 179 186 301 214 215 283 229 150
45 COGNITEC -002 0.122 0.053 0.025 0.990 0.178 0.101 1.000 1.000 0.956
161 175 188 202 206 209 287 232 141
46 COGNITEC -003 0.099 0.053 0.025 0.222 0.162 0.100 1.000 1.000 0.946

Table 31: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are mostly intended to draw attention to the kind of behavior, not as models to be used for prediction.
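To make the construction of these columns concrete, here is a sketch consistent with the legend used throughout this report, not a restatement of its normative procedure. For each column, a threshold T is chosen from the non-mate (impostor) searches so that

    FPIR(N, T) = | { non-mate searches whose top candidate scores ≥ T } | / (number of non-mate searches)

equals the stated 0.0003, 0.001, or 0.01; FNIR is then the fraction of mated searches for which the enrolled mate is not returned at or above that threshold (we read the L in FNIR(N, T, L) as the candidate-list length, so a mate returned below threshold, or absent from the list entirely, counts as a miss). As a worked reading of the first column group (enrol mugshot, probe mugshot): the CLOUDWALK-MT-001 row has FNIR = 0.003 at FPIR = 0.0003 and 0.002 at FPIR = 0.01, i.e. about 0.3% of mated searches miss at the strictest threshold shown.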

MISSES BELOW THRESHOLD, T    ENROL RECENT MUGSHOT, N = 1.6 M    ENROL APPLICATION PORTRAIT, N = 1.6 M
ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: VISA | ENROL: BORDER | ENROL: VISA
PROBE: MUGSHOT | PROBE: WEBCAM | PROBE: PROFILE | PROBE: BORDER | PROBE: BORDER 10+ YR | PROBE: KIOSK
# ALGORITHM    FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01
124 146 148 145 152 151 131 133 127 148 152 94 106 100 126
47 COGNITEC -004 0.055 0.031 0.014 0.127 0.097 0.058 0.995 0.990 0.919 0.068 0.038 0.316 0.196 0.288 0.218
125 62 58 64 67 60 271 301 121 120 137 61 75 66 79
48 COGNITEC -005 0.055 0.010 0.004 0.058 0.041 0.022 1.000 1.000 0.878 0.041 0.028 0.157 0.092 0.179 0.145
73 55 52 70 63 57 295 283 235 96 92 67 67 168 123
49 COGNITEC -006 0.029 0.008 0.003 0.065 0.040 0.022 1.000 1.000 0.999 0.030 0.013 0.171 0.081 0.681 0.214
13 21 25 20 22 27 8 6 11 11 12 9 9 3 3
50 CUBOX -000 0.005 0.003 0.002 0.022 0.019 0.014 0.276 0.168 0.104 0.004 0.003 0.028 0.014 0.073 0.062
187 187 178 173 168 174 151 162 172 146 145 114 133
51 CYBERLINK -000 0.137 0.056 0.023 0.162 0.116 0.070 0.997 0.995 0.981 0.063 0.032 0.339 0.232
158 180 176 160 165 167 150 157 175 143 141 164 135
52 CYBERLINK -001 0.096 0.054 0.022 0.138 0.109 0.067 0.997 0.995 0.984 0.062 0.031 0.652 0.239
88 86 86 80 87 88 125 131 151 83 86 101 89
53 CYBERLINK -002 0.038 0.015 0.006 0.068 0.053 0.032 0.994 0.988 0.957 0.024 0.013 0.288 0.157
101 56 57 48 56 53 126 102 115 51 56 42 47 117 57
54 CYBERLINK -003 0.045 0.008 0.004 0.045 0.035 0.021 0.995 0.972 0.845 0.012 0.007 0.100 0.051 0.368 0.120
216 52 51 68 57 59 247 249 243 53 55 43 44 207 154
55 CYBERLINK -004 0.188 0.007 0.003 0.063 0.036 0.022 1.000 1.000 0.999 0.013 0.007 0.109 0.050 0.954 0.291
223 66 63 59 64 72 219 218 122 56 57 38 40 202 144
56 CYBERLINK -005 0.208 0.010 0.004 0.054 0.041 0.026 1.000 1.000 0.888 0.014 0.007 0.089 0.043 0.926 0.266
181 214 222 182 185 192
57 DAHUA -000 0.128 0.086 0.045 0.179 0.135 0.083
165 204 209 166 176 183 104 114 133
58 DAHUA -001 0.106 0.073 0.037 0.151 0.122 0.075 0.987 0.980 0.933
66 87 84 65 73 77 40 48 72 62 63 54 61
59 DAHUA -002 0.026 0.015 0.006 0.060 0.046 0.029 0.681 0.638 0.522 0.017 0.008 0.159 0.125
64 81 74 58 66 67 35 43 60 52 54 36 41 39 45
60 DAHUA -003 0.025 0.014 0.005 0.054 0.041 0.024 0.647 0.579 0.447 0.013 0.006 0.081 0.043 0.134 0.109
43 51 48 34 37 37 24 36 51 37 38 19 21 31 36
61 DAHUA -004 0.014 0.007 0.003 0.033 0.026 0.016 0.552 0.485 0.345 0.008 0.004 0.051 0.027 0.113 0.094
185 117 116 94 101 106 211 219 230 84 87 69 74 189 97
62 DAON -000 0.135 0.023 0.009 0.079 0.061 0.039 1.000 1.000 0.998 0.025 0.013 0.173 0.091 0.846 0.172
93 120 121 99 107 108 43 53 67 88 96 68 81 84 87
63 DECATUR -000 0.043 0.023 0.010 0.085 0.066 0.040 0.757 0.675 0.509 0.027 0.014 0.173 0.098 0.239 0.156
29 26 27 13 13 9 240 211 65 21 37 53 37
64 DEEPGLINT-001 0.010 0.003 0.002 0.018 0.014 0.010 1.000 1.000 0.503 0.006 0.004 0.159 0.097
143 167 174 148 157 154 108 123 162 153 157 111 139
65 DEEPSEA -001 0.073 0.046 0.022 0.129 0.101 0.059 0.988 0.985 0.973 0.077 0.041 0.326 0.251
275 291 296 275 286 292 225 223 192 213
66 DERMALOG -003 0.550 0.482 0.360 0.715 0.655 0.526 0.677 0.554 0.870 0.791
277 290 295 274 287 290 145 163 194 219 222 190 209
67 DERMALOG -004 0.554 0.480 0.358 0.711 0.657 0.526 0.997 0.995 0.991 0.603 0.458 0.856 0.751
217 217 217 194 199 205 142 134 144 198 210 161 182
68 DERMALOG -005 0.189 0.088 0.043 0.201 0.154 0.096 0.996 0.990 0.950 0.300 0.267 0.614 0.459
160 174 190 159 159 166 110 117 134 141 143 110 131
69 DERMALOG -006 0.098 0.052 0.026 0.137 0.105 0.067 0.989 0.981 0.933 0.059 0.031 0.318 0.230
214 215 215 193 197 202 141 135 143 161 166 156 158
70 DERMALOG -007 0.188 0.086 0.040 0.200 0.152 0.093 0.996 0.990 0.950 0.099 0.052 0.557 0.299
241 165 157 203 145 144 244 237 251 138 132 101 101 204 201
71 DERMALOG -008 0.268 0.045 0.017 0.231 0.094 0.054 1.000 1.000 1.000 0.057 0.025 0.382 0.158 0.940 0.678
92 110 113 101 108 111 215 234 248 98 103 128 133 187 127
72 DERMALOG -009 0.041 0.021 0.009 0.086 0.066 0.040 1.000 1.000 1.000 0.031 0.016 0.999 0.999 0.840 0.222
49 50 56 155 91 62 190 194 146 159 159 129 131 148 122
73 DERMALOG -010 0.019 0.007 0.004 0.134 0.055 0.023 1.000 0.999 0.952 0.089 0.041 1.000 0.971 0.522 0.211
280 297 305 262 280 293 175 151 140 223 225 118 128 185 215
74 DIGIDATA -000 0.620 0.610 0.598 0.588 0.577 0.560 0.999 0.994 0.942 0.646 0.643 0.789 0.722 0.824 0.816
118 144 142 117 127 127 55 49 63 115 127 110 110 75 86
75 DILUSENSE -000 0.053 0.030 0.012 0.100 0.078 0.047 0.852 0.655 0.488 0.039 0.022 0.664 0.242 0.203 0.154
272 281 288 266 278 282 144 154 183 217 217 179 200
76 EYEDEA -003 0.509 0.388 0.265 0.625 0.543 0.404 0.997 0.994 0.990 0.570 0.392 0.792 0.658
264 251 206 173 184 214
77 F 8-001 0.458 0.166 0.036 0.999 0.998 0.995
213 240 247 215 222 230 205 223 206 186 195 109 118 134 164
78 FINCORE -000 0.187 0.134 0.071 0.267 0.217 0.140 1.000 1.000 0.995 0.187 0.108 0.598 0.418 0.458 0.349
17 23 17 24 23 22 16 18 28 28 33 26 26 16 17
79 FIRSTCREDITKZ -001 0.007 0.003 0.002 0.025 0.019 0.013 0.379 0.291 0.177 0.007 0.004 0.061 0.028 0.097 0.079
233 111 107 83 94 99 81 91 70 79 85 88
80 FUJITSULAB -000 0.246 0.021 0.008 0.070 0.056 0.035 0.024 0.013 0.177 0.093 0.240 0.156
283 100 90 129 96 90 138 142 138 82 81 112 116 88 82
81 FUJITSULAB -001 0.655 0.018 0.007 0.112 0.058 0.033 0.996 0.992 0.940 0.024 0.011 0.739 0.310 0.247 0.146
261 280 292 261 279 286 128 156 198 211 216 186 214
82 GLORY-000 0.441 0.367 0.295 0.586 0.547 0.470 0.995 0.995 0.993 0.453 0.381 0.839 0.795
253 271 285 260 277 284 124 144 188 206 213 183 210
83 GLORY-001 0.355 0.305 0.236 0.582 0.537 0.448 0.994 0.993 0.991 0.408 0.336 0.819 0.753
289 286 286 263 271 275 232 246 260 212 212 292 204
84 GORILLA -001 0.747 0.406 0.246 0.590 0.453 0.314 1.000 1.000 1.000 0.468 0.299 1.000 0.710
240 258 266 231 246 248 229 248 199 194 201 225 185
85 GORILLA -002 0.266 0.188 0.106 0.342 0.268 0.170 1.000 1.000 0.993 0.250 0.137 1.000 0.466
287 273 277 270 268 268 270 302 256 205 204 288 192
86 GORILLA -003 0.694 0.318 0.157 0.684 0.434 0.247 1.000 1.000 1.000 0.407 0.213 1.000 0.562
184 220 218 195 205 213 78 90 124 172 178 130 160
87 GORILLA -004 0.135 0.089 0.043 0.202 0.160 0.101 0.972 0.959 0.903 0.135 0.072 0.438 0.309
154 191 192 181 191 196 46 55 76 158 156 108 128
88 GORILLA -005 0.086 0.058 0.026 0.179 0.142 0.088 0.770 0.700 0.553 0.088 0.040 0.315 0.223
105 135 129 135 139 140 30 41 53 89 89 65 78 78 85
89 GORILLA -006 0.046 0.027 0.011 0.118 0.089 0.053 0.602 0.531 0.369 0.028 0.013 0.166 0.093 0.218 0.154
102 133 126 118 126 122 33 42 54 85 82 84 84 64 74
90 GORILLA -007 0.046 0.027 0.010 0.101 0.077 0.045 0.626 0.534 0.369 0.026 0.012 0.264 0.108 0.178 0.138
95 121 108 127 133 130 23 31 45 95 80 96 73 62 68
91 GORILLA -008 0.044 0.024 0.009 0.111 0.083 0.048 0.541 0.463 0.295 0.030 0.011 0.319 0.090 0.178 0.132
96 108 111 97 104 105 154 160 145 105 120 74 83 73 93
92 GRIAULE -000 0.044 0.020 0.009 0.082 0.063 0.038 0.997 0.995 0.952 0.033 0.020 0.185 0.107 0.198 0.166

Table 32: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are mostly intended to draw attention to the kind of behavior, not as models to be used for prediction.

MISSES BELOW THRESHOLD, T    ENROL RECENT MUGSHOT, N = 1.6 M    ENROL APPLICATION PORTRAIT, N = 1.6 M
ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: VISA | ENROL: BORDER | ENROL: VISA
PROBE: MUGSHOT | PROBE: WEBCAM | PROBE: PROFILE | PROBE: BORDER | PROBE: BORDER 10+ YR | PROBE: KIOSK
# ALGORITHM    FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01
39 37 32 56 42 38 67 75 84 25 17 126 126 19 20
93 GRIAULE -001 0.013 0.005 0.002 0.051 0.028 0.016 0.928 0.865 0.625 0.007 0.003 0.995 0.610 0.099 0.081
202 225 235 190 201 217 91 97 129 175 182 132 166
94 HIK -003 0.159 0.103 0.057 0.190 0.158 0.105 0.980 0.969 0.925 0.142 0.080 0.445 0.359
199 221 233 185 198 214 98 105 142 173 181 129 165
95 HIK -004 0.156 0.099 0.054 0.182 0.153 0.101 0.983 0.976 0.947 0.137 0.077 0.434 0.353
163 160 162 112 125 131 196 208 226 147 149 152 141
96 HIK -005 0.102 0.044 0.019 0.098 0.077 0.048 1.000 0.999 0.998 0.068 0.036 0.541 0.258
191 169 165 126 136 138 237 241 245
97 HIK -006 0.142 0.047 0.020 0.111 0.086 0.052 1.000 1.000 0.999
23 33 38 42 49 52 7 12 18 26 32 21 24 24 22
98 HYPERVERGE -001 0.009 0.004 0.002 0.039 0.031 0.020 0.275 0.220 0.146 0.007 0.004 0.053 0.027 0.101 0.083
21 27 29 35 40 43 10 10 13 18 15 18 18 13 14
99 HYPERVERGE -002 0.008 0.004 0.002 0.034 0.027 0.018 0.278 0.210 0.131 0.006 0.003 0.048 0.023 0.093 0.077
82 107 112 69 78 82 96 95 112 72 69 93 62 48 60
100 HZAILU -000 0.035 0.020 0.009 0.064 0.051 0.031 0.983 0.967 0.813 0.020 0.010 0.316 0.077 0.153 0.120
47 57 66 243 216 69 163 128 43 190 123 263 132 167 167
101 HZAILU -001 0.016 0.009 0.004 0.414 0.183 0.024 0.998 0.986 0.282 0.196 0.021 1.000 0.997 0.679 0.360
276 170 169 308 207 187 316 168 170 176 197
102 IDEMIA -003 0.552 0.047 0.021 1.000 0.165 0.079 1.000 0.123 0.061 0.766 0.630
123 156 168 162 172 186 85 103 154 167 169 177 196
103 IDEMIA -004 0.055 0.037 0.021 0.144 0.118 0.079 0.976 0.973 0.968 0.123 0.061 0.766 0.630
138 162 189 184 196 216 90 109 161 169 176 193 207
104 IDEMIA -005 0.066 0.044 0.026 0.181 0.150 0.102 0.979 0.978 0.973 0.130 0.070 0.879 0.743
136 159 187 214 227 241 101 119 171 176 187 172 191
105 IDEMIA -006 0.065 0.043 0.025 0.266 0.226 0.161 0.984 0.982 0.980 0.144 0.090 0.733 0.531
83 99 101 88 90 91 299 279 305 132 129 72 85 254 227
106 IDEMIA -007 0.035 0.018 0.008 0.073 0.055 0.033 1.000 1.000 1.000 0.052 0.022 0.182 0.109 1.000 0.982
8 9 10 12 10 6 9 9 15 14 14 14 16 27 32
107 IDEMIA -008 0.004 0.002 0.001 0.016 0.013 0.009 0.276 0.204 0.136 0.005 0.003 0.036 0.019 0.106 0.092
7 3 4 3 3 3 4 4 7 5 5 8 8 6 6
108 IDEMIA -009 0.004 0.002 0.001 0.012 0.011 0.008 0.202 0.141 0.099 0.003 0.002 0.027 0.013 0.074 0.064
301 301 303 285 293 296 242 239 257
109 IMAGUS -002 0.908 0.749 0.564 0.944 0.816 0.645 1.000 1.000 1.000
300 303 306 290 297 299 245 231 252
110 IMAGUS -003 0.898 0.807 0.669 0.954 0.909 0.809 1.000 1.000 1.000
80 103 102 102 106 109 66 73 91 91 104 64 80 81 108
111 IMAGUS -005 0.034 0.018 0.008 0.088 0.066 0.040 0.926 0.838 0.647 0.029 0.016 0.161 0.094 0.231 0.189
89 106 104 106 112 117 93 82 82 90 98 63 76 92 104
112 IMAGUS -006 0.039 0.019 0.008 0.093 0.069 0.042 0.980 0.897 0.621 0.028 0.015 0.161 0.092 0.260 0.181
94 119 120 116 118 121 79 80 92 99 101 66 82 97 103
113 IMAGUS -007 0.044 0.023 0.010 0.100 0.073 0.045 0.973 0.893 0.651 0.031 0.016 0.169 0.098 0.265 0.181
310 311 302 292 290 271 199 165 97 215 177 130 122 147 137
114 IMAGUS -008 0.995 0.974 0.523 0.958 0.774 0.285 1.000 0.996 0.700 0.520 0.071 1.000 0.540 0.518 0.246
196 127 117 104 110 114 220 189 205 121 121 87 94
115 IMPERIAL -000 0.154 0.026 0.009 0.089 0.068 0.041 1.000 0.999 0.995 0.042 0.020 0.245 0.168
260 272 281 255 266 274 200 181 203
116 INCODE -000 0.423 0.310 0.199 0.486 0.420 0.304 1.000 0.998 0.994
249 261 268 233 249 255 223 253 249
117 INCODE -001 0.319 0.212 0.112 0.348 0.296 0.198 1.000 1.000 1.000
246 257 265 227 247 251 155 145 167
118 INCODE -002 0.285 0.184 0.100 0.333 0.269 0.176 0.998 0.993 0.976
247 253 253 237 243 244 209 206 218
119 INCODE -003 0.286 0.167 0.084 0.372 0.264 0.164 1.000 0.999 0.996
162 183 180 175 175 175 152 155 131 145 142 106 129
120 INCODE -004 0.099 0.054 0.023 0.167 0.120 0.070 0.997 0.995 0.929 0.063 0.031 0.313 0.226
53 70 68 61 69 73 32 39 55 64 66 55 59 51 50
121 INCODE -005 0.021 0.011 0.005 0.055 0.043 0.026 0.614 0.528 0.372 0.017 0.009 0.145 0.073 0.155 0.116
258 266 276 241 254 261 233 245 246
122 INNOVATRICS -002 0.379 0.234 0.139 0.403 0.310 0.209 1.000 1.000 0.999
248 262 272 235 250 258 208 217 229
123 INNOVATRICS -003 0.297 0.221 0.132 0.351 0.297 0.203 1.000 1.000 0.998
211 238 250 212 226 234 100 112 159
124 INNOVATRICS -004 0.184 0.132 0.074 0.262 0.222 0.149 0.984 0.980 0.973
127 149 149 131 140 139 60 74 98 128 128 89 105
125 INNOVATRICS -005 0.057 0.034 0.014 0.114 0.089 0.052 0.890 0.846 0.723 0.047 0.022 0.251 0.182
59 77 75 71 79 84 50 58 77 63 67 40 49 49 59
126 INNOVATRICS -007 0.024 0.013 0.005 0.065 0.051 0.032 0.806 0.743 0.567 0.017 0.009 0.093 0.053 0.154 0.120
116 122 128 110 124 132 94 65 66 136 148 82 94 90 119
127 INTELIGENSIA -000 0.051 0.024 0.011 0.097 0.077 0.049 0.982 0.786 0.507 0.053 0.036 0.235 0.145 0.255 0.208
271 270 278 251 264 273 217 221 244 200 206 114 124 169 193
128 INTELLIVISION -001 0.508 0.279 0.158 0.459 0.404 0.302 1.000 1.000 0.999 0.328 0.219 0.749 0.598 0.685 0.562
245 245 248 204 217 226 178 192 222 171 179 106 115 135 163
129 INTELLIVISION -002 0.282 0.154 0.072 0.236 0.196 0.127 0.999 0.999 0.997 0.134 0.073 0.437 0.297 0.460 0.348
22 16 11 21 18 18 236 8 16 9 90 39 8 7
130 INTEMA -000 0.009 0.002 0.001 0.022 0.017 0.012 1.000 0.100 0.005 0.002 0.288 0.042 0.081 0.067
311 314 316 306 308 310 207 215 225 235 235 221 232
131 INTSYSMSU -000 0.999 0.998 0.990 1.000 1.000 0.998 1.000 1.000 0.998 0.999 0.989 0.999 0.988
140 139 100 115 100 86 107 89 96 124 77 92 52 60 72
132 IREX -000 0.068 0.028 0.008 0.099 0.060 0.032 0.988 0.957 0.680 0.044 0.011 0.302 0.062 0.170 0.135
198 208 202 171 179 189 164 173 197
133 ISYSTEMS -002 0.155 0.078 0.032 0.161 0.126 0.080 0.998 0.998 0.993
221 192 183 156 163 169 218 220 223
134 ISYSTEMS -003 0.204 0.059 0.024 0.135 0.107 0.068 1.000 1.000 0.997
71 89 88 86 93 97 22 33 50 69 68 50 60 52 58
135 KAKAO -000 0.028 0.015 0.006 0.071 0.056 0.034 0.539 0.468 0.327 0.019 0.010 0.141 0.075 0.158 0.120
16 18 20 19 20 21 5 5 9 10 8 17 11 5 4
136 KAKAO -001 0.006 0.003 0.002 0.022 0.017 0.013 0.226 0.159 0.101 0.004 0.002 0.042 0.016 0.074 0.063
91 118 146 109 116 143 111 127 163 137 160 104 143
137 KEDACOM -001 0.041 0.023 0.013 0.096 0.072 0.054 0.989 0.986 0.973 0.055 0.043 0.305 0.264
204 207
138 KNERON -000 0.033 0.099

Table 33: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are mostly intended to draw attention to the kind of behavior, not as models to be used for prediction.

MISSES BELOW THRESHOLD, T    ENROL RECENT MUGSHOT, N = 1.6 M    ENROL APPLICATION PORTRAIT, N = 1.6 M
ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: VISA | ENROL: BORDER | ENROL: VISA
PROBE: MUGSHOT | PROBE: WEBCAM | PROBE: PROFILE | PROBE: BORDER | PROBE: BORDER 10+ YR | PROBE: KIOSK
# ALGORITHM    FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01
229
139 KNERON -001 0.052
132 145 143 152 149 145 266 126 124 87 99 260 145
140 LINE -000 0.062 0.031 0.012 0.132 0.095 0.054 1.000 0.046 0.021 0.278 0.151 1.000 0.268
74 38 31 72 38 35 236 242 262 44 35 31 34 286 218
141 LINE -001 0.030 0.005 0.002 0.066 0.027 0.015 1.000 1.000 1.000 0.009 0.004 0.072 0.034 1.000 0.858
27 28 26 256 181 28 117 116 79 118 22 131 127 170 31
142 LINECLOVA -002 0.010 0.004 0.002 0.508 0.130 0.014 0.992 0.981 0.577 0.040 0.004 1.000 0.690 0.700 0.091
137 161 185 151 167 190 157 171 115 159
143 LOOKMAN -003 0.066 0.044 0.025 0.131 0.112 0.082 0.084 0.061 0.355 0.304
144 164 181 141 161 181 88 106 165
144 LOOKMAN -004 0.074 0.045 0.024 0.123 0.105 0.075 0.979 0.977 0.974
112 143 155 119 135 163 92 108 160 144 162 105 147
145 LOOKMAN -005 0.050 0.030 0.017 0.102 0.086 0.063 0.980 0.978 0.973 0.062 0.047 0.308 0.273
139 67 59 66 65 58 311 268 242 92 93 59 68 224 83
146 MANTRA -000 0.066 0.010 0.004 0.063 0.041 0.022 1.000 1.000 0.999 0.029 0.014 0.152 0.081 1.000 0.151
108 140 144 253 230 146 53 62 87 177 125 127 130 155 136
147 MAXVISION -000 0.048 0.028 0.013 0.468 0.237 0.054 0.827 0.767 0.631 0.149 0.022 0.997 0.872 0.557 0.245
25 31 30 47 34 34 11 11 14 24 16 122 119 23 15
148 MAXVISION -001 0.010 0.004 0.002 0.044 0.025 0.015 0.282 0.219 0.136 0.007 0.003 0.951 0.485 0.100 0.078
225 203 208 137 153 156
149 MEGVII -001 0.210 0.072 0.037 0.119 0.097 0.061
239 207 210 138 151 153 172 183 120
150 MEGVII -002 0.258 0.077 0.037 0.120 0.096 0.059 0.999 0.998 0.872
304 309 313 300 307 308 234 234 216 226
151 MICROFOCUS -003 0.958 0.931 0.866 0.988 0.979 0.948 0.982 0.945 0.991 0.977
313 315 319 299 305 307 232 233 214 225
152 MICROFOCUS -004 0.999 0.999 0.999 0.984 0.975 0.940 0.974 0.935 0.989 0.976
299 307 311 289 301 303 230 231 213 224
153 MICROFOCUS -005 0.883 0.835 0.736 0.951 0.928 0.865 0.935 0.848 0.985 0.965
308 312 314 288 300 302 229 229 210 221
154 MICROFOCUS -006 0.983 0.978 0.963 0.950 0.923 0.858 0.923 0.843 0.971 0.939
110 137 136 132 143 149 111 118 83 99
155 MICROSOFT-003 0.049 0.028 0.012 0.117 0.091 0.056 0.036 0.019 0.233 0.176
104 128 127 128 137 142 106 113 79 96
156 MICROSOFT-004 0.046 0.026 0.011 0.111 0.087 0.053 0.033 0.018 0.222 0.170
106 125 124 105 114 115 184 44 52 86 90 67 71
157 MICROSOFT-005 0.047 0.026 0.010 0.090 0.070 0.041 0.999 0.587 0.354 0.027 0.013 0.180 0.134
63 72 82 52 58 68 18 24 42 102 99 63 73
158 MICROSOFT-006 0.025 0.012 0.006 0.048 0.037 0.024 0.452 0.386 0.281 0.032 0.015 0.178 0.138
314 295 267 225 236 239 239 228 178 182 186 113 117 121 149
159 MUKH -002 0.999 0.594 0.110 0.326 0.242 0.153 1.000 1.000 0.987 0.170 0.089 0.741 0.382 0.389 0.286
171 210 226 176 190 200 97 111 156 140 174
160 NEC -000 0.113 0.079 0.047 0.171 0.140 0.093 0.983 0.979 0.969 0.474 0.377
195 227 239 205 219 228 113 126 158 170 185 138 175
161 NEC -001 0.148 0.106 0.060 0.238 0.197 0.133 0.991 0.986 0.972 0.133 0.082 0.468 0.378
48 19 15 29 26 24 193 202 208 33 49 166 155
162 NEC -002 0.018 0.003 0.002 0.029 0.020 0.013 1.000 0.999 0.995 0.008 0.005 0.676 0.292
12 15 22 18 19 23 62 70 86 36 51 15 17 165 142
163 NEC -003 0.005 0.002 0.002 0.021 0.017 0.013 0.902 0.824 0.628 0.008 0.006 0.036 0.023 0.668 0.261
2 6 14 9 9 11 37 47 78 12 23 4 6 22 26
164 NEC -004 0.003 0.002 0.002 0.015 0.013 0.010 0.654 0.622 0.575 0.004 0.004 0.019 0.012 0.100 0.088
19 4 9 5 5 7 61 52 29 7 10 3 4 18 25
165 NEC -005 0.007 0.002 0.001 0.014 0.012 0.009 0.901 0.673 0.177 0.003 0.002 0.019 0.011 0.099 0.087
26 11 13 23 21 20 56 32 12 8 13 7 7 14 21
166 NEC -006 0.010 0.002 0.002 0.024 0.018 0.013 0.857 0.463 0.122 0.004 0.003 0.026 0.013 0.094 0.081
312 299 264 278 245 243 286 289 300
167 NEUROTECHNOLOGY-003 0.999 0.636 0.099 0.773 0.266 0.164 1.000 1.000 1.000
173 197 194 163 169 177 143 150 185
168 NEUROTECHNOLOGY-004 0.120 0.063 0.028 0.146 0.117 0.073 0.996 0.994 0.990
172 184 175 208 182 180 170 176 181
169 NEUROTECHNOLOGY-005 0.117 0.054 0.022 0.252 0.130 0.074 0.999 0.998 0.989
309 267 270 312 265 260
170 NEUROTECHNOLOGY-006 0.987 0.249 0.121 1.000 0.418 0.206
235 196 172 303 211 168 241 227 221 201 150 252 233
171 NEUROTECHNOLOGY-007 0.252 0.062 0.021 0.996 0.173 0.068 1.000 1.000 0.997 0.339 0.036 1.000 0.989
290 178 140 125 130 126 231 247 261 110 109 91 97 74 84
172 NEUROTECHNOLOGY-008 0.797 0.053 0.012 0.110 0.080 0.047 1.000 1.000 1.000 0.035 0.017 0.293 0.149 0.203 0.152
67 90 81 74 83 85 38 45 59 71 71 60 70 57 66
173 NEUROTECHNOLOGY-009 0.027 0.015 0.006 0.066 0.052 0.032 0.661 0.588 0.436 0.020 0.010 0.153 0.082 0.165 0.129
251 65 54 51 61 65 15 17 26 47 43 35 38 34 38
174 NEUROTECHNOLOGY-010 0.346 0.010 0.003 0.047 0.037 0.023 0.377 0.277 0.170 0.010 0.005 0.075 0.039 0.126 0.097
156 49 40 49 53 47 213 91 19 40 27 25 27 198 27
175 NEUROTECHNOLOGY-012 0.092 0.007 0.002 0.045 0.032 0.019 1.000 0.959 0.149 0.008 0.004 0.061 0.028 0.916 0.088
274 288 291 257 272 278 181 195 228
176 NEWLAND -002 0.523 0.438 0.294 0.535 0.466 0.335 0.999 0.999 0.998
317 317 317 314 321 314 234 244 264
177 NOBLIS -001 1.000 1.000 0.991 1.000 1.000 1.000 1.000 1.000 1.000
315 313 300 317 316 315 227 250 259
178 NOBLIS -002 1.000 0.997 0.488 1.000 1.000 1.000 1.000 1.000 1.000
78 94 98 89 99 103 39 46 62 76 79 58 71 61 75
179 NOTIONTAG -000 0.032 0.017 0.007 0.076 0.059 0.036 0.671 0.611 0.467 0.021 0.011 0.150 0.084 0.176 0.140
148 182 195 164 170 182 58 72 105
180 NTECHLAB -003 0.080 0.054 0.028 0.148 0.118 0.075 0.873 0.837 0.752
134 157 170 150 160 165 57 71 103 135 140 94 125
181 NTECHLAB -004 0.063 0.041 0.021 0.131 0.105 0.065 0.868 0.833 0.746 0.053 0.030 0.263 0.214
133 158 171 149 158 164 51 63 94 151 154 102 130
182 NTECHLAB -005 0.062 0.042 0.021 0.130 0.102 0.063 0.816 0.771 0.661 0.073 0.039 0.294 0.227
126 152 160 139 146 152 49 61 89 139 144 93 118
183 NTECHLAB -006 0.056 0.037 0.018 0.121 0.094 0.059 0.802 0.754 0.635 0.057 0.032 0.260 0.207
90 124 137 98 109 113 48 60 90 103 110 80 98
184 NTECHLAB -007 0.040 0.026 0.012 0.085 0.067 0.041 0.796 0.750 0.642 0.032 0.017 0.223 0.176

Table 34: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are mostly intended to draw attention to the kind of behavior, not as models to be used for prediction.

MISSES BELOW THRESHOLD, T    ENROL RECENT MUGSHOT, N = 1.6 M    ENROL APPLICATION PORTRAIT, N = 1.6 M
ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: VISA | ENROL: BORDER | ENROL: VISA
PROBE: MUGSHOT | PROBE: WEBCAM | PROBE: PROFILE | PROBE: BORDER | PROBE: BORDER 10+ YR | PROBE: KIOSK
# ALGORITHM    FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01
61 83 91 63 72 78 29 40 58 107 114 68 76
185 NTECHLAB -008 0.024 0.014 0.007 0.057 0.045 0.029 0.601 0.529 0.391 0.033 0.018 0.183 0.140
28 40 45 28 29 29 21 28 46 58 61 44 51 43 47
186 NTECHLAB -009 0.010 0.005 0.003 0.028 0.022 0.014 0.522 0.430 0.311 0.015 0.008 0.109 0.061 0.142 0.114
14 17 12 15 16 13 14 16 25 22 31 24 29 17 13
187 NTECHLAB -010 0.005 0.003 0.002 0.018 0.015 0.011 0.334 0.252 0.169 0.007 0.004 0.059 0.031 0.098 0.077
15 22 16 14 15 12 12 13 22 43 40 33 37 11 10
188 NTECHLAB -011 0.006 0.003 0.002 0.018 0.015 0.010 0.291 0.228 0.150 0.009 0.004 0.074 0.038 0.091 0.075
42 46 44 44 47 44 81 21 27 46 42 49 32 26 23
189 PANGIAM -000 0.014 0.006 0.003 0.039 0.030 0.018 0.974 0.318 0.175 0.009 0.005 0.136 0.033 0.105 0.083
55 71 103 41 46 51 36 23 30 42 30 120 66 42 24
190 PANGIAM -001 0.023 0.011 0.008 0.039 0.030 0.020 0.650 0.383 0.180 0.009 0.004 0.860 0.081 0.141 0.085
244 219 224 248 209 210 230 197 219 213 220 201 212
191 PARAVISION -000 0.278 0.089 0.045 0.447 0.170 0.100 1.000 0.999 0.997 0.470 0.443 0.926 0.779
190 171 167 199 180 179 228 187 202 210 219 173 194
192 PARAVISION -001 0.140 0.049 0.020 0.207 0.128 0.074 1.000 0.999 0.994 0.444 0.428 0.739 0.573
153 172 177 168 173 184 116 120 104 154 161 144 146
193 PARAVISION -002 0.085 0.050 0.022 0.152 0.119 0.076 0.992 0.983 0.748 0.080 0.043 0.497 0.268
135 150 151 144 150 155 149 152 99 140 147 103 132
194 PARAVISION -003 0.063 0.035 0.016 0.124 0.096 0.060 0.997 0.994 0.733 0.058 0.034 0.296 0.232
62 69 67 54 62 66 225 251 109 66 75 196 121
195 PARAVISION -004 0.025 0.010 0.004 0.049 0.038 0.024 1.000 1.000 0.797 0.018 0.011 0.908 0.211
45 30 36 31 32 39 147 113 31 48 62 37 56
196 PARAVISION -005 0.014 0.004 0.002 0.031 0.024 0.016 0.997 0.980 0.181 0.011 0.008 0.132 0.120
107 29 24 258 33 33 243 238 267 41 53 45 20 242 315
197 PARAVISION -007 0.048 0.004 0.002 0.560 0.025 0.015 1.000 1.000 1.000 0.009 0.006 0.113 0.024 1.000 1.000
20 20 8 25 24 16 47 57 75 6 4 11 10 4 2
198 PARAVISION -009 0.007 0.003 0.001 0.026 0.019 0.012 0.778 0.735 0.550 0.003 0.002 0.033 0.015 0.073 0.061
284 226 198 295 262 191 252 255 218 164 320 235
199 PIXELALL -002 0.664 0.105 0.030 0.974 0.388 0.083 1.000 1.000 0.602 0.047 1.000 1.000
109 114 114 120 117 118 212 227 114 119 154 140
200 PIXELALL -003 0.049 0.022 0.009 0.102 0.073 0.043 1.000 0.998 0.037 0.020 0.554 0.255
174 102 95 279 129 104 233 237 130 100 217 222
201 PIXELALL -004 0.120 0.018 0.007 0.783 0.079 0.037 1.000 0.999 0.051 0.015 0.994 0.942
147 74 69 250 76 75 240 247 87 106 76 56 223 228
202 PIXELALL -005 0.079 0.012 0.005 0.456 0.050 0.027 1.000 0.999 0.027 0.017 0.203 0.071 1.000 0.983
128 151 156 174 178 176 75 85 118 127 126 78 89 82 102
203 PTAKURATSATU -000 0.057 0.037 0.017 0.165 0.124 0.071 0.947 0.924 0.868 0.046 0.022 0.206 0.120 0.232 0.179
307 237 230 305 232 222 248 257 265 187 175 108 114 219 231
204 QNAP -000 0.972 0.129 0.052 0.998 0.238 0.117 1.000 1.000 1.000 0.191 0.068 0.539 0.263 0.998 0.985
152 181 184 180 187 193 71 86 119 155 158 99 109 113 138
205 QNAP -001 0.083 0.054 0.024 0.176 0.137 0.085 0.943 0.928 0.870 0.081 0.041 0.368 0.227 0.331 0.248
100 129 145 158 162 170 52 64 83 134 133 88 103 99 124
206 QNAP -002 0.045 0.026 0.013 0.136 0.106 0.068 0.820 0.772 0.622 0.052 0.025 0.281 0.171 0.272 0.214
85 105 109 297 295 150 214 141 56 214 174 160 134 191 172
207 QNAP -003 0.036 0.019 0.009 0.980 0.835 0.057 1.000 0.992 0.372 0.502 0.066 1.000 1.000 0.865 0.373
288 300 301
208 QUANTASOFT-001 0.713 0.639 0.493
209 230 245 224 241 254
209 RANKONE -002 0.184 0.118 0.071 0.308 0.261 0.190
210 231 246 221 240 252
210 RANKONE -003 0.184 0.118 0.071 0.300 0.255 0.187
234 259 271 254 267 276
211 RANKONE -004 0.250 0.193 0.124 0.482 0.426 0.324
159 193 203 200 212 223 182 178 204
212 RANKONE -005 0.096 0.059 0.033 0.212 0.173 0.119 0.999 0.998 0.994
130 153 163 103 107 135
213 RANKONE -006 0.061 0.037 0.020 0.987 0.977 0.937
79 116 133 136 147 157 84 94 128
214 RANKONE -007 0.034 0.022 0.011 0.118 0.095 0.061 0.975 0.967 0.924
75 97 106 111 121 120 95 98 117 142 139 112 117
215 RANKONE -009 0.031 0.018 0.008 0.098 0.076 0.045 0.983 0.969 0.859 0.062 0.029 0.328 0.206
56 80 96 91 97 102 63 68 93 133 136 79 88 91 111
216 RANKONE -010 0.023 0.014 0.007 0.077 0.058 0.036 0.905 0.802 0.652 0.052 0.027 0.208 0.119 0.259 0.194
169 58 64 95 74 79 113 111 71 77 211 184
217 RANKONE -011 0.109 0.009 0.004 0.079 0.048 0.029 0.037 0.017 0.182 0.092 0.977 0.465
52 54 60 87 88 80 93 95 54 57 137 65
218 RANKONE -012 0.020 0.008 0.004 0.072 0.053 0.030 0.029 0.014 0.144 0.072 0.465 0.128
31 34 34 50 55 50 161 168 36 67 59 52 46 44 39
219 RANKONE -013 0.010 0.005 0.002 0.046 0.034 0.020 0.998 0.996 0.214 0.018 0.008 0.141 0.050 0.142 0.097
256 264 274 245 256 263
220 REALNETWORKS -000 0.374 0.234 0.138 0.433 0.319 0.209
257 265 275 246 257 262
221 REALNETWORKS -001 0.374 0.234 0.138 0.433 0.319 0.209
255 263 273 244 255 264
222 REALNETWORKS -002 0.370 0.231 0.137 0.416 0.315 0.209
242 249 256 230 244 249 179 182 177 181 191 145 170
223 REALNETWORKS -003 0.273 0.159 0.090 0.342 0.266 0.172 0.999 0.998 0.987 0.164 0.103 0.500 0.364
232 248 255 236 242 246 195 199 196 183 192 160 171
224 REALNETWORKS -004 0.242 0.158 0.090 0.353 0.263 0.169 1.000 0.999 0.992 0.170 0.103 0.613 0.370
117 136 139 108 119 125 99 100 123 112 107 80 91 77 92
225 REALNETWORKS -005 0.052 0.028 0.012 0.094 0.074 0.047 0.984 0.971 0.896 0.037 0.017 0.223 0.123 0.215 0.165
65 84 78 81 85 89 119 115 114 59 64 46 53 50 49
226 REALNETWORKS -006 0.025 0.015 0.006 0.068 0.053 0.032 0.993 0.980 0.838 0.016 0.008 0.120 0.063 0.154 0.116
51 63 62 62 70 74 114 110 116 50 48 107 54 41 41
227 REALNETWORKS -007 0.019 0.010 0.004 0.057 0.043 0.027 0.992 0.979 0.855 0.012 0.005 0.463 0.063 0.140 0.100
38 47 49 40 45 45 109 96 41 38 41 30 35 35 46
228 REALNETWORKS -008 0.012 0.006 0.003 0.037 0.029 0.018 0.988 0.968 0.271 0.008 0.004 0.068 0.035 0.129 0.110
178 185 179 177 174 173 186 196 207 150 146 171 161
229 REMARKAI -000 0.125 0.055 0.023 0.173 0.120 0.070 0.999 0.999 0.995 0.069 0.033 0.717 0.315
219 235 238 213 220 225
230 REMARKAI -000 0.197 0.128 0.059 0.263 0.203 0.123

Table 35: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are mostly intended to draw attention to the kind of behavior, not as models to be used for prediction.

MISSES BELOW THRESHOLD, T    ENROL RECENT MUGSHOT, N = 1.6 M    ENROL APPLICATION PORTRAIT, N = 1.6 M
ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: MUGSHOT | ENROL: VISA | ENROL: BORDER | ENROL: VISA
PROBE: MUGSHOT | PROBE: WEBCAM | PROBE: PROFILE | PROBE: BORDER | PROBE: BORDER 10+ YR | PROBE: KIOSK
# ALGORITHM    FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.0003 FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01 | FPIR=0.001 FPIR=0.01
215 234 237 207 218 224 122 138 170
231 REMARKAI -002 0.188 0.124 0.059 0.248 0.196 0.122 0.993 0.991 0.980
57 73 73 189 98 95 74 81 102 77 88 73 72 58 67
232 RENDIP -000 0.023 0.012 0.005 0.189 0.059 0.034 0.945 0.894 0.744 0.022 0.013 0.185 0.089 0.167 0.130
58 75 77 60 68 71 42 54 74 75 76 41 48 45 52
233 REVEALMEDIA -000 0.024 0.012 0.006 0.054 0.042 0.025 0.755 0.680 0.539 0.021 0.011 0.093 0.051 0.143 0.118
186 138 131 147 134 129 249 256 80 129 115 314 90 276 199
234 S 1-000 0.137 0.028 0.011 0.129 0.085 0.048 1.000 1.000 0.596 0.047 0.018 1.000 0.123 1.000 0.632
122 91 93 73 82 94 115 124 147 68 70 48 61 46 53
235 S 1-001 0.054 0.016 0.007 0.066 0.052 0.033 0.992 0.985 0.952 0.019 0.010 0.136 0.075 0.148 0.119
129 44 42 100 48 42 65 8 5 27 19 119 100 188 78
236 S 1-002 0.060 0.006 0.002 0.085 0.031 0.018 0.924 0.196 0.095 0.007 0.003 0.792 0.151 0.841 0.144
114 59 55 57 59 56 238 225 182 57 50 102 36 271 120
237 S 1-003 0.050 0.009 0.003 0.052 0.037 0.022 1.000 1.000 0.989 0.014 0.006 0.396 0.037 1.000 0.209
164 200 199 220 235 235 69 79 110 192 199 123 157
238 SCANOVATE -000 0.103 0.067 0.030 0.296 0.240 0.150 0.931 0.893 0.803 0.215 0.118 0.400 0.299
180 211 211 219 228 231 70 84 113 188 193 126 153
239 SCANOVATE -001 0.128 0.081 0.037 0.281 0.227 0.140 0.935 0.911 0.834 0.192 0.103 0.404 0.290
84 112 115 92 103 110 296 282 179
240 SENSETIME -000 0.036 0.021 0.009 0.078 0.063 0.040 1.000 1.000 0.988
86 115 119 96 105 116
241 SENSETIME -001 0.036 0.022 0.010 0.080 0.064 0.041
87 85 150 143 43 63 146 148 169 101 108 149 91
242 SENSETIME -002 0.037 0.015 0.014 0.124 0.028 0.023 0.997 0.994 0.979 0.032 0.017 0.523 0.160
9 8 7 6 6 5 31 34 47 34 47 38 48
243 SENSETIME -003 0.004 0.002 0.001 0.014 0.012 0.009 0.607 0.477 0.311 0.008 0.005 0.133 0.115
5 5 6 8 8 10 13 14 21 19 25 30 40
244 SENSETIME -004 0.003 0.002 0.001 0.015 0.013 0.010 0.301 0.229 0.149 0.006 0.004 0.113 0.100
33 14 5 16 14 8 6 7 10 23 28 20 19 25 34
245 SENSETIME -005 0.011 0.002 0.001 0.018 0.014 0.010 0.259 0.173 0.103 0.007 0.004 0.051 0.023 0.104 0.093
11 7 3 11 7 4 168 180 95 9 7 12 12 12 18
246 SENSETIME -006 0.005 0.002 0.001 0.016 0.012 0.009 0.999 0.998 0.680 0.004 0.002 0.034 0.016 0.093 0.079
4 2 2 2 2 2 198 201 73 4 3 6 5 9 9
247 SENSETIME -007 0.003 0.001 0.001 0.012 0.009 0.007 1.000 0.999 0.538 0.003 0.001 0.024 0.011 0.085 0.074
1 1 1 1 1 1 112 25 3 2 2 5 2 7 8
248 SENSETIME -008 0.002 0.001 0.001 0.011 0.009 0.007 0.990 0.405 0.086 0.002 0.001 0.021 0.009 0.080 0.074
270 289 294 269 282 287
249 SHAMAN -003 0.506 0.451 0.347 0.650 0.597 0.472
285 298 299 280 289 295
250 SHAMAN -004 0.679 0.615 0.488 0.812 0.754 0.639
212 242 258 217 231 245 87 101 152
251 SHAMAN -006 0.185 0.141 0.092 0.278 0.237 0.168 0.978 0.972 0.960
207 243 257 218 234 247
252 SHAMAN -007 0.183 0.141 0.092 0.280 0.240 0.169
183 95 92 267 261 279 97 94
253 SIAT-001 0.132 0.018 0.007 0.641 0.365 0.348 0.031 0.014
259 113 97 284 273 285 203 214 200 95
254 SIAT-002 0.417 0.022 0.007 0.942 0.478 0.460 0.372 0.356 0.923 0.169
306 310 315 296 306 309
255 SMILART-004 0.970 0.968 0.965 0.977 0.976 0.973
256 SMILART-005
228 239 221 228 238 220 77 66 81 117 117 95 98 128 109
257 SQISOFT-001 0.226 0.132 0.044 0.340 0.252 0.111 0.956 0.797 0.608 0.040 0.019 0.317 0.150 0.420 0.189
145 142 83 306 83 40 222 180 206 179
258 SQISOFT-002 0.074 0.029 0.006 0.908 0.904 0.266 0.621 0.074 0.953 0.435
250 194 173 281 269 158 216 224 239 216 155 124 105 297 234
259 STAQU -000 0.334 0.062 0.022 0.848 0.443 0.061 1.000 1.000 0.999 0.535 0.039 0.961 0.183 1.000 0.999
282 294 298 273 285 289
260 SYNESIS -003 0.648 0.582 0.443 0.708 0.646 0.524
170 198 200 169 177 185 80 92 125 152 153 107 134
261 SYNESIS -003 0.111 0.065 0.032 0.155 0.123 0.078 0.973 0.960 0.911 0.075 0.039 0.314 0.235
111 123 135 103 115 119 130 121 107 104 102 76 90
262 SYNESIS -005 0.050 0.025 0.011 0.088 0.072 0.043 0.995 0.984 0.795 0.032 0.016 0.214 0.158
68 92 130 79 86 96 26 37 61 73 72 115 104 55 62
263 T 4 ISB -000 0.027 0.016 0.011 0.068 0.053 0.034 0.566 0.510 0.463 0.021 0.010 0.759 0.177 0.161 0.125
292 188 159 302 302 148 272 258 253 193 138 218 216
264 TECH 5-001 0.807 0.057 0.018 0.994 0.935 0.055 1.000 1.000 1.000 0.244 0.028 0.994 0.817
121 132 138 107 113 112 59 69 85 116 116 77 86 131 106
265 TECH 5-002 0.053 0.027 0.012 0.094 0.070 0.040 0.874 0.805 0.627 0.039 0.019 0.205 0.111 0.440 0.182
231 256 263 232 251 256
266 TEVIAN -003 0.239 0.177 0.096 0.346 0.298 0.198
204 229 242 201 213 221
267 TEVIAN -004 0.170 0.117 0.063 0.216 0.176 0.115
182 216 223 183 193 197 106 93 108
268 TEVIAN -005 0.129 0.087 0.045 0.180 0.144 0.089 0.988 0.962 0.796
60 64 72 46 51 55 25 27 44 60 65 39 45 205 51
269 TEVIAN -006 0.024 0.010 0.005 0.041 0.032 0.021 0.562 0.425 0.291 0.016 0.009 0.093 0.050 0.951 0.117
34 42 46 27 28 30 20 20 32 45 44 28 31 32 43
270 TEVIAN -007 0.011 0.005 0.003 0.028 0.022 0.015 0.504 0.301 0.183 0.009 0.005 0.065 0.033 0.122 0.102
265 283 287 259 274 280
271 TIGER -000 0.462 0.390 0.261 0.565 0.500 0.366
200 213 214 196 203 204 188 191 166
272 TIGER -002 0.158 0.086 0.039 0.202 0.158 0.095 0.999 0.999 0.975
201 212 213 197 202 203
273 TIGER -003 0.158 0.086 0.039 0.202 0.158 0.095
167 205 212 161 166 172
274 TONGYITRANS -000 0.107 0.074 0.038 0.141 0.112 0.069
177 199 201 146 156 161
275 TONGYITRANS -001 0.124 0.066 0.032 0.128 0.101 0.062
176 195 193 165 171 178 148 161 180
276 TOSHIBA -000 0.123 0.062 0.027 0.150 0.118 0.074 0.997 0.995 0.988

Table 36: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are intended mainly to draw attention to the kind of behavior, not to serve as a model for prediction.
MISSES BELOW THRESHOLD, T. Gallery blocks: enrol recent mugshot, N = 1.6 M (the mugshot-enrolled column groups) and enrol application portrait, N = 1.6 M (the visa- and border-enrolled column groups).
Column groups: ENROL:MUGSHOT / PROBE:MUGSHOT (FPIR = 0.0003, 0.001, 0.01); ENROL:MUGSHOT / PROBE:WEBCAM (FPIR = 0.0003, 0.001, 0.01); ENROL:MUGSHOT / PROBE:PROFILE (FPIR = 0.0003, 0.001, 0.01); ENROL:VISA / PROBE:BORDER (FPIR = 0.001, 0.01); ENROL:BORDER / PROBE:BORDER 10+ YR (FPIR = 0.001, 0.01); ENROL:VISA / PROBE:KIOSK (FPIR = 0.001, 0.01).
Each # ALGORITHM row gives FNIR in these columns; the line of integers above each row gives the algorithm's rank in each column.
226 190 161 154 144 147
277 TOSHIBA -001 0.225 0.058 0.019 0.133 0.092 0.054
103 101 105 93 102 107 134 76 64 94 105 75 87 70 80
278 TRUEFACE -000 0.046 0.018 0.008 0.079 0.062 0.039 0.995 0.882 0.499 0.030 0.016 0.194 0.111 0.188 0.145
296 274 132 293 296 250 187 143 35 233 226 147 135 222 230
279 TURINGTECHVIP -001 0.865 0.345 0.011 0.967 0.850 0.173 0.999 0.993 0.205 0.978 0.754 1.000 1.000 0.999 0.984
303 308 312 294 304 304
280 VD -000 0.950 0.917 0.827 0.968 0.946 0.871
243 260 269 226 248 253
281 VD -001 0.278 0.201 0.116 0.331 0.281 0.188
194 209 207 188 195 198 157 164 176 160 165 98 108 118 148
282 VD -002 0.144 0.079 0.036 0.188 0.148 0.092 0.998 0.996 0.987 0.095 0.048 0.367 0.220 0.372 0.280
230 166 164 153 155 160 185 193 200 131 135 83 92 109 113
283 VD -003 0.234 0.046 0.020 0.133 0.100 0.061 0.999 0.999 0.994 0.051 0.027 0.244 0.133 0.315 0.203
149 155 153 123 132 134 121 129 136 123 131 85 95 95 115
284 VERIDAS -001 0.080 0.037 0.016 0.106 0.082 0.051 0.993 0.987 0.938 0.044 0.023 0.266 0.146 0.264 0.204
150 154 154 124 131 135 120 130 137 122 130 86 96 96 114
285 VERIDAS -002 0.080 0.037 0.016 0.106 0.082 0.051 0.993 0.987 0.938 0.044 0.023 0.266 0.146 0.264 0.204
141 93 85 84 92 92 165 170 130 70 74 57 63 65 77
286 VERIDAS -003 0.072 0.017 0.006 0.071 0.055 0.033 0.998 0.997 0.927 0.020 0.011 0.150 0.078 0.178 0.142
294 302 307 283 292 297 177 188 211 199 205 121 123 157 183
287 VERIJELAS -000 0.846 0.799 0.681 0.868 0.813 0.697 0.999 0.999 0.995 0.324 0.216 0.933 0.561 0.589 0.462
269 287 289 276 288 291 180 186 209


288 VIGILANTSOLUTIONS -003 0.482 0.408 0.282 0.730 0.660 0.526 0.999 0.999 0.995
281 293 297 282 294 298 162 167 191
289 VIGILANTSOLUTIONS -004 0.624 0.549 0.422 0.858 0.817 0.709 0.998 0.996 0.991
302 282 216 222 254 258
290 VIGILANTSOLUTIONS -005 0.936 0.388 0.043 1.000 1.000 1.000
305 277 219 235 243 263
291 VIGILANTSOLUTIONS -006 0.959 0.353 0.043 1.000 1.000 1.000

146 141 134 130 138 141 153 166 193 156 163 100 111 122 156
292 VIGILANTSOLUTIONS -007 0.076 0.028 0.011 0.113 0.088 0.053 0.997 0.996 0.991 0.081 0.047 0.371 0.242 0.391 0.295
115 109 118 122 123 123 192 190 189 164 168 103 113 146 162
293 VIGILANTSOLUTIONS -008 0.051 0.021 0.010 0.105 0.077 0.046 1.000 0.999 0.991 0.104 0.054 0.398 0.259 0.511 0.316
142 98 94 85 95 100 127 137 164 80 84 56 65 56 63
294 VISIONBOX -000 0.073 0.018 0.007 0.071 0.057 0.035 0.995 0.990 0.974 0.023 0.012 0.146 0.081 0.162 0.126

155 189 182 192 204 206 72 78 101
295 VISIONLABS -004 0.091 0.058 0.024 0.199 0.159 0.097 0.944 0.890 0.742
151 173 166 186 194 195 73 77 100
296 VISIONLABS -005 0.080 0.050 0.020 0.183 0.147 0.087 0.945 0.888 0.736
99 131 123 134 142 137 44 50 70
297 VISIONLABS -006 0.044 0.027 0.010 0.117 0.090 0.051 0.764 0.672 0.511
98 130 122 133 141 136 45 51 69 100 97 69 81
298 VISIONLABS -007 0.044 0.027 0.010 0.117 0.090 0.051 0.764 0.672 0.511 0.031 0.014 0.185 0.145
70 78 76 78 81 87 28 35 48 61 60 47 54
299 VISIONLABS -008 0.028 0.013 0.006 0.068 0.051 0.032 0.574 0.481 0.317 0.017 0.008 0.151 0.119
37 35 28 32 36 40 68 67 34 39 39 29 35
300 VISIONLABS -009 0.012 0.005 0.002 0.032 0.025 0.017 0.930 0.799 0.196 0.008 0.004 0.113 0.093
44 41 39 36 41 46 24 32 29 22 22 28 29
301 VISIONLABS -010 0.014 0.005 0.002 0.034 0.027 0.019 0.169 0.008 0.004 0.055 0.027 0.109 0.089
35 25 19 22 25 26 33 13 11 13 13 10 16
302 VISIONLABS -011 0.011 0.003 0.002 0.024 0.020 0.014 0.194 0.004 0.002 0.034 0.017 0.090 0.079
119 134 141 113 122 128 86 59 71 119 122 89 102 139 100
303 VIXVIZION -009 0.053 0.027 0.012 0.098 0.077 0.048 0.976 0.745 0.519 0.041 0.021 0.286 0.165 0.472 0.178
69 82 87 170 111 101 64 56 57 109 78 125 121 116 70
304 VNPT-001 0.027 0.014 0.006 0.158 0.068 0.036 0.922 0.718 0.373 0.035 0.011 0.990 0.537 0.362 0.134
41 48 53 45 50 54 27 19 23 29 24 32 30 15 12
305 VNPT-002 0.013 0.007 0.003 0.040 0.032 0.021 0.568 0.292 0.154 0.007 0.004 0.072 0.031 0.096 0.075
252 233 227 191 200 201 169 179 187 180 194 125 152
306 VOCORD -003 0.354 0.122 0.048 0.195 0.155 0.093 0.999 0.998 0.991 0.157 0.105 0.404 0.289
293 278 228 240 210 199 246 226 236 189 172 215 211
307 VOCORD -004 0.826 0.355 0.051 0.401 0.173 0.093 1.000 1.000 0.999 0.193 0.065 0.991 0.776
286 247 220 172 183 188 176 171 155 174 188 119 150
308 VOCORD -005 0.689 0.158 0.044 0.161 0.130 0.080 0.999 0.997 0.968 0.138 0.090 0.381 0.287
318 319 320 307 312 320 317 263 317 274 269 233 310
309 VOCORD -006 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000
278 296 304 265 283 294 183 200 232 221 224 116 129 175 208
310 VTS -000 0.605 0.598 0.595 0.624 0.619 0.613 0.999 0.999 0.998 0.613 0.609 0.760 0.739 0.761 0.749
81 79 79 77 80 81 156 149 68 78 85 53 64 72 64
311 VTS -001 0.035 0.013 0.006 0.067 0.051 0.031 0.998 0.994 0.510 0.022 0.012 0.141 0.079 0.192 0.126
120 126 125 114 120 124 203 210 148 125 134 81 93 127 107
312 VTS -002 0.053 0.026 0.010 0.098 0.075 0.046 1.000 1.000 0.953 0.045 0.026 0.231 0.133 0.417 0.187
46 53 50 53 54 48 226 235 88 55 45 123 50 163 30

313 VTS -003 0.015 0.007 0.003 0.048 0.033 0.019 1.000 1.000 0.632 0.014 0.005 0.954 0.060 0.635 0.089

72 88 89 82 89 98 41 29 38 74 73 62 69 59 69
314 XFORWARDAI -000 0.029 0.015 0.006 0.070 0.053 0.034 0.698 0.440 0.250 0.021 0.011 0.159 0.082 0.169 0.134
30 39 47 38 44 49 54 30 16 35 46 27 28 33 42
315 XFORWARDAI -001 0.010 0.005 0.003 0.036 0.028 0.020 0.838 0.448 0.143 0.008 0.005 0.062 0.030 0.123 0.102
18 24 35 17 17 25 83 38 4 17 20 16 15 21 28
316 XFORWARDAI -002 0.007 0.003 0.002 0.018 0.016 0.014 0.975 0.525 0.095 0.005 0.003 0.041 0.018 0.099 0.089
263 275 282 298 291 269 224 218 199 202
317 YISHENG -001 0.452 0.346 0.206 0.983 0.808 0.269 0.666 0.396 0.919 0.695
76 96 99 67 75 76
318 YITU -002 0.031 0.018 0.008 0.063 0.049 0.028
77 104 110 75 84 93
319 YITU -003 0.032 0.019 0.009 0.067 0.052 0.033
50 61 65 37 39 41 76 87 126
320 YITU -004 0.019 0.010 0.004 0.035 0.027 0.017 0.948 0.936 0.913
54 68 71 43 52 61
321 YITU -005 0.022 0.010 0.005 0.039 0.032 0.023
Table 37: Threshold-based accuracy. Values are FNIR(N, T, L) with N = 1.6 million and thresholds set to produce FPIR = 0.0003, 0.001, and 0.01 in non-mate searches. Throughout, blue superscripts indicate the rank of the algorithm for that column. Caution: the power-law models are intended mainly to draw attention to the kind of behavior, not to serve as a model for prediction.
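The tabulated quantities can be made concrete with a small sketch. The Python below is a minimal illustration of threshold-based FNIR and FPIR, not the FRVT evaluation code; the names (fpir, fnir, threshold_for_fpir, mate_scores, nonmate_top_scores) are assumptions introduced here, and it presumes each non-mate search is summarized by its highest candidate score and each mated search by the score given to the enrolled mate.

    # Minimal sketch (not FRVT code) of the metrics in Tables 36 and 37.

    def fpir(nonmate_top_scores, threshold):
        # False positive identification rate: fraction of non-mate searches whose
        # top candidate scores at or above the threshold.
        return sum(s >= threshold for s in nonmate_top_scores) / len(nonmate_top_scores)

    def fnir(mate_scores, threshold):
        # False negative identification rate: fraction of mated searches whose
        # enrolled mate scores below the threshold (a miss below threshold T).
        return sum(s < threshold for s in mate_scores) / len(mate_scores)

    def threshold_for_fpir(nonmate_top_scores, target):
        # Approximate threshold producing the target FPIR (0.0003, 0.001 or 0.01 here):
        # the (1 - target) quantile of the non-mate top scores; tie handling is omitted.
        scores = sorted(nonmate_top_scores)
        idx = min(int((1.0 - target) * len(scores)), len(scores) - 1)
        return scores[idx]

Under these assumptions, one table cell is fnir(mate_scores, threshold_for_fpir(nonmate_top_scores, 0.001)) evaluated on the enrolment and probe image sets named in the column header.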

Appendices

Appendix A Accuracy on large-population FRVT 2018 mugshots


Figure 20 panels (Dataset: 2018 Mugshots, Tier 1), one per algorithm: sensetime_008, sensetime_007, sensetime_006, sensetime_005, nec_2, idemia_009, sensetime_004, microsoft_4, ntechlab_011, idemia_008, rankone_013, neurotechnology_012, dahua_004, paravision_009, canon_001, vts_003, clearviewai_000, realnetworks_008, microsoft_3, canon_002, ntechlab_010, dahua_003, paravision_007, griaule_001, pangiam_000, neurotechnology_010, nec_005, ntechlab_009, intema_000, maxvision_001, cogent_006, visionlabs_009, sqisoft_002, firstcreditkz_001, yitu_4, realnetworks_007, nec_3, nec_006, visionlabs_011, lineclova_002, kakao_000, realnetworks_006, rankone_012, gorilla_008, neurotechnology_009, hyperverge_001, sensetime_003, line_001. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 20: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
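The plotted miss rate can be sketched in a few lines, assuming each mated search against a gallery of N subjects is reduced to the rank at which the enrolled mate appears on the candidate list (None when the mate is not returned). This is a minimal Python illustration, not the FRVT evaluation code, and load_mate_ranks is a hypothetical loader named only for the example.

    # Minimal sketch (not FRVT code): FNIR(N, R) at threshold T = 0, the fraction of
    # mated searches whose enrolled mate is outside the top R candidates.

    def fnir_at_rank(mate_ranks, R):
        misses = sum(1 for r in mate_ranks if r is None or r > R)
        return misses / len(mate_ranks)

    # Points on one curve above, for a single algorithm and gallery size:
    # mate_ranks = load_mate_ranks(algorithm="example_000", N=1_600_000)  # hypothetical loader
    # for R in (1, 10, 50):
    #     print(R, fnir_at_rank(mate_ranks, R))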
Figure 21 panels (Dataset: 2018 Mugshots, Tier 2), one per algorithm: microsoft_5, microsoft_6, yitu_2, vts_001, hyperverge_002, kakao_001, rankone_011, nec_004, s1_003, cognitec_006, s1_002, cubox_000, cyberlink_003, deepglint_001, rendip_000, cognitec_005, gorilla_007, visionlabs_7, visionlabs_010, cloudwalk_hr_000, gorilla_006, incode_005, tevian_007, mantra_000, innovatrics_007, cogent_005, revealmedia_000, siat_1, cyberlink_004, verihubs−inteligensia_000, siat_2, veridas_003, vts_002, cyberlink_005, fujitsulab_001, imagus_007, imagus_005, visionlabs_6, pixelall_005, cloudwalk_mt_001, cloudwalk_mt_000, paravision_005, ntechlab_008, everai_paravision_004, dahua_002, pixelall_004, cib_000, visionbox_000. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 21: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 22 panels (Dataset: 2018 Mugshots, Tier 3), one per algorithm: line_000, cogent_004, hzailu_001, imagus_006, dilusense_000, neurotechnology_008, decatur_000, vnpt_002, vixvizion_009, innovatrics_005, realnetworks_005, fujitsulab_000, hzailu_000, sensetime_0, yitu_5, sensetime_1, xforwardai_002, tech5_002, rankone_010, xforwardai_001, vnpt_001, maxvision_000, tevian_006, s1_000, visionlabs_5, veridas_002, dermalog_010, xforwardai_000, griaule_000, everai_3, vigilantsolutions_008, veridas_001, notiontag_000, qnap_003, yitu_3, dermalog_008, cyberlink_002, ptakuratsatu_000, visionlabs_4, gorilla_005, pixelall_003, visionlabs_008, dermalog_009, anke_002, rankone_009, idemia_007, ntechlab_007, imperial_000. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 22: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 23 panels (Dataset: 2018 Mugshots, Tier 4), one per algorithm: microsoft_1, microsoft_2, hik_6, everai_2, toshiba_1, everai_1, cogent_2, vigilantsolutions_007, microsoft_0, ntechlab_6, cognitec_004, hik_5, cogent_3, ntechlab_5, s1_001, tiger_3, ntechlab_4, remarkai_000, trueface_000, acer_001, neurotechnology_5, cyberlink_001, incode_004, deepsea_001, scanovate_000, tech5_001, qnap_001, yitu_1, tiger_2, rankone_007, aize_001, cyberlink_000, ntechlab_3, yitu_0, rankone_006, scanovate_001, cognitec_2, isystems_3, daon_000, toshiba_0, pixelall_002, neurotechnology_4, irex_000, qnap_002, gorilla_004, kneron_000, neurotechnology_007, sqisoft_001. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 23: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 24 panels (Dataset: 2018 Mugshots, Tier 5), one per algorithm: tongyitrans_0, remarkai_0, siat_0, vocord_3, idemia_4, cognitec_3, remarkai_2, qnap_000, vigilantsolutions_5, isystems_2, tevian_5, dahua_1, vigilantsolutions_6, idemia_3, tevian_4, vocord_4, vocord_5, alchera_004, idemia_0, idemia_5, fincore_000, intellivision_002, ntechlab_0, dahua_0, megvii_0, idemia_1, rankone_5, idemia_6, allgovision_001, dermalog_6, tongyitrans_1, pangiam_001, vd_003, staqu_000, idemia_2, acer_000, alchera_3, hik_4, vd_002, dermalog_007, visionlabs_3, lookman_4, lookman_3, kedacom_001, lookman_005, synesis_005, everai_0. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 24: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 25 panels (Dataset: 2018 Mugshots, Tier 6), one per algorithm: innovatrics_4, anke_1, ntechlab_1, tevian_3, isystems_1, cogent_1, isystems_0, incode_2, cognitec_1, cogent_0, neurotechnology_6, hik_3, incode_3, 3divi_4, anke_0, tevian_0, tevian_1, allgovision_000, megvii_2, turingtechvip_001, tevian_2, hik_1, nec_0, megvii_1, t4isb_000, yisheng_0, 3divi_5, alchera_0, gorilla_2, rankone_2, rankone_3, innovatrics_3, rankone_1, sensetime_002, hik_2, neurotechnology_3, realnetworks_004, incode_1, nec_1, realnetworks_003, cognitec_0, hik_0, dermalog_5, camvi_5, camvi_3, camvi_4, synesis_003, f8_001. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 25: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 26 panels (Dataset: 2018 Mugshots, Tier 7), one per algorithm: yisheng_1, gorilla_3, 3divi_0, 3divi_6, realnetworks_1, aware_5, aware_3, neurotechnology_2, neurotechnology_1, mukh_002, innovatrics_1, intellivision_001, innovatrics_0, 3divi_1, vd_1, rankone_4, realnetworks_0, incode_0, neurotechnology_0, realnetworks_2, 3divi_2, gorilla_1, 20face_000, rankone_0, tiger_0, aware_4, innovatrics_2, vocord_2, kneron_001, aware_6, vocord_1, vocord_0, shaman_6, vigilantsolutions_3, aware_1, shaman_7, aware_2, eyedea_3, 3divi_3, aware_0, alchera_2, newland_2, vigilantsolutions_0, vigilantsolutions_4, dermalog_3, camvi_2, dermalog_4, imagus_008. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 26: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 27 panels (Dataset: 2018 Mugshots, Tier 8), one per algorithm: dermalog_0, dermalog_2, shaman_3, dermalog_1, glory_1, shaman_1, shaman_0, vigilantsolutions_1, eyedea_1, eyedea_2, smilart_2, smilart_0, noblis_2, camvi_1, glory_0, imagus_2, gorilla_0, shaman_4, synesis_3, vigilantsolutions_2, intsysmsu_000, synesis_0, shaman_2, noblis_1, eyedea_0, hbinno_0, imagus_0, ayonix_1, imagus_3, ayonix_2, microfocus_6, verijelas_000, vd_0, microfocus_5, ayonix_0, smilart_1, microfocus_4, microfocus_0, microfocus_1, microfocus_3, quantasoft_1, microfocus_2, digidata_000, vts_000, smilart_5, smilart_4, alchera_1, vocord_6. Each panel plots false negative identification rate, FNIR(N, T = 0), against enrolled population size, N, with curves for ranks 1, 10 and 50 and enrollment styles lifetime_consolidated and recent.

Figure 27: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. number of enrolled subjects. The figure shows false negative identification rates, FNIR(N, R), across various gallery sizes and ranks 1, 10 and 50. The threshold is set to zero, so this metric rewards even weak scoring rank 1 mates. This also means FPIR = 1, so any search without an enrolled mate will return non-mated candidates. For clarity, results are sorted and reported into tiers spanning multiple pages, the tiering criteria being rank 1 hit rate on a gallery size of 640 000.
Figure 28 panels (Dataset: 2018 Mugshots, Tier 1), one per algorithm: canon_001, canon_002, clearviewai_000, cogent_006, dahua_003, dahua_004, firstcreditkz_001, gorilla_008, griaule_001, hyperverge_001, idemia_008, idemia_009, intema_000, kakao_000, line_001, lineclova_002, maxvision_001, microsoft_3, microsoft_4, nec_005, nec_006, nec_2, nec_3, neurotechnology_009, neurotechnology_010, neurotechnology_012, ntechlab_009, ntechlab_010, ntechlab_011, pangiam_000, paravision_007, paravision_009, rankone_012, rankone_013, realnetworks_006, realnetworks_007, realnetworks_008, sensetime_003, sensetime_004, sensetime_005, sensetime_006, sensetime_007, sensetime_008, sqisoft_002, visionlabs_009, visionlabs_011, vts_003, yitu_4. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 28: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
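The tiering rule in the caption, rank 1 hit rate on the N = 640 000 gallery, amounts to sorting algorithms by FNIR(640 000, 1, 0) and paging them into groups. The sketch below is only an illustration of that idea; the helper name assign_tiers and the panels-per-page count are assumptions, not anything defined by this report.

    # Minimal sketch (not FRVT code): page algorithms into tiers by rank-1 miss rate
    # on the 640 000-subject gallery, best algorithms first.

    def assign_tiers(rank1_fnir, per_page=49):
        ordered = sorted(rank1_fnir, key=rank1_fnir.get)   # ascending FNIR = descending hit rate
        return {name: 1 + i // per_page for i, name in enumerate(ordered)}

    # assign_tiers({"alg_a": 0.002, "alg_b": 0.031, "alg_c": 0.110}, per_page=2)
    # returns {"alg_a": 1, "alg_b": 1, "alg_c": 2}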
Figure 29 panels (Dataset: 2018 Mugshots, Tier 2), one per algorithm: cib_000, cloudwalk_hr_000, cloudwalk_mt_000, cloudwalk_mt_001, cogent_005, cognitec_005, cognitec_006, cubox_000, cyberlink_003, cyberlink_004, cyberlink_005, dahua_002, deepglint_001, everai_paravision_004, fujitsulab_001, gorilla_006, gorilla_007, hyperverge_002, imagus_005, imagus_007, incode_005, innovatrics_007, kakao_001, mantra_000, microsoft_5, microsoft_6, nec_004, ntechlab_008, paravision_005, pixelall_004, pixelall_005, rankone_011, rendip_000, revealmedia_000, s1_002, s1_003, siat_1, siat_2, tevian_007, veridas_003, verihubs−inteligensia_000, visionbox_000, visionlabs_010, visionlabs_6, visionlabs_7, vts_001, vts_002, yitu_2. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 29: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 30 panels (Dataset: 2018 Mugshots, Tier 3), one per algorithm: anke_002, cogent_004, cyberlink_002, decatur_000, dermalog_008, dermalog_009, dermalog_010, dilusense_000, everai_3, fujitsulab_000, gorilla_005, griaule_000, hzailu_000, hzailu_001, idemia_007, imagus_006, imperial_000, innovatrics_005, line_000, maxvision_000, neurotechnology_008, notiontag_000, ntechlab_007, pixelall_003, ptakuratsatu_000, qnap_003, rankone_009, rankone_010, realnetworks_005, s1_000, sensetime_0, sensetime_1, tech5_002, tevian_006, veridas_001, veridas_002, vigilantsolutions_008, visionlabs_008, visionlabs_4, visionlabs_5, vixvizion_009, vnpt_001, vnpt_002, xforwardai_000, xforwardai_001, xforwardai_002, yitu_3, yitu_5. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 30: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 31 panels (Dataset: 2018 Mugshots, Tier 4), one per algorithm: acer_001, aize_001, cogent_2, cogent_3, cognitec_004, cognitec_2, cyberlink_000, cyberlink_001, daon_000, deepsea_001, everai_1, everai_2, gorilla_004, hik_5, hik_6, incode_004, irex_000, isystems_3, kneron_000, microsoft_0, microsoft_1, microsoft_2, neurotechnology_007, neurotechnology_4, neurotechnology_5, ntechlab_3, ntechlab_4, ntechlab_5, ntechlab_6, pixelall_002, qnap_001, qnap_002, rankone_006, rankone_007, remarkai_000, s1_001, scanovate_000, scanovate_001, sqisoft_001, tech5_001, tiger_2, tiger_3, toshiba_0, toshiba_1, trueface_000, vigilantsolutions_007, yitu_0, yitu_1. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 31: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 32 panels (Dataset: 2018 Mugshots, Tier 5), one per algorithm: acer_000, alchera_004, alchera_3, allgovision_001, cognitec_3, dahua_0, dahua_1, dermalog_007, dermalog_6, everai_0, fincore_000, hik_4, idemia_0, idemia_1, idemia_2, idemia_3, idemia_4, idemia_5, idemia_6, intellivision_002, isystems_2, kedacom_001, lookman_005, lookman_3, lookman_4, megvii_0, ntechlab_0, pangiam_001, qnap_000, rankone_5, remarkai_0, remarkai_2, siat_0, staqu_000, synesis_005, tevian_4, tevian_5, tongyitrans_0, tongyitrans_1, vd_002, vd_003, vigilantsolutions_5, vigilantsolutions_6, visionlabs_3, vocord_3, vocord_4, vocord_5. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 32: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 33 panels (Dataset: 2018 Mugshots, Tier 6), one per algorithm: 3divi_4, 3divi_5, alchera_0, allgovision_000, anke_0, anke_1, camvi_3, camvi_4, camvi_5, cogent_0, cogent_1, cognitec_0, cognitec_1, dermalog_5, f8_001, gorilla_2, hik_0, hik_1, hik_2, hik_3, incode_1, incode_2, incode_3, innovatrics_3, innovatrics_4, isystems_0, isystems_1, megvii_1, megvii_2, nec_0, nec_1, neurotechnology_3, neurotechnology_6, ntechlab_1, rankone_1, rankone_2, rankone_3, realnetworks_003, realnetworks_004, sensetime_002, synesis_003, t4isb_000, tevian_0, tevian_1, tevian_2, tevian_3, turingtechvip_001, yisheng_0. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 33: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 34 panels (Dataset: 2018 Mugshots, Tier 7), one per algorithm: 20face_000, 3divi_0, 3divi_1, 3divi_2, 3divi_3, 3divi_6, alchera_2, aware_0, aware_1, aware_2, aware_3, aware_4, aware_5, aware_6, camvi_2, dermalog_3, dermalog_4, eyedea_3, gorilla_1, gorilla_3, imagus_008, incode_0, innovatrics_0, innovatrics_1, innovatrics_2, intellivision_001, kneron_001, mukh_002, neurotechnology_0, neurotechnology_1, neurotechnology_2, newland_2, rankone_0, rankone_4, realnetworks_0, realnetworks_1, realnetworks_2, shaman_6, shaman_7, tiger_0, vd_1, vigilantsolutions_0, vigilantsolutions_3, vigilantsolutions_4, vocord_0, vocord_1, vocord_2, yisheng_1. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 34: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 35 panels (Dataset: 2018 Mugshots, Tier 8), one per algorithm: alchera_1, ayonix_0, ayonix_1, ayonix_2, camvi_1, dermalog_0, dermalog_1, dermalog_2, digidata_000, eyedea_0, eyedea_1, eyedea_2, glory_0, glory_1, gorilla_0, hbinno_0, imagus_0, imagus_2, imagus_3, intsysmsu_000, microfocus_0, microfocus_1, microfocus_2, microfocus_3, microfocus_4, microfocus_5, microfocus_6, noblis_1, noblis_2, quantasoft_1, shaman_0, shaman_1, shaman_2, shaman_3, shaman_4, smilart_0, smilart_1, smilart_2, smilart_4, smilart_5, synesis_0, synesis_3, vd_0, verijelas_000, vigilantsolutions_1, vigilantsolutions_2, vocord_6, vts_000. Each panel plots false negative identification rate (FNIR) against rank (1 to 50), with curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000 and enrollment styles lifetime_consolidated and recent.

Figure 35: [FRVT-2018 Mugshot Dataset] Rank-based identification miss rates vs. rank. The figure shows false negative identification rates (FNIR) for ranks up to 50. This metric is appropriate to investigational applications where human reviewers will adjudicate sorted candidate lists. Note that with threshold set to zero, FPIR = 1, i.e. any search without an enrolled mate will return non-mated candidates. Results are sorted and reported into tiers for clarity, with the tiering criteria being rank 1 hit rate on a gallery size of N = 640 000 subjects.
Figure 36 panels (Dataset: 2018 Mugshot, Tier 1), one per algorithm: sensetime_008, sensetime_007, sensetime_006, idemia_009, sensetime_005, sensetime_004, nec_005, paravision_009, idemia_008, nec_2, intema_000, ntechlab_010, ntechlab_011, visionlabs_011, paravision_007, maxvision_001, visionlabs_009, griaule_001, rankone_013, canon_002, neurotechnology_012, clearviewai_000, line_001, canon_001, pangiam_000, sensetime_003, dahua_004, realnetworks_008, vts_003, neurotechnology_010, yitu_4, dahua_003, sqisoft_002. Each panel plots false negative identification rate, FNIR(N, T > 0), against enrolled population size, N, with curves at FPIR = 0.001, 0.010 and 0.100 and enrollment styles lifetime_consolidated and recent.

Figure 36: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
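One curve of this kind can be sketched directly: for each gallery size, take the threshold from the non-mate score distribution that yields the target FPIR, then count the enrolled mates scoring below it. The Python below is a minimal illustration under those assumptions, not the FRVT evaluation code; load_scores is a hypothetical loader and the gallery sizes are simply those on the horizontal axis.

    # Minimal sketch (not FRVT code): FNIR(N, T) with T chosen so that non-mate
    # searches produce the target FPIR.

    def fnir_at_fixed_fpir(mate_scores, nonmate_top_scores, target_fpir):
        scores = sorted(nonmate_top_scores)
        idx = min(int((1.0 - target_fpir) * len(scores)), len(scores) - 1)
        threshold = scores[idx]                     # roughly the (1 - target_fpir) quantile
        return sum(s < threshold for s in mate_scores) / len(mate_scores)

    # One curve from the figure, at FPIR = 0.001:
    # for n in (640_000, 1_600_000, 3_000_000, 6_000_000, 12_000_000):
    #     mates, nonmates = load_scores(algorithm="example_000", N=n)  # hypothetical loader
    #     print(n, fnir_at_fixed_fpir(mates, nonmates, 0.001))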
Figure 37 panels (Dataset: 2018 Mugshot, Tier 2), one per algorithm: nec_006, nec_004, nec_3, firstcreditkz_001, kakao_001, cubox_000, deepglint_001, hyperverge_002, lineclova_002, cogent_006, hyperverge_001, s1_002, visionlabs_010, ntechlab_009, cognitec_006, s1_003, rankone_012, cognitec_005, rankone_011, realnetworks_007, rendip_000, realnetworks_006, vts_001, cib_000, neurotechnology_009, kakao_000, ntechlab_008, gorilla_008, microsoft_4, microsoft_3, gorilla_007. Each panel plots false negative identification rate, FNIR(N, T > 0), against enrolled population size, N, with curves at FPIR = 0.001, 0.010 and 0.100 and enrollment styles lifetime_consolidated and recent.

Figure 37: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
Figure 38 panels (Dataset: 2018 Mugshot, Tier 3), one per algorithm: cloudwalk_hr_000, cloudwalk_mt_001, cloudwalk_mt_000, tevian_007, cyberlink_004, paravision_005, cyberlink_003, cogent_005, mantra_000, cyberlink_005, pixelall_005, incode_005, microsoft_6, everai_paravision_004, innovatrics_007, revealmedia_000, veridas_003, dahua_002, fujitsulab_001, yitu_2, siat_1, siat_2, imagus_005, visionlabs_6, microsoft_5, visionlabs_7, pixelall_004, imagus_007, verihubs−inteligensia_000, gorilla_006, vts_002, visionbox_000. Each panel plots false negative identification rate, FNIR(N, T > 0), against enrolled population size, N, with curves at FPIR = 0.001, 0.010, 0.100 and 1.000 and enrollment styles lifetime_consolidated and recent.

Figure 38: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
Figure 39 panels (Dataset: 2018 Mugshot, Tier 4), one per algorithm: xforwardai_002, xforwardai_001, vnpt_002, dermalog_010, hzailu_001, visionlabs_008, cogent_004, vnpt_001, xforwardai_000, rankone_010, sensetime_0, idemia_007, sensetime_1, imagus_006, rankone_009, fujitsulab_000, hzailu_000, pixelall_003, decatur_000, line_000, s1_000, ntechlab_007, innovatrics_005, neurotechnology_008, tech5_002, vixvizion_009, realnetworks_005, dilusense_000, maxvision_000, visionlabs_5, visionlabs_4. Each panel plots false negative identification rate, FNIR(N, T > 0), against enrolled population size, N, with curves at FPIR = 0.001, 0.010 and 0.100 and enrollment styles lifetime_consolidated and recent.

Figure 39: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
Figure 40 panels (Dataset: 2018 Mugshot, Tier 5), one per algorithm: yitu_5, tevian_006, cyberlink_002, s1_001, notiontag_000, yitu_3, trueface_000, qnap_003, dermalog_009, griaule_000, imperial_000, vigilantsolutions_008, vigilantsolutions_007, rankone_007, anke_002, everai_3, everai_1, veridas_002, cognitec_004, microsoft_1, cogent_3, cogent_2, veridas_001, microsoft_0, microsoft_2, ntechlab_6, ptakuratsatu_000, dermalog_008, cyberlink_001, remarkai_000, incode_004, gorilla_005, sqisoft_001. Each panel plots false negative identification rate, FNIR(N, T > 0), against enrolled population size, N, with curves at FPIR = 0.001, 0.010 and 0.100 and enrollment styles lifetime_consolidated and recent.

Figure 40: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 41 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 6. Algorithms shown: irex_000, daon_000, toshiba_1, qnap_002, hik_6, rankone_006, everai_2, tech5_001, hik_5, yitu_1, ntechlab_5, deepsea_001, cyberlink_000, acer_001, ntechlab_4, yitu_0, isystems_3, neurotechnology_5, cognitec_2, toshiba_0, ntechlab_3, qnap_001, aize_001, neurotechnology_007, neurotechnology_4, tiger_3, pixelall_002, scanovate_000, tiger_2, gorilla_004, scanovate_001, kneron_000.]

Figure 41: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 42 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 7. Algorithms shown: pangiam_001, kedacom_001, vd_003, lookman_005, idemia_4, idemia_3, visionlabs_3, idemia_6, idemia_5, dermalog_6, cognitec_3, idemia_1, isystems_2, tongyitrans_0, rankone_5, dahua_1, tongyitrans_1, vigilantsolutions_5, vigilantsolutions_6, siat_0, vocord_5, tevian_5, ntechlab_0, remarkai_0, vocord_4, qnap_000, vocord_3, remarkai_2, idemia_0, allgovision_001, megvii_0.]

Figure 42: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 43 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 8. Algorithms shown: t4isb_000, turingtechvip_001, synesis_005, staqu_000, lookman_4, lookman_3, cogent_0, cogent_1, idemia_2, vd_002, dahua_0, megvii_2, isystems_1, isystems_0, everai_0, dermalog_007, megvii_1, allgovision_000, tevian_4, anke_1, ntechlab_1, cognitec_1, hik_3, innovatrics_4, hik_4, anke_0, fincore_000, intellivision_002, alchera_3, incode_3, acer_000, alchera_004.]

Figure 43: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 44 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 9. Algorithms shown: sensetime_002, nec_0, synesis_003, dermalog_5, camvi_5, camvi_3, camvi_4, alchera_0, nec_1, hik_1, hik_0, rankone_2, rankone_3, tevian_3, 3divi_4, incode_2, hik_2, rankone_1, tevian_0, tevian_1, realnetworks_004, realnetworks_003, 3divi_5, cognitec_0, neurotechnology_3, incode_1, tevian_2, gorilla_2, neurotechnology_6, innovatrics_3, yisheng_0, f8_001.]

Figure 44: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 45 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 10. Algorithms shown: kneron_001, shaman_6, 3divi_6, aware_3, aware_5, shaman_7, vocord_2, vocord_1, vocord_0, rankone_4, realnetworks_1, vd_1, gorilla_3, rankone_0, mukh_002, innovatrics_2, 3divi_0, 3divi_1, 3divi_2, innovatrics_0, realnetworks_2, realnetworks_0, intellivision_001, innovatrics_1, neurotechnology_1, neurotechnology_2, neurotechnology_0, incode_0, yisheng_1, gorilla_1, 20face_000, tiger_0.]

Figure 45: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 46 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 11. Algorithms shown: aware_2, aware_1, aware_0, aware_4, aware_6, glory_1, eyedea_3, vigilantsolutions_3, glory_0, 3divi_3, alchera_2, dermalog_3, newland_2, shaman_3, dermalog_4, dermalog_0, synesis_0, shaman_0, dermalog_2, vigilantsolutions_0, camvi_2, shaman_1, dermalog_1, vigilantsolutions_4, gorilla_0, eyedea_1, synesis_3, noblis_2, smilart_0, smilart_2, imagus_008, intsysmsu_000.]

Figure 46: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
[Figure 47 panels: FNIR(N, T) vs. enrolled population size N at thresholds calibrated to FPIR = 0.001, 0.010 and 0.100; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 12. Algorithms shown: shaman_4, eyedea_2, vigilantsolutions_1, camvi_1, imagus_2, shaman_2, digidata_000, imagus_0, smilart_1, vts_000, hbinno_0, imagus_3, eyedea_0, quantasoft_1, verijelas_000, vigilantsolutions_2, ayonix_1, ayonix_2, ayonix_0, vd_0, microfocus_5, microfocus_0, microfocus_1, microfocus_2, microfocus_3, microfocus_4, microfocus_6, smilart_5, smilart_4, noblis_1, alchera_1, vocord_6.]

Figure 47: [FRVT-2018 Mugshot Dataset] Threshold-based identification miss rates vs. number of enrolled subjects. The figure shows FNIR(N, T) across various gallery sizes when the threshold is set to achieve the given FPIRs. The rank criterion is irrelevant at high thresholds as mates are always at rank 1. The results are computed from the trials listed in rows 1-10 of Table 1. Less accurate algorithms were not run on large N, so results are missing. For clarity, results are sorted and reported into tiers spanning multiple pages. The tiering criteria is complicated: First paging by FNIR(Nb, 1, 0), then sorting by median FNIR(Nb, T), Nb = 640 000.
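The tier assignment repeated in these captions can be read as a two-stage ordering: algorithms are assigned to pages by their investigation-mode accuracy FNIR(Nb, 1, 0), and the panels on each page are then ordered by median FNIR(Nb, T). The sketch below illustrates that reading; the per-algorithm summary structure and the number of panels per page are assumptions made for illustration only.

# Illustrative sketch (one reading of the caption, not NIST's code) of the
# tier/page assignment at Nb = 640 000.

PANELS_PER_PAGE = 32   # assumed; roughly the number of algorithm panels per page here

def order_into_tiers(summaries):
    """summaries: {algorithm: (fnir_rank1_t0, median_fnir_thresholded)},
    where fnir_rank1_t0 is FNIR(Nb, 1, 0) and median_fnir_thresholded is the
    median FNIR(Nb, T) over the plotted FPIR operating points."""
    # Page (tier) membership is driven by FNIR(Nb, 1, 0) ...
    by_rank1 = sorted(summaries, key=lambda a: summaries[a][0])
    pages = [by_rank1[i:i + PANELS_PER_PAGE]
             for i in range(0, len(by_rank1), PANELS_PER_PAGE)]
    # ... and panels within a page are ordered by median FNIR(Nb, T).
    return [sorted(page, key=lambda a: summaries[a][1]) for page in pages]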

[Figure 48 panels: FNIR(T) vs. FPIR(T) error-tradeoff curves for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 1. Algorithms shown: sensetime_008, sensetime_007, sensetime_006, idemia_009, sensetime_005, sensetime_004, paravision_009, idemia_008, nec_005, nec_2, intema_000, ntechlab_010, ntechlab_011, visionlabs_011, paravision_007, maxvision_001, griaule_001, visionlabs_009, rankone_013, clearviewai_000, canon_002, canon_001, pangiam_000, dahua_004, vts_003, realnetworks_008, line_001, neurotechnology_012, sensetime_003, yitu_4, neurotechnology_010, dahua_003, sqisoft_002.]

Figure 48: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
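Each curve in Figures 48-59 is traced by sweeping the threshold over the observed non-mate scores and recording the resulting (FPIR, FNIR) pairs; joining points computed at the same threshold but different N reproduces the dark lines described in the caption. The sketch below assumes per-search top non-mate scores and per-search mate scores are available; this layout is an assumption, not the report's data format.

# Minimal sketch of tracing one error-tradeoff curve.
def det_curve(mate_scores, nonmate_top_scores, num_points=40):
    """mate_scores: score of the true mate in each mated search, or None if
    the mate was not returned at all.
    nonmate_top_scores: highest candidate score from each non-mate search."""
    thresholds = sorted(nonmate_top_scores, reverse=True)
    n = len(thresholds)
    curve = []
    for i in range(num_points):
        # Log-spaced ranks so FPIR spans roughly 1/n up to 1.
        k = max(1, int(round(n ** ((i + 1) / num_points))))
        t = thresholds[k - 1]
        fpir = sum(s >= t for s in nonmate_top_scores) / n
        fnir = sum(s is None or s < t for s in mate_scores) / len(mate_scores)
        curve.append((t, fpir, fnir))
    return curve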
[Figure 49 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 2. Algorithms shown: nec_006, firstcreditkz_001, nec_004, nec_3, kakao_001, cubox_000, lineclova_002, hyperverge_002, deepglint_001, hyperverge_001, s1_002, visionlabs_010, ntechlab_009, cognitec_006, s1_003, cogent_006, cognitec_005, realnetworks_007, rankone_012, rankone_011, rendip_000, realnetworks_006, neurotechnology_009, vts_001, kakao_000, cib_000, gorilla_008, ntechlab_008, microsoft_4, microsoft_3, gorilla_007.]

Figure 49: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 50 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 3. Algorithms shown: cloudwalk_hr_000, cloudwalk_mt_001, cloudwalk_mt_000, tevian_007, cyberlink_004, cyberlink_003, mantra_000, paravision_005, cyberlink_005, pixelall_005, incode_005, microsoft_6, everai_paravision_004, innovatrics_007, revealmedia_000, veridas_003, cogent_005, dahua_002, yitu_2, siat_1, siat_2, imagus_005, microsoft_5, visionlabs_7, visionlabs_6, verihubs-inteligensia_000, imagus_007, pixelall_004, gorilla_006, vts_002, fujitsulab_001, visionbox_000.]

Figure 50: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 51 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 4. Algorithms shown: xforwardai_002, xforwardai_001, vnpt_002, dermalog_010, hzailu_001, visionlabs_008, xforwardai_000, vnpt_001, rankone_010, sensetime_0, imagus_006, sensetime_1, idemia_007, hzailu_000, rankone_009, pixelall_003, decatur_000, line_000, s1_000, innovatrics_005, realnetworks_005, dilusense_000, ntechlab_007, maxvision_000, vixvizion_009, tech5_002, fujitsulab_000, visionlabs_5, neurotechnology_008, cogent_004, visionlabs_4.]

Figure 51: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 52 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 5. Algorithms shown: yitu_5, tevian_006, cyberlink_002, s1_001, notiontag_000, yitu_3, trueface_000, griaule_000, dermalog_009, qnap_003, imperial_000, vigilantsolutions_008, vigilantsolutions_007, rankone_007, anke_002, everai_3, everai_1, veridas_002, cognitec_004, microsoft_1, veridas_001, ptakuratsatu_000, microsoft_0, microsoft_2, dermalog_008, ntechlab_6, cyberlink_001, remarkai_000, incode_004, gorilla_005, cogent_2, cogent_3, sqisoft_001.]

Figure 52: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 53 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 6. Algorithms shown: daon_000, toshiba_1, hik_6, qnap_002, everai_2, rankone_006, hik_5, yitu_1, tech5_001, deepsea_001, ntechlab_5, cyberlink_000, acer_001, ntechlab_4, isystems_3, yitu_0, cognitec_2, kneron_000, neurotechnology_5, toshiba_0, qnap_001, aize_001, ntechlab_3, neurotechnology_4, scanovate_000, tiger_3, pixelall_002, tiger_2, irex_000, scanovate_001, gorilla_004, neurotechnology_007.]

Figure 53: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 54 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 7. Algorithms shown: pangiam_001, kedacom_001, vd_003, lookman_005, idemia_3, idemia_4, visionlabs_3, dermalog_6, cognitec_3, isystems_2, tongyitrans_0, idemia_5, dahua_1, idemia_6, rankone_5, idemia_1, vigilantsolutions_5, siat_0, tongyitrans_1, vigilantsolutions_6, vocord_5, remarkai_0, qnap_000, tevian_5, ntechlab_0, vocord_3, vocord_4, remarkai_2, idemia_0, allgovision_001, megvii_0.]

Figure 54: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 55 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 8. Algorithms shown: turingtechvip_001, t4isb_000, staqu_000, lookman_4, lookman_3, dahua_0, vd_002, megvii_2, isystems_1, isystems_0, dermalog_007, idemia_2, everai_0, megvii_1, allgovision_000, tevian_4, ntechlab_1, anke_1, synesis_005, cognitec_1, cogent_1, cogent_0, innovatrics_4, intellivision_002, hik_3, fincore_000, anke_0, hik_4, alchera_3, incode_3, acer_000, alchera_004.]

Figure 55: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 56 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 9. Algorithms shown: sensetime_002, nec_0, dermalog_5, alchera_0, hik_1, camvi_5, nec_1, hik_0, camvi_3, tevian_3, rankone_2, rankone_3, incode_2, 3divi_4, hik_2, rankone_1, tevian_0, tevian_1, camvi_4, realnetworks_004, 3divi_5, realnetworks_003, cognitec_0, neurotechnology_6, tevian_2, gorilla_2, incode_1, neurotechnology_3, synesis_003, innovatrics_3, yisheng_0, f8_001.]

Figure 56: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 57 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 10. Algorithms shown: kneron_001, 3divi_6, shaman_6, aware_5, aware_3, vocord_2, shaman_7, rankone_4, vocord_1, vd_1, vocord_0, gorilla_3, realnetworks_1, mukh_002, rankone_0, 3divi_0, 3divi_1, innovatrics_2, 3divi_2, realnetworks_2, realnetworks_0, innovatrics_0, intellivision_001, innovatrics_1, incode_0, neurotechnology_1, neurotechnology_2, neurotechnology_0, yisheng_1, gorilla_1, 20face_000, tiger_0.]

Figure 57: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 58 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 11. Algorithms shown: aware_2, aware_1, aware_0, aware_6, aware_4, glory_1, eyedea_3, vigilantsolutions_3, 3divi_3, alchera_2, glory_0, newland_2, dermalog_3, dermalog_0, shaman_3, dermalog_4, dermalog_2, imagus_008, shaman_0, camvi_2, vigilantsolutions_0, dermalog_1, shaman_1, vigilantsolutions_4, gorilla_0, eyedea_1, synesis_0, synesis_3, noblis_2, smilart_0, smilart_2, intsysmsu_000.]

Figure 58: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
[Figure 59 panels: FNIR(T) vs. FPIR(T) for enrolled population sizes N = 640 000, 1 600 000, 3 000 000, 6 000 000 and 12 000 000; enrollment styles lifetime_consolidated and recent; 2018 Mugshot dataset, Tier 12. Algorithms shown: eyedea_2, shaman_4, vigilantsolutions_1, camvi_1, noblis_1, imagus_2, shaman_2, imagus_0, hbinno_0, digidata_000, eyedea_0, imagus_3, smilart_1, vts_000, verijelas_000, vigilantsolutions_2, quantasoft_1, ayonix_1, ayonix_2, ayonix_0, vd_0, microfocus_5, microfocus_0, microfocus_1, microfocus_2, microfocus_4, microfocus_3, microfocus_6, smilart_5, smilart_4, alchera_1, vocord_6.]

Figure 59: [FRVT-2018 Mugshot Dataset] Identification miss rates vs. false positive rates. The figure shows miss rates FNIR(N, L, T) as a function of FPIR(N, T), with N ranging from 640 000 to 12 000 000 as noted in rows 1-10 of Table 1. These error tradeoff characteristics are useful for applications where a threshold must be elevated to limit false positives, such as when human reviewer labor is not matched to the volume of searches. Dark lines join points of equal threshold: If horizontal, FPIR(T) rises with N, and mate scores are independent of N. Other algorithms adjust scores in an attempt to make FPIR independent of N.
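The observation that FPIR(T) rises with N at a fixed threshold follows directly if each of the N non-mate comparisons is treated as an independent trial occurring with the one-to-one false match rate FMR(T): under that idealisation, FPIR(N, T) = 1 - (1 - FMR(T))^N. A small numerical illustration (the FMR value is arbitrary, chosen only to show the scale of the effect):

# Back-of-envelope illustration of FPIR growth with gallery size at fixed T,
# assuming independent non-mate comparisons (an idealisation, not measured data).
def expected_fpir(fmr_t, n_enrolled):
    """Probability that at least one of N independent non-mate comparisons
    scores at or above the threshold."""
    return 1.0 - (1.0 - fmr_t) ** n_enrolled

# With a per-comparison false match rate of 1e-8, FPIR is about 0.006 at
# N = 640 000 but about 0.11 at N = 12 000 000, which is why some developers
# re-normalise scores as the enrolled population grows.
for n in (640_000, 1_600_000, 3_000_000, 6_000_000, 12_000_000):
    print(n, round(expected_fpir(1e-8, n), 4))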
Appendix B Effect of time-lapse: Accuracy after face ageing


[Figure 60 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 1. Algorithms shown: sensetime_006, sensetime_004, sensetime_008, sensetime_007, cloudwalk_hr_000, paravision_009.]

Figure 60: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
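The time-lapse binning used throughout Appendix B can be sketched as follows. The record layout (elapsed years, ranked candidate identifiers, true mate identifier) is assumed for illustration and is not the report's data format.

# Sketch of the binning behind Figures 60-72: enroll each subject's oldest
# image, search every later image, and tally misses by elapsed years.
BINS = [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10), (10, 12), (12, 14), (14, 18)]

def fnir_by_time_lapse(searches, rank=1):
    """searches: iterable of (years_elapsed, ranked_candidate_ids, mate_id),
    where years_elapsed is the time between the enrolled (oldest) image and
    the searched image. Returns {bin: FNIR at the given rank, with T = 0}."""
    tallies = {b: [0, 0] for b in BINS}                  # bin -> [misses, searches]
    for years_elapsed, candidates, mate_id in searches:
        b = next(((lo, hi) for lo, hi in BINS if lo < years_elapsed <= hi), None)
        if b is None:
            continue                                     # outside the reported bins
        tallies[b][1] += 1
        if mate_id not in candidates[:rank]:
            tallies[b][0] += 1
    return {b: misses / total for b, (misses, total) in tallies.items() if total}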
[Figure 61 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 2. Algorithms shown: sensetime_005, kakao_001, cloudwalk_mt_001, cloudwalk_mt_000, idemia_009, neurotechnology_012.]

Figure 61: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 62 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 3. Algorithms shown: clearviewai_000, paravision_007, griaule_001, cogent_006, rankone_013, canon_001.]

Figure 62: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 63 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 4. Algorithms shown: cubox_000, sensetime_003, nec_005, lineclova_002, maxvision_001, vts_003.]

Figure 63: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 64 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 5. Algorithms shown: idemia_008, intema_000, nec_006, realnetworks_008, pangiam_000, vnpt_002.]

Figure 64: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 65 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 6. Algorithms shown: cib_000, deepglint_001, xforwardai_001, neurotechnology_010, canon_002, rankone_012.]

Figure 65: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 66 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 7. Algorithms shown: visionlabs_010, visionlabs_009, s1_002, s1_003, ntechlab_011, dermalog_010.]

Figure 66: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 67 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 8. Algorithms shown: dahua_003, visionlabs_011, hzailu_001, xforwardai_000, rankone_011, gorilla_008.]

Figure 67: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 68 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 9. Algorithms shown: ntechlab_009, paravision_005, everai_paravision_004, nec_2, irex_000, realnetworks_006.]

Figure 68: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 69 panels: FNIR vs. rank (1 to 50), binned by years elapsed between enrollment and search: (0,2], (2,4], (4,6], (6,8], (8,10], (10,12], (12,14], (14,18]; 2018 Mugshot Ageing dataset, Tier 10. Algorithms shown: visionlabs_008, dahua_002, imagus_005, cyberlink_003, microsoft_5, gorilla_007.]

Figure 69: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 70 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 11. Panels: microsoft_4, microsoft_6, microsoft_3, nec_004, trueface_000, fujitsulab_001. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 70: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 71 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 12. Panels: visionlabs_6, synesis_005, pixelall_003, pixelall_004, dilusense_000, maxvision_000. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 71: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 72 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 13. Panels: imperial_000, rankone_010, cognitec_006, cyberlink_002, vixvizion_009, sensetime_002. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 72: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 73 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 14. Panels: firstcreditkz_001, idemia_007, everai_3, rankone_009, anke_002, dermalog_008. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 73: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 74 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 15. Panels: nec_3, innovatrics_005, visionlabs_5, gorilla_005, ptakuratsatu_000, veridas_001. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 74: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 75 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 16. Panels: ntechlab_008, ntechlab_007, everai_2, cogent_004, yitu_4, cognitec_004. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 75: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 76 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 17. Panels: incode_004, yitu_5, microsoft_0, rankone_007, neurotechnology_007, scanovate_001. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 76: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 77 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 18. Panels: visionlabs_4, yitu_2, cogent_2, pixelall_002, neurotechnology_5, toshiba_1. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 77: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 78 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 19. Panels: cogent_3, deepsea_001, isystems_3, neurotechnology_4, kedacom_001, lookman_005. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 78: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 79 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 20. Panels: yitu_3, tech5_001, cognitec_2, cognitec_3, lookman_3, synesis_003. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 79: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 80 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 21. Panels: isystems_2, idemia_4, dermalog_6, pangiam_001, cogent_0, cogent_1. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 80: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 81 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 22. Panels: ntechlab_6, ntechlab_4, idemia_3, idemia_5, dermalog_007, idemia_6. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 81: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 82 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 23. Panels: yitu_0, ntechlab_3, idemia_0, rankone_5, cognitec_1, intellivision_002. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 82: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 83 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 24. Panels: megvii_0, incode_3, anke_0, 3divi_5, nec_0, neurotechnology_3. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 83: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 84 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 25. Panels: ntechlab_0, gorilla_2, rankone_2, cognitec_0, aware_5, mukh_002. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 84: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 85 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 26. Panels: nec_1, realnetworks_004, rankone_0, realnetworks_003, rankone_4, t4isb_000. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 85: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 86 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 27. Panels: realnetworks_2, aware_6, imagus_008, camvi_4, camvi_5, noblis_2. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 86: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 87 plot area: FNIR vs. rank (1 to 50), 2018 Mugshots dataset, Tier 28. Panels: innovatrics_4, ayonix_2, microfocus_5, vts_000, siat_2. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 87: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. rank by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment.
[Figure 88 plot area: FNIR vs. false positive identification rate (FPIR), logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: nec_006, sensetime_004, nec_004, sensetime_003, idemia_008, cloudwalk_hr_000, cloudwalk_mt_000, sensetime_008, nec_005, cloudwalk_mt_001, sensetime_007, idemia_009. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 88: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
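Figures 88 through 101 switch from rank-based to threshold-based reporting, using the quantities labelled in the plot margins: FNIR(N, R, T) is the false negative identification rate, FPIR(N, T) the false positive identification rate, N the number of enrolled subjects, R the number of candidates examined, and T the score threshold, with T = 0 corresponding to investigation mode and T > 0 to identification mode. A minimal sketch of these two rates under assumed data layouts (not the FRVT API) is shown below.

```python
def fnir_fpir_at_threshold(mated_searches, nonmated_searches, T, R=1):
    """Threshold-based identification metrics, as labelled in the plot margins.

    mated_searches: assumed list of (mate_rank, mate_score) pairs, with
        mate_rank = None when the enrolled mate was not returned at all.
    nonmated_searches: assumed list of the top candidate score for each
        non-mated search.
    FNIR(N, R, T): fraction of mated searches whose mate is not among the
        top R candidates with score at or above T.
    FPIR(N, T): fraction of non-mated searches returning any candidate at
        or above T.
    """
    misses = sum(1 for rank, score in mated_searches
                 if rank is None or rank > R or score < T)
    fnir = misses / len(mated_searches)

    false_positives = sum(1 for top_score in nonmated_searches if top_score >= T)
    fpir = false_positives / len(nonmated_searches)
    return fnir, fpir
```

With T = 0 every returned candidate counts, so FNIR reduces to the rank-R miss rate of Figures 66 through 87; sweeping T > 0 traces the FNIR-vs-FPIR curves shown in these figures.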
[Figure 89 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: sensetime_006, cubox_000, nec_3, sensetime_005, sensetime_002, nec_2, rankone_013, kakao_001, firstcreditkz_001, intema_000, paravision_009, visionlabs_011. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 89: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 90 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: xforwardai_001, visionlabs_009, deepglint_001, visionlabs_010, paravision_005, lineclova_002, maxvision_001, griaule_001, paravision_007, rankone_012, s1_002, cogent_006. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 90: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 91 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: ntechlab_009, clearviewai_000, realnetworks_008, ntechlab_011, canon_002, canon_001, vnpt_002, hzailu_001, pangiam_000, dermalog_010, rankone_011, neurotechnology_012. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 91: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 92 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: rankone_010, visionlabs_008, dahua_003, everai_paravision_004, microsoft_6, cyberlink_003, cib_000, vts_003, realnetworks_006, cognitec_006, s1_003, neurotechnology_010. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 92: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 93 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: xforwardai_000, imagus_005, dahua_002, idemia_007, rankone_007, cyberlink_002, yitu_5, yitu_4, kedacom_001, trueface_000, cogent_004, fujitsulab_001. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 93: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 94 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: rankone_009, pixelall_003, ptakuratsatu_000, microsoft_5, visionlabs_6, lookman_005, cognitec_004, pixelall_004, idemia_6, gorilla_008, irex_000, synesis_005. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 94: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 95 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: ntechlab_008, innovatrics_005, microsoft_4, ntechlab_007, microsoft_3, anke_002, idemia_4, imperial_000, maxvision_000, gorilla_007, dilusense_000, vixvizion_009. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 95: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 96 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: yitu_2, yitu_3, everai_3, deepsea_001, lookman_3, veridas_001, microsoft_0, idemia_5, dermalog_008, idemia_3, cogent_2, cogent_3. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 96: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 97 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: visionlabs_5, dermalog_6, neurotechnology_4, incode_004, rankone_5, cognitec_3, isystems_3, cognitec_2, neurotechnology_5, neurotechnology_007, cogent_1, cogent_0. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 97: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 98 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: visionlabs_4, gorilla_005, scanovate_001, ntechlab_6, ntechlab_4, isystems_2, nec_0, dermalog_007, pixelall_002, camvi_4, tech5_001, synesis_003. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 98: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 99 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: ntechlab_3, yitu_0, idemia_0, nec_1, ntechlab_0, anke_0, megvii_0, cognitec_1, 3divi_5, cognitec_0, aware_5, neurotechnology_3. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 99: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 100 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: rankone_2, incode_3, rankone_4, gorilla_2, realnetworks_004, realnetworks_003, rankone_0, realnetworks_2, innovatrics_4, mukh_002, vts_000, siat_2. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 100: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 101 plot area: FNIR vs. FPIR, logarithmic scales, 2018 Mugshots dataset, N = 3068801 enrolled. Panels: noblis_2, ayonix_2, microfocus_5. Curves binned by time lapse from (00,02] to (14,18] years.]

Figure 101: [FRVT-2018 Mugshot Ageing Dataset] Identification miss rates vs. FPIR by time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Miss rates are computed over all searches noted in row 17 of Table 1 and binned by number of years between search and initial enrollment. FPIR is computed from the same FRVT 2018 non-mates noted in row 3 of Table 1 with N = 3 000 000.
[Figure 102 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 1. Panels: cloudwalk_hr_000, cloudwalk_mt_000, cloudwalk_mt_001, nec_005, sensetime_007, sensetime_008. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 102: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
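Figures 102 onward summarize the native (uncalibrated, algorithm-specific) mated scores by elapsed time rather than error rates, overlaying fixed-FPIR thresholds and the rank-2 median nonmate score for context. A minimal sketch of the binning and per-bin summary is given below; the `searches` layout and the summary statistics chosen are illustrative assumptions, not the report's plotting code.

```python
import statistics
from collections import defaultdict

# Same elapsed-time bins (years) as the preceding figures.
BINS = [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10), (10, 12), (12, 14), (14, 18)]

def mated_score_summary(searches):
    """Group native mated scores by years since enrollment.

    `searches` is an assumed list of (mate_score, lapse_years) pairs.  Scores
    are on each algorithm's native scale, so summaries are only comparable
    within a single algorithm, never across algorithms.
    """
    groups = defaultdict(list)
    for score, years in searches:
        for lo, hi in BINS:
            if lo < years <= hi:
                groups[f"({lo:02d},{hi:02d}]"].append(score)
                break
    return {b: {"median": statistics.median(v),
                "q1": statistics.quantiles(v, n=4)[0],
                "q3": statistics.quantiles(v, n=4)[2]}
            for b, v in groups.items() if len(v) >= 4}
```

A downward drift of the per-bin medians toward the plotted FPIR thresholds as the time lapse grows is what would translate into the rising FNIR seen in the earlier figures.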
[Figure 103 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 2. Panels: idemia_009, intema_000, kakao_001, nec_006, sensetime_004, sensetime_006. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 103: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 104 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 3. Panels: cubox_000, idemia_008, lineclova_002, paravision_007, paravision_009, sensetime_005. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 104: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 105 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 4. Panels: maxvision_001, nec_004, neurotechnology_012, rankone_013, sensetime_003, visionlabs_011. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 105: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 106 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 5. Panels: canon_001, cogent_006, dermalog_010, griaule_001, nec_2, s1_002. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 106: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 107 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 6. Panels: pangiam_000, realnetworks_008, visionlabs_009, visionlabs_010, vnpt_002, xforwardai_001. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 107: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 108 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 7. Panels: clearviewai_000, deepglint_001, hzailu_001, neurotechnology_010, rankone_012, vts_003. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 108: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 109 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 8. Panels: canon_002, cib_000, firstcreditkz_001, paravision_005, s1_003, sensetime_002. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 109: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
[Figure 110 plot area: native mated score distributions vs. time lapse between search and initial encounter enrollment (years), 2018 Mugshots dataset, Tier 9. Panels: dahua_003, everai_paravision_004, nec_3, ntechlab_011, rankone_011, xforwardai_000. Overlays: threshold values at FPIR = 0.001, 0.003, 0.010, 0.030; rank-2 median nonmate score; FNIR at rank 1.]

Figure 110: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1 binned by number of years between search and initial enrollment.
This publication is available free of charge from: https://doi.org/10.6028/NIST.IR.8271

[Figure 111 panels: cognitec_006, cyberlink_003, gorilla_008, realnetworks_006, trueface_000, visionlabs_008 — Dataset: 2018 Mugshots, Tier 10. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 111: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 112 panels: cyberlink_002, dahua_002, fujitsulab_001, imagus_005, irex_000, ntechlab_009 — Dataset: 2018 Mugshots, Tier 11. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 112: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 113 panels: cogent_004, gorilla_007, pixelall_003, pixelall_004, rankone_010, visionlabs_6 — Dataset: 2018 Mugshots, Tier 12. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 113: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 114 panels: maxvision_000, microsoft_3, microsoft_4, microsoft_6, pangiam_001, synesis_005 — Dataset: 2018 Mugshots, Tier 13. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 114: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 115 panels: dilusense_000, imperial_000, innovatrics_005, kedacom_001, microsoft_5, ptakuratsatu_000 — Dataset: 2018 Mugshots, Tier 14. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 115: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 116 panels: anke_002, everai_3, lookman_005, visionlabs_5, yitu_4, yitu_5 — Dataset: 2018 Mugshots, Tier 15. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 116: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 117 panels: cognitec_004, dermalog_008, idemia_007, rankone_009, veridas_001, vixvizion_009 — Dataset: 2018 Mugshots, Tier 16. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 117: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 118 panels: cogent_2, everai_2, lookman_3, neurotechnology_007, rankone_007, visionlabs_4 — Dataset: 2018 Mugshots, Tier 17. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 118: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 119 panels: cogent_3, incode_004, isystems_3, neurotechnology_4, neurotechnology_5, yitu_2 — Dataset: 2018 Mugshots, Tier 18. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 119: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 120 panels: dermalog_6, ntechlab_007, ntechlab_008, pixelall_002, synesis_003, yitu_3 — Dataset: 2018 Mugshots, Tier 19. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 120: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 121 panels: cognitec_2, cognitec_3, gorilla_005, isystems_2, microsoft_0, toshiba_1 — Dataset: 2018 Mugshots, Tier 20. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 121: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 122 panels: deepsea_001, idemia_3, idemia_4, idemia_6, scanovate_001, tech5_001 — Dataset: 2018 Mugshots, Tier 21. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 122: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 123 panels: cogent_0, cogent_1, dermalog_007, idemia_5, rankone_5, yitu_0 — Dataset: 2018 Mugshots, Tier 22. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 123: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 124 panels: cognitec_1, idemia_0, intellivision_002, ntechlab_4, ntechlab_6, t4isb_000 — Dataset: 2018 Mugshots, Tier 23. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 124: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 125 panels: 3divi_5, anke_0, incode_3, megvii_0, neurotechnology_3, ntechlab_3 — Dataset: 2018 Mugshots, Tier 24. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 125: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 126 panels: cognitec_0, gorilla_2, nec_0, nec_1, ntechlab_0, rankone_2 — Dataset: 2018 Mugshots, Tier 25. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 126: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 127 panels: aware_5, camvi_4, imagus_008, mukh_002, rankone_0, rankone_4 — Dataset: 2018 Mugshots, Tier 26. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 127: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 128 panels: aware_6, camvi_5, innovatrics_4, realnetworks_003, realnetworks_004, realnetworks_2 — Dataset: 2018 Mugshots, Tier 27. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 128: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

[Figure 129 panels: ayonix_2, microfocus_5, noblis_2, siat_2, vts_000 — Dataset: 2018 Mugshots, Tier 28. x-axis: time lapse between search and initial encounter enrollment (years); y-axis: mated score.]

Figure 129: [FRVT-2018 Mugshot Ageing Dataset] Native mate scores vs. time-elapsed. The oldest image of each individual is enrolled. Thereafter, all more recent images are searched. Mated score distributions are computed over all searches noted in row 17 of Table 1, binned by number of years between search and initial enrollment.

Appendix C Effect of enrolling multiple images



[Figure 130 panels: microsoft_4, microsoft_3, yitu_2, yitu_3, visionlabs_5, microsoft_0, microsoft_1, visionlabs_4, visionlabs_3, everai_1, microsoft_2, ntechlab_4, megvii_0 — Dataset: 2018 Mugshot, N = 1600000, Tier 1. x-axis: false positive identification rate, FPIR(T); y-axis: false negative identification rate, FNIR(T); operating points at FPIR = 0.0003, 0.0010, 0.0030, 0.0100, 0.0300, 0.1000, 0.3000; curves keyed by nim, the number of images enrolled per identity (1, 2, 3, 4, 5-6, 7+).]

Figure 130: [FRVT-2018 Mugshot Dataset] Effect of enrolling multiple images for each identity. The plot shows identification miss rates vs. false positive rates at seven operating thresholds. The enrolled population size is fixed. The images are enrolled with lifetime-consolidation - see section 2.3.
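As a rough sketch of the bookkeeping behind these plots, lifetime consolidation groups every prior image of a subject into a single enrollment template, and FNIR is then tabulated by nim, the number of images in the mate's template. The sketch below is not the evaluation's implementation: it assumes a simplified data layout and, for brevity, applies only the threshold condition (the rank-R condition is omitted).

```python
from collections import defaultdict

def consolidate(gallery):
    """gallery: iterable of (subject_id, image) pairs.
    Lifetime consolidation: one multi-image template per subject."""
    templates = defaultdict(list)
    for subject_id, image in gallery:
        templates[subject_id].append(image)
    return templates

def fnir_by_nim(mated_searches, templates, threshold):
    """mated_searches: iterable of (subject_id, mate_score) pairs, where
    mate_score is the score of the correct enrolled identity (hypothetical layout)."""
    misses, totals = defaultdict(int), defaultdict(int)
    for subject_id, mate_score in mated_searches:
        nim = min(len(templates[subject_id]), 7)   # 7 stands in for "7 or more"
        totals[nim] += 1
        if mate_score < threshold:                 # mate not reported at or above T
            misses[nim] += 1
    return {nim: misses[nim] / totals[nim] for nim in totals}
```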

[Figure 131 panels: idemia_4, idemia_3, idemia_1, isystems_2, ntechlab_3, nec_0, ntechlab_0, tongyitrans_0, cognitec_1, idemia_0, ntechlab_1, tevian_4, cogent_1, vocord_3 — Dataset: 2018 Mugshot, N = 1600000, Tier 2. x-axis: false positive identification rate, FPIR(T); y-axis: false negative identification rate, FNIR(T); curves keyed by nim, the number of images enrolled per identity (1, 2, 3, 4, 5-6, 7+).]

Figure 131: [FRVT-2018 Mugshot Dataset] Effect of enrolling multiple images for each identity. The plot shows identification miss rates vs. false positive rates at seven operating thresholds. The enrolled population size is fixed. The images are enrolled with lifetime-consolidation - see section 2.3.

[Figure 132 panels: yitu_1, yitu_0, isystems_1, isystems_0, cogent_0, rankone_2, rankone_3, vocord_4, tevian_3, rankone_1, cognitec_0, 3divi_4, incode_1 — Dataset: 2018 Mugshot, N = 1600000, Tier 3. x-axis: false positive identification rate, FPIR(T); y-axis: false negative identification rate, FNIR(T); curves keyed by nim, the number of images enrolled per identity (1, 2, 3, 4, 5-6, 7+).]

Figure 132: [FRVT-2018 Mugshot Dataset] Effect of enrolling multiple images for each identity. The plot shows identification miss rates vs. false positive rates at seven operating thresholds. The enrolled population size is fixed. The images are enrolled with lifetime-consolidation - see section 2.3.

[Figure 133 panels: everai_0, alchera_0, neurotechnology_4, idemia_2, nec_1, aware_3, tongyitrans_1, hik_4, rankone_0, hik_3, aware_4, incode_0, yisheng_1, tiger_0 — Dataset: 2018 Mugshot, N = 1600000, Tier 4. x-axis: false positive identification rate, FPIR(T); y-axis: false negative identification rate, FNIR(T); curves keyed by nim, the number of images enrolled per identity (1, 2, 3, 4, 5-6, 7+).]

Figure 133: [FRVT-2018 Mugshot Dataset] Effect of enrolling multiple images for each identity. The plot shows identification miss rates vs. false positive rates at seven operating thresholds. The enrolled population size is fixed. The images are enrolled with lifetime-consolidation - see section 2.3.

[Figure 134 panels: hik_2, realnetworks_1, neurotechnology_3, glory_1, gorilla_1, eyedea_3, innovatrics_3, 3divi_3, vigilantsolutions_3, dermalog_4, dermalog_3, shaman_3, vigilantsolutions_4 — Dataset: 2018 Mugshot, N = 1600000, Tier 5. x-axis: false positive identification rate, FPIR(T); y-axis: false negative identification rate, FNIR(T); curves keyed by nim, the number of images enrolled per identity (1, 2, 3, 4, 5-6, 7+).]

Figure 134: [FRVT-2018 Mugshot Dataset] Effect of enrolling multiple images for each identity. The plot shows identification miss rates vs. false positive rates at seven operating thresholds. The enrolled population size is fixed. The images are enrolled with lifetime-consolidation - see section 2.3.

[Figure 135 panels: realnetworks_0, glory_0, innovatrics_2, siat_2, siat_1, shaman_4, imagus_2, imagus_3, vd_0, microfocus_4, ayonix_0, microfocus_3, alchera_1 — Dataset: 2018 Mugshot, N = 1600000, Tier 6. x-axis: false positive identification rate, FPIR(T); y-axis: false negative identification rate, FNIR(T); curves keyed by nim, the number of images enrolled per identity (1, 2, 3, 4, 5-6, 7+).]

Figure 135: [FRVT-2018 Mugshot Dataset] Effect of enrolling multiple images for each identity. The plot shows identification miss rates vs. false positive rates at seven operating thresholds. The enrolled population size is fixed. The images are enrolled with lifetime-consolidation - see section 2.3.

Appendix D Accuracy with poor quality webcam images



[Figure 136: per-algorithm FNIR values labelled at the plotted ranks. x-axis: Rank (1, 3, 10, 20); y-axis: False negative identification rate (FNIR).]

Figure 136: [Webcam Dataset] Identification miss rates vs. rank. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.
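The rank-based miss rates in Figures 136-139 correspond to investigation-mode operation with threshold T = 0: a mated search counts as a miss at rank R if the correct mate is absent from the top R candidates. A minimal sketch follows, assuming a simplified candidate-list representation rather than the evaluation's actual API.

```python
def fnir_at_rank(candidate_lists, mate_ids, rank):
    """FNIR at a given rank with threshold T = 0 (investigation mode).

    candidate_lists: one list of candidate IDs per mated search, best match first.
    mate_ids: the enrolled mate's ID for each search (hypothetical data layout).
    """
    misses = sum(1 for cands, mate in zip(candidate_lists, mate_ids)
                 if mate not in cands[:rank])
    return misses / len(candidate_lists)

# The rank values plotted in Figures 136-139:
# [fnir_at_rank(cands, mates, r) for r in (1, 3, 10, 20)]
```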

[Figure 137: per-algorithm FNIR values labelled at the plotted ranks. x-axis: Rank (1, 3, 10, 20); y-axis: False negative identification rate (FNIR).]

Figure 137: [Webcam Dataset] Identification miss rates vs. rank. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.

[Figure 138: per-algorithm FNIR values labelled at the plotted ranks. x-axis: Rank (1, 3, 10, 20); y-axis: False negative identification rate (FNIR).]

Figure 138: [Webcam Dataset] Identification miss rates vs. rank. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.

[Figure 139: per-algorithm FNIR values labelled at the plotted ranks. x-axis: Rank (1, 3, 10, 20); y-axis: False negative identification rate (FNIR).]

Figure 139: [Webcam Dataset] Identification miss rates vs. rank. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.

[Figure 140: per-algorithm FNIR values labelled at the plotted operating points. x-axis: False positive identification rate, FPIR(T); y-axis: False negative identification rate, FNIR(T).]

Figure 140: [Webcam Dataset] Identification miss rates vs. false positive rates. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.
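The threshold-based curves in Figures 140-142 trade FPIR against FNIR as the decision threshold T is swept. A minimal sketch under assumed inputs: the highest candidate score from each nonmated search, and the correct mate's score from each mated search (the rank-R condition is omitted for brevity).

```python
import numpy as np

def det_points(nonmate_top_scores, mate_scores, thresholds):
    """Return (FPIR, FNIR) pairs for each threshold T in thresholds."""
    nonmate_top_scores = np.asarray(nonmate_top_scores, dtype=float)
    mate_scores = np.asarray(mate_scores, dtype=float)
    points = []
    for t in thresholds:
        fpir = float(np.mean(nonmate_top_scores >= t))  # any candidate at or above T
        fnir = float(np.mean(mate_scores < t))          # correct mate falls below T
        points.append((fpir, fnir))
    return points
```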

[Figure 141: per-algorithm FNIR values labelled at the plotted operating points. x-axis: False positive identification rate, FPIR(T); y-axis: False negative identification rate, FNIR(T).]

Figure 141: [Webcam Dataset] Identification miss rates vs. false positive rates. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.

[Figure 142: per-algorithm FNIR values labelled at the plotted operating points. x-axis: False positive identification rate, FPIR(T); y-axis: False negative identification rate, FNIR(T).]

Figure 142: [Webcam Dataset] Identification miss rates vs. false positive rates. The results apply to cross-domain recognition in which webcams are searched against enrolled mugshots. The FNIR values are higher than those for mugshot-mugshot identification due to low image resolution, lighting and less constrained subject pose in webcam images - see Figure 6.

Appendix E Accuracy for profile-view to frontal recognition

Figures 143 - 145 give accuracy results for searching 100 000 mated and 100 000 nonmated profile-view images against the same FRVT 2018 frontal enrollment dataset, N = 1 600 000, used in the main mugshot trials. This experiment corresponds to row 13 of Table 1. An example of a profile-view image is given in Figure 7.
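For reference, the two error rates reported in these figures can be written as proportions over the nonmated and mated search sets. This restatement follows the notation key used throughout the report (N enrolled subjects, R candidates examined, threshold T); the authoritative definitions are those in the main body of the report.

\[ \mathrm{FPIR}(N,T) \;=\; \Pr\{\text{a nonmated search returns one or more candidates with score} \ge T\} \]
\[ \mathrm{FNIR}(N,R,T) \;=\; \Pr\{\text{the mate of a mated search is not among the top } R \text{ candidates with score} \ge T\} \]

With T = 0 only the rank condition applies (investigation mode); with T > 0 the threshold also applies (identification mode).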

[Figure 143: per-algorithm FNIR values for profile-view searches. x-axis: Rank (1, 3, 10, 30, 50); y-axis: False negative identification rate (FNIR).]

Figure 143: [Mugshot and profile-view dataset] Rank-based accuracy. For some of the more accurate Phase 3 algorithms the figure plots error tradeoff characteristics for frontal and profile-view searches into an enrolled set of N = 1 600 000 frontal images. Note that some algorithms fail on profile-view images with FNIR → 1 - this evaluation did not ask developers to provide profile-view capability. Some algorithms, on the other hand, give FNIR approaching that for frontal-view searches using c. 2010 algorithms. The best result is that 91% of profile-view searches yield the correct mate at rank 1, and better than 94% in the top-50 candidates.

11:12:06
2022/12/18

[Figure 144 plot: false negative identification rate, FNIR(T), versus false positive identification rate, FPIR(T), one trace per algorithm; per-algorithm FNIR values at FPIR = 0.002 annotated. Dataset: 2018 Mugshot−Profile, N = 1 600 000.]

Figure 144: [Mugshot and profile-view dataset] Threshold-based accuracy. For some of the more accurate Phase 3 algorithms the figure plots error tradeoff characteristics for frontal and profile-view searches into an enrolled set of N = 1 600 000 frontal images. Note that some algorithms fail on profile-view images with FNIR → 1; this evaluation did not ask developers to provide profile-view capability. Some algorithms, on the other hand, give FNIR approaching that for frontal-view searches using c. 2010 algorithms.
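The two rates traded off in this figure can be estimated from candidate lists once a threshold T is fixed. The sketch below assumes the conventional reading of these quantities, i.e. FNIR over mated searches with a rank limit R and threshold T, and FPIR over nonmated searches at threshold T; it is illustrative only, and the scores, rank limit, and threshold are hypothetical.

    # Minimal sketch (not the NIST test harness): threshold-based FNIR and FPIR.
    # mated holds (rank of the enrolled mate, its score) per mated search, with
    # (None, None) when the mate was not returned; nonmated_top_scores holds the
    # highest candidate score returned for each nonmated search.

    def fnir(mated, R, T):
        """Fraction of mated searches whose mate is absent from the top R at score >= T."""
        misses = sum(1 for rank, score in mated
                     if rank is None or rank > R or score < T)
        return misses / len(mated)

    def fpir(nonmated_top_scores, T):
        """Fraction of nonmated searches returning any candidate at score >= T."""
        return sum(1 for s in nonmated_top_scores if s >= T) / len(nonmated_top_scores)

    # Hypothetical scores on an arbitrary similarity scale.
    mated = [(1, 0.91), (2, 0.77), (None, None), (1, 0.45)]
    nonmated_top_scores = [0.30, 0.62, 0.12, 0.71]
    print("FNIR:", fnir(mated, R=20, T=0.60))
    print("FPIR:", fpir(nonmated_top_scores, T=0.60))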


[Figure 145 plot: FNIR(T) versus FPIR(T), one panel per algorithm (canon_001 through xforwardai_002), with separate curves for Frontal−Frontal and Frontal−Profile searches. Dataset: 2018 Mugshot−Profile, N = 1 600 000.]

Figure 145: [Mugshot and profile-view dataset] Speed-accuracy tradeoff. For some of the more accurate Phase 3 algorithms the figure plots error tradeoff characteristics for frontal and profile-view searches into an enrolled set of N = 1 600 000 frontal images. Some algorithms fail on profile-view images with FNIR → 1; this evaluation did not ask developers to provide profile-view capability. Some algorithms, on the other hand, give FNIR approaching that for frontal-view searches using c. 2010 algorithms. Blue lines connect points of equal threshold, from which it is evident that some algorithms would give markedly higher false positive outcomes if profile-view images were searched in a system configured for frontal searches. This would be a vulnerability in an access control system.

Appendix F Search duration

As in prior tests, this section documents search speeds spanning three orders of magnitude. In applications where search volumes are high enough, this will have implications for hardware requirements, especially for large N or when search duration is appreciably larger than the time it takes to prepare a template from the search image(s). Further, given very large (and growing) operational databases, the scalability of algorithms is important. It has been reported previously [8] that search duration can scale sublinearly with enrolled population size N, and there has been considerable recent research on indexing, on exact [13] and approximate [1, 13] nearest neighbor search, and on fast search [14, 16].

Figure 146 charts the search duration measurements presented earlier in Tables 2–4.

. Most algorithms scale linearly. For those in that category, there is a wide range in speed, with search durations ranging from 82 milliseconds for a 12 million gallery (for NEC-3) to more than 40 seconds (for Yitu-3, Toshiba-2), and even higher for less accurate algorithms.

. Some developers (Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs) provide algorithms whose template search durations grow approximately logarithmically, i.e. T(N) ≈ a log N, with the constant a varying between implementations. In the figures this model is fit using the points T(1) = 0 and T(640 000); see the sketch after this list. This very sublinear behaviour affords extremely fast search times in very large galleries. One caveat for the sublinear algorithms is that their fast-search data structures can require considerable computation time, on the order of hours, for N in the millions, and this construction time scales mildly super-linearly, i.e. O(N^b) with b > 1. There are exceptions: the Camvi algorithms take minutes, and Innovatrics' scale sublinearly.
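As a concrete illustration of the two growth models drawn in Figures 146 to 153, the sketch below anchors a linear model T(N) = a N and a logarithmic model T(N) = a log N at a measured duration for N = 640 000 (the logarithmic fit also passes through T(1) = 0, as described above) and compares their predictions with further measurements. The duration values are hypothetical; this is not the code used to produce the figures.

    import math

    # Hypothetical (enrolled population size N, search duration in microseconds) pairs.
    measurements = [(640_000, 5.0e4), (1_600_000, 6.2e4), (3_000_000, 7.1e4),
                    (6_000_000, 8.0e4), (12_000_000, 9.0e4)]

    # Linear model T(N) = a*N, anchored at the N = 640 000 measurement.
    n0, t0 = measurements[0]
    a_lin = t0 / n0

    # Logarithmic model T(N) = a*log(N); T(1) = 0 holds automatically, and the
    # N = 640 000 measurement fixes the constant a.
    a_log = t0 / math.log(n0)

    for n, t in measurements:
        print(f"N={n:>10,d}  measured={t:9.0f} us  "
              f"linear={a_lin * n:9.0f} us  log={a_log * math.log(n):9.0f} us")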


[Figure 146 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (digidata_000 through idemia_008). Dataset: Mugshots; measured points with fitted a log N and a N models.]

Figure 146: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 147 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (idemia_5 through dahua_0). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 147: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 148 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (dahua_1 through dermalog_1). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 148: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 149 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (dermalog_0 through acer_001). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 149: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 150 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (cogent_0 through gorilla_006). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 150: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 151 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (qnap_002 through cognitec_2). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 151: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 152 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (cognitec_1 through neurotechnology_1). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 152: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

[Figure 153 plot: search duration in microseconds versus enrolled population size N, one panel per algorithm (neurotechnology_2 through quantasoft_1). Dataset: Mugshots; measured points with fitted a log N and a N models.]
Figure 153: [Mugshot Dataset] Search duration vs. enrolled population size. In red are the actual point durations measured on a single c. 2016 core. The blue shows linear growth from N = 640 000. The green line shows logarithmic growth from that point to N = 1 600 000. Note the sublinear growth from algorithms from Camvi, Dermalog, EverAI, Innovatrics, and Visionlabs. The tiger_1 algorithm is also sublinear, but inaccurate and inoperable at N ≥ 3 000 000. This capability sometimes comes at the additional expense of converting a linear gallery data structure into whatever fast-search data structure is used. Note that search times are sometimes dominated by the template generation times shown in Table 26.

Appendix G Gallery Insertion Timing



[Figure 154 plot: for each of aware_5, camvi_4, dermalog_5, dermalog_6, idemia_5, innovatrics_4, rankone_5, and visionlabs_7, gallery insertion time (µs) and, beneath it, search time (µs) versus gallery size from 640K to 12M. Dataset: Mugshots; measured points with fitted a log N and a N models.]

Figure 154: [Mugshot Dataset] Gallery insertion duration vs. enrolled population size. This chart plots the time it takes to insert a single template into a finalized gallery, illustrated over increasing gallery sizes. For reference, search times on finalized galleries of corresponding sizes are plotted directly underneath. Gallery insertion time plots were generated for algorithms that 1) successfully implemented gallery insertion with no errors and 2) were run on galleries with N up to 12 000 000. Generally, only the more accurate algorithms were run on galleries with N up to 12 000 000.
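The quantity plotted here reduces to timing two operations against the same finalized gallery: inserting one new template, and running one search. A minimal, self-contained timing harness is sketched below; the gallery representation and the insert_template and search functions are hypothetical stand-ins for the vendor SDK operations exercised by the evaluation, not the FRVT test driver itself.

    import time

    def time_call(fn, *args, repeats=100):
        """Return the mean wall-clock duration of fn(*args) in microseconds."""
        start = time.perf_counter()
        for _ in range(repeats):
            fn(*args)
        return (time.perf_counter() - start) / repeats * 1e6

    # Toy "gallery": a list of 4-dimensional feature vectors. Insertion appends a
    # template; search is a linear scan for the closest template. Both are purely
    # illustrative stand-ins for the corresponding SDK calls.
    gallery = [[float(i), 0.0, 0.0, 0.0] for i in range(100_000)]
    probe = [0.5, 0.0, 0.0, 0.0]

    def insert_template(g, t):
        g.append(list(t))

    def search(g, t):
        return min(range(len(g)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(g[i], t)))

    print("insertion:", time_call(insert_template, gallery, probe, repeats=1000), "microseconds")
    print("search   :", time_call(search, gallery, probe, repeats=3), "microseconds")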

[Figure 155 plot: for each of ayonix_2, everai_3, incode_3, isystems_3, microfocus_5, nec_2, ntechlab_6, tevian_5, and vocord_5, gallery insertion time (µs) and, beneath it, search time (µs) versus gallery size from 640K to 12M. Dataset: Mugshots; measured points with fitted a log N and a N models.]

Figure 155: [Mugshot Dataset] Gallery insertion duration vs. enrolled population size. This chart plots the time it takes to insert a single template into a finalized gallery, illustrated over increasing gallery sizes. For reference, search times on finalized galleries of corresponding sizes are plotted directly underneath. Gallery insertion time plots were generated for algorithms that 1) successfully implemented gallery insertion with no errors and 2) were run on galleries with N up to 12 000 000. Generally, only the more accurate algorithms were run on galleries with N up to 12 000 000.

[Figure 156 plot: for each of 3divi_5, anke_0, hik_5, lookman_3, megvii_1, neurotechnology_5, sensetime_1, shaman_7, and synesis_3, gallery insertion time (µs) and, beneath it, search time (µs) versus gallery size from 640K to 12M. Dataset: Mugshots; measured points with fitted a log N and a N models.]

Figure 156: [Mugshot Dataset] Gallery insertion duration vs. enrolled population size. This chart plots the time it takes to insert a single template into a finalized gallery, illustrated over increasing gallery sizes. For reference, search times on finalized galleries of corresponding sizes are plotted directly underneath. Gallery insertion time plots were generated for algorithms that 1) successfully implemented gallery insertion with no errors and 2) were run on galleries with N up to 12 000 000. Generally, only the more accurate algorithms were run on galleries with N up to 12 000 000.

[Figure 157 plot: for each of alchera_3, cognitec_2, microsoft_5, noblis_2, realnetworks_2, remarkai_2, tiger_2, toshiba_0, and vd_1, gallery insertion time (µs) and, beneath it, search time (µs) versus gallery size from 640K to 12M. Dataset: Mugshots; measured points with fitted a log N and a N models.]

Figure 157: [Mugshot Dataset] Gallery insertion duration vs. enrolled population size. This chart plots the time it takes to insert a single template into a finalized gallery, illustrated over increasing gallery sizes. For reference, search times on finalized galleries of corresponding sizes are plotted directly underneath. Gallery insertion time plots were generated for algorithms that 1) successfully implemented gallery insertion with no errors and 2) were run on galleries with N up to 12 000 000. Generally, only the more accurate algorithms were run on galleries with N up to 12 000 000.

References

[1] Artem Babenko and Victor Lempitsky. Efficient indexing of billion-scale datasets of deep descriptors. In The IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

[2] L. Best-Rowden and A. K. Jain. Longitudinal study of automatic face recognition. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 40(1):148–162, Jan 2018.

[3] Blumstein, Cohen, Roth, and Visher, editors. Random parameter stochastic models of criminal careers. National
Academy of Sciences Press, 1986.

[4] Thomas P. Bonczar and Lauren E. Glaze. Probation and parole in the United States, 2007, statistical tables. Technical report, Bureau of Justice Statistics, December 2008.

[5] D. White, R. I. Kemp, R. Jenkins, M. Matheson, and A. M. Burton. Passport officers' errors in face matching. PLoS ONE, 9(8):e103510, 2014. doi:10.1371/journal.pone.0103510.

[6] P. Grother, G. W. Quinn, and P. J. Phillips. Evaluation of 2d still-image face recognition algorithms. NIST Intera-
gency Report 7709, National Institute of Standards and Technology, August 2010. http://face.nist.gov/mbe as MBE2010
FRVT2010.

[7] P. J. Grother, R. J. Micheals, and P. J. Phillips. Performance metrics for the frvt 2002 evaluation. In Proceedings of
Audio and Video Based Person Authentication Conference (AVBPA), June 2003.

[8] Patrick Grother and Mei Ngan. Interagency report 8009, performance of face identification algorithms. Face
Recognition Vendor Test (FRVT), May 2014.

[9] Patrick Grother, George Quinn, and Mei Ngan. Face in video evaluation (five) face recognition of non-
cooperative subjects. Interagency Report 8173, National Institute of Standards and Technology, March 2017.
https://doi.org/10.6028/NIST.IR.8173.

[10] Patrick Grother, George W. Quinn, and Mei Ngan. Face recognition vendor test - still face image and video
concept, evaluation plan and api. Technical report, National Institute of Standards and Technology, July 2013.
http://biometrics.nist.gov/cs links/face/frvt/frvt2012/NIST FRVT2012 api Aug15.pdf.

[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), pages 770–778, June 2016.

[12] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for
studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts,
Amherst, October 2007.

[13] Masato Ishii, Hitoshi Imaoka, and Atsushi Sato. Fast k-nearest neighbor search for face identification using bounds
of residual score. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pages
194–199, Los Alamitos, CA, USA, May 2017. IEEE Computer Society.

[14] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. CoRR, abs/1702.08734,
2017.


[15] Ira Kemelmacher-Shlizerman, Steven M. Seitz, Daniel Miller, and Evan Brossard. The megaface benchmark: 1
million faces for recognition at scale. CoRR, abs/1512.00596, 2015.

[16] Yury A. Malkov and D. A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical
navigable small world graphs. CoRR, abs/1603.09320, 2016.

[17] Joyce A. Martin, Brady E. Hamilton, Michelle J.K. Osterman, Anne K. Driscoll, and Patrick Drake. National
vital statistics reports. Technical Report 8, Centers for Disease Control and Prevention, National Center for Health
Statistics, National Vital Statistics System, Division of Vital Statistics, November 2018.

[18] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. In British Machine Vision Conference, 2015.

[19] P. Jonathon Phillips, Amy N. Yates, Ying Hu, Carina A. Hahn, Eilidh Noyes, Kelsey Jackson, Jacqueline G. Cavazos, Géraldine Jeckeln, Rajeev Ranjan, Swami Sankaranarayanan, Jun-Cheng Chen, Carlos D. Castillo, Rama Chellappa, David White, and Alice J. O'Toole. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proceedings of the National Academy of Sciences, 115(24):6171–6176, 2018.

[20] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and
clustering. CoRR, abs/1503.03832, 2015.

[21] Jeroen Smits and Christiaan Monden. Twinning across the developing world. PLOS ONE, 6(9):1–5, 09 2011.

[22] Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level per-
formance in face verification. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition,
CVPR ’14, pages 1701–1708, Washington, DC, USA, 2014. IEEE Computer Society.

[23] A. Towler, R. I. Kemp, and D White. Unfamiliar face matching systems in applied settings. Nova Science, 2017.

[24] Working Group 3. Ed. M. Werner. ISO/IEC 19794-5 Information Technology - Biometric Data Interchange Formats - Part
5: Face image data. JTC1 :: SC37, 2 edition, 2011. http://webstore.ansi.org.

[25] David White, James D. Dunn, Alexandra C. Schmid, and Richard I. Kemp. Error rates in users of automatic face
recognition software. PLoS ONE, 10:1–14, October 2015.

[26] Bradford Wing and R. Michael McCabe. Special publication 500-271: American national standard for information
systems data format for the interchange of fingerprint, facial, and other biometric information part 1. Technical
report, NIST, September 2015. ANSI/NIST ITL 1-2015.

[27] Andreas Wolf. Portrait quality - (reference facial images for mrtd). Technical report, ICAO, April 2018.

[28] D. Yadav, N. Kohli, P. Pandey, R. Singh, M. Vatsa, and A. Noore. Effect of illicit drug abuse on face recognition.
In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–7, Los Alamitos, CA, USA, March
2016. IEEE Computer Society.
