In Biometrics, Which Error Rate Matters?


Biometric error rates are like wine.  The question, “What is a good wine?” cannot be answered out of context.  Not for me, at least.  I need more information, like, “What are you eating with it?”  I might not suggest the same wine for a rack of lamb as I would for a filet of sole.

Biometrics’ wine question is, “What is the most important error rate?”  Again, it depends on what you’re doing.  Three error rates are commonly discussed in biometrics:  the false accept rate, the false reject rate, and the equal error rate (or crossover error rate) – the point where a system’s false accept and false reject rates are equal.

Vendors can tune their false accept and false reject rates:  for example, by adjusting the size of their biometric templates or by adjusting the confidence threshold that determines a match.  Biometric verification is a statistical process, not a yes/no comparison, so a more precise comparison produces a greater degree of confidence in the verification.  You might think that a higher confidence threshold is always better, but that is not the case.
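To make the tradeoff concrete, here is a minimal sketch of threshold tuning.  The score distributions are made up for illustration (no real biometric system is modeled): genuine attempts produce high similarity scores, impostor attempts produce low ones, and the two distributions overlap, which is exactly why matching is statistical.

```python
import random

random.seed(42)

# Hypothetical similarity scores in [0, 1].  Genuine attempts tend to score
# high, impostor attempts tend to score low, but the distributions overlap.
genuine = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]

def error_rates(threshold):
    """FAR = fraction of impostors accepted; FRR = fraction of genuine users rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# Raising the threshold lowers the false accept rate but raises the
# false reject rate -- there is no free lunch.
for t in (0.50, 0.60, 0.70):
    far, frr = error_rates(t)
    print(f"threshold={t:.2f}  FAR={far:.3f}  FRR={frr:.3f}")
```

Sweeping the threshold like this traces out the two curves in the figure below: one rate falls exactly as the other rises.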

Like almost everything in biometrics, error rate selection is governed by use cases.  For starters, we can dismiss the equal error rate (EER).  In all my research, I have yet to encounter a use case where having an equal probability of false accepts and false rejects was optimal.  EER is an interesting academic construct, but it rarely matters in practice.
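For the curious, the EER is easy to locate numerically: sweep the match threshold and find where the two error rates cross.  This sketch uses the same kind of made-up, illustrative score distributions as before, not any vendor's real data.

```python
import random

random.seed(0)

# Illustrative (synthetic) similarity scores for genuine and impostor attempts.
genuine = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]

def rates(t):
    far = sum(s >= t for s in impostor) / len(impostor)
    frr = sum(s < t for s in genuine) / len(genuine)
    return far, frr

# The EER threshold is wherever |FAR - FRR| is smallest.
eer_t = min((t / 100 for t in range(101)), key=lambda t: abs(rates(t)[0] - rates(t)[1]))
far, frr = rates(eer_t)
print(f"EER threshold ~ {eer_t:.2f}  FAR={far:.3f}  FRR={frr:.3f}")
```

The crossover is a single point on the tradeoff curve; nothing about a given deployment says that point is the right one to operate at.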

That leaves us with choosing the right false accept or false reject rate, and with choosing the right wine for our rack of lamb.  A greater confidence interval lowers the probability of a false positive:  that someone will be accepted who should not be accepted.  This is highly desirable when authenticating a remote wire transfer of $10 million.  It is the rack of lamb’s Châteauneuf-du-Pape.

A low probability of false accepts comes at a price:  the same high confidence level that prevents impostors from getting in also increases the probability of false rejects – check out the red vertical line in the figure below.  For a convenience use case like smartphone authentication, too many false rejects can spell disaster, which also spells uninstall.  It’s like pairing our red wine with filet of sole.

Figure: Crossover Error Rate Compared to False Accept and False Reject Rates

(Source: ISC2, International Information Systems Security Certification Consortium)

Instead, convenience use cases demand a reduction of false rejects – the Montrachet for our filet of sole.  Convenience use cases are typically consumer-facing, with only a small fraction of the population to worry about.  For example, the fingerprint authentication on my smartphone realistically need only differentiate me from the rest of my family.  Very few people will ever attempt to use my phone, so a low confidence threshold is enough.  If I encounter a high number of false rejects due to a higher-than-necessary confidence threshold, I will get frustrated and revert to password authentication.

Companies deploying biometrics must think about which error rates make sense for their use case.  Higher confidence is not always better.  Pick the right error rates for the right use case, and it’s just like picking the right wine for your meal.  Everything works better.

By the way, there is another tremendously important error rate:  failure to enroll.  But that one deserves a blog post of its own.