Getting Biometrics Deployments Right
Last week, we came across a VentureBeat article by Andrew Shikiar, Executive Director and CMO of the FIDO Alliance. The FIDO-sponsored article made a number of compelling points, including that biometrics, when properly deployed, solve many of the biggest fraud and cybersecurity challenges we face today.
However, the article was misleading in some important respects about the dangers of centralised biometric deployment models. It posited that centrally stored biometrics fundamentally undermine the security of biometric deployments:
“Biometrics are secure, yes. But store them on a server and we’re back to where we started, but even worse because of that whole “can’t change your fingerprint” fact.”
This line of argument is incorrect. It disregards best-practice biometric deployment methods, as well as the new use cases for biometrics that require the operator of the system, not a handset manufacturer, to control the biometric process.
First, it is important to make clear that best practice for any biometric deployment does not require (or recommend) storing raw biometric data, such as images of fingerprints or irises, or intelligible audio for voice biometric systems. Rather, the biometric provider uses the raw data only at the point of capture, translating it into a derivative biometric model. That model is what is used for future verification, and it is meaningless to any hacker or adversary.
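The enrolment-then-verification flow described above can be sketched as follows. This is purely illustrative: real systems use proprietary, variation-tolerant feature extractors, whereas the `extract_features` stub here (a hypothetical name) just derives a stable numeric vector from the sample so the data flow is visible.

```python
import hashlib
import math

def extract_features(raw_sample: bytes, dims: int = 8) -> list[float]:
    """Hypothetical feature extractor. A production model tolerates
    natural capture variation; this toy hash-based stand-in does not,
    and exists only to show that the stored artefact is a derived
    vector, not the raw capture."""
    digest = hashlib.sha256(raw_sample).digest()
    return [b / 255.0 for b in digest[:dims]]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Enrolment: only the derived template is retained; the raw capture
# is discarded and is not recoverable from the template.
template = extract_features(b"raw fingerprint capture")

# Verification: a fresh capture is reduced the same way and scored
# against the enrolled template.
candidate = extract_features(b"raw fingerprint capture")
assert cosine_similarity(template, candidate) > 0.95
```

The point of the sketch is what is *not* stored: no image, no audio, only a compact derivative that cannot be reversed into the original sample.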
The reference in the article to the Biostar 2 data breach also ignored the fact that the data in question, both biometric and non-biometric, was almost entirely unencrypted. Poor data management practices have nothing to do with biometrics per se. To restate the problem: these breaches aren't a biometrics problem, they're a data management problem.
To be clear, any biometric solution, regardless of modality, should have its biometric models, i.e. the digital derivatives only, encrypted at rest and in transit. That is best practice whether a centralised model or an on-device model is used. An encrypted digital representation of a voice, face, fingerprint or any other modality cannot be used to identify a person, cannot be reverse-engineered back into its original raw form, and therefore cannot be used by attackers. The assertion that centralised biometric databases contain images of people's faces, fingerprints or audio files is (or should be) completely false in any competently implemented biometric deployment.
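As a minimal sketch of "encrypted at rest", the third-party `cryptography` library's Fernet recipe (AES-128-CBC with an HMAC) can wrap a serialized template before storage. The key handling here is illustrative only: in production the key would live in a KMS or HSM, never alongside the data.

```python
from cryptography.fernet import Fernet

# Illustration only: in production this key is held in a KMS/HSM,
# not generated next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

# The serialized biometric model (the digital derivative, never raw
# images or audio). The literal bytes are a stand-in.
template = b"\x01\x02\x03\x04serialized-template-bytes"

# At rest, only the ciphertext is stored; without the key it is
# opaque bytes to any attacker who obtains the database.
ciphertext = cipher.encrypt(template)
assert ciphertext != template
assert cipher.decrypt(ciphertext) == template
```

A breach of the storage layer alone then yields nothing usable, which is exactly the distinction the Biostar 2 incident failed to observe.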
The Indian Aadhaar example quoted also had nothing to do with biometrics; it concerned access to data used to create national identity cards, i.e. physical cards. Those cards are then checked against a centralised identity record for all manner of services. And herein lies the very reason for centralised processing systems: usability, accountability and ownership/management.
Large organisations of all types, whether banks, government agencies, telcos or healthcare providers, have multiple channels through which to communicate with their customers. Contact centres, for instance, are increasingly adopting voice biometrics to authenticate callers in a fast and seamless manner, one that eliminates weak knowledge-based questions. This is, of course, not possible in an on-device model. Nor is it possible with the growing use of AI chatbots across IVR channels, web channels and even apps. Continuous authentication, which ensures the actors haven't changed during a transaction, is likewise impossible on-device.
The myriad advantages and opportunities that a centralised biometric processing model offers organisations have long been understood and readily embraced. Using a reputable solution that applies best practice in data management (the same best practice expected of any solution that processes or stores sensitive data, not just biometric ones) delivers all of the advantages of a centralised model with no additional risk over and above an on-device solution.