Facial Biometrics: Liveness and Anti-Spoofing

Most of us understand how fingerprinting works: a captured fingerprint, from a crime scene for example, is compared to a live person’s fingerprint to determine if they match. We can also use a fingerprint to ensure that the true owner, and only the true owner, can unlock a smartphone or laptop. But could a fake fingerprint be used to fool the fingerprint sensor in the phone? The simple answer is yes, unless we can determine whether the fingerprint actually came from a living, physically present person trying to unlock the phone.

In biometrics, there are two important measurements: Biometric Matching and Biometric Liveness. Biometric matching is the process of identifying or authenticating a person by comparing their physiological attributes to previously collected information. For example, when a fingerprint matches a fingerprint on file, that’s matching. Liveness Detection is a computerized process that determines whether the computer is interfacing with a live human, and not an impostor like a photo, a deep-fake video, or a replica. For example, one measure of Liveness is determining whether the presentation occurred in real time. Without Liveness, biometric matching would be increasingly vulnerable to fraud attacks, which are continuously growing in their ability to fool biometric matching systems with imitation and fake biometric attributes. Without proper liveness detection, “Presentation Attack”, “spoof”, and “bypass” attempts would endanger users. It is important to have strong Presentation Attack Detection (PAD) as well as the ability to detect injection attacks (where imagery bypasses the camera), as these are ways to spoof the user’s biometrics. In short, Liveness determines if it’s a real person, while matching determines if it’s the correct, real person.
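The two measurements fit together as an ordered pipeline. As a rough sketch (the function names, score scales, and thresholds below are invented for illustration, not any vendor’s actual API), a system should run the liveness check first, so a spoofed sample never even reaches the matcher:

```python
# Illustrative sketch only: `liveness_check` and `match_score` stand in for
# trained models; thresholds are hypothetical.

def authenticate(sample, enrolled_template, liveness_check, match_score,
                 liveness_threshold=0.9, match_threshold=0.8):
    """Run Liveness BEFORE matching: first 'is this a real person?',
    then 'is this the correct, real person?'"""
    liveness = liveness_check(sample)  # e.g. a PAD score in [0, 1]
    if liveness < liveness_threshold:
        return {"authenticated": False, "reason": "liveness_failed"}

    score = match_score(sample, enrolled_template)  # 1:1 comparison
    if score < match_threshold:
        return {"authenticated": False, "reason": "match_failed"}

    return {"authenticated": True, "reason": "ok"}


# Toy stand-ins for the two models, just to exercise the flow.
live_photo = {"live": 0.97, "identity": "alice"}
printed_photo = {"live": 0.12, "identity": "alice"}  # a presentation attack

check = lambda s: s["live"]
match = lambda s, template: 1.0 if s["identity"] == template else 0.0

print(authenticate(live_photo, "alice", check, match))     # accepted
print(authenticate(printed_photo, "alice", check, match))  # rejected: liveness_failed
```

Note that the printed photo of the genuine user would match perfectly, yet is still rejected, which is exactly why liveness must gate the matcher rather than follow it.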

Today’s increasingly powerful computer systems have brought increasingly sophisticated hacking strategies, such as Presentation and Bypass attacks. Presentation attacks come in many varieties, including high-resolution paper and digital photos, high-definition challenge/response videos, paper masks, commercially available lifelike dolls, human-worn resin, latex, and silicone 3D masks, and even custom-made ultra-realistic 3D masks and wax heads. These methods might seem right out of a bank heist movie, but they are used in the real world, and successfully.

There are other ways to defeat a biometric system, called Bypass attacks. These include intercepting, editing, and replacing legitimate biometric data with synthetic data not collected from the person’s biometric verification check. Other Bypass attacks include intercepting and replacing legitimate camera feed data with previously captured video frames, or with what’s known as a “deep-fake puppet”, a realistic-looking computer animation of the user. This video is a simple but good example of the biometric vulnerabilities that arise when Liveness is disregarded.

The COVID-19 pandemic provides significant examples of Presentation and Bypass attacks and the resulting fraud. Stay-at-home orders, along with economic hardship, have increased citizens’ dependence on the electronic distribution of government pandemic stimulus and unemployment assistance funds, creating easy targets for fraudsters. Cybercriminals frequently use Presentation and Bypass attacks to defeat the enrolment and user-authentication systems of government websites, stealing from governments across the globe; the losses amount to hundreds of billions in taxpayer money.

Properly designed biometric liveness and matching could have mitigated much of the trouble Nevadans are experiencing. There are various forms of biometric liveness testing:

  • Active Liveness commands the user to successfully perform a movement or action, like blinking, smiling, tilting the head, or track-following a bouncing image on the device screen. Importantly, the instructions must be randomized, and the camera/system must observe the user performing the required action. 
  • Passive Liveness relies on involuntary user cues like pupil dilation, reducing user friction and session abandonment. Passive liveness can be undisclosed to the user, randomizing the attack vectors an impostor must anticipate. Alone, it can determine whether captured image data is first-generation and not a replica presentation attack. Significantly higher Liveness and biometric match confidence can be gained if device camera data is captured securely through a verified camera feed, and the image data is verified by a device Software Development Kit (SDK) to have been captured in real time. Under these circumstances, both Liveness and Match confidence can be determined concurrently from the same data, mitigating vulnerabilities. 
  • Multimodal Liveness combines numerous Liveness modalities, like two-dimensional face matching together with instructions to blink on command, to offer user choice and increase the number of supported devices. This often requires the user to “jump through hoops” of numerous Active Liveness tests, which increases friction. 
  • Liveness and 3-dimensionality. A human must be 3D to be alive, while a mask-style artifact may be 3D without being alive. Thus, while 3D face-depth measurements alone do not prove the subject is a live human, verifying 2-dimensionality proves the subject is not alive. Regardless of camera resolution or specialist hardware, 3-dimensionality provides substantially more usable and consistent data than 2D, dramatically increasing accuracy and highlighting the importance of 3D depth detection as a component of stronger Liveness Detection.
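The randomization requirement for Active Liveness can be sketched in a few lines. In this hypothetical challenge/response round (the action names, challenge length, and timing window are all invented for illustration), the server issues a fresh random sequence of actions, and the session only passes if the camera observed exactly those actions, in order, within a short real-time window:

```python
import secrets

# Hypothetical Active Liveness challenge/response sketch.
ACTIONS = ["blink", "smile", "tilt_head", "turn_left", "turn_right"]

def issue_challenge(length=3):
    """Randomize the instructions so a pre-recorded video of one
    fixed action sequence cannot simply be replayed."""
    return [secrets.choice(ACTIONS) for _ in range(length)]

def verify_challenge(challenge, observed, elapsed_seconds, max_session_seconds=10):
    """The camera/system must observe every commanded action, in order,
    and the whole session must happen in real time."""
    if elapsed_seconds > max_session_seconds:
        return False  # stale session: likely a replay or injection
    return observed == challenge

challenge = issue_challenge()
# In a real flow, `observed` would come from video analysis of the user.
print(verify_challenge(challenge, list(challenge), elapsed_seconds=4))   # passes
print(verify_challenge(challenge, ["blink"], elapsed_seconds=4))         # wrong actions
print(verify_challenge(challenge, list(challenge), elapsed_seconds=99))  # too slow
```

Using `secrets` rather than `random` reflects the point in the bullet above: the challenge must be unpredictable to an attacker, not merely varied.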

Biometric Liveness is a critical component in any biometric authentication system. Properly designed systems require the use of liveness tests before moving on to biometric matching. After all, if it’s determined the subject is not alive, there’s little reason to perform biometric matching and further authentication procedures. A well-designed system that is easy to use allows only the right people access and denies anybody else.  

Care to learn more about Facial Biometrics? Be sure to read our previous releases Exploring Facial Biometrics. What is it? and Facial Biometrics – Voluntary vs Involuntary.

About the authors:

Jay Meier is a subject matter expert in biometrics and IAM, and an author, tech executive, and securities analyst. Jay currently serves as Senior Vice President of North American Operations at FaceTec, Inc., and is also President & CEO of Sage Capital Advisors, LLC, providing strategic and capital management advisory services to early-stage companies in biometrics and identity management.

Meyer Mechanic is a recognized expert in KYC and digital identity. He is the Founder and CEO of Vaultie, which uses digital identities to create highly fraud-resistant digital signatures and trace the provenance of legal and financial documents. He sits on DIACC’s Innovation Expert Committee and has been a voice of alignment in advancing the use of digital identity in Canada.

Additional contributions made by members of the DIACC’s Outreach Expert Committee including Joe Palmer, President of iProov Inc.

Facial Biometrics – Voluntary vs. Involuntary

Contributions made by members of the DIACC’s Outreach Expert Committee

One of the nicest things about this pandemic is wearing our sweatpants all day, every day, while working. Right? Wouldn’t it be nice to wear sweatpants to the office when this is all over? Sure, but that’s unlikely, at least if we want to keep our jobs. So, we choose to forgo our private sweatpants rights to keep our more public jobs. Privacy, and how we protect it, has never been more important in our daily lives. Yet a balance is necessary between privacy and security in our increasingly digital lives. With that comes a rather public debate about privacy and the ethical use of biometrics. In particular, face biometrics can be misunderstood and feared with respect to our Right to Privacy. This article is an attempt to create a better understanding of the technology and, hopefully, to reduce fear of such a promising and effective tool.

Our use of biometrics enables our safety. At its most basic level, a biometric is a physiological trait or attribute that is unique to an individual person. When we recognize the faces of our loved ones, friends, and colleagues with our own eyes, we are using biometrics to help us build relationships and trust. Biometrics are, in a way, fundamental to human interaction and society. Today, biometrics can be used in person or automated, to increase convenience and security in various situations. 

There are many types of biometrics that we use daily. Face images are, by far, the most common. Any time somebody requests to see our driver’s license, passport, or ID badge, we are using biometrics. Most governments include face biometrics, in the form of a photograph, in their driver’s license, passport, or other national ID databases, as well as on the credential, itself. We use face biometrics, attached to a credential, to establish trust, when we access privileges, like driving a car, government services, commercial air travel and even buying liquor.   

Today there is some confusion and fear about biometrics, what they are, and what they can actually do. There are only two basic functions that a biometric can perform: Identification and Authentication. When we first meet somebody, we may associate their name with their face. When we next see that face, we can identify that person by recalling the information in our mind. “Hi, I’m Jennifer” associates a name with a face and allows us to identify them later. When we arrive at a restaurant, we might scan the customers to see if we recognize any of them; we identify Jennifer by recognizing their face in the crowd of people. In another instance, we can authenticate Jennifer by recalling information about them when presented with their face: if Jennifer were to knock on our front door, we can authenticate them with our own simple sight. Fundamentally, these are the only two functions of a biometric. A properly designed face biometric system should include high-confidence liveness checks, to differentiate between a real human face and, for example, a photo, and face matching, to prove that a person is in fact who they claim to be. Stay tuned for our next blog, where we’ll talk about liveness checks, anti-spoofing, and other methods to prevent various types of fraud.
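The restaurant and front-door scenarios map directly onto the two functions. In automated terms, Identification is a one-to-many (1:N) search of everyone enrolled, while Authentication is a one-to-one (1:1) comparison against a single claimed identity. The sketch below illustrates this under invented names; real systems compare face-template vectors with a trained similarity model, not strings:

```python
# Hypothetical sketch: the gallery, names, and similarity function are
# invented for illustration only.

def similarity(a, b):
    """Toy similarity between two 'templates' (here, just strings)."""
    return 1.0 if a == b else 0.0

def identify(probe, gallery, threshold=0.9):
    """Identification (1:N): scan the whole crowd -- 'who is this face?'"""
    for name, template in gallery.items():
        if similarity(probe, template) >= threshold:
            return name
    return None  # nobody recognized

def authenticate(probe, claimed_name, gallery, threshold=0.9):
    """Authentication (1:1): compare against only the claimed identity --
    'is this face really Jennifer?'"""
    template = gallery.get(claimed_name)
    return template is not None and similarity(probe, template) >= threshold

gallery = {"jennifer": "face_J", "sam": "face_S"}
print(identify("face_J", gallery))              # finds jennifer in the crowd
print(authenticate("face_J", "jennifer", gallery))  # the claim checks out
print(authenticate("face_J", "sam", gallery))       # wrong claimed identity
```

The distinction matters for both accuracy and privacy: a 1:N search must consider every enrolled person, while a 1:1 check touches only the single record the user has claimed.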

In this age of information hyperbole, perhaps it’s understandable that there is some public misunderstanding of how these important technologies can and should be used. Big Brother surveillance and Privacy are central to debates about the ethical use of face biometrics. However, when examined thoroughly, we find that most, if not all of the ethical debate centers on how the biometrics are used and, more specifically, whether the biometric is used Voluntarily or Involuntarily.  

We tend to prefer freedom of choice with our personal privacy. Sometimes we guard it; other times, we agree to forgo some privacy to access and enjoy the privileges of our society. For example, when applying for a credit card, the bank logically prefers to understand whether or not we are likely to repay the loan. To control its fraud and compliance risk, it must know who it’s lending to, and whether we are an existing customer or a potential fraudster posing as one. So, we volunteer to divulge our private identities and backgrounds. We also allow the bank to compare our identity to those of existing customers, to learn whether we already hold a legitimate or fraudulent account at the bank. The bank Identifies us as a new customer (or not), as well as a legitimate customer (or not). It’s our choice to participate in this identification process, or not; it’s voluntary. If we agree, once we are approved, the bank requires us to Authenticate ourselves every time we try to access the valuable privilege. In this case, we voluntarily relinquish our privacy Right to get something of value in return. Moreover, the process includes both Identification and Authentication in a safe, productive, and ethical way.

However, in some cases, the privilege we want to enjoy is so valuable (and granting it is so risky) that the service provider may be tempted to invade our privacy, without our consent, to protect it. Of course, there are some bad people out there. The problem is that we can’t tell who is good and who is bad unless we specifically identify them all. To this end, law enforcement may wish to investigate and identify everyone within a certain group, like those walking down the street. The City of London, England, for example, installed CCTV cameras on most city street corners to survey passers-by for criminal activity. Moreover, the CCTV system was equipped with biometric facial recognition technology, to learn if any passers-by were in the criminal mugshot database. The City used Involuntary face recognition surveillance to Identify anyone in the field of view. Innocent passers-by had no choice but to submit to such a surveillance investigation, and many view this as a clear violation of privacy.

As with any tool, there is nothing inherently ethical or unethical about face biometrics; however, these tools must be used responsibly. In a voluntary scenario, both Identification and Authentication can be important functions when used to protect our privacy and enable privileges that could not be offered without a human authentication method. This can help break down barriers to accessing government services, enable international business (by using authentication to reduce instances of fraud), or even protect your own assets by granting access to your files only after authentication against your unique profile. However, in an involuntary scenario, potentially unethical uses exist that require deep conversations, and eventually regulation, about the balance between safety and privacy.

Understanding the difference between Voluntary and Involuntary facial recognition is a foundational issue that helps us set the boundaries for the ethical use of biometrics by both governments and businesses. How that data is stored and used is tied into this same problem. While there is nothing inherently ethical or unethical about face biometrics, understanding and regulating its acceptable use is an important step toward building public confidence in this trust-enabling tool.

Care to learn more about this topic? Be sure to read our last release Exploring Facial Biometrics. What is it?