Evolution of Biometrics - A Brief Timeline
When was the first time you heard of biometric technology? Was it in one of the Star Trek episodes? Or in Blade Runner? From fingerprint scanning to iris scans and facial recognition, we have seen sophisticated biometrics used for identification and authorization on the big screen for over 50 years. For many, it was an alien concept, a science-fiction (Sci-Fi) device used in movies portraying an advanced, futuristic world. However, biometrics, once a fantastical theme of Sci-Fi movies, has become a reality. Today, we don't think twice before using Face ID to unlock a phone or a fingerprint to authorize an app. But this increase in the use of biometrics hasn't happened overnight. The technology evolved over a long period.
In this post, we'll look at the evolution of biometrics technology and how it has progressed over time. We'll also discuss where this technology is headed and what new capabilities we can expect.
A brief introduction to biometrics technology
Biometric technology, often called biometrics or biometric authentication, uses a person's physical and behavioral characteristics to carry out identification and authentication.
The identification or authentication system gathers a biological sample, extracts its unique characteristics, and converts them into digital data. The recognition system then uses this digital data to identify or authorize a person by comparing it against stored templates; a rough sketch of this flow follows the two categories below. Biometrics can be divided into two types: i) physical characteristics and ii) behavioral characteristics.
Physical characteristics include fingerprints, palm prints, eyes (retina and iris), body odor, facial skin, skin pores, DNA, blood vessels, hand textures, etc. They are referred to as biometric modalities.
Behavioral characteristics include signatures, voice, facial expressions, walking gait, keystroke patterns, writing style, and other such unique behaviors.
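As a minimal illustration of that enrollment-and-matching flow, here is a sketch in Python. The feature extractor and the similarity threshold are placeholder assumptions for illustration, not any particular vendor's algorithm.

```python
import hashlib
import math

def extract_features(sample: bytes) -> list:
    """Placeholder feature extractor: a real system would run a fingerprint or
    face algorithm that produces similar templates for similar samples."""
    digest = hashlib.sha256(sample).digest()
    return [b / 255 for b in digest]

def similarity(a, b) -> float:
    """Cosine similarity between two templates (1.0 means identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Enrollment: capture a sample and store only the derived template.
enrolled_template = extract_features(b"raw sensor data captured at enrollment")

# Verification: convert a fresh sample the same way and compare it against the
# stored template; the 0.95 threshold is an arbitrary example value.
def verify(new_sample: bytes, template, threshold: float = 0.95) -> bool:
    return similarity(extract_features(new_sample), template) >= threshold
```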
Biometrics technology has a long history, dating back to ancient times.
The earliest account can be found in Babylon around 500 BC. According to archeological research, business transactions in Babylon were recorded on clay tablets that included fingerprints.
Another early recorded use of biometric identification dates to 14th-century China, where merchants used fingerprints to verify the identity of their business partners. According to Chinese historians, it was customary to put inked fingerprints on legal documents such as loan and debt contracts.
Biometric technology continued to evolve over the centuries; however, it was only in the late 19th century that proper systems for using biometrics for identification were developed. Since then, the technology has undergone many changes and advancements.
Let’s dive deep into the evolution of biometrics from the 1800s to the 2000s.
The 1800s – Exploration of Biometrics
The 19th century saw the development of the biometrics classification system, from the Bertillon system to the implementation of Henry’s Fingerprint classification system and the establishment of fingerprint databases.
- Sir William Herschel implemented the first systematic method for capturing finger and hand images. By getting the handprints of the employees on the back of their contracts, he could identify the actual workers on payday.
- The Bertillon system was developed using specific body measurements and photographs to identify criminals and offenders. Law enforcement authorities across the globe adopted this system. However, the system was soon abandoned as it was found that some criminals had similar body measurements.
- In 1892, Sir Francis Galton published an in-depth study of fingerprints. The classification system that uses fingerprints to identify people is still in use today. Galton designed a form for recording inked fingerprint impressions and defined three main pattern types: loops (patterns that tend to curve back upon themselves), whorls (patterns that tend to be circular), and arches (patterns that form no loops or circles). Each of the ten fingers was labeled with an L, W, or A, depending on its pattern type. The letters for the right hand's index, middle, and ring fingers were grouped together, followed by the letters for the corresponding fingers of the left hand. These were then followed by the letters for the right thumb and little finger and the left thumb and little finger. A person with the fingerprint patterns Loop, Loop, Arch, Whorl, Loop on the right hand and Whorl, Loop, Whorl, Loop, Loop on the left hand would, under this system, have a classification of LAWLWLLLWL. This series of letters was recorded on the fingerprint form, and the forms were filed alphabetically by classification (a short sketch of this encoding follows this list).
- Sir Edward Henry, Inspector General of the Bengal Police, collaborated with Sir Francis Galton to devise a method of classifying and storing fingerprint information so that it could be used quickly and efficiently.
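To make Galton's lettering scheme concrete, here is a minimal Python sketch that reproduces the example above; the thumb-to-little-finger ordering of the inputs is an assumption made purely for illustration.

```python
# A small sketch of Galton's letter-based classification as described above.
# Each hand is given as [thumb, index, middle, ring, little] with L/W/A labels.

def galton_classification(right_hand, left_hand):
    """Build the ten-letter Galton code (L = loop, W = whorl, A = arch)."""
    r_thumb, r_index, r_middle, r_ring, r_little = right_hand
    l_thumb, l_index, l_middle, l_ring, l_little = left_hand
    # Index, middle, and ring fingers of the right hand, then of the left hand,
    # then right thumb and little finger, then left thumb and little finger.
    ordered = [r_index, r_middle, r_ring,
               l_index, l_middle, l_ring,
               r_thumb, r_little,
               l_thumb, l_little]
    return "".join(ordered)

# Example from the text: Loop, Loop, Arch, Whorl, Loop on the right hand and
# Whorl, Loop, Whorl, Loop, Loop on the left hand.
print(galton_classification(["L", "L", "A", "W", "L"],
                            ["W", "L", "W", "L", "L"]))  # -> LAWLWLLLWL
```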
Although most of the work on using biometrics for identification in the 1800s focused on fingerprints and hand images, this period paved the way for exploring new techniques.
The 1900s - Era of Scientific Research into Biometrics
It wouldn't be wrong to say that research into modern biometrics began in the second half of the 20th century and evolved into the high-tech scanners that are nearly 100 percent accurate today.
- The concept of liveness detection is the converse of the Turing Test, proposed by Alan Turing in 1950. The Turing Test measures a computer's ability to exhibit human-like behavior, whereas in liveness detection, a machine has to determine whether it is interacting with a live human or a bot.
- Research on acoustic speech and phonic sounds started in the 1960s and became the forerunner of modern voice recognition. Then, in 1969, the Federal Bureau of Investigation (FBI) pushed for the automation of fingerprint identification systems by funding the study of minutiae points to map unique patterns and ridges.
- In 1975, the first fingerprint scanners were created. The FBI financed the creation of these prototypes, which could extract points from fingerprints.
- However, the cost of digital storage was prohibitively high. As a result, the National Institute of Standards and Technology (NIST) focused on developing compression algorithms. NIST's work on fingerprint-matching algorithms led to M40, which the FBI adopted as its first operational matching algorithm. M40 significantly reduced the search time for human technicians by narrowing down potential matches. Further developments in fingerprint technology improved the accuracy and speed of identification to nearly 100 percent and roughly 2 seconds, respectively. NIST also worked to advance facial, speech, and visual recognition; patents for iris identification and subcutaneous blood vessel patterns were filed during this period, and mugshots were digitized and stored in databases.
- During the 1990s, biometrics underwent a boom period. Mathematical analysis showed that fewer than one hundred data points could be used to differentiate facial images. In response, the National Security Agency (NSA) formed the Biometric Consortium. Further, the Department of Defense partnered with the Defense Advanced Research Projects Agency (DARPA) to fund the development of face recognition algorithms for commercial use.
The 2000s - Adoption of Biometrics
While the 19th and 20th centuries are considered the era of research in biometrics, the 21st century brought a boom in adoption. In the 1990s, an enormous amount of work was done on developing techniques and algorithms for using biometrics, but most of it was research for governmental institutions. We can divide the evolution of biometrics in the 21st century into two parts: the 2000s to the 2010s, when solid systems and organizations for developing and testing biometrics on a large scale were created, and the 2010s onwards, when the technology started to become available to the masses. Starting from the early 2000s, some major technological breakthroughs are:
The Face Recognition Vendor Test (FRVT) is a large-scale technology evaluation of commercially available biometric systems. Sponsored by various US government agencies, the FRVT has been conducted three times – in 2000, 2002, and 2006. In addition to face recognition, the FRVT model has also been used to evaluate fingerprint (2003) and iris recognition (2006). The main purpose of the FRVT is to assess performance on large databases.
In 2000, the FBI established a biometrics-based degree program at West Virginia University in consultation with professional organizations such as the International Association for Identification. This was the first accredited biometrics-based degree program in the United States. The program is designed to give students the skills and knowledge necessary to work in the field of biometrics.
In January 2001, a face recognition system was installed at the Super Bowl in Tampa, Florida, to identify "wanted" people entering the stadium. The demonstration failed to find any individuals fitting that description but misidentified up to twelve innocent sports fans. After Congressional and media inquiries, biometrics and its related privacy concerns became better known to the general public.
In 2001, Dorothy E. Denning, a member of the National Cyber Security Hall of Fame, coined the term "liveness" in her article "It's 'liveness,' not secrecy, that counts." Denning is also known as the Godmother of Liveness. Today, facial recognition and face verification aren't complete without a liveness check.
To promote interoperability and data interchange between different applications and systems, ISO and IEC established the joint subcommittee JTC 1/SC 37 in 2002. Many other such organizations were established to develop biometrics technology throughout the 2000s, but most of these efforts were aimed at developing biometrics for government use.
In 2003, a non-invasive biomedical measurement for determining liveness in fingerprint scanners was developed by the Biomedical Signal Analysis Laboratory at Clarkson University/West Virginia University. This software-based method processes the information already acquired by a capture device; the principle of the technique is the detection of perspiration as an indication of liveness.
The Face Recognition Grand Challenge (FRGC) was established in 2006. The FRGC evaluates the latest facial recognition algorithms that use high-resolution images, 3D face scans, and iris images, and its results indicated high accuracy.
The first breakthrough in commercial usage was when Google enabled voice search for the mobile version of Google Maps on BlackBerry Pearl phones in 2008.
This capability was also extended to Nokia phone users who could access it through the Google Mobile app. And by November of the same year, iPhone users could also utilize voice search.
While businesses and researchers were looking for commercial use cases of biometrics technology, the Indian government launched a large-scale identification program. This unique identification program was launched to capture fingerprints and iris scans of the population and link them to a national identity card, known as the Aadhaar card. Today, the biometric data of reportedly 1.2 billion Indian residents is mapped and linked to an Aadhaar card. Although still considered controversial, the program drove large-scale adoption of biometric technology.
In 2010, Facebook deployed a major update implementing facial recognition algorithms that identify people who may appear in the photos that users upload to the network. Search and matching algorithms find similar images in the large database of images uploaded to Facebook daily. Although it wasn't highly accurate when it was introduced, the algorithm has become much better at correctly identifying people in images.
In 2011, the CIA used facial recognition technology and DNA to confirm the identity of Osama bin Laden's remains with 95 percent accuracy.
By 2013, thousands of biometric recognition algorithms had been patented in the US alone, and the usage of biometrics had been commercialized, from airport security to financial institutions and attendance management. However, it wasn't until the launch of Touch ID that biometrics gained traction in the digital realm.
Apple integrated Touch ID on new iPhones in 2013.
Even though some smartphones had fingerprint sensors before Apple's Touch ID, Apple's move made the feature popular, and other phones started including fingerprint sensors for unlocking the device. Integrating biometric technology into smartphones ignited the interest of businesses and consumers in biometrics. Mobile app developers started using fingerprints to authorize access to their applications. This opened a gateway for using biometrics in the digital world.
While fingerprints were the center of attention during the early 2010s, mass-level research was ongoing to develop accurate algorithms for other biometric modalities, such as facial recognition and iris scanning. According to NIST, the error rate of facial recognition algorithms was measured at about 1 in 24, a considerable improvement over earlier algorithms. However, the technology wasn't accurate enough to be adopted for more crucial tasks, such as authorizing users to log into financial apps.
Facial recognition became familiar to the public with the launch of Apple's Face ID, passport e-gates at airports, and Mastercard's Identity Check. These solutions implement 1:1 matching algorithms, which are relatively more accurate than the 1:N matching algorithms used to identify a person within a larger database.
- 1:1 matching - In 1:1 matching, a record from the data source is compared against a single record from the reference source. In simple terms, when you try to unlock your phone, the algorithm matches your face or fingerprint only against the face or fingerprint data captured when you first enrolled.
- 1:N matching - In 1:N matching, a record from the data source is searched against a database of multiple reference records. For example, when you use a fingerprint for attendance at your office, your fingerprint data is matched against a database of the fingerprints of all your colleagues. A minimal sketch of both modes follows this list.
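Assuming each biometric sample has already been converted into a numeric template and that comparison is a toy similarity score, the two modes might look roughly like this in Python; the function names and threshold are illustrative only.

```python
def similarity(a, b) -> float:
    """Toy similarity score: higher means a closer match (illustrative only)."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# 1:1 verification - compare the probe against the single enrolled template.
def verify_one_to_one(probe, enrolled_template, threshold=0.9) -> bool:
    return similarity(probe, enrolled_template) >= threshold

# 1:N identification - search the probe against a whole gallery of templates
# and return the best-matching identity if it clears the threshold.
def identify_one_to_many(probe, gallery: dict, threshold=0.9):
    best_id, best_score = None, 0.0
    for person_id, template in gallery.items():
        score = similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```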
One of the most significant recent advancements in biometrics is the advent of 3D facial recognition. This technology uses a camera to map a person's face in three dimensions, making it much more difficult to bypass than traditional 2D facial recognition systems. Incorporating 3D facial recognition scanners into smartphones made it possible for digital service providers to authenticate and authorize a person digitally. This advancement led to the adoption of biometrics for onboarding, validating, and authorizing users. According to NIST's findings, most modern facial recognition systems using 3D facial scans achieve highly accurate outcomes, reaching a false acceptance rate (FAR) of about 1 in 125 million. However, as the technology advanced, malicious actors also became more sophisticated. The development of presentation attacks, video spoofing, deepfakes, and 3D masks poses a continuous threat to the authenticity of biometric systems.
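Since several FAR figures are quoted throughout this article, here is a short sketch of how FAR and its counterpart, the false rejection rate (FRR), are typically computed from recorded comparison scores at a chosen decision threshold. The score lists below are made-up illustrations, not measured data.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor comparisons wrongly accepted.
       FRR: fraction of genuine comparisons wrongly rejected."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Made-up example scores (higher means more similar).
genuine = [0.97, 0.93, 0.88, 0.99, 0.95]
impostor = [0.20, 0.35, 0.62, 0.15, 0.41]
print(far_frr(genuine, impostor, threshold=0.90))  # -> (0.0, 0.2)
```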
Enter AI and Liveness Detection
Many researchers believe that the mass adoption and acceptance of biometrics as an identification and authorization technique rests on advancements in multiple technologies, such as hardware and artificial intelligence. One such advancement is liveness detection. Research on liveness-check algorithms started in the early 2000s. The liveness detection technology of that era was slow and inaccurate, since deterministic algorithms were used for searching and matching operations. With the evolution of neural networks and deep learning, it became sophisticated enough in the later part of the 2010s to be used against malicious attacks. In 2020, active and passive liveness detection algorithms showed a FAR of 0.18. In the past two years, with the inclusion of depth sensors, liveness detection has become so accurate that the possibility of spoofing an identity without an actual human in front of the camera is about 1 in 80,000.
Here’s an overview of the increase in biometric accuracy over time:
Current Advancements - Your Biometrics, Your Control
This year-over-year advancement in biometrics for identification, authentication, and authorization has led to the wide-scale adoption of biometric technology. The technology that was once mere science fiction is now widely used. From using a fingerprint to log into your computer to using Face ID to access your iPhone, most people in the world, wherever they are, likely use some form of biometric authentication every day. Some countries, such as India, China, Taiwan, and Singapore, even require biometric identification to access social services, pay taxes, and/or vote.
This large-scale adoption, however, gave birth to controversy over the control of biometric data and malicious attacks by bad actors. Many believe that governments and organizations having access to someone's biometric data could lead to disaster. China's use of emotion-detection AI to identify Uyghur Muslims is one example of how biometrics can be misused. China collects biometric data – DNA, fingerprints, blood types, voice patterns, facial imagery – to correlate it with employment, gender, age, travel history, criminal history, and religious practices. Many in the industry consider it the equivalent of modern-day slavery, where your biometrics could be used against you at any time.
Another challenge in using biometrics is theft or illegal access to biometric data. A bad actor could get access to your biometric data and use it to misappropriate your finances or commit illegal acts to frame you as a criminal. While AI technology has helped make biometrics more advanced, malicious attackers have been using the same technology to orchestrate more sophisticated attacks.
One recent development is deepfakes that are very hard to identify. A research paper from Horvitz argues that interactive and compositional deepfakes are two growing classes of threats. In a Twitter thread, MosaicML research scientist Davis Blaloch described interactive deepfakes as "the illusion of talking to a real person. Imagine a scammer calling your grandmom who looks and sounds exactly like you." Compositional deepfakes, he continued, go further, with a bad actor creating many deepfakes to compile a "synthetic history."
To cope with these security and privacy challenges, researchers and innovators in the biometric space are developing innovative new ways to utilize biometrics so that the privacy and security of the data aren't compromised.
One of the most recent and innovative uses of biometrics is in the crypto industry, where sharing or using personal data has always been seen as taboo. To use biometrics for authentication without revealing or sharing the user's personal information, Humanode developed crypto-biometrics. This technology uses biometrics (facial recognition) to verify that a person is unique and alive without revealing or sharing personal information. The system utilizes various encryption techniques to encrypt biometric data before it leaves the user's device and zero-knowledge (ZK) proofs to validate that users are who they claim to be in a decentralized manner, instead of storing information on centralized servers. Although still very new to the industry, crypto-biometric technology is anticipated to solve multiple problems with current authorization processes.
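Humanode's actual protocol involves specialized cryptography and a decentralized network, but the general pattern of keeping raw biometrics on the device and releasing only a cryptographic attestation can be sketched roughly as follows. Every name, the key, and the local matcher here are hypothetical stand-ins for illustration, not Humanode's implementation.

```python
import hashlib
import hmac
import json
import time

DEVICE_SECRET = b"key provisioned on the user's device"  # hypothetical

def match_locally(live_template, enrolled_template, threshold=0.9) -> bool:
    """Stand-in for an on-device face match; raw biometric data never leaves here."""
    diff = sum(abs(a - b) for a, b in zip(live_template, enrolled_template))
    return (1.0 - diff / len(live_template)) >= threshold

def build_attestation(user_id: str, matched: bool) -> dict:
    """Only this signed claim (containing no biometric data) is sent off-device."""
    claim = {"user": user_id, "verified": matched, "ts": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return claim
```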
Other than this, the use of multi-modal biometric authentication is another significant development. Instead of relying on a single biometric factor, multi-modal authentication offers more comprehensive security against spoofing and hacking. One prominent example is BioID's multi-modal authentication system, which uses facial and eye (periocular) recognition to authenticate a user. A rough sketch of one common way to combine modalities follows.
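One standard approach is score-level fusion, where the match scores from each modality are combined into a single decision; the weights and threshold below are illustrative assumptions and are not taken from BioID's product.

```python
def fuse_scores(face_score: float, periocular_score: float,
                w_face: float = 0.6, w_eye: float = 0.4) -> float:
    """Weighted score-level fusion of two modality match scores in [0, 1]."""
    return w_face * face_score + w_eye * periocular_score

def accept(face_score: float, periocular_score: float, threshold: float = 0.85) -> bool:
    return fuse_scores(face_score, periocular_score) >= threshold

print(accept(0.92, 0.80))  # 0.6 * 0.92 + 0.4 * 0.80 = 0.872 -> True
```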
Another advancement in making biometrics more secure is research and development on fighting malicious attackers. Currently, many research institutes and companies are working on algorithms to counter spoof attacks. One recent advance is claimed by Intel: according to reports, its product FakeCatcher achieves a FAR of roughly 1 in 25 against deepfakes.
Adopting biometrics on a large scale could also be considered an advancement in itself. The current market makes one thing clear: biometrics have become more accessible than they were ten years ago. Not long ago, only big enterprises with strong development teams could use facial recognition and liveness checks. With advancements like crypto-biometrics, the democratization of the technology has become possible. Small enterprises and startups can now benefit from it for security, protection, and convenience.
What Does the Future of Biometric Technology Look Like?
In a rapidly advancing world, we have seen more technological progress in recent years than in the previous century. The same is true of biometric technology. From external biometric modalities (fingerprint, iris, face, hand geometry) to internal biometrics (DNA recognition, ECG, dental biometrics), a lot of research is being done on using human body parts to identify and authenticate human beings. In fact, some modalities, such as DNA, are already being used by law enforcement agencies around the globe. However, due to technical restrictions and high operating costs, they still aren't mainstream.
Other experiments, such as implanting microchips and developing biosensors, are also under consideration, and advanced research is ongoing to develop convenient and cost-effective technologies that would make implants accessible on a wide scale. However, due to their controversial nature, the world is still disinclined to accept them. Whether such technologies will become mainstream in the future is still an open question. But the overall future of biometric technology looks bright. In fact, the biometrics market is expected to reach $3.6 billion by 2026, according to StrategyR. This promising growth shows that biometric technology is the way of the future, and in the coming years we will see further advancement in the technology and its adoption.
Biometrics has undergone many changes and advancements over the years. Early incarnations of the technology were crude and often unreliable. However, thanks to advances in computing power and miniaturization, modern biometrics are much more accurate and reliable. This advancement has led to the use of biometrics for various purposes, including diagnosing diseases, identifying perpetrators, detecting emotions, and much more. We will discuss the use cases of biometrics in our next article.
Related Sources
- https://www.nist.gov/
- https://www.researchgate.net/publication/225476487_Emerging_biometric_modalities_A_survey
- https://medium.com/paradigm-research
- https://www.jumio.com/app/uploads/2018/07/netverify-liveness-detection.pdf
- https://www.captechu.edu/blog/evolution-of-biometrics
- https://www.biometricupdate.com/201802/history-of-biometrics-2