Author: Amaya Novas-Peña
January 30, 2025
In a startling recent review, researchers at the Mozilla Foundation discovered that modern vehicles can collect vast amounts of unnecessary personal data, including intimate details of passengers' sexual activity, medical information, and personal conversations. Through a combination of microphones, cameras, trackers, sensors, and connected devices, cars can capture up to 25 gigabytes of personal data per hour, creating inference-based behavioural profiles of users' intelligence, abilities, and interests.
Even more troubling is that while users have little ability to delete or opt out of data collection, 84% of brands researched share this data with third parties, 76% can sell this personal information, and over half can provide information to government and law enforcement bodies upon a mere informal request. Although some local laws require brands to disclose how data is collected and used, there is limited regulation in the U.S. dictating how manufacturers encrypt the personal information stored in their vehicles, which is particularly concerning given how easily internet-connected cars can be remotely hacked and controlled.
This intrusive data collection and lack of adequate consumer protection exemplify a broader, worrying trend of corporations harvesting their users' biometric data. With tech moguls promoting highly anticipated projects such as Sam Altman's Worldcoin, which aims to create a global digital identity system using iris scanning, and Elon Musk's Neuralink, an implantable brain-computer interface, there is an urgent need to examine how biometric data collection affects the fundamental right to privacy and the risks posed by data misuse.
Understanding Biometric Data Collection
Biometric data refers to an individual's unique characteristics, which are divided into two distinct categories: biological and behavioural. Biological biometrics encompass unchangeable physical characteristics such as fingerprints, facial features, and DNA patterns. Conversely, behavioural biometrics are the more subtle characteristics unique to how people act, including patterns in voice, gait, and typing rhythm (Jain et al.). Biometric data can be derived from a variety of sources, including pictures, video and audio recordings, and physical scans. Unlike passwords or identification cards, these characteristics are inherently personal and immutable, making their protection particularly crucial.
Biometric recognition is the automated or manual use of these physical, chemical, and behavioural attributes to establish an individual's identity. Biometric authenticators rely on pattern recognition systems that extract salient features from samples of an individual's characteristics taken by capture devices. These features are then represented as biometric templates, which can be compared against templates stored in a database (Melzi et al. 3, 6). The result of this comparison can be used either to verify or to establish a person's identity. Unlike passwords, a perfect match is not required, as changes in sensing conditions or slight alterations to physical characteristics are common. Hence, a decision is based on how closely the biometric sample in an authentication request matches a particular user's stored template relative to its similarity to other users' templates (Jain et al. 7-8).
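To make this proximity-based matching concrete, the following Python sketch mimics threshold-based template comparison. It is a minimal illustration, assuming templates are fixed-length feature vectors and using cosine similarity with an arbitrary threshold; real systems use proprietary feature extractors and carefully calibrated thresholds.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(sample: np.ndarray, enrolled: np.ndarray, threshold: float = 0.85) -> bool:
    # Unlike a password check, this is a proximity test: the threshold
    # trades off false accepts (too low) against false rejects (too high).
    return cosine_similarity(sample, enrolled) >= threshold

def identify(sample: np.ndarray, database: dict, threshold: float = 0.85):
    # Return the enrolled identity whose template best matches the sample,
    # or None if nothing in the database is similar enough.
    best_id, best_score = None, -1.0
    for user_id, template in database.items():
        score = cosine_similarity(sample, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None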
The primary strengths of biometric recognition and authentication are universality, uniqueness, and general permanence. Everyone has biometric information that is specific to them and difficult to change. As such, it cannot be forgotten, lost, or transferred to others, and is challenging to forge (Melzi et al. 2). Moreover, biometric information can be easily measured, recorded, and stored and does not require that individuals remember or bring anything with them other than themselves (Miller). However, the convenience of integrating biometric data into everyday life does not come without risk.
Where is Biometric Data Used?
Biometric authentication is now widely used across many sectors, driven by the demand for reliable identity verification in large-scale systems. Unlike passwords or ID cards, biometric data is difficult to steal or share, making it a more secure option to prevent unauthorized access to sensitive information and resources (Jain et al. 1-3).
The most common use of biometric data is fingerprint and facial recognition on phones and mobile apps, providing an additional layer of cybersecurity when unlocking a device or accessing online banking, shopping, entertainment, or social media apps. However, biometric data use extends far beyond personal devices, with applications ranging from automated border control systems and e-banking services to unlocking home doors and even age verification at bars and nightclubs. Moreover, governments and corporations use biometric authorization for entry into offices, time card authentication, and access to restricted areas and materials. The healthcare sector has widely embraced biometric solutions, implementing them in hospitals to reduce wait times and ease the burden on overworked staff. Additionally, mHealth apps have been promoted to allow patients to monitor their conditions from home. The global healthcare biometrics market is expected to reach $24.9 billion by 2031 (Mendolla 927). Biometric systems integrated with artificial intelligence (AI), most commonly facial recognition technology, are increasingly relied upon by governments for policing, border security and immigration management, cyberintelligence, and warfare (Hu 283).
Recent data suggests that citizens in the United States are only moderately comfortable providing their biometric information and are aware of and concerned about the potential harms involved with biometric systems. Thus, a lack of transparency regarding the collection and protection of this sensitive data may erode consumer trust.
The Data Broker Ecosystem
Behind the convenience of biometric authentication lies a sophisticated network of data brokers profiting from the sale of consumers’ personal information. Data brokerage is the practice of buying, aggregating, licensing, sharing and selling data that brokers gather from firms they own or software they control, from other companies, or from publicly available records. This information is compiled into databases or used to develop detailed user profiles, then sold or shared with third parties (Sherman 2).
When interacting with the online world, users leave a digital footprint of personal information that can be repurposed and monetized. This can occur without the user's knowledge, as consent for data collection is often obtained using “dark patterns” designed to mislead, confuse, or be overly time-consuming to coax users into accepting terms they would not otherwise agree to. Although the data provided from a single source may not disclose substantial information, when many data sources are combined, brokers and other third parties can easily find and identify users’ key traits, including their biometric information (DiPersio 40).
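As a toy illustration of this aggregation risk, the sketch below joins two hypothetical "anonymous" datasets on shared quasi-identifiers; every name, field, and value is invented for illustration, but the mechanics mirror how combined sources can re-identify a person.

# Two hypothetical "anonymous" datasets: an ad-tracking profile and a
# public record, neither of which names the user on its own.
ad_profiles = [{"zip": "60614", "birth": "1990-04-02", "gender": "F",
                "interests": ["fitness", "reproductive health"]}]
public_records = [{"zip": "60614", "birth": "1990-04-02", "gender": "F",
                   "name": "Jane Doe"}]

# Joining on shared quasi-identifiers re-identifies the profile.
for profile in ad_profiles:
    for person in public_records:
        if all(profile[k] == person[k] for k in ("zip", "birth", "gender")):
            print(f"re-identified {person['name']}: {profile['interests']}")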
Data broker companies claim that they support businesses' marketing campaigns, assist academic research, and help governments manage crises. While the potential applications of their data may be noble, data brokers' methods of acquisition are questionable, and the potential misuse of their data may place individuals at risk (DiPersio 41). Aside from biometric data, data brokers also hold sensitive information such as race, ethnicity, gender, immigration status, and political beliefs that could compromise an individual's privacy and civil rights protections if shared (Sherman 9).
For example, location data obtained by a multitude of data brokers from the Life360 app included visits to U.S. Planned Parenthood clinics, subjecting the company to a class action lawsuit alleging that users' location data was sold without permission. In 2022, the Federal Trade Commission (FTC) filed a lawsuit against Kochava, one of the world's largest mobile data brokers, for selling geolocation data from mobile phones that included sensitive locations, was collected without users' consent, was not anonymized, and is easily linkable to individual identities.
In the U.S., it is currently legal to buy data through data brokers (DiPersio 40). Moreover, companies are not required to inform individuals that their data is being used to micro-target them with advertisements. People often do not know what data is being collected in the first place, much less the methods by which they can erase or correct information (Sherman 9). In contrast, in the EU and UK, the law mandates that there must be a legal basis to process personal data, data can only be used for the purpose disclosed, and consent can be withdrawn at any time.
Human Rights Concerns
The rapid advancement of biometric technologies has intensified personal data collection, sparking serious concerns about privacy, data security, and the risks of bias and discrimination. The United Nations enshrined the right to privacy in the 1948 Universal Declaration of Human Rights. However, without robust safeguards and regulations, biometric systems could undermine this right, jeopardizing individual autonomy and the freedom to manage one’s own data.
Privacy
The primary risk posed by biometric data arises from the potential exposure of sensitive personal information if it is not adequately anonymized (Valdivia et al. 1408). The collection, storage, and use of personally identifiable features such as fingerprints, facial scans, and iris patterns not only raise privacy concerns, but also may expose sensitive details about an individual’s health, beliefs, and affiliations. Biometric data can reveal attendance at medical, religious, political, or union-related events. Information such as race, gender, ethnicity, age, health conditions, aptitudes, demeanour, and even image representations can be reconstructed from features stored in biometric data. Thus, if multiple systems containing biometric data are compromised, cross-matching may reveal many more personal attributes about an individual than most intend to be public (Melzi et al. 3).
After the U.S. Supreme Court's Dobbs v. Jackson Women's Health Organization decision overturned the constitutional right to an abortion, concern has grown that law enforcement may be able to access health information to track individuals seeking reproductive care, raising questions about the protection of biometric data and the informed-consent requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). In response, legislators proposed the My Data, My Body Act, which aims to protect personal reproductive health data collected by apps and websites (Mendolla 929).
Although large tech companies such as Apple and Google permit users to opt out of trackers, the industry's general lack of transparency leaves individuals vulnerable to significant privacy risks. The growing reliance on online advertising encourages the unrestrained collection and sale of personal information, creating a system of constant surveillance. This has led U.S. lawmakers to urge the FTC to investigate the role of large tech companies in facilitating the sale of mobile phone data for targeted marketing (Mendolla 932).
Bias and Discrimination
The potential for bias to arise in the domain of machine learning is well understood, and these risks are carried into its application to biometric data. From a mathematical perspective, it is impossible to have complete fairness and a lack of bias in the datasets used to train algorithms, and thus biometric identification systems inevitably replicate these biases (Valdivia et al. 1408).
Biometric recognition systems can sometimes yield completely inaccurate results. Reports published by the National Institute of Standards and Technology (NIST) found that facial recognition systems are more likely to produce false positive matches with African and East Asian faces than with Eastern European faces, with women than with men, and with the elderly and children than with middle-aged adults. Further studies found that gender classifiers perform better on male than on female faces and better on lighter than on darker faces, with error rates up to 34% higher for darker-skinned females than for lighter-skinned males. Moreover, biometric systems may have difficulty interpreting the features of disabled people, which may limit their ability to access services that rely on biometric authorization.
Algorithmic bias is particularly concerning as law enforcement agencies increasingly use automated technologies to identify suspects. Although facial recognition is not a new tool for law enforcement, FBI biometric databases that were once limited to fingerprints and DNA from criminal investigations now also contain driver's license photos in several states, including data from law-abiding citizens (Garvie et al. 2). Third-party facial recognition databases, sold by vendors such as Clearview AI that "scrape" photographs of faces from public social media platforms, have been purchased by over 600 police forces nationwide and were used to identify January 6 rioters. Notably, Clearview reached a settlement in a case brought forward by the American Civil Liberties Union (ACLU), agreeing not to make its faceprints available to most private entities in the U.S. to protect the safety of marginalized communities (Taetzsch 137). Law enforcement's application of facial recognition systems is clearly not without flaws. In 2022, a black man in Baltimore was mistakenly identified by facial recognition software as a suspect in a second-degree assault case and held in a detention centre for over a week. The following year, another black man was wrongly arrested in Louisiana for theft after facial recognition technology incorrectly identified him.
In addition to generating false positives that result in unwarranted searches and arrests, there are concerns that this technology could be used to invade the privacy of peaceful protesters or individuals facing no criminal accusations. U.S. law enforcement agencies are not required to disclose the use of facial recognition in their investigations, as it is intended to generate leads rather than serve as the sole basis for probable cause. This is concerning because certain groups may be disproportionately affected by entry into the system. A study of over 100 police departments found that African Americans are more likely to be stopped by law enforcement and subjected to facial recognition searches, likely because mugshot databases contain a higher number of African Americans and because of the higher error rate in identifying individuals with darker skin tones (Hu 300).
In the EU, the EURODAC biometric database, used to determine which country is responsible for an asylum seeker's application, has been criticized as a method of immobilizing, controlling, and obstructing the movement of migrants through their fingerprints. Likewise, the United Nations High Commissioner for Refugees (UNHCR) has been criticized for requiring iris scans in a refugee camp in Jordan to register migrants and provide cash assistance through Irisguard's EyePay system, which enables payments through Ethereum blockchain technology. Moreover, the UK has proposed using facial recognition smartwatches to monitor migrants with criminal records. The use of these biometric systems to refuse asylum seekers may criminalize migrants and infringe on their freedom of movement (Valdivia et al. 1408).
Hacks and Leaks
As companies and governments continue to build mass biometric databases of customers, employees, and citizens, they become prime targets for hackers and cybercriminals. Since biometric data protects highly private and critical resources, it can be exploited for fraud or to expose sensitive information for financial gain (Miller). Data brokers are especially vulnerable to breaches as they compile data from various sources. From a national security standpoint, there is little to stop data brokers from providing information on U.S. citizens, including military and government personnel, to foreign entities (Sherman 10).
Another notable risk is identity theft, as hackers can use stolen biometric information to access an individual's banking information and digital wallets (Mendolla 928). Likewise, institutional data breaches are becoming more common. In 2015, the U.S. Office of Personnel Management (OPM) suffered a massive breach of databases containing the Social Security numbers, names, dates and places of birth, and addresses of over 21 million individuals, including federal employees, and hackers gained access to the fingerprint data of 5.6 million people.
Biometric data is almost impossible to destroy. Whereas an individual affected by a breach involving a password can change it to prevent further leaks, once malicious actors get their hands on an individual's biometric information, it can be possessed, used, and sold forever.
Other Concerns
Advances in artificial intelligence have significantly improved machine learning, streamlining the collection of biometric data and the rollout of more advanced system updates. The integration of AI into this process raises concerns, as it can rapidly and continuously monitor, gather, and transmit biometric information to third parties without consumers' awareness. Another challenge is the proliferation of deepfakes: synthetic media that digitally impersonate individuals by replicating qualities such as their face, voice, and mannerisms in order to commit fraud, defame, or harass. This carries large risks; for example, deepfaked footage could place an innocent individual at the scene of a crime where they were never present (Miller).
Furthermore, a troubling emerging practice is the harvesting of biometric data from populations in developing countries. The Worldcoin project, for instance, has faced criticism for targeting low-income communities in the Global South, taking high-resolution images of their bodies, faces, and irises in exchange for cash or crypto tokens. Worldcoin's representatives are alleged to have used deceptive marketing practices, collected more personal data than the company acknowledged, and failed to obtain meaningful informed consent, raising ethical questions about data colonialism.
Despite these risks, the growing ease of biometric data collection and integration into daily life has fostered a troubling indifference toward sharing highly sensitive information (Mendolla 926, 931).
The State of Privacy Protection in the United States
The United States lacks a unified federal privacy law, leading to a fragmented legislative approach to data protection and leaving areas of consumer information vulnerable to exploitation (Mendolla 934). As a result, there is no federal regulation governing biometric privacy, with protections instead relying on a patchwork of state, local, and industry-specific laws. Consequently, restrictions on how this data is collected, used, and sold are either weak, vague, or nonexistent (Miller).
Neither privacy nor biometrics is mentioned in the text of the Constitution, although these concepts may be protected by its authority. Various amendments recognize "zones of privacy", although their legal boundaries are not clearly defined (Woodward 360). The First Amendment protects freedom of speech, press, religion, and association; the Fourth Amendment protects people from unreasonable searches and seizures by the government; and the Fifth Amendment protects against self-incrimination. The majority of constitutional privacy interests are covered by the Fourteenth Amendment, which states that no State shall "deprive any person of life, liberty, or property, without due process of law" (360). While the Constitution regulates government-sector action, it provides only limited protection against the actions of private individuals (359-360).
The Supreme Court has recognized three zones of privacy that may apply to biometric data, vaguely categorized as three forms of fundamental rights: physical privacy, decisional privacy, and information privacy (Woodward 362). However, the Supreme Court has not found constitutional privacy rights in personal information given voluntarily to private parties; thus, a private party in possession of biometric information may have the right to disclose it (376). Furthermore, court decisions have determined that in noncriminal contexts, individuals have limited constitutional protections over their biometrics, and in criminal cases, courts have found that biometrics do not implicate constitutional privacy concerns, as those features are constantly exposed to the public (363-4).
The 1974 Privacy Act regulates federal government agencies' collection, maintenance, use, and dissemination of personal information, subjecting data collectors to specific responsibilities and conferring certain rights on data subjects to protect them from unwarranted invasions of privacy. The Privacy Act also grants citizens the right to access the data stored by federal agencies and requires agencies to inform citizens of the purpose of collecting their data (Taetzsch 130). However, the Act does not regulate the use of personal information by state and local government agencies or by the private sector, nor does it protect any individual who is not a U.S. citizen or has not been lawfully admitted for permanent residence. Although the Privacy Act does not specifically mention biometrics, it applies to all records held in record systems (Woodward 369-370).
The Patchwork of Biometric Privacy Laws
While federal regulation governs industry-specific data privacy, such as HIPAA for healthcare-related data, the Children's Online Privacy Protection Act (COPPA) for children's data on online platforms, and the Gramm-Leach-Bliley Act (GLBA) for financial data, few states have specific regulations covering biometric data (Vuyk 66).
Although the FTC can take action against companies that violate consumers' privacy and security, it has limited power over the overlapping and sometimes contradictory protections for biometric data in the current legal framework (Taetzsch 130). In 2023, the FTC released a policy statement on the use of biometrics in the consumer market, describing the implications for general consumers, such as problems with privacy, data security, and bias and discrimination (Vuyk 64). The policy statement made it clear that the FTC would scrutinize the practices of companies collecting and using biometric data for information or marketing purposes to ensure compliance with Section 5 of the FTC Act, which prohibits "unfair or deceptive acts or practices in or affecting commerce".
Only four states currently have regulations protecting biometric data. Illinois was the first to enact a law specific to biometric data with its 2008 Biometric Information Privacy Act (BIPA), which encouraged Washington and Texas to follow suit with the Washington Biometric Privacy Protection Act (WBPPA) and the Texas Capture or Use of Biometric Identifier Act (CUBI) (Mendolla 940-5). The 2018 California Consumer Privacy Act included biometric data under its definition of sensitive personal information subject to its provisions. Given the significant presence of tech companies in California, the Act's provisions may extend to residents outside the state who use products developed there (Taetzsch 132). In the past year, eleven additional states have introduced biometric-specific privacy bills, though none have passed. While this reflects growing recognition of the need to protect biometric data, differences in definitions, enforcement methods, and remedies may exacerbate the current fragmented approach to privacy laws (Mendolla 941).
Cities have also taken it upon themselves to establish local laws regulating the use of biometric identification technologies. In 2019, San Francisco enacted an ordinance banning police and government agencies from using facial recognition technology. The following year, Portland prohibited the use of face recognition technologies by private entities in places of public accommodation. In 2021, New York City amended its city code, requiring establishments to provide notice that they collect biometric identifiers.
Legislative Progress
In 2021, the Fourth Amendment is Not for Sale Act was introduced; if passed, it would prevent the government from purchasing information from third parties that collect and sell data that may have been obtained through deceptive means. This may include biometric data.
The American Data Privacy Protection Act (ADPPA), a bill introduced in 2022, offers a promising shift toward federal regulation of data collection and processing and awaits a vote in the House of Representatives (Taetzsch 134). The data covered under the ADPPA would include any information that can be reasonably linked to a natural person or their device, and the Act outlines a special category for sensitive data, including biometric information. The ADPPA lists seventeen instances in which companies are allowed to collect and use consumer data, seeking to keep data collection as minimal as possible (Taetzsch 135-7). This data-minimization approach prevents the entities it covers from collecting, processing, or transferring covered data beyond what is reasonably anticipated by consumers and requires that consumers be informed about the reasons behind the collection of their data (Mendolla 952).
A Model for the United States? The European Union’s GDPR
While the majority of nations have comprehensive privacy laws, only around thirty countries regulate the collection and use of biometric data to some degree (Vuyk 54). Currently, the European Union's 2016 General Data Protection Regulation (GDPR) represents the most robust biometric data legislation, reflecting the right to privacy and other fundamental freedoms included in the European Convention on Human Rights.
The GDPR defines biometric data broadly as personal data relating to the physical, physiological or behavioural characteristics of a natural person which enable their unique identification, with the implicit understanding that technology and data types may continue to evolve (Mendolla 950). The GDPR divides key parties into controllers (those who determine how data is used and why, and thus carry the burden of legal responsibility) and processors (those who process data on behalf of controllers). Article 3 of the GDPR specifies that its jurisdiction applies to entities outside the European Union if they process data relating to the provision of goods and services to, or the behaviour of, EU citizens. In relation to biometric data, Article 6 establishes instances in which the use of this information may be permitted, for example when complying with a legal obligation or when explicit consent is given, and Article 9 prohibits the processing of genetic and biometric data to uniquely identify a natural person. A primary strength of the GDPR is its framing of data rights as individual rights, with Articles 13-15 focusing on subjects' right to access their data, and Articles 21-22 focusing on the right to opt in and out of automated decision-making (Hu 298). Under these provisions, EU citizens must have the ability to withdraw their consent, in which case companies must relinquish access to previously stored data and cease any future data collection (Vuyk 65-6).
The GDPR imposes strict data protection principles, such as pseudonymization, so that stored personal data cannot readily be linked to specific individuals, and encryption to heighten security (Melzi et al. 2). Moreover, the GDPR prohibits the sharing of biometric information with third parties without explicit consent. In the event of a breach, the GDPR mandates that controllers report incidents to regulators within seventy-two hours of discovery and perform highly detailed post-breach assessments. In cases of non-compliance, the GDPR has a tiered fine system determined partially by the entity's global annual revenue, allowing governments to collect up to 4% of a company's global revenue and force changes to data-collection policies.
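As an illustration, the Python sketch below implements keyed pseudonymization, one common way to realize the GDPR's pseudonymization principle; the keyed-hash approach, field names, and values are illustrative assumptions, not anything the regulation prescribes.

import hmac
import hashlib

# Illustrative secret key; in practice it would live in a key-management
# system, stored separately from the pseudonymized data.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    # The same input always yields the same token, so records remain
    # linkable for processing, but the token cannot be traced back to
    # the person without the separately held key.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": "jane.doe@example.com", "template_id": "fp-2041"}
safe_record = {**record, "user": pseudonymize(record["user"])}
print(safe_record)  # the email is replaced by an opaque, consistent token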
Although the GDPR is lauded as the most comprehensive privacy regulation, some have voiced concern that, given the significant penalties it imposes for noncompliance, its key terminology is vague or overly expansive. Others believe it will limit growth, innovation, and information sharing. Critics state that the standards are too difficult for small businesses to meet, as the cost of acquiring the technology necessary for compliance could be up to $1 million. Additionally, the GDPR might prevent the collection of data that could aid law enforcement, hindering investigations of fraud, sexual predation, or terrorism. Lastly, critics contend that rather than protecting an individual's legal right to privacy, the GDPR simply ensures data is protected from the technical issues of corruption, compromise, or loss.
In addition to the GDPR, in April 2021 the EU proposed an Artificial Intelligence Act to regulate AI. The proposal designates AI-integrated biometric systems as high-risk technologies, recognizing that the outcomes of these algorithms may lead to discriminatory results and the exclusion of particular groups (Hu 290). In addition to prohibiting biometric categorization systems that infer sensitive attributes such as race and sexual orientation, the AI Act proposes to ban the use of real-time facial recognition in public spaces, with the exception of targeted searches for potential victims of crimes, threats to the life or physical safety of natural persons, or suspected terrorist acts and criminal offences (Valdivia et al. 1418).
Recommendations for Federal Biometric Data Regulation
While biometric authorization offers undeniable convenience for users and companies, it raises significant privacy concerns. The U.S.’s lack of stringent regulation allows companies to implement these technologies with few compliance barriers, but this convenience comes at the potential cost of compromising individuals' most personal data. To address these challenges, clear federal regulation on oversight responsibilities for biometric standards, the conditions under which such data can be used, and the parties authorized to access it must be implemented. The GDPR’s foundational principles are a strong model for this framework.
A federal law governing biometric data in the United States requires considering the concerns of both consumers and corporations. In order to prioritize the rights of consumers to control their data while balancing the practical benefits of biometric authorization and tracking, a national privacy law is a necessary prerequisite to biometric-specific privacy regulation. Federal privacy laws should preempt state laws to reduce the confusion and difficulties companies operating in multiple states face under the current inconsistent patchwork model (Mendolla 954).
Setting aside the lack of a federal privacy law, a national biometric data policy would need to cover all private businesses and individuals that might use this technology. It would not need to govern entities and institutions that already have sector-specific regulations or government agencies that require biometric data for national security. However, legislation should ensure that law enforcement agencies have reasonable suspicion of criminal activity before conducting facial recognition searches. Stronger legislative guidelines could restrict facial recognition searches to mugshot databases, requiring court orders for access to ID photos in serious crimes (Garvie et al. 4). Police must adopt transparent policies, mandate privacy impact assessments, and ensure regular audits to address issues of misuse, inaccuracy, and bias (Garvie et al. -6).
Rather than being all-encompassing and overly restrictive, federal biometric privacy law should be a foundation upon which states can continue to build their own biometric privacy policies, preventing its application from becoming overbearing and impractical. Central to this is a clear definition of what biometric data entails, one that accurately captures its uniquely sensitive nature. Hence, a data-minimization approach similar to the GDPR's is preferable, as it limits data collection to only the most necessary information, gathered with the informed consent of users (Mendolla 956). The establishment of independent oversight bodies could aid in monitoring biometric data collection and use.
Furthermore, federal biometric data regulation must mandate stricter technical safeguards, as these systems demand privacy-protection measures exceeding those applied to other types of personal data. The GDPR's requirements for anonymization and encryption are essential for protecting biometric data but fall short of addressing all challenges: anonymization can diminish the practical utility of biometric data, while traditional encryption cannot accommodate natural variations in physical and behavioural traits (Melzi et al. 4, 6). Hence, additional measures are necessary to guarantee irreversibility and unlinkability. Techniques to achieve these goals include cancelable biometrics, which apply consistent distortions to biometric data while preserving comparability, and soft-biometric minimization or protection, which either removes original attributes or blocks their extraction (Melzi et al. 7).
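To give a sense of how cancelable biometrics can work, the sketch below applies a key-dependent random projection to a template, one family of such transforms; the dimensions, key handling, and comparison logic are illustrative assumptions rather than a production design.

import numpy as np

def cancelable_transform(template: np.ndarray, user_key: int, out_dim: int = 64) -> np.ndarray:
    # The key seeds a reproducible random projection: the same key always
    # produces the same distortion, so transformed templates stay comparable.
    rng = np.random.default_rng(user_key)
    projection = rng.standard_normal((out_dim, template.shape[0]))
    return projection @ template

# Stand-ins for a raw enrolled template and a slightly varied fresh sample.
enrolled = np.random.rand(256)
probe = enrolled + 0.01 * np.random.randn(256)

# Only transformed templates are stored and compared; if one leaks, issuing
# a new key "cancels" it, and the 256-to-64 projection is not invertible.
t_enrolled = cancelable_transform(enrolled, user_key=42)
t_probe = cancelable_transform(probe, user_key=42)
distance = np.linalg.norm(t_enrolled - t_probe) / np.linalg.norm(t_enrolled)
print(f"relative distance: {distance:.3f}")  # small value suggests a match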
Conclusion
The convenience offered by biometric authentication must not come at the cost of our fundamental right to privacy. As these technologies continue to evolve, it becomes increasingly crucial to establish regulation that protects individual privacy and prevents discrimination, while still promoting innovation. People should be able to engage with the digital world without fear that their most sensitive information could be compromised and with confidence that companies will be held accountable for protecting their data.
Glossary
Algorithmic Bias: Systematic errors in AI algorithms leading to unfair outcomes, affecting marginalized groups disproportionately.
American Data Privacy Protection Act (ADPPA): A proposed federal law introduced in 2022 aimed at regulating data collection and processing. It emphasizes data minimization and includes biometric information as sensitive data, offering protections to ensure companies collect only necessary data with consumer consent.
Artificial Intelligence (AI): Advanced algorithms enabling machines to perform tasks like learning, decision-making, and pattern recognition, often integrated with biometric systems.
Artificial Intelligence Act: A proposed EU regulation introduced in 2021 to oversee AI technologies. It categorizes AI-integrated biometric systems as high-risk and proposes restrictions on facial recognition in public spaces.
Biometric Data: Unique physical, chemical, or behavioral characteristics of individuals used to identify them, such as fingerprints, facial features, DNA patterns, voice, or typing rhythm.
Biometric Information Privacy Act (BIPA): A 2008 Illinois law that provides stringent regulations on the collection, use, and storage of biometric data, serving as a model for other states.
Biometric Recognition: The use of biometric data to confirm or establish an individual's identity, often through automated systems that compare collected data to stored templates.
Biometric Templates: Digital representations of extracted biometric features used for comparison in recognition systems.
Cancelable Biometrics: Techniques that distort biometric data to prevent the reconstruction of the original data while maintaining its utility for authentication.
Capture or Use of Biometric Identifier Act (CUBI): A Texas law regulating the collection and use of biometric identifiers, following the precedent set by Illinois' BIPA.
Children's Online Privacy Protection Act (COPPA): A federal law that regulates data collection practices for children under 13 years old on online platforms.
Cybersecurity: Measures to protect systems, networks, and data from cyberattacks, particularly crucial for biometric databases vulnerable to hacking.
Dark Patterns: Design techniques that manipulate or mislead users into actions they might not take knowingly, such as consenting to data collection.
Data Brokers: Entities that collect, aggregate, and sell personal data, including biometric information, often from various sources like public records, businesses, or software applications.
Data Colonialism: The exploitation of populations, often in developing countries, through the collection of their data without proper consent or ethical consideration, typically under unequal power dynamics.
Data Privacy: The protection of personal information from unauthorized access, use, or dissemination.
Deepfakes: Synthetic media, often created using artificial intelligence, that imitates an individual's appearance, voice, or actions, potentially for malicious purposes.
Digital Footprint: The trail of personal information individuals leave behind during online activities, which can be collected and analyzed for various purposes.
Encryption: The process of converting data into a secure format to prevent unauthorized access, particularly important in protecting sensitive information like biometric data.
EURODAC: A European Union biometric database used for storing asylum seekers' fingerprints to determine the responsible member state for asylum applications.
European Convention on Human Rights: A treaty established to protect human rights and fundamental freedoms in Europe, serving as a basis for privacy rights under the GDPR.
Facial Recognition Technology: AI-driven systems that identify individuals by analyzing their facial features, widely used in security, surveillance, and commercial applications.
Federal Trade Commission (FTC): A U.S. agency responsible for protecting consumers and ensuring fair business practices. It oversees data privacy to some extent under Section 5 of the FTC Act.
Fourth Amendment is Not for Sale Act: A 2021 proposed U.S. law that aims to prevent government entities from purchasing data, including biometric information, collected through deceptive means.
General Data Protection Regulation (GDPR): European Union legislation establishing strict data protection rules, including for biometrics.
Gramm-Leach-Bliley Act (GLBA): A federal law that governs the collection, disclosure, and protection of consumers’ financial information by financial institutions.
Health Insurance Portability and Accountability Act (HIPAA): U.S. legislation protecting sensitive health information, which may intersect with concerns about biometric data privacy.
My Data, My Body Act: Proposed U.S. legislation aiming to protect personal reproductive health data, including biometric information, from misuse.
Privacy Act of 1974: U.S. legislation regulating federal agencies' handling of personal data, granting individuals certain rights over their information.
Pseudonymization: A process that replaces personal data with unique identifiers to reduce the risk of identification.
Universal Declaration of Human Rights (UDHR): A foundational United Nations document declaring the right to privacy, among other rights, which may be threatened by biometric data collection.
Washington Biometric Privacy Protection Act (WBPPA): A Washington state law aimed at protecting individuals’ biometric data.
Works Cited
Allied Market Research. “Healthcare Biometrics Market To Surge $24.9 Billion by 2031.” NASDAQ OMX’s News Release Distribution Channel, 21 Mar. 2023. ProQuest, https://www.proquest.com/docview/2788715415/citation/DDB3A109900A42BFPQ/1.
Belanger, Ashley. “Data Broker Allegedly Selling De-Anonymized Info to Face FTC Lawsuit after All.” Ars Technica, 6 Feb. 2024, https://arstechnica.com/tech-policy/2024/02/data-broker-selling-de-anonymized-info-to-face-ftc-lawsuit-after-all/.
Brink, Ronna N., and Rebecca I. Scollan. Usability of Biometric Authentication Methods for Citizens with Disabilities. 1919MG19-AA, MITRE Corporation, Sept. 2019, p. 40, https://www.mitre.org/sites/default/files/2021-11/prs-19-1396-usability-biometrics-for-disabilities.pdf.
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR, 2018, pp. 77–91. proceedings.mlr.press, https://proceedings.mlr.press/v81/buolamwini18a.html.
Caliskan, Aylin, et al. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science (New York, N.Y.), vol. 356, no. 6334, Apr. 2017, pp. 183–86. PubMed, https://doi.org/10.1126/science.aal4230.
Caltrider, Jen, et al. “*Privacy Not Included: A Buyer’s Guide for Connected Products.” Mozilla Foundation, 6 Sept. 2023, https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/.
Clark, Anna Mercado, and Mario Fadi Ayoub. “Biometrics in the Workplace: Privacy Challenges and a Roadmap for Successful Compliance.” Rochester Business Journal, vol. 36, no. 44, Feb. 2021, pp. 9–10.
DiPersio, Denise. “Selling Personal Information: Data Brokers and the Limits of U.S. Regulation.” Proceedings of the Workshop on Legal and Ethical Issues in Human Language Technologies @ LREC-COLING 2024, edited by Ingo Siegert and Khalid Choukri, ELRA and ICCL, 2024, pp. 39–46. ACLWeb, https://aclanthology.org/2024.legal-1.7.
Duewer, David L. Face Recognition Vendor Test (FRVT) Part 8: Summarizing Demographic Differentials. NIST IR 8429, National Institute of Standards and Technology, 2022. DOI.org (Crossref), https://doi.org/10.6028/NIST.IR.8429.
“Facial Recognition Tool Led to Mistaken Arrest, Lawyer Says.” AP News, 2 Jan. 2023, https://apnews.com/article/technology-louisiana-baton-rouge-new-orleans-crime-50e1ea591aed6cf14d248096958dccc4.
Fowler, Geoffrey A. “Driving Surveillance: What Does Your Car Know about You? We Hacked a 2017 Chevy to Find Out.” Washington Post, 17 Dec. 2019, https://www.washingtonpost.com/technology/2019/12/17/what-does-your-car-know-about-you-we-hacked-chevy-find-out/.
“FTC Report Shows Rise in Sophisticated Dark Patterns Designed to Trick and Trap Consumers.” Federal Trade Commission, 15 Sept. 2022, https://www.ftc.gov/news-events/news/press-releases/2022/09/ftc-report-shows-rise-sophisticated-dark-patterns-designed-trick-trap-consumers.
Garvie, Clare, et al. The Perpetual Line-Up. Georgetown Law Center on Privacy & Technology, 18 Oct. 2016, https://www.perpetuallineup.org/.
Grande, Allison. “Top Privacy Developments Of 2022: Midyear Report.” Law360, 22 July 2022, https://www.law360.com/articles/1513282/top-privacy-developments-of-2022-midyear-report.
Grant, Nico. “Google to Let Android Users Opt out of Tracking, Following Apple.” Fortune, 3 June 2021, https://fortune.com/2021/06/03/google-android-users-opt-out-of-tracking-apple/. Accessed 20 Nov. 2024.
Greenberg, Andy. “Millions of Vehicles Could Be Hacked and Tracked Thanks to a Simple Website Bug.” Wired. www.wired.com, https://www.wired.com/story/kia-web-vulnerability-vehicle-hack-track/. Accessed 20 Nov. 2024.
---. “OPM Now Admits 5.6m Feds’ Fingerprints Were Stolen By Hackers.” Wired. www.wired.com, https://www.wired.com/2015/09/opm-now-admits-5-6m-feds-fingerprints-stolen-hackers/. Accessed 20 Nov. 2024.
Greenleaf, Graham. Global Tables of Data Privacy Laws and Bills (6th Ed January 2019). 3380794, Social Science Research Network, 9 Feb. 2019. papers.ssrn.com, https://papers.ssrn.com/abstract=3380794.
Guo, Eileen, and Adi Renaldi. “Deception, Exploited Workers, and Cash Handouts: How Worldcoin Recruited Its First Half a Million Test Users.” MIT Technology Review, 6 Apr. 2022, https://www.technologyreview.com/2022/04/06/1048981/worldcoin-cryptocurrency-biometrics-web3/.
Heiman, Matthew R. A. “The GDPR and the Consequences of Big Regulation.” Pepperdine Law Review, vol. 47.
“High-Level Summary of the AI Act | EU Artificial Intelligence Act.” Future Life Institute, 27 Feb. 2024, https://artificialintelligenceact.eu/high-level-summary/.
Hill, Kashmir. “The Facial-Recognition App Clearview Sees a Spike in Use after Capitol Attack.” The New York Times, 9 Jan. 2021. NYTimes.com, https://www.nytimes.com/2021/01/09/technology/facial-recognition-clearview-capitol.html.
---. “The Secretive Company That Might End Privacy as We Know It.” The New York Times, 18 Jan. 2020. NYTimes.com, https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.
Hu, Margaret. “Biometrics and an AI Bill of Rights.” Duquesne Law Review, vol. 60, 2022, p. 283.
Jain, Anil K., and Arun Ross. “Introduction to Biometrics.” Handbook of Biometrics, edited by Anil K. Jain et al., Springer US, 2008, pp. 1–22. Springer Link, https://doi.org/10.1007/978-0-387-71041-9_1.
Katsanis, Sara H., et al. “U.S. Adult Perspectives on Facial Images, DNA, and Other Biometrics.” IEEE Transactions on Technology and Society, vol. 3, no. 1, 2022, pp. 9–15. Scholars Portal Journals, https://doi.org/10.1109/TTS.2021.3120317.
Kelly, Nicola. “Facial Recognition Smartwatches to Be Used to Monitor Foreign Offenders in UK.” The Guardian, 5 Aug. 2022. The Guardian, https://www.theguardian.com/politics/2022/aug/05/facial-recognition-smartwatches-to-be-used-to-monitor-foreign-offenders-in-uk.
Khan, Lina M., et al. “Policy Statement of the Federal Trade Commission on Biometric Information and Section 5 of the Federal Trade Commission Act.” Federal Trade Commission, 18 May 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/p225402biometricpolicystatement.pdf.
Kumar, Tajinder, et al. “Examining the Vulnerabilities of Biometric Systems: Privacy and Security Perspectives.” Leveraging Computer Vision to Biometric Applications, Chapman and Hall/CRC, 2024.
Layton, Roslyn, and Julian McLendon. “The GDPR: What It Really Does and How the U.S. Can Chart a Better Course.” The Federalist Society, 29 Oct. 2018, https://fedsoc.org/fedsoc-review/the-gdpr-what-it-really-does-and-how-the-u-s-can-chart-a-better-course.
Mehorter, Kelly. “Life360 Secretly Sells Users’ Geolocation Data to Third Parties, Class Action Claims.” ClassAction.Org, 18 Jan. 2023, https://www.classaction.org/news/life360-secretly-sells-users-geolocation-data-to-third-parties-class-action-claims.
Melzi, Pietro, et al. “An Overview of Privacy-Enhancing Technologies in Biometric Recognition.” ACM Comput. Surv., vol. 56, no. 12, Oct. 2024, p. 310:1-310:28. ACM Digital Library, https://doi.org/10.1145/3664596.
Mendolla, Mackenzie K. “A Blurry Lens: Assessing the Complicated Legal Landscape of Biometric Privacy through the Perspective of Mobile Apps.” Seton Hall Law Review, vol. 54, 2023-2024, p. 923.
Miller, Sterling. The Basics, Usage, and Privacy Concerns of Biometric Data. 20 July 2022, https://legal.thomsonreuters.com/en/insights/articles/the-basics-usage-and-privacy-concerns-of-biometric-data.
Nakashima, Ellen. “Hacks of OPM Databases Compromised 22.1 Million People, Federal Authorities Say.” Washington Post, 10 July 2015. www.washingtonpost.com, https://www.washingtonpost.com/news/federal-eye/wp/2015/07/09/hack-of-security-clearance-system-affected-21-5-million-people-federal-authorities-say/.
United Nations. “Universal Declaration of Human Rights.” United Nations, https://www.un.org/en/about-us/universal-declaration-of-human-rights. Accessed 20 Nov. 2024.
Noone, Greg. “The Controversial Rise of Biometrics for Migrants.” Tech Monitor, 20 Oct. 2022, https://www.techmonitor.ai/policy/privacy-and-data-protection/biometrics-safe-data-protection.
Press, Eyal. “Does A.I. Lead Police to Ignore Contradictory Evidence?” The New Yorker, 13 Nov. 2023, https://www.newyorker.com/magazine/2023/11/20/does-a-i-lead-police-to-ignore-contradictory-evidence.
Robb, Drew. “The Future of Biometrics in the Workplace.” SHRM, 22 Feb. 2022, https://www.shrm.org/mena/topics-tools/news/technology/future-biometrics-workplace.
Rosner, Lisa Joy. “Council Post: How Biometric Data Will Shift The Privacy Conversation.” Forbes, https://www.forbes.com/councils/forbescommunicationscouncil/2019/07/02/how-biometric-data-will-shift-the-privacy-conversation/. Accessed 20 Nov. 2024.
Sample, Ian. “What Are Deepfakes – and How Can You Spot Them?” The Guardian, 13 Jan. 2020. The Guardian, https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
Schuman, Evan. “Biometrics Are Even Less Accurate than We Thought.” Computerworld, 5 Dec. 2022, https://www.computerworld.com/article/1615663/biometrics-are-even-less-accurate-than-we-thought.html.
Sherman, Justin. “Data Brokers and Sensitive Data on U.S. Individuals.” Duke Sanford Cyber Policy Program, 2021, https://techpolicy.sanford.duke.edu/wp-content/uploads/sites/4/2021/08/Data-Brokers-and-Sensitive-Data-on-US-Individuals-Sherman-2021.pdf.
Slefo, George P. “Got $1 Million? You’re That Much Closer to Being GDPR Compliant.” Ad Age, 11 Dec. 2017, https://adage.com/article/digital/gdrp-privacy-costing-media-companies/311582.
Taetzsch, Emily. “Privacy Purgatory: Why the United States Needs a Comprehensive Federal Data Privacy Law.” Journal of Legislation, vol. 50, no. 1, Jan. 2024, p. 121.
Tazzioli, Martina. The Making of Migration: The Biopolitics of Mobility at Europe’s Borders. SAGE Publications, Inc., 2020. SAGE Knowledge, https://doi.org/10.4135/9781526492920.
Valdivia, Ana, et al. “There Is an Elephant in the Room: Towards a Critique on the Use of Fairness in Biometrics.” AI and Ethics, vol. 3, no. 4, Nov. 2023, pp. 1407–22. Springer Link, https://doi.org/10.1007/s43681-022-00249-2.
Vuyk, Isabel. “The Unregulated World Of Your Most Personal Of Personal Information: A Proposal For A Federal Biometric Information Privacy Law.” The University of Cincinnati Intellectual Property and Computer Law Journal, vol. 9, no. 1, Mar. 2024. COinS, https://scholarship.law.uc.edu/ipclj/vol9/iss1/3.
Wang, Chen, et al. “User Authentication on Mobile Devices: Approaches, Threats and Trends.” Computer Networks, vol. 170, Apr. 2020, p. 107118. ScienceDirect, https://doi.org/10.1016/j.comnet.2020.107118.
Winters-Miner, Linda A., et al. “Chapter 17 - The Predictive Potential of Connected Digital Health.” Practical Predictive Analytics and Decisioning Systems for Medicine, edited by Linda A. Winters-Miner et al., Academic Press, 2015, pp. 975–88. ScienceDirect, https://doi.org/10.1016/B978-0-12-411643-6.00045-4.
Woodward, John D. “The Law and the Use of Biometrics.” Handbook of Biometrics, edited by Anil K. Jain et al., Springer US, 2008, pp. 357–79. Springer Link, https://doi.org/10.1007/978-0-387-71041-9_18.