
The Equality and Human Rights Commission (EHRC) has formally declared that the Metropolitan Police Service’s use of live facial recognition (LFR) technology is unlawful, marking a significant escalation in the ongoing legal and ethical debate surrounding biometric surveillance in the United Kingdom[1]. This intervention by the UK’s human rights regulator adds considerable weight to existing legal challenges and amplifies concerns from civil liberties groups about the absence of a robust legislative framework governing police use of this powerful technology.
This development is not isolated; it occurs against a backdrop of rapid expansion in LFR deployment across the UK. The Metropolitan Police has recently installed fixed LFR cameras in Croydon, a first for London, which are activated during specific police operations[2]. Police forces nationwide, including Northamptonshire, Essex, Bedfordshire, and Hampshire, are now actively using or trialing various forms of facial recognition technology, moving it from a limited experiment to a standard policing tool[4]. This widespread adoption is happening despite a successful 2020 legal challenge against South Wales Police, which set a precedent by ruling that its use of LFR was unlawful due to inadequate guidance and oversight[4].
Technical Operation and Efficacy Claims
The Metropolitan Police defends its use of LFR by citing operational successes and implemented safeguards. The technical process involves cameras scanning faces in real time and comparing them against a pre-defined “watchlist.” According to the Met’s lead for facial recognition, Lindsey Chiswick, the technology has facilitated over 1,000 arrests. In the last year alone, the system scanned approximately 1.5 million faces and produced 459 arrests, a rate of roughly one arrest per 3,300 scans. The force also acknowledges that more than half of all “true matches” did not lead to an immediate arrest, suggesting that officers exercise operational judgment beyond the algorithmic match. A critical safeguard highlighted by the police is the permanent deletion of biometric data belonging to individuals who do not match any watchlist entry[2].
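To make the matching step concrete, here is a minimal sketch of how a watchlist comparison loop typically works. The embedding representation, the cosine-similarity metric, the threshold value, and all function names are illustrative assumptions; they do not describe the Met’s actual system:

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # similarity cutoff; a critical, tunable parameter

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(live_embedding: np.ndarray,
                watchlist: dict[str, np.ndarray]) -> str | None:
    """Compare one embedding from a live camera frame against a
    watchlist; return the best-matching identity, or None."""
    best_id, best_score = None, -1.0
    for identity, ref_embedding in watchlist.items():
        score = cosine_similarity(live_embedding, ref_embedding)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= MATCH_THRESHOLD:
        return best_id  # candidate alert: a human operator still decides
    return None  # no match: the stated safeguard is immediate deletion
```

A non-match returning None is where the “permanent deletion” safeguard would apply in practice; everything hinges on how reliably that deletion is enforced downstream.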
The composition and management of these watchlists are a central point of contention. As outlined by Liberty, a civil liberties organization, these lists can contain images of anyone, sourced from anywhere, including social media platforms. The criteria for inclusion rest on “common law powers” and often subjective judgments by officers, such as having “reasonable grounds” to suspect an individual may offend. This lack of stringent, legally defined parameters for watchlist creation is a core issue identified by the EHRC and legal analysts[3][4]. The Met’s own policy document confirms these operational criteria, which critics argue grant excessive discretion[4].
Bias, Error Rates, and Real-World Impact
A major technical and ethical challenge for LFR is the proven risk of algorithmic bias and false positives. The Metropolitan Police frequently cites a report by the National Physical Laboratory (NPL) that found its system accurate, with “no significant bias.” However, a deeper analysis by the Ada Lovelace Institute reveals critical limitations in that study. The NPL report was a snapshot of a single software version tested under ideal conditions, and it crucially found a “statistically significant higher rate of false positives observed for Black people under certain threshold settings.” The report also lacks the authority to mandate specific threshold settings, leaving this critical variable to individual officer discretion[6].
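Why leaving the threshold to discretion matters can be shown with a toy calculation. The score distributions below are invented purely for illustration and are not drawn from the NPL evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical similarity scores for *non-matching* faces in two groups.
# A small upward shift for one group models the kind of skew that can
# arise from unrepresentative training data.
non_match_scores = {
    "group_a": rng.normal(0.30, 0.10, 100_000),
    "group_b": rng.normal(0.36, 0.10, 100_000),
}

for threshold in (0.55, 0.60, 0.65):
    rates = {g: float((s >= threshold).mean())
             for g, s in non_match_scores.items()}
    print(f"threshold={threshold}: false positive rates {rates}")
```

At every threshold, group_b sees more false alerts than group_a, and loosening the threshold widens the gap. That is why a report that cannot mandate threshold settings leaves the decisive fairness variable to individual deployments.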
This technical flaw has direct, harmful consequences. A case study highlighted by Saunders Law involves Shaun Thompson, a Black community worker who was wrongly flagged and detained for 30 minutes near London Bridge in February 2024. He is now pursuing legal action against the force. This incident is not an anomaly; campaign group Big Brother Watch claims over 3,000 people have been wrongly identified by such systems nationwide[4][5]. Furthermore, an analysis of deployment data indicates that the Met has disproportionately used LFR in boroughs with higher-than-average Black populations, compounding the risk of discriminatory outcomes[4].
The Regulatory Vacuum and Expansion into New Frontiers
The central argument from critics is that this technology is being deployed in a “regulatory wild west.” There is no primary legislation in the UK specifically designed to govern police use of live facial recognition. Forces currently rely on a patchwork of existing laws like the Data Protection Act 2018 and the Human Rights Act 1998, combined with internal guidance, which legal experts argue is insufficient for the profound privacy and equality implications of LFR[3][4]. This stands in stark contrast to the European Union’s approach under the AI Act, which implements a strict, risk-based legislative framework, including prohibitions on real-time remote biometric identification in public spaces[6].
This lack of clear regulation is not limited to policing. The private sector is rapidly adopting biometric surveillance. Retailers like Asda, Southern Co-op, and Frasers Group are using systems from providers like Facewatch to combat shoplifting, a practice that has itself faced legal challenges[5][6]. Even more concerning is the rise of “inferential biometrics,” such as emotion recognition. Network Rail has trialed Amazon’s emotional analytics technology at eight UK rail stations, and similar systems have been used in recruitment and education, despite a lack of scientific consensus on their validity. These systems often fall outside the stricter “special category” data protections of UK GDPR because they are not used for identification, creating a major regulatory blind spot[6].
Relevance to Security Professionals
For security architects and operational teams, the proliferation of biometric surveillance systems represents both a shift in the threat landscape and a new set of data governance challenges. The technical specifications of these systems, including their data processing locations, retention policies, and encryption standards, are critical from an infrastructure security perspective. The potential for these systems to be compromised, leading to the exfiltration of sensitive biometric databases, constitutes a severe risk. Furthermore, the proven inaccuracies and biases inherent in the algorithms raise serious questions about the integrity of the data being used for security decisions, potentially leading to false positives that waste resources or false negatives that create security gaps.
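One concrete control implied here is enforcing retention limits at the storage layer, so that biometric records cannot linger past policy. The following is a minimal sketch under assumed names and an assumed TTL; it is not modeled on any real deployment:

```python
import time
from dataclasses import dataclass, field

RETENTION_SECONDS = 60  # assumed TTL; real policy would set this value

@dataclass
class BiometricRecord:
    subject_ref: str   # pseudonymous reference, not a name
    embedding: bytes   # the sensitive biometric payload
    created_at: float = field(default_factory=time.time)

class BiometricStore:
    """In-memory stand-in for a biometric data store with enforced TTL."""

    def __init__(self) -> None:
        self._records: list[BiometricRecord] = []

    def add(self, record: BiometricRecord) -> None:
        self._records.append(record)

    def purge_expired(self) -> int:
        """Drop records older than the retention window; return the
        count deleted so it can be written to an audit log."""
        cutoff = time.time() - RETENTION_SECONDS
        kept = [r for r in self._records if r.created_at >= cutoff]
        deleted = len(self._records) - len(kept)
        self._records = kept
        return deleted
```

Scheduling purge_expired on a short interval, and auditing its return value, turns a retention promise into something testable.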
The regulatory uncertainty creates compliance headaches. Organizations considering or deploying such technologies must navigate a fragmented oversight environment involving the Information Commissioner’s Office (ICO), the Biometrics and Surveillance Camera Commissioner, and potential future legislation. The ICO has already demonstrated its willingness to act, as seen when it ordered Serco Leisure to stop using facial recognition technology (FRT) to monitor employee attendance[6]. Security leaders must implement rigorous data protection impact assessments, ensure transparency in their use of biometrics, and adhere to the core principles of data minimization and purpose limitation to mitigate legal and reputational risk.
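Purpose limitation, in particular, is easiest to defend when it is enforced in code rather than in policy documents alone. A minimal sketch, with hypothetical purpose names and record fields:

```python
from enum import Enum

class Purpose(Enum):
    WATCHLIST_SCREENING = "watchlist_screening"
    ATTENDANCE_MONITORING = "attendance_monitoring"

class PurposeViolation(Exception):
    """Raised when data is accessed for a purpose it was not collected for."""

def access_biometric(record: dict, requested: Purpose) -> dict:
    """Gatekeeper: allow access only for the collection purpose."""
    if record["purpose"] is not requested:
        raise PurposeViolation(
            f"collected for {record['purpose'].value}, "
            f"requested for {requested.value}"
        )
    return record

record = {"subject_ref": "anon-001", "purpose": Purpose.WATCHLIST_SCREENING}
access_biometric(record, Purpose.WATCHLIST_SCREENING)   # permitted
# access_biometric(record, Purpose.ATTENDANCE_MONITORING)  # raises
```

Routing every read through such a gatekeeper also produces the audit trail a data protection impact assessment will ask for.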
Conclusion and Future Implications
The intervention by the Equality and Human Rights Commission against the Metropolitan Police is a pivotal moment, signaling that the current ad-hoc approach to governing live facial recognition is untenable. The technical evidence of racial bias, the real-world harm caused by errors, and the rapid, unlegislated expansion into both public and private sectors create a pressing need for a comprehensive legal framework. The Ada Lovelace Institute’s recommendation for a “comprehensive, legislatively backed biometrics governance framework” overseen by an independent regulator with clear enforcement powers appears to be the necessary path forward[6].
Without decisive legislative action, the UK risks fostering a pervasive surveillance infrastructure that erodes public trust, disproportionately impacts minority communities, and operates without the necessary legal safeguards. For the security community, this situation underscores the importance of building systems with ethics, accuracy, and robust data governance at their core, rather than treating those qualities as afterthoughts. The technical capabilities of biometrics are advancing faster than the policies that govern them, and closing this gap is one of the most significant challenges at the intersection of technology, security, and human rights.