Facial recognition: An ethical policing tool?

By Samuel Woodhams | Digital rights researcher and journalist

Facial recognition technology made headlines again last month when researchers at the University of Cambridge said that UK police forces’ use of the technology was unethical and potentially unlawful. The report from the Minderoo Centre for Technology and Democracy urged police to stop using live facial recognition (LFR) in public spaces and said trials by the Metropolitan Police and South Wales Police failed to meet the “minimum legal and ethical standards.”

The report highlighted what it called a lack of transparency, accountability, and oversight surrounding the use of the technology, while noting that it “poses threats to human rights, especially for racialised and marginalised communities.”


The researchers are far from the first to highlight the harms of facial recognition technology. More than 200 civil society organisations, along with the UN High Commissioner for Human Rights and the European Parliament, have called for a ban on its use in public spaces. The risks are also becoming more widely known thanks to the likes of Coded Bias, a Netflix documentary covering Joy Buolamwini’s crucial work at the Massachusetts Institute of Technology (MIT), which exposed the discriminatory impact and racial biases embedded in the technology.

But despite the criticism, facial recognition technology is still being used by police forces across the world, from Colombia to China, while meaningful regulation lags behind.

 

A surveillance camera is seen near a Chinese flag in Shanghai, China, August 2, 2022. REUTERS/Aly Song

 

In the UK, the legality of the technology has been called into question several times over the past few years. In 2020, the Court of Appeal found South Wales Police’s use of the technology to be unlawful. The year before, the Automated Facial Recognition Technology Bill was introduced to ban the use of the technology in public places, though it never became law.

Despite this, some UK police forces remain steadfast in their support of the technology. Responding to the recent report from the University of Cambridge, Mark Travis of South Wales Police told the Guardian that “the whole aim of using facial recognition technology is to keep the public safe.”

“I believe the public will continue to support our use of all the available methods and technology to keep them safe, providing what we do is legitimate and proportionate,” he said.

But accurately gauging public opinion is difficult, as there’s been little meaningful public consultation on the topic. Meanwhile, mounting evidence indicates that the technology is far less effective than some police forces would like to think. In fact, according to the Metropolitan Police’s own figures, just nine arrests were made across five operations between February 2020 and July 2022, during which more than 125,000 people’s faces were scanned, roughly one arrest for every 14,000 faces scanned.

There has been some success in limiting police forces’ use of facial recognition internationally. More than 20 cities in the United States have moved to restrict the use of the technology, while constraints have also been introduced in Belgium, Morocco, and Luxembourg.

However, it remains to be seen how long these regulations will stay in place. Several US cities have already rolled back their restrictions. And even if meaningful regulations on LFR are passed, countless other technologies in development could come with similar problems.

 

A CCTV security surveillance camera overlooks a street as people walk following the spread of the coronavirus disease (COVID-19) in Beijing, China, May 11, 2020. REUTERS/Thomas Peter

Beyond facial recognition

 

While live facial recognition dominates headlines, there are other types of technology with strikingly similar capabilities that often fly under the radar.

Last year, the UK’s Metropolitan Police spent £3 million on retrospective facial recognition (RFR) technology. RFR scans images already collected by CCTV, rather than scanning people’s faces in real time. In an article I wrote last year, experts warned that the technology can be used in much the same way as LFR and carries many of the same risks. Despite this, there are almost no limitations on its use, either in the UK or abroad.

For some police forces, simply detecting faces in a crowd doesn’t go far enough. Several products now not only identify people but also analyse their emotions. The technology, which is already being used in the heavily monitored region of Xinjiang, China, can then supposedly help police predict crime.

Like a lot of new surveillance technology, however, it is unlikely to live up to the hype: emotion recognition has repeatedly been shown to be inaccurate. Worse yet, it has been accused of resting on “racist pseudoscience” that could lead to higher rates of discriminatory policing.

The UK’s Information Commissioner’s Office (ICO) has warned against the use of such technology in the UK. But as attempts to regulate live facial recognition show, meaningful restrictions on potentially dangerous surveillance technology can take years to establish.

With new guidance from the ICO on biometric technologies expected next spring, it is crucial that the regulator looks ahead and offers proactive, comprehensive, and meaningful guidance on facial recognition and the other forms of biometric surveillance that may soon become staples of contemporary policing.

Recommended Reading

Khari Johnson, How Wrongful Arrests Based on AI Derailed 3 Men’s Lives, Wired, March 7, 2022.

Discussions of facial recognition can often overlook the human impact. This article shows how wrongful arrests caused by facial recognition software damage people’s lives and highlights the importance of regulating its use.

Evani Radiya-Dixit, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Minderoo Centre for Technology and Democracy, University of Cambridge, October 2022.

The full report from the University of Cambridge offers detailed information regarding the ethical and legal problems surrounding the police’s use of the technology. It also includes an audit that could be adopted for analysing future trials of the technology in the UK.

Lauren Rhue, Emotion-Reading Tech Fails the Racial Bias Test, The Conversation, January 3, 2019.

This article from back in 2019 shows that racial bias within facial recognition software is far from new. The study found that emotion-reading technology considers black faces to be angrier than white faces, something that could have awful consequences in a policing context.

Nicol Turner Lee and Caitlin Chin, Police Surveillance and Facial Recognition: Why Data Privacy Is Imperative for Communities of Color, Brookings Institution, April 12, 2022.

This report for the Brookings Institution makes the case for stronger privacy protections in the United States to help limit the risks of facial recognition technology, particularly as they relate to communities of colour.