Human rights groups, including Black Lives Matter UK, are demanding the new Metropolitan Police chief end his force’s use of “inaccurate and highly invasive” facial recognition technology.
In a letter to Met Commissioner Sir Mark Rowley, sent on his first day on the job today, several major organizations have called on him to ditch the tech, which they claim has an 87 percent failure rate.
The force began trialling the use of live facial recognition technology in the capital in 2016, before rolling out the software more widely earlier this year.
The tech is usually deployed in crowded areas and has been used to scan hundreds of thousands of faces at protests, sporting events and even concerts.
It works by scanning the faces of everyone in range and comparing them in real time with a database of people on a watch list.
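For illustration, the loop below is a minimal sketch of how such a comparison might work, assuming each face is reduced to a numerical “embedding” and matched by a similarity score. Every name, the toy embedding function and the threshold here are hypothetical stand-ins; the Met’s actual software is proprietary and differs in detail.

```python
# Toy sketch of live facial recognition watch-list matching.
# The embedding model is faked with random vectors; the threshold,
# names and helper functions are all hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(seed=0)

def embed(face_image):
    # Stand-in for a real face-encoding model: maps a face to a unit vector.
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

# Pre-computed embeddings for people on the watch list.
watchlist = {name: embed(None) for name in ("person_a", "person_b")}

MATCH_THRESHOLD = 0.6  # hypothetical; real systems tune this per deployment

def scan_frame(detected_faces):
    """Compare every face in a camera frame against the watch list."""
    alerts = []
    for face in detected_faces:
        probe = embed(face)
        for name, reference in watchlist.items():
            similarity = float(np.dot(probe, reference))  # cosine similarity of unit vectors
            if similarity >= MATCH_THRESHOLD:
                # In deployment, this alert would go to an officer for review.
                alerts.append((name, similarity))
    return alerts

print(scan_frame(detected_faces=[object(), object()]))  # almost certainly prints []
```

The threshold is the crux of the accuracy debate: set it low and the system flags more innocent passers-by; set it high and it misses the people it is looking for.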
Signatories of the letter, which also include Liberty and Big Brother Watch, said that in almost nine out of 10 cases the technology wrongly flagged people as matches for individuals on the police’s watch list.
False alerts have included a 14-year-old schoolboy and a French exchange student who had been in the country for only a few days.
The campaigners also repeated concerns that the technology is being deployed in areas with a higher proportion of people from ethnic minorities and is even less accurate for women.
Big Brother Watch director Silkie Carlo, whose organization has carried out observations of recognition tech deployments, said: “Millions of Londoners’ faces have been scanned by facial recognition cameras without their consent and without many parliamentarians’ awareness.
“If the new commissioner is serious about fighting crime effectively while addressing discrimination in policing, he cannot endorse the use of a technology with an 87 percent failure rate, that pointlessly drains police resources, and is well known to have issues with racist and sexist misidentifications, many of which we’ve witnessed.”
Liberty director Martha Spurrier warned the Met’s use of the technology was violating people’s rights and threatening their freedoms.
Police use of live facial recognition technology has already been challenged in court: in 2020 the Court of Appeal ruled that South Wales Police’s use of the technology was unlawful and violated people’s human rights.
A Met spokeswoman said the technology has helped to “locate dangerous individuals and those who pose a serious risk to our communities.”
She added that the Met’s own calculations put the rate of false alerts at between 0 and 0.08 percent.
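The two headline figures are not directly comparable: campaigners express false alerts as a share of all the alerts the system raises, while the Met expresses them as a share of every face scanned. The toy numbers below are assumptions chosen purely for illustration, to show how a single deployment can produce both statistics at once.

```python
# Hypothetical figures, invented only to show how the same deployment
# can yield an "87 percent failure rate" and a near-zero "false alert
# rate" at the same time, depending on the denominator used.
faces_scanned = 20_000  # everyone who walked past the cameras (assumed)
alerts = 15             # faces the system flagged as watch-list matches (assumed)
false_alerts = 13       # alerts later found to be wrong (assumed)

failure_rate = false_alerts / alerts              # campaigners' measure
false_alert_rate = false_alerts / faces_scanned   # the Met's measure

print(f"failure rate: {failure_rate:.0%}")          # -> failure rate: 87%
print(f"false alert rate: {false_alert_rate:.2%}")  # -> false alert rate: 0.07%
```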