The U.K. is one of the most surveilled countries in the world. Closed-circuit television (CCTV) cameras are everywhere, from shops to buses to private homes. In the past couple of years, artificial-intelligence (AI) techniques, such as neural networks, have propelled large-scale automated facial recognition. Combined with the U.K.’s huge network of existing CCTV cameras, this could hold vast potential for security applications.
However, there are worrying implications for privacy. Biometric facial data is particularly troubling because it can be captured from photos or CCTV footage without the person’s consent or knowledge. It may be collected indiscriminately, whether or not the subject matches a “watch list,” and it allows people to be located and their movements tracked. It’s easy to imagine photos of crowds with every face linked to the person’s identity, then potentially connected to all kinds of other data about them.
If we are going to trade our privacy for increased security, the benefits must outweigh the costs. Is facial-recognition technology actually effective in catching criminals?
Police forces around the U.K. have performed trials of live facial-recognition technology in public places such as shopping malls, sports stadiums, and busy streets. The country’s Metropolitan Police have undertaken 10 trials of the technology so far, using dedicated camera equipment along with NEC’s NeoFace algorithm, which measures the structure of each face, including the distances between features such as the eyes, nose, mouth, and jaw. According to NEC, the algorithm tolerates poor-quality images such as compressed surveillance video; the company claims it can match images with resolutions as low as 24 pixels between the eyes.
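To make that concrete, here is a minimal sketch of how a landmark-distance approach to face matching can work in principle. It is a generic illustration, not NEC’s proprietary algorithm: it treats a face as a set of landmark points, computes the distances between them, and normalizes by the distance between the eyes so that image scale doesn’t matter. All landmark positions below are hypothetical.

```python
import numpy as np

def landmark_distances(landmarks: np.ndarray) -> np.ndarray:
    """Compute all pairwise distances between facial landmarks,
    normalized by inter-eye distance so features are scale-invariant."""
    n = len(landmarks)
    # Pairwise Euclidean distances between every pair of landmarks.
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Normalize by the distance between the two eyes (landmarks 0 and 1),
    # so faces photographed at different scales remain comparable.
    inter_eye = dists[0, 1]
    return dists[np.triu_indices(n, k=1)] / inter_eye

def similarity(face_a: np.ndarray, face_b: np.ndarray) -> float:
    """Higher is more similar; 1.0 means identical distance ratios."""
    gap = np.linalg.norm(landmark_distances(face_a) - landmark_distances(face_b))
    return 1.0 / (1.0 + gap)

# Two hypothetical five-landmark faces: left eye, right eye, nose tip,
# and the two mouth corners, as (x, y) pixel coordinates.
face1 = np.array([[30, 40], [70, 40], [50, 60], [40, 80], [60, 80]], float)
face2 = face1 + np.random.normal(0, 1.0, face1.shape)  # slightly perturbed
print(similarity(face1, face2))
```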
NeoFace won four consecutive U.S. National Institute of Standards and Technology “Face in Video Evaluation” (NIST FIVE) awards for performance, but it hasn’t done terribly well in the field. In the eight Metropolitan Police deployments of the technology between 2016 and 2018, 96% of the Met’s facial-recognition matches misidentified innocent members of the public as being on a watch list.
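That 96% figure measures the share of alerts that were wrong, not the share of all faces scanned, and the base-rate effect explains how a system can fail this way: when almost nobody in a crowd is on the watch list, even a small per-face error rate swamps the handful of genuine matches. The numbers below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical deployment figures, for illustration only: a per-face
# false-positive rate that sounds small still produces mostly wrong
# alerts when almost nobody in the crowd is on the watch list.

faces_scanned = 50_000   # passersby scanned (hypothetical)
on_watch_list = 10       # of whom are genuinely on the watch list
fpr = 0.001              # false-positive rate per innocent face (0.1%)
tpr = 0.70               # chance a watch-listed face is correctly flagged

false_alerts = (faces_scanned - on_watch_list) * fpr  # ~50 innocent people
true_alerts = on_watch_list * tpr                     # ~7 correct matches

share_wrong = false_alerts / (false_alerts + true_alerts)
print(f"{share_wrong:.0%} of alerts point at innocent people")  # ~88%
```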
A major criticism of the trials has been that the public was not sufficiently alerted to them and was not given the opportunity to opt out. The BBC filmed one passerby in North London covering his face. The police photographed him anyway and gave him a £90 fine for disorderly conduct.
South Wales Police’s more extensive trials of the technology were marginally more successful: Only 91% of the matches made were misidentifications of innocent members of the public. Images of all passersby, matched or not, were stored by the police for 31 days.
This trial prompted one member of the public to take legal action against South Wales Police. The person had been photographed while out shopping and while at a peaceful anti-arms protest. He claims that automated facial-recognition technology violated his right to privacy, interfered with his right to protest, and breached data-protection laws (judgment is due this fall).
A recent panel discussion at the AI conference CogX in London tackled the ethics of automated facial-recognition technology. Martha Spurrier, director of human rights organization Liberty, highlighted research findings that surveillance distorts even non-criminal behavior.
“This technology affects all of us,” she said. “It affects how we all interact with public spaces. It affects whether we decide to go to a protest, whether we decide to go and pray somewhere, or whether we hang out with people who might have been in trouble with the police.”
Well-known issues with bias in facial recognition, whereby women and people of color are more likely to be misidentified by algorithms, are also a cause for concern, she said. “[We know that] law enforcement marginalizes and demonizes particular communities. And this technology, whether it’s biased in and of itself, will be used by people where bias is already entrenched. We unfortunately have no answer to dealing with police racism. So we shouldn’t equip [the police] with even more invasive tools to be able to enact that racism on an even greater scale.”
Griff Ferris, legal and policy officer for civil liberties and privacy organization Big Brother Watch, said at the conference that the public should be concerned about trials of live facial recognition like the ones carried out by U.K. police. “It is mass surveillance; it biometrically scans every single face within a public space, checking against the watch list … it treats your face like a fingerprint or DNA swab,” Ferris said. He asked the audience how many would have been willing to give fingerprints or have their DNA swabbed when entering the auditorium (no hands went up). “That’s how it treats a public space or a concert or a protest. That’s what it’s doing, and it’s doing it without your knowledge and without your consent.”
Shaun Moore, CEO of facial-recognition company Trueface, was the sole representative of a facial-recognition company on the CogX panel. Trueface started with a smart doorbell in 2013 but now provides facial recognition to banks for secure ATM transactions, to the hospitality industry for VIP recognition, and to law enforcement.
“The [U.S.] government does not need facial recognition to surveil you,” he said. “They have social media. They have your phone records. They have your credit card statements. So you really need to think about this more holistically than just surveillance.”
While he conceded that testing the technology on people who didn’t consent was “inappropriate,” he said that there was no way around it if we want the technology to work. “That is the only way to collect real information, and without collecting real information, you will not know where the issues lie, so it’s a bit of a [quandary]. But without real deployment of this technology with people who don’t know that it’s watching them, you will not get the information you need to see if it’s accurate enough to identify people that you are looking for.”
Moore said that Trueface prides itself on enforcing responsible use of its technology, primarily by screening customers carefully and supplying the technology for opt-in applications. “We sell directly to our customers into their local infrastructure, so it’s not a cloud API; you can’t just go onto our site and buy our technology,” he said. “We spend at least six months with our customers to know exactly how they’re using it, and we work in … primarily opt-in applications.”
Trueface’s technology protects those who haven’t opted in by blurring all faces in real time, said Moore. Facial imagery is retained only for the moment the system performs its matching, and non-matching faces are completely erased from the system. Moore also pointed out that facial recognition does not require storing images of faces: the Trueface system creates a proprietary mathematical representation of each face, so even if someone hacked into a database, the numbers could not be reverse-engineered to reconstruct a particular face.
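Trueface’s format is proprietary, but the general technique Moore describes, storing a compact numeric “embedding” of a face rather than the image itself, can be sketched as follows. Everything here is an illustrative assumption (the stand-in network, the 128-dimensional vector, the 0.6 threshold), not Trueface’s actual system.

```python
import numpy as np

EMBEDDING_DIM = 128  # a common size for face embeddings; illustrative

def embed(face_pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a trained neural network that maps a face image to a
    fixed-length vector. Real systems train this so two photos of the same
    person land close together; only the vector is ever stored."""
    # Placeholder "network": deterministic per image, random per content.
    seed = abs(hash(face_pixels.tobytes())) % 2**32
    v = np.random.default_rng(seed).normal(size=EMBEDDING_DIM)
    return v / np.linalg.norm(v)  # unit-normalize for cosine comparison

def is_match(stored: np.ndarray, probe: np.ndarray,
             threshold: float = 0.6) -> bool:
    """Compare embeddings by cosine similarity. The database holds only
    `stored` vectors; the image is discarded after embedding. Because the
    network throws away most image information, vendors argue the vector
    cannot be turned back into a recognizable face."""
    return float(stored @ probe) >= threshold
```

A design point worth noting: matching happens between vectors, never between pictures, which is what allows a system to discard imagery immediately after the embedding step.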
Meanwhile, other private-sector companies are divided on whether the technology in its current form should be used for surveillance applications. Amazon has provided its Rekognition facial-recognition tool to law enforcement agencies across the United States, despite an open letter from leading AI researchers calling for it to stop on the grounds that the practice is discriminatory. Microsoft has said that it doesn’t want its facial-recognition technology used by law enforcement, with Microsoft President Brad Smith calling for regulation of the technology.
WHAT DOES THE LAW SAY?
“What’s really clear is that [U.K.] policy in this area is a hot mess,” said Hetan Shah, executive director of the Royal Statistical Society. “It is really complicated, and that’s why we don’t have easy answers in this space.”
U.K. law stipulates the conditions under which police may collect fingerprints and DNA and what happens to that data afterward. It does not cover any other form of biometrics, such as facial-recognition images and data. While this type of data may be loosely covered by the U.K. Data Protection Act or the European General Data Protection Regulation (GDPR), biometric data other than DNA and fingerprints is not named specifically in either law. A High Court ruling in 2012 found it unlawful for the U.K.’s Home Office to store facial images of innocent people, but the Home Office responded with only a workaround, offering to delete the images of “unconvicted persons” who applied to have them removed.
The Ada Lovelace Institute, of which Shah is deputy chair, was created to investigate ethical issues related to AI technology. The Institute has commissioned a study of public perception of facial-recognition technology used in public places.
“One thing we’re talking to stakeholders about is calling for a moratorium on this technology, which we think is in the private sector’s interest as much as citizens’ interest, because if we get the regulation wrong, we’ll quickly be in the wrong place,” Shah said, citing a precedent set by the moratorium on genetic profiling in the insurance industry. “In the same way that we’ve seen the science of genetically modified foods get set back 10 years because the public in Europe wouldn’t support it, we need to find a path that gets social license to operate. The best way to do that is to pause on technology and to have this discussion in many forums around the [U.K.] and think through them overall.”
The situation overseas varies widely. San Francisco is one of the first cities to have explicitly restricted use of the technology, following a grassroots campaign supported by the American Civil Liberties Union of Northern California. At the other extreme, frequently cited as a worst-case scenario, is China. While that country is keen to show off police officers wearing sunglasses equipped with facial-recognition cameras, there has been an international outcry over its use of facial recognition to track its Uighur Muslim community. China reportedly uses facial-recognition technology on an enormous, nationwide network of CCTV cameras to classify the ethnicity of its citizens, with alarms sounding if, for example, too many Uighurs appear in one area. This level of social control, however alarming to Western sensibilities, is perfectly possible with today’s technology.
While this is far from the situation on British streets, without any form of regulation, it’s easy to imagine the technology invading people’s right to privacy or, at worst, being deliberately misused.
The low accuracy and high rates of misidentification demonstrated in U.K. trials are particularly concerning, especially for women and people of color, who are disproportionately misidentified by today’s facial-recognition algorithms. The “if you have nothing to hide, you have nothing to fear” argument may no longer hold true. ■