Facial Recognition vs. ‘Super-Recognizers’: Do Humans Have the Edge Over Tech?
An elite police team in London with the ability to recognize thousands of different people is outperforming the technology — for now.
London is one of the most watched cities in the world: Its inhabitants are caught on camera about 300 times a day on average, and the British capital has become a testbed for police use of live facial recognition. But the technology, which powers a multibillion-dollar market for security firms and building management, has troubling limitations. Adding to its embarrassment, a special team of human officers has, anecdotally, been doing a better job than the cameras.
London’s Metropolitan Police conducted 10 trials of live facial recognition from 2016 to 2019, using face-matching software from Japanese technology firm NEC Corp. and cameras mounted on surveillance vans and poles. But just 19% of the matches the system made were correct, according to an independent study of the trials by the University of Essex. Most of the time, the software was wrong.
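To put that figure in context: the Essex researchers reportedly counted 42 computer-generated alerts across the deployments they observed, of which only 8 could be verified as correct. A minimal sketch of the arithmetic (the variable names are illustrative; the counts are the study’s widely reported figures):

    # Precision of the Met's live facial recognition, per the figures
    # reported in the University of Essex evaluation (Fussey & Murray, 2019)
    flagged = 42          # alerts raised by the face-matching software
    verified_correct = 8  # alerts later confirmed as true matches

    precision = verified_correct / flagged
    print(f"Alerts that were correct: {precision:.0%}")      # -> 19%
    print(f"Alerts that were wrong:   {1 - precision:.0%}")  # -> 81%

Note that this 19% says nothing about people the system never flagged; it means only that roughly four out of five alerts pointed at the wrong person.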
There were other problems, according to Pete Fussey, who co-authored the study. Police had what he called a “deference to the algorithm,” tending to agree with whatever the software suggested. “I saw that so many times,” he says. “I was in a police van and I remember people saying, ‘If we’re not sure, we should just assume it’s a match.’” Fussey never saw an officer’s assessment overturned when it agreed with the software; assessments that disagreed with the system, however, often were.
A spokeswoman for the Met said that facial recognition searches “can be part of a careful investigative process with any match being an intelligence lead for the investigation to progress.” She declined to say how many arrests had been made as a result of the technology.
Fortunately, there may be a human answer. One evening during the trials, when officers were parked near a theater in Central London, their laptop beeped to show a person of interest nearby. Up popped a mugshot, alongside a still of the man who’d just walked past their cameras. The new camera image was grainy, and the officers mulled over whether it was a match. Then someone spoke up. “That’s not him. No way.” The officer was a visiting super-recognizer, and he turned out to be correct.
Super-recognizers make up a tiny team of London police who act as human counterparts to facial recognition software. They never forget a face and are skilled at identifying people in a crowd. About 1% of the population has the ability, according to Josh Davis, an academic at the University of Greenwich who has studied super-recognizers. (A number of online tests can tell you whether you have it.)
The Met’s super-recognizers have worked as a dedicated unit for about a decade. They’ve had some highs — a spate of positive press attention — and some lows, including the time in 2019 when the U.K.’s Forensic Science Regulator dismissed their work as unscientific.
But Patrick O’Riordan, one of the first super-recognizers on the Met, says that on a good week, each member of his team will positively identify 20 to 30 people on a watch list when reviewing video footage of robberies or while attending live events. In 2019, he helped stop more than a dozen pickpockets from entering a large music festival in London by spotting them in the crowds as they entered. His team also made about 20 positive matches of known gang members a day during 2019’s Notting Hill Carnival, one of London’s biggest annual events.
Compare that with the 11 matches made by the cameras and software over four years of trials.
Humans, of course, have their own biases and make mistakes, too. And the prowess of O’Riordan and his team is anecdotal. Given the lack of research into their error rates, it’s unclear whether super-recognizers perform worse when, for instance, identifying people of a different race from their own. Facial recognition software, for its part, has higher error rates for people with darker skin.
Replacing fallible technology with fallible humans isn’t the answer. But putting them together might be. Humans often babysit algorithms when the stakes are high: Meta Platforms Inc. (the new corporate name of Facebook), for instance, employs thousands of people to check its software’s ability to spot misinformation and hate speech, and it should hire many more. Policing is as high-stakes as it gets, with the potential to ruin lives with every mistake. That means police forces need to invest in more skilled people to oversee any new artificial intelligence they deploy.
I disagree with the Met’s use of live facial recognition. It’s invasive and far too error-prone. But there’s no denying that the technology is growing: venture capital investment in facial recognition has been steady over the past few years, according to Crunchbase, and smaller facial-recognition players are filling the vacuum left by Big Tech companies like Amazon.com Inc. and IBM Corp., which have stopped selling it.
If London’s police continue to use the technology, which looks likely, the force should capitalize on its years of experience with staff trained to spot faces: super-recognizers are far less likely to defer to an algorithm than the officers who so often did during the live facial-recognition trials. To do otherwise is to hand too much power to machines. That usually doesn’t end well.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of “We Are Anonymous.”