Police agencies worldwide are trialing new automated facial recognition technology designed to increase arrests and prevent crime by catching suspicious persons before they strike.
Clearly the upside is huge. Software hunts for the target, acquires it, tracks it and reports back to the authorities, whose job then is to merely scoop the bad person up. But so far that dream remains just that, a dream. The reality is that these systems are pretty terrible at identifying all but the clearest of images taken under the best conditions. One such system deployed in London actually has a 98% error rate so far, with no plans to pull it from the arsenal, despite having almost no value to the police.
The Register UK reports that at a hearing in the London Assembly, top cop Cressida Dick was called to explain why the police are spending time on this tech when it is currently facing legal challenges and is widely unpopular with the general public. She responded that the people expect cops to stay on top of technology, and emphasized that the trials have been extremely controlled so as to protect the public. And protect them they must, I imagine, when the tech is only 2% accurate! She later went on to broadly pan the technology even while defending it, saying “It’s a tool, it’s a tactic. I’m not expecting it to result in lots of arrests”.
Critics say such technology violates basic human rights to privacy and freedom of expression, but even more concerning is what could happen if someone is falsely identified. The Met police suggested that they are only searching for a very small number of violent offenders, but stopped short of guaranteeing that no one else will be caught up in the dragnet.