NEW best practice principles for the use of facial recognition technology by law enforcement have been released by the World Economic Forum.
The proposed governance framework will be piloted in the Netherlands to see if it can mitigate the risks inherent in facial recognition surveillance.
The white paper represents the first global multistakeholder effort to manage the risks facial recognition technology poses to citizens, produced in partnership with the International Criminal Police Organization (INTERPOL), the Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI), and the Netherlands police.
Netherlands head of special police operations Marjolein Smit-Arnold Bik said the country’s police will begin testing the assessment questionnaire in early 2022.
“Building and maintaining trust with citizens is fundamental to accomplish our mission and we are well aware of the various concerns related to facial recognition,” she said.
“In this regard, being the first law enforcement agency to test the self-assessment questionnaire is a means to reaffirm our commitment to the responsible use of facial recognition for the benefits of our community.”
In September, police in Australia’s two most populous states began trialling facial recognition technology as part of the Covid-19 self-isolation program, raising privacy concerns.
Little-known tech firm Genvis Pty Ltd said on a website for its software that New South Wales and Victoria, home to more than half of Australia’s population of 25 million, were trialling its facial recognition products.
Genvis said the trials were being conducted on a voluntary basis.
Under the system being trialled, people respond to random check-in requests by taking a ‘selfie’ at their designated home quarantine address.
If the software, which also collects location data, does not verify the image against a “facial signature”, police may follow up with a visit to the location to confirm the person’s whereabouts.
The Australian Human Rights Commission has said facial recognition should not be used in the country until effective legal safeguards are implemented.
It is not yet known whether the newly released best practice guidelines will be used in the trials.
INTERPOL director of operational support and analysis Cyril Gout said the framework had been co-designed to serve as a unique reference for law enforcement in its 194 member countries on the responsible and transparent use of facial recognition.
“We will support its implementation through our global police network to increase awareness of this important biometric technology,” Mr Gout said.
“Almost 1,500 terrorists, criminals, fugitives, persons of interest or missing persons have been identified since the launch of INTERPOL’s facial recognition system in 2016.”
Irakli Beridze, head of UNICRI’s Centre for Artificial Intelligence and Robotics, said ensuring the human-rights-compliant use of facial recognition technology, in a way that is strictly necessary and proportionate to legitimate policing aims, is of immense importance.
“We are pleased to contribute to this valuable initiative to develop a robust governance framework for the use of facial recognition in the context of criminal investigations,” Mr Beridze said.
“[We] believe that it will also be an important source for our broader joint work with INTERPOL on the responsible use of artificial intelligence by law enforcement.”