When a system misidentifies a person, they could be wrongly detained by law enforcement.
And if a system is worse at identifying particular groups than others (perhaps because of insufficient training data), people belonging to these groups will be more likely to fall victim to wrongful identification.
There are fundamental questions of privacy, consent, and function creep.
How can people be sure their facial data is not being shared?
What protections are there for people who are of no interest to the police?
Could ‘potential’ criminals be detained before committing a crime?
These concerns have been long recognized within the industry.
6 guiding principles for face recognition
The Biometrics Institute was set up in 2001 to promote the responsible and ethical use of biometrics. “Often, legislation can’t keep up,” explains Chief Executive Isabelle Moeller. “The technology is moving so fast that it’s complicated to provide the right framework in time.”
Meanwhile, bodies such as the Biometrics Institute are working hard to help organizations grapple with the big questions.
“What does it mean if we collect people’s data? How do we store that information securely? These are the types of questions we’re seeking to answer on a day-to-day basis,” says Moeller.
In 2019, the Institute updated its privacy guidelines to factor in the growth of artificial intelligence, drones, and more sophisticated facial recognition systems. It continues to build on the work being done by organizations across the board.
Under Europe's GDPR, processing biometric data for the purpose of uniquely identifying a person is prohibited by default, subject to a limited set of exemptions. One of these exemptions is explicit consent (specific, informed, and unambiguous).
However, the regulation does not define consent in great detail, and its application to facial recognition has yet to be tested thoroughly.
In public places, where the tech is used for surveillance or targeted advertising, it is hard to see how consent can be explicit.
A venue might display a sign explaining that facial recognition will be used.
But is consent ‘freely given’ in this scenario?
Can anyone opt out?
What if a person enters without seeing the sign?
Industry insiders believe that transparency is necessary to the ethical development of facial recognition – and that regulation, such as GDPR, provides it.
ABI Research’s Dimitrios Pavlakis says: “Data protection is vital for face recognition; citizens need to know how their data is being used. Innovation and technological progress should not exclude responsibility.”
Frederic Trojani, chairman of the Security Identity Alliance, agrees with Pavlakis: “The way biometric data will be used should be explicitly explained to the people. Regulations need to set clear rules on individual privacy and data protection. People need answers to questions such as: will my data be stored? For what reason? For how long? Do I have the right to erase it? GDPR is a good example of how to do this.”
In 2019, San Francisco became the first major US city to ban the use of facial recognition by police and other municipal agencies. The move was seen as largely a gesture because the police department did not deploy facial identification at the time.
Nevertheless, industry insiders believe regulators should reserve judgment before outlawing the tech outright. Joseph Hoellerer, Senior Manager, Government Relations, at the Security Industry Association, says: “We view any moves to ban facial recognition as premature and problematic.”
“Lawmakers need to look at the issue holistically. They need the full picture before acting. The best way to characterize face recognition in law enforcement is to see it as one of many available tools. It should not be used as the sole basis for apprehending someone. But it should not be rejected either.”
Melissa Doval, CEO of Kairos, agrees that there are important ethical questions to address around facial recognition.
US-based Kairos offers technology that companies can use to apply facial identification to their own databases. Developers can use Kairos’s APIs to match the same face or even detect whether a face is present.
Used ethically, facial recognition can improve citizen security by helping the police detect and catch criminals faster.
It might also help prevent crime before it has occurred.
But regulations need to be put in place to limit its use to well-identified and legitimate cases.
Completely banning this technology seems hasty, which is why cities such as London have established expert panels to examine the issues.
Doval believes any debate should consider factors beyond the technology alone. “These are human questions,” she says. “Facial recognition systems are not like connected cars, for example, where the vehicle itself might need to make a potentially life-or-death decision. Facial recognition merely feeds the information that a person acts on.”
She is confident that society will eventually settle on an ethical framework for the tech.
But in the meantime, Kairos is selective about its client base.
It turns away customers daily and works with those that apply facial recognition to matching/authentication and not surveillance.