Facial Recognition: the front line of security versus privacy

New Zealand Security Magazine, June-July 2019


With the city of San Francisco recently banning authorities’ use of facial recognition surveillance, privacy and accuracy concerns are starting to catch up with developments in the technology, writes chief editor Nicholas Dynon

Three distinct speeds appear to have emerged in the race to adopt facial recognition surveillance technologies: (i) ‘full-speed ahead’ towards a smart-city future, (ii) ‘slowly but surely’ stressing oversight and regulation, and (iii) ‘straight to pit lane’ banning of what is seen as a biased and inaccurate biometric collection technology that threatens civil liberties.

While New Zealand’s approach appears to be middle-of-the-road, many countries within the Asia Pacific region are speeding full-tilt towards incorporating facial recognition into new smart city projects utilising IoT to deliver infrastructure that is sustainable, efficient and secure by design. On the flip side, San Francisco last month became the first major city in the US to ban its authorities’ deployment of the technology.

Full speed ahead

From government agencies to private operators, there is increasing demand for advanced analytics-enabled surveillance systems for safety and security purposes. Facial recognition gives organisations the capability to identify and track individuals on their premises and to flag visitors who are of interest.

Facial recognition can be used to identify anyone from repeat customers to shoplifters, from missing persons to potential terrorists. Machine vision smarts make these systems quick and contactless, often able to process many moving facial images at once.
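To illustrate the mechanics, the sketch below shows roughly how a watchlist-style check on a single camera frame might look using the open-source Python face_recognition library. The file names, the single-frame workflow and the 0.6 match tolerance are assumptions made for this example, not a description of any particular vendor’s system.

```python
# Illustrative sketch only: flagging watchlist faces in one camera frame with
# the open-source face_recognition library (a Python wrapper around dlib).
# File names and the 0.6 tolerance are assumptions for this example.
import face_recognition

# Build a small "watchlist" of known face encodings from reference photos
# (assumes each reference photo contains exactly one face).
watchlist = {
    "person_of_interest": face_recognition.face_encodings(
        face_recognition.load_image_file("person_of_interest.jpg")
    )[0],
}

# One still frame from a camera; a real system would work on a video stream.
frame = face_recognition.load_image_file("camera_frame.jpg")

# Detect every face in the frame, then encode them all in one pass --
# the "many faces at once" capability described above.
locations = face_recognition.face_locations(frame)
encodings = face_recognition.face_encodings(frame, known_face_locations=locations)

for (top, right, bottom, left), encoding in zip(locations, encodings):
    for name, known_encoding in watchlist.items():
        # compare_faces returns True when two encodings fall within the
        # tolerance; a looser value raises false matches, a tighter value
        # misses genuine ones.
        if face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]:
            print(f"Flagged {name} at ({left}, {top})")
```

Even in this toy example, a single tolerance threshold determines how often the system misidentifies people, which is the accuracy question regulators and the Privacy Commissioner return to below.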

For these reasons, facial recognition cameras are seen as a cornerstone of the surveillance capability of the archetypal ‘smart city’. According to a McKinsey Global Institute report, the deployment of such smart applications results in 30 to 40 percent fewer crime incidents.

Many private operators, such as in the retail sector, utilise facial recognition for security and business intelligence purposes. Law enforcement and border security agencies also see its obvious benefits, and the emerging technology appears to be finding comparatively strong footholds in jurisdictions where there are fewer privacy barriers.

Straight to pit lane

“Police praise the technology’s power to improve investigations, but many agencies also try to keep their methods secret,” Jon Schuppe recently observed in NBC News. “In New York, the police department has resisted attempts by defense attorneys and privacy advocates to reveal how its facial recognition system operates.”

“Sometimes people arrested with the help of facial recognition aren’t aware that it was used against them. Because police don’t treat facial recognition as evidence for presentation in court, the technique does not often turn up in public documents and has not been the subject of many judicial rulings.”

Such is the potential for abuse of the technology by government authorities that tech capital San Francisco took the step in May of banning the use of facial recognition software by the police and other agencies.

The result of an 8-to-1 vote of the city’s Board of Supervisors, the ban makes San Francisco the first major American city to implement such a prohibition. It comes in the wake of concerns from civil liberties groups that abuse of the technology could lead the US down the track of becoming an oppressive surveillance state.


And it’s not just local government that’s applying the handbrake. James Vincent, writing in The Verge, noted in April that Microsoft had turned down a request from law enforcement in California to use its facial recognition technology in police body cameras.

“Speaking at an event at Stanford University, Microsoft president Brad Smith said the company was concerned that the technology would disproportionately affect women and minorities,” wrote Vincent. “Past research has shown that because facial recognition technology is trained primarily on white and male faces, it has higher error rates for other individuals.”

Slowly but surely

According to the American Civil Liberties Union (ACLU), the spread of facial recognition technology represents a “serious threat to civil liberties and civil rights”. 

“Transparency, accountability and oversight for facial recognition is critical to preventing government misuse,” commented the ACLU’s Northern California chapter.

“Companies developing facial-recognition software need to consider how their products enable dragnet surveillance, discriminatory enforcement, and abuse. Then those companies should take action to protect civil rights. Communities should be passing local laws to make sure that discriminatory surveillance systems are not secretly deployed in their neighborhoods.”

Echoing the ACLU’s comments, Dr Shaun Ryan from the University of Canterbury told RNZ recently that there is always a margin of error associated with AI-enabled surveillance.

He said privacy safeguards also need to be considered, including who has access to the footage. “Often companies with these types of system will keep videos so that if they do make mistakes, they can use it to improve the algorithms,” he said.

Ultimately, a recorded facial image is a biometric record – a physical or behavioural human characteristic, such as facial image, fingerprints, voice, iris scan or gait, that is used to digitally identify a person.

“As the potential and application of biometric technology multiplies, making sure people’s privacy is protected has never been more important,” Biometrics Institute Chief Executive Isabelle Moeller commented ahead of her organisation’s annual Asia Pacific conference in Sydney last month.

The Institute’s privacy guidelines are believed to be the first comprehensive, universal privacy guidelines for biometric collection.

Made up of sixteen principles ranging from non-discrimination to maintaining a strong privacy environment, the guidelines follow the launch in March of the Institute’s Ethical Principles for Biometrics, which cover:

  • Redress and complaints by people who have suffered discrimination, humiliation or damage as a result of biometric-related systems
  • Stronger privacy protection for data collection by automated systems, especially for minors and those with disabilities
  • Advice on managing subcontractors
  • The role of audits and privacy impact assessments
  • Managing data breaches
  • The right of citizens to have their biometric data and records amended or deleted.

“Even if facial recognition software is highly accurate, there will still be times when it can get things wrong,” says New Zealand’s Office of the Privacy Commissioner (OPC). “Therefore any organisation or business using facial recognition technology needs to undertake a high level of scrutiny over how accurate it is and how thoroughly it has been tested for use in New Zealand.”

Some factors about facial recognition that the Privacy Commissioner suggests considering include:

  • What is the lawful purpose for using the technology? (principle 1 of the Privacy Act)
  • How will you notify people that you are using the technology? (principle 3)
  • Will the technology be used in a way that might be unfair or unreasonably intrusive? (principle 4)
  • Will the personal information be stored securely? (principle 5)
  • How will you accommodate an individual’s right to access the information about them? (principle 6)
  • How will you accommodate an individual’s right to correct information about them, if it is wrong? (principle 7)
  • How will you make sure the information collected is up-to-date and accurate? (principle 8)
  • How long will you keep the information for? (principle 9)
  • What will be your reasons for disclosing the information? (principle 11).

According to the Privacy Commissioner, organisations need to take the risk of misidentification seriously, and ask themselves what controls and processes they can put in place to minimise that risk.
