David Horsburgh CPP PSP PCI, Managing Director at Security Risk Management, explores the legal and ethical issues around the use of facial recognition – a technology New Zealand still appears to be making its mind up about.
Live street surveillance systems have traditionally identified events in real time that require intervention, or have served as a post-event investigation tool. In contrast, live facial recognition technology targets the individual, comparing people against a database of faces – a watchlist.
This key point of difference has precipitated increasing international opposition to the technology. In May 2019, the City of San Francisco became the first US city to ban the use of facial recognition technology by local government agencies. In October 2019, the city of Berkeley followed suit, and it is expected that a number of other US cities will ban the technology.
In May 2019, during live facial recognition trials by the UK Metropolitan Police, a man was charged with disorderly behaviour because he hid his face as he walked past a police van equipped with facial recognition cameras. In August 2019, a Swedish school was fined 20,000 euros by the Swedish Data Protection Authority for using facial recognition to check pupil attendance.
My research on the topic focuses on two key questions:
- Does the state agency use of facial recognition breach the New Zealand Bill of Rights Act 1990 (NZBoRA) and the Privacy Act 1993?
- Does the private sector use of live facial recognition technology breach the Privacy Act?
Fischer and Green (1998) define security as "a stable, relatively predictable environment in which an individual or group may pursue its ends without disruption or harm and without fear of disturbance or injury."
I argue that the Fischer and Green definition needs to be viewed from a broad perspective. Does, for example, the definition include the protection of human rights including the right to privacy, freedom of expression, association and movement, and freedom from discrimination?
Interpretations of ‘security’ are influenced by our individual roles within society. As a police officer I spent three years on the ‘Mr Asia’ investigation, and during that period, in 1978, new legislation enabled us to install listening devices inside premises – and I was a great supporter of that.
The New Zealand Security Intelligence Service (NZSIS) in the 1960s, ‘70s and ‘80s in particular focused on trade union officials and peace activists as being threats to our society. I can honestly say that during my service with the Police and NZSIS the privacy rights of others were not high on my priority list, but that I now take a somewhat different view.
Security needs to be ‘proportionate’ and I can’t emphasise the importance of that word enough. I will demonstrate that under New Zealand legislation, and under international instruments to which New Zealand is a signatory, there is a requirement for security to be proportionate. Security mitigation strategies should be proportionate to the identified threat.
People need to be treated with dignity and respect and state agencies, especially, need to recognise human rights enshrined in the Bill of Rights Act 1990 (NZBoRA).
UK Metropolitan Police live facial recognition field trials
These trials took place between 2017 and 2019. Six of those field trials between June 2018 and May 2019 were independently reviewed by a US university. The review had the support of the Metropolitan Police.
The review personnel undertook an ethnographic research approach using observation and interaction methodologies. They were entitled to attend police briefings and debriefings and to take part in observations in the field.
The review questioned the legality of live facial recognition by law enforcement. It found that there was no explicit statute authorising the use of facial recognition technologies. Conversely, the Police claimed that they had an implicit authority to use it because they have a statutory function of protecting public order and preventing crime.
The review considered that the interference with human rights, including the right to privacy, must be in accordance with the law, must pursue a legitimate aim, and must be necessary in a democratic society – and this view flows into New Zealand legislation under the NZBoRA.
The finding of the review was that the term “in accordance with the law” requires explicit statute. When I refer to Omar Hamed’s case later in this article, the term explicit statute becomes of great significance in the New Zealand context.
The review found that implicit legal authority – as claimed by the police – is insufficient and does not meet the legal threshold of “in accordance with the law”. It found that police had failed to establish that live facial recognition was necessary in a democratic society.
The human rights requirement is intended to ensure that measures useful for the protection of public order and the prevention of crime do not inappropriately undermine other rights, including those necessary for the effective functioning of a democratic society, such as the right to private life, the right to freedom of expression, and the rights to freedom of assembly and association.
In the UK there are further legal impediments to the use of facial recognition. The European Union’s General Data Protection Regulation (GDPR) currently applies in the UK. Whether it will apply after Brexit remains to be seen, but under GDPR the collection of sensitive biometric data that can uniquely identify people without their explicit consent is prohibited.
The UK also imposes a legal requirement to conduct a Data Protection Impact Assessment for sensitive data processing. Automated facial recognition involves processing biometric data to uniquely identify an individual, and as such it falls into the category of sensitive data processing.
UK case law has found that mere observation of people in a public place without further analysis does not give rise to interference with the right to private life (Perry v The United Kingdom). However, private life considerations arise if, instead of mere monitoring, areas of a public scene are recorded and analysed (P.G. and J.H. v The United Kingdom).
In Omar Hamed’s case, Justice Sir Peter Blanchard found that you do not have a reasonable expectation, under most circumstances, of privacy in a public place. Chief Justice Dame Sian Elias took a very different view.
On 22nd August 2019, the Financial Times reported that the European Commission is planning regulation that will give EU citizens explicit rights over the use of facial recognition technologies as part of the overhaul in the way Europe regulates artificial intelligence. The aim is to limit the indiscriminate use of facial recognition technology by companies and public authorities.
New Zealand Perspective
In May 2018, a Dunedin man was mistakenly identified as a shoplifter in a New World supermarket, and he was detained by staff. The customer was advised that they had identified him as a known shoplifter – an allegation that was subsequently proven to be incorrect. The Otago Daily Times reported that the store was using facial recognition technologies, although the store claimed that the technology was not used in this particular case.
In another case, in December 2016, a woman entered a supermarket and examined hams on display just prior to Christmas. She subsequently left the shop without making a purchase. The following day a friend of hers who worked at another supermarket advised her that her photograph had been placed on the supermarket’s noticeboard and in the staffroom.
She had been identified by the manager as a suspicious person when she examined the hams. He had done a frame grab of the video, and it had been put up not only in the staffroom of that supermarket but also in the staffrooms of a number of other supermarkets in the city.
The manager said that this had happened because her behaviour had made him suspicious, but he agreed that there was no evidence to support his suspicion.
Live facial recognition technology is being used increasingly in retail outlets in New Zealand. Image databases known as ‘watchlists’ are being used by retailers not only to identify people who have been convicted of shoplifting or who have been trespassed from their premises, but also to identify people suspected of potential involvement in crime.
The use of the technology in this way by retail outlets increases the likelihood of Privacy Act breaches.
The New Zealand Police report that they use facial recognition technology for post-event investigation and do not have any current plans to use it in a live format. However, there is an increasing rollout of street surveillance systems throughout New Zealand and an increasing number of these are being live streamed to police stations. It is not unrealistic to expect that there will be an increasing desire by the police to use live facial recognition technology in the future.
State agency use of facial recognition technologies has the potential to breach both the NZBoRA and the Privacy Act.
New Zealand Bill of Rights
Applicable sections of the NZBoRA are Section 16 (Freedom of Peaceful Assembly), Section 17 (Freedom of Association), Section 18 (Freedom of Movement), Section 19 (Freedom from Discrimination), and most importantly Section 21 (Unreasonable Search and Seizure). Section 21 states that everyone has the right to be secure against unreasonable search or seizure, whether of the person, property, or correspondence or otherwise.
There is an argument that you do not have a reasonable expectation of privacy in a public place. Senior judiciary in this country have differing views as to whether that position is correct.
The Law Commission has found that public surveillance is not explicitly addressed by the Search and Surveillance Act. Key points from the Commission’s review of that Act include that public surveillance:
- is often used for general screening purposes, which is relevant to the proportionality assessment under Section 21.
- may impact on the privacy of large numbers of people, many of whom may not be suspected of any wrongdoing.
- may have a chilling effect on the freedom of expression.
The Law Commission further states that the free expression of opinions and exchange of information is one of the fundamental underpinnings of our society. If public surveillance became commonplace it would spell the end of the practical obscurity that many people take for granted when they move about in public. Widespread monitoring of the general population by the state, said the Commission, must be avoided.
Omar Hamed and Others v The Queen
This Supreme Court case involved the police undertaking a covert operation on Tuhoe land, with Chief Justice Elias writing at length on her interpretation of Section 21 of the NZBoRA and on New Zealand’s obligations under Article 17 of the International Covenant on Civil and Political Rights.
According to Article 17, “no one shall be subject to arbitrary or unlawful interference with his privacy, family, home or correspondence nor to unlawful attacks on his honour or reputation. Everyone has the right to the protection of the law against such interference or attacks.”
Elias expressed the view that Section 21 is a constraint on state activity; it protects personal freedom and dignity from unreasonable and arbitrary state intrusion providing for the right to be let alone. Of importance, Elias stated that in principle there is no reason why activity in a public place should by virtue of that circumstance alone be outside the protection of Section 21 of the NZBoRA.
She discussed at length the UK decision Regina v Somerset County Council Ex Parte Fewings and Others, in which it is stated that for private persons the rule is that you may do anything you choose that is not prohibited by the law.
So, private citizens can do whatever they like as long as no law prohibits it. But, says Elias, for public bodies the rule is opposite, and so of another character altogether – action taken by state agencies must be justified by positive law. This links back to the UK review’s finding that explicit law is required for the use of facial recognition.
Elias went on to say that a police search that is not authorised by law is unlawful and that unlawful police search is itself unreasonable search contrary to Section 21 of the NZBoRA. Importantly, she further stated that intrusive search is not a power to be treated as implicit in general statutory policing powers. She said, “I consider that the police cannot undertake surveillance lawfully in the absence of specific authority of law.”
Our further link to the findings of the independent review in the UK is Section 5 of our NZBoRA, which states that the rights and freedoms contained in the Bill of Rights may be subject only to such limits prescribed by law as can be demonstrably justified in a free and democratic society.
So, here we’ve got the same terminology that is used in the Covenant on Civil and Political Rights – “justified in a free and democratic society”.
In Omar Hamed’s case, Justice Blanchard took a different view to Elias. He stated that, with some exceptions, filming in a public place is not a breach of Section 21. He did, however, state that if filming involved technological enhancements, such as the use of infrared illumination, a breach may occur.
So, in the context of a darkened alley at night, you cannot see what is happening with the naked eye; but if you use infrared technology you may breach Section 21, because the darkness provides a reasonable expectation of privacy in that alleyway.
Bearing in mind that the Omar Hamed case concluded in 2011, the question arises as to whether the use of facial recognition technology falls within the meaning of Blanchard’s description of technological enhancements and, as such, breaches Section 21 of the NZBoRA.
A watchlist is a dataset against which a template captured by facial recognition is compared. The following questions need to be considered in the design of a watchlist:
- Do entries on the watchlist only relate to persons convicted of serious crimes?
- Is it permitted to put on a watchlist people who have been identified as being able to provide information relevant to an active criminal investigation?
- Can watchlist compilation be abused, including against political activists exercising their democratic right to protest? (In one US case, facial recognition was used to identify people taking part in a public protest against police use of force on a minority group.)
- Would the intelligence community have any influence over watchlist compilation?
- Would the database be expanded to include driver licences and passport photographs of law-abiding citizens?
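In engineering terms, a watchlist lookup is a nearest-neighbour comparison: a template captured from a camera frame is scored against every stored template, and scores above a threshold are flagged as matches. The following sketch is purely illustrative – the identities, vectors, and threshold are invented, and real systems use high-dimensional embeddings produced by a trained face model – but it shows why watchlist composition and threshold choice drive both false positives and scope creep.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in the range [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_against_watchlist(probe, watchlist, threshold=0.9):
    """Return (identity, score) pairs whose similarity exceeds the threshold.

    Every face scanned is compared against every entry – which is why
    adding driver licence or passport images to a watchlist subjects the
    whole population to biometric screening.
    """
    hits = []
    for identity, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= threshold:
            hits.append((identity, round(score, 3)))
    return hits

# Invented example data: short vectors standing in for face embeddings.
watchlist = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.2, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.31]  # template captured from a live camera frame

print(match_against_watchlist(probe, watchlist))
```

Lowering the threshold to catch more genuine matches inevitably raises the false positive rate – the trade-off behind the trial figures discussed later in this article.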
In a recent public meeting conducted by Victoria University of Wellington, a visiting law lecturer from the US stated that over 50 percent of all US adults are now enrolled on facial recognition databases, normally via their driver’s licence photographs. For example, the Michigan police database contains over 50 million driver’s licence photographs and mugshots.
The dangers of a database compilation are highlighted by a case involving the Sensible Sentencing Trust. As reported by the Privacy Commissioner in Case Note 294302, the Sensible Sentencing Trust wrongly labelled a man on its website as being a paedophile. The man had the same first and last names of a convicted paedophile, but his middle name was different.
The Trust published his photograph on its website for a period of two years, and it wasn’t until a customer of the person told him that he was no longer welcome to visit the shop because he was a convicted paedophile that he knew about it.
The Privacy Commissioner reported that the innocent person had been implicated in a terrible crime; that his reputation was tarnished, he had suffered emotional harm, and that he had been placed at risk of violence. This case is now before the Human Rights Review Tribunal, which can award damages of up to $200,000.
The Privacy Act has 12 principles. The first principle is that the collection of information must be for a lawful purpose and it must be necessary. I can’t stress the importance of that term enough – ‘necessary’.
Widespread surveillance for general screening purposes may be considered unnecessary, especially when the surveillance involves biometric scanning of law-abiding members of the public. Under Principle 2, the collection must be directly from the individual concerned. Public sector agencies have an escape clause for this on the grounds that compliance may interfere with the maintenance of the law. This escape clause is not available to private sector agencies.
Private sector claims that compliance would prejudice the purpose of collection, or is not reasonably practicable, are likely to fail on the basis that the technology uses a scattergun approach. Non-compliance on the grounds that the individual concerned is not identifiable will also likely fail.
Principle 3 requires the individual to be aware of the collection, the purpose of collection, and the right to access the information. The Office of the Privacy Commissioner has expressed a view on Principle 3 in relation to facial recognition: there should be signage and messages posted that inform customers that facial recognition technology is being used and the reasons why it is being used.
Principle 4 states that the collection must not be by unlawful or unfair means or intrude to an unreasonable extent on the personal affairs of the individual concerned. For both public and private agencies this principle is likely to be problematic because facial recognition technologies under most circumstances are likely to be considered unfair and to intrude to an unreasonable extent onto the personal affairs of the individual.
Principle 8 is also particularly relevant. The information (contained on a watchlist) must be accurate, up to date, complete, relevant and not misleading. Watchlists based on mere suspicion are likely to fail Principle 8 requirements. Trials of facial recognition technology have been shown to be biased and inaccurate with extraordinarily high false positive rates.
Limits on use, as described in Principle 10, may also be problematic if state agencies intend to use driver’s licence images for a purpose other than that for which the image was originally obtained.
As stated earlier, the Metropolitan Police conducted six trials between June 2018 and May 2019. The trials generated 42 matches, and the people matched were approached by police officers in the field. Those matches resulted in an 81 percent false positive rate.
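The false positive rate quoted here is simply the proportion of generated matches that turn out to be wrong; an 81 percent rate over 42 matches implies roughly 34 incorrect identifications. A minimal illustration (figures derived from the rate reported above, not independently sourced):

```python
def false_positive_rate(false_matches, total_matches):
    """Share of generated matches that were incorrect identifications."""
    return false_matches / total_matches

# Metropolitan Police trials: 42 matches, of which roughly 34 were wrong
# (the count implied by the reported 81 percent rate).
rate = false_positive_rate(34, 42)
print(f"{rate:.0%}")  # prints 81%
```

Note that this rate is measured over matches the system generated, not over everyone scanned – a system can flag very few people and still be wrong about most of them.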
In 2017, live facial recognition was used at the Champions League final in Cardiff, where it identified 2,470 people as criminals. Subsequent research showed that there was a 92 percent false positive rate.
In 2017 at the Notting Hill carnival, live facial recognition had a false positive rate of 98 percent. Although not used in the Metropolitan Police Service trials, live facial recognition software can be integrated into police body worn cameras and city-wide surveillance camera networks, and that data may be subjected to automated analysis.
It is technically feasible to create a database containing a record of each individual’s movements around a city. The potential for this kind of use raises serious human rights concerns because the data may involve false positives, and could be used to identify unusual patterns of movement, participation at specific events, or meetings with particular people.
There is significant public interest in the issue of potential bias and discrimination. Concerns include discriminatory practices based on input data and watchlist composition, and bias built into live facial recognition technology on the basis of sex, race and colour.
Different algorithms and different applications exhibit biases in different ways, which requires analysis on an application-by-application basis. Independent tests on 127 commercially available facial recognition algorithms have identified gender bias, with fewer false positives for men relative to women, and racial bias, with a higher false positive rate for dark-skinned females.
An MIT study in the US showed that darker-skinned women are misidentified as men 31 percent of the time, and that false positive rates for black subjects are twice those for white subjects.
The US Justice Department is using artificial intelligence, when somebody is arrested, to predict whether they are likely to reoffend in the next two years, and that analysis is used to determine whether or not they get bail.
Studies have shown that the assessment of the potentiality for a criminal to reoffend within two years has a significant bias against African Americans.
I’d like to finish with the UK Surveillance Camera Commissioner’s statement on the use of facial recognition and video surveillance.
The Commissioner stated that overt surveillance is becoming increasingly intrusive on the privacy of citizens, in some cases more so than aspects of covert surveillance because of the evolving capabilities of the technologies. The use of live facial recognition has the potential to impact upon ECHR rights and thereby influence the sense of trust and confidence within communities.
The Commissioner said that it should be a fundamental consideration of any relevant authority intending to deploy live facial recognition that a detailed risk assessment process is conducted and documented as to the operational risks, community impact risk, privacy and other human rights risks, and any risks associated with it prior to deployment.
Such risks should be considered as part of the decision-making processes associated with the necessity and proportionality of its use.
This is an abridged transcription of a presentation delivered by David Horsburgh at the ASIS New Zealand Chapter Auckland Members Breakfast Meeting in November 2019.