Safe Cities and the runaway surveillance economy

New Zealand Security Magazine - August-September 2020

Are operators of CCTV making the public adequately aware their data is being collected?

Chief Editor Nicholas Dynon explores ‘safe cities’, the rapid deployment of intrusive surveillance technologies, and the lack of public debate and legislation that is fuelling their unchecked rise.

According to some accounts, the term “smart city” was born out of a conversation between the Clinton Foundation and Cisco in 2005; according to others, it was coined by IBM in 2008.

Whatever its beginnings, it has been the Internet of Things (IoT) that has powered the rise of smart cities, with data collected by IoT sensor technologies promising to make city services more efficient, sustainable, and accessible.

Smart cities are, as the name suggests, smart. Immense volumes of data collected from IoT sensors placed around a city allow authorities to measure anything from traffic congestion to footfall, fire hydrant flow rates to garbage levels, and services usage to citizen engagement.

With the ability to measure how a whole civic population consumes, moves and engages, authorities have a truly historic opportunity to manage their cities better than ever before.

The IoT has also fuelled the rise and rise of an apparent offshoot of the smart city – the ‘safe city’.

Although it could be argued that the concept has its origins in the United Nations’ UN-Habitat Safer Cities Programme, launched in 1996 at the request of African mayors seeking to tackle urban crime and violence in their cities, the ‘safe city’ is largely seen as part of – and enabled by the technologies of – the smart city.

In their 2017 research, for example, Maroš Lacinák and Jozef Ristvej classify the safe city as a ‘subset’ of the smart city. 

A safe city, they state, “by the integration of technology and natural environment increases the effectiveness of processes in the field of safety, in order to reduce crime and terror threats, to allow its citizens life in [a] healthy environment [with] simple access to healthcare, and to achieve readiness and quick response to… emergencies.”

A perhaps clearer definition is provided by Paul Bremner of technology research provider Omnia. “Integrating critical security information from a range of sources onto a consolidated IT platform,” he writes, “the safe city aims to promote interoperability among law enforcement, emergency services, and other agencies.

“By doing so, the safe city hopes to streamline operations, unify emergency response, and increase situational awareness among all stakeholders involved in the management of a city’s security, allowing law enforcement to respond to incidents more quickly and efficiently.”

According to Omnia, the global safe city market was valued at $21.6 billion in 2019 and is forecast to reach $35.8 billion by 2024. It’s big business, with the Asia Pacific region leading the charge, accounting for 45 percent of total global revenue in 2019.
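
For context, those two figures imply an average annual growth rate of roughly 10 to 11 percent. The quick sketch below shows how that rate falls out of the 2019 and 2024 values quoted above; the growth rate itself is my own back-of-the-envelope arithmetic, not a figure published by Omnia.

```python
# Implied compound annual growth rate (CAGR) from the market figures
# quoted above: a $21.6 billion market in 2019 forecast to reach
# $35.8 billion in 2024. The rate itself is derived here, not quoted.
value_2019 = 21.6   # USD billions
value_2024 = 35.8   # USD billions
years = 2024 - 2019

cagr = (value_2024 / value_2019) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 10.6% per year
```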

Of the seven technology groups that make up the safe city market, the largest by far is video surveillance. This is due, writes Bremner, to the security industry’s heavy reliance on video, and to equipment upgrades driven by aging video surveillance infrastructure, especially in the Western Hemisphere.

The New Zealand IoT Alliance, a group within New Zealand technology sector advocate NZTech, argues that the economic opportunity IoT technology presents to New Zealand is enormous.

According to their 2018 report The Internet of Things: Accelerating a Connected New Zealand, better use of IoT could “create at least $2.2 billion in net economic benefit for New Zealand over the next 10 years.”

“By managing traffic flows to reduce congestion, deterring crime using intelligent lighting and cognitive CCTV, enhancing public transport and using adaptive city lighting both for aesthetics and safety,” states the report, “IoT can make a city a more desirable place to be.”

According to the report, data collected from IoT sensors can assist authorities to create evidence-based policy. 

“In Wellington, a Safe Cities Programme uses cognitive CCTV and overlays data from police, social welfare, DHB and organisations such as City Mission. The data is used operationally to help make the city safer and the council is also using it to inform its new policy on homelessness.”

In December 2018, Spark reported that it had transformed Madden Street in Auckland’s Wynyard Quarter into a ‘smart street’, commenting that “smart streets are a stepping stone to what will be a fully operational smart city, where the Internet of Things (IoT) delivers a platform for safer, more mobile and productive communities.”

New energy-efficient lamps were being installed along Madden Street, each containing a built-in ‘Smart City Module’ providing “Public WiFi and smart CCTV, with options to extend into more functions including pedestrian counters and Public Alert speakers.”

Privacy

Interestingly, the term ’privacy’ appears in the New Zealand IoT Alliance’s report 27 times. It identifies concerns around privacy as one of the top three impediments to IoT deployments in New Zealand.

“The current Privacy Act focuses on the principle of data minimisation, where organisations are encouraged to keep the minimum set of personal data on its customers,” the report states. “However, in an IoT environment we are awash in a sea of data. Data minimisation is becoming fundamentally outdated, so how do we regulate and manage privacy in an IoT world?”

It’s a challenging question. And one that in the New Zealand context remains open.

It’s not so open, however, in many other jurisdictions. In its research into the global safe city market, Omnia states that the growth in the market in Asia Pacific is driven – among other things – by “a focused top-down governance structure able to marshal support for safe city initiatives and easily quash objections.”

The data-rich, privacy-poor growth in smart city markets has not gone unnoticed by privacy advocates, including Privacy International.

“Beyond the marketing term – that companies have been using to sell the idea of a city that becomes more efficient, more sustainable and more secure by using technology – what smart cities are really about is the collection of data in the public space by government and the private sector to provide services,” states a Privacy International smart cities explainer.

“When left unchallenged, public-private surveillance partnerships can eventually normalise surveillance and place us all on watchlists. They can have a negative impact on our right to protest, our ability to freely criticise the government and express dissenting ideas.”

Smart cities, says Privacy International, “are being designed and implemented based on little or no evidence, and without conducting impact assessments on human rights and in particular the right to privacy.”

Maya Shwayder, writing for Digital Trends, notes that an ExpressVPN poll in February 2020 found that more than two-thirds (68 percent) of Americans are concerned about the “growing use of facial-recognition technology”, and 78 percent about its potential abuses.

Facial recognition

Facial recognition-enabled video has become synonymous with safe city surveillance, and concern over the privacy implications of facial recognition is a topic that frequently appears in articles published in NZSM.

Among these articles, the most authoritative and well-researched are those by leading New Zealand security consultant David Horsburgh PSP CPP PCI.

Asked recently by NZSM about his current thoughts on the use of facial recognition-enabled CCTV in public spaces, David commented that there “still appears to be a lack of debate and rules around its use and whether our society needs this level of intrusiveness in our daily lives.”

“My recent research has identified a number of large private retail companies using facial recognition in their shops,” he told NZSM. “The composition of watch lists, the databases used to match a known person against live views from CCTV cameras, is of great concern.”  

“There are no rules about placing a person into a watch list and companies are compiling the lists on suspicion rather than judicially established facts.”

Criticisms of high false positive rates continue, he noted, quoting Rachel Dixon, Privacy and Data Protection Deputy Commissioner at the Office of the Victorian Information Commissioner in Australia. “One of the challenges we have is overcoming the instances when a false positive is assigned,” Dixon is reported to have stated. “It is hard to shake somebody’s perception that is wrong after a machine has said it is right.”

This is echoed by the Office of the Privacy Commissioner in New Zealand. According to the OPC website, where facial recognition is deployed to prevent shoplifting, “if a person is misidentified, they may continue to be branded an offender by your business or organisation, when the information is wrong as happened in this case.”

“When it comes to identifying people accused of a crime, getting it wrong can have a severe impact on the person affected.”
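
Dixon’s and the OPC’s warnings are partly a matter of base rates: when genuine watch-list subjects make up only a tiny fraction of the people walking past a camera, even a system with an apparently low false-positive rate will generate alerts that are mostly wrong. The short sketch below illustrates the arithmetic using entirely hypothetical figures (a 0.1 percent false-positive rate, a 90 percent true-positive rate, and one watch-list subject per 10,000 faces scanned); none of these numbers come from Dixon, the OPC or any particular vendor.

```python
# Hypothetical illustration of why false positives can dominate facial
# recognition alerts when watch-list subjects are rare among passers-by.
# Every figure below is an assumption for illustration, not a vendor claim.

faces_scanned = 100_000             # faces passing the cameras
watchlist_prevalence = 1 / 10_000   # share of passers-by actually on a watch list
true_positive_rate = 0.90           # chance a watch-list subject is correctly flagged
false_positive_rate = 0.001         # chance an innocent passer-by is wrongly flagged

on_list = faces_scanned * watchlist_prevalence
not_on_list = faces_scanned - on_list

true_alerts = on_list * true_positive_rate
false_alerts = not_on_list * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)

print(f"Genuine matches flagged: {true_alerts:.0f}")
print(f"Innocent people flagged: {false_alerts:.0f}")
print(f"Share of alerts pointing at the right person: {precision:.0%}")
# With these assumptions: 9 genuine matches against roughly 100 innocent
# people flagged, so only about 8 percent of alerts are correct.
```

The exact numbers matter less than the shape of the result: the rarer genuine watch-list matches are among the people being scanned, the more an operator’s alerts will consist of misidentified members of the public.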

David also draws attention to comments by Clare Garvie, a Senior Associate at the Center on Privacy and Technology at Georgetown Law in the US, in relation to the lack of transparency around facial recognition deployments.

“With very few exceptions there are no laws that govern the use of this technology either at a federal level or state and local levels. As a result, this technology has been implemented largely without transparency to the public, without rules about auditing or public reporting, without rules around who can be subject to a search. Law enforcement agencies themselves have said this [facial recognition technology] creates a very real risk of people being chilled, not feeling comfortable participating in public protest or public speech, particularly contentious speech, speech that calls into question police activity.”

In New Zealand, David states, the Privacy Act is meant to enable people to ascertain what personal information is held by the State, especially to allow an analysis of its accuracy.

“The use of personal information, such as photographic images for a watch list compilation, without enforceable robust policies and procedures, attacks the fundamental purpose of protections for the individual in a free, open and democratic society.”

Who is responsible?

Globally, it appears commonplace for jurisdictions to leave it up to the market and to operators of smart city sensors, such as facial recognition-enabled surveillance cameras, to come up with solutions to privacy-related concerns.

According to James Ward, a data and privacy lawyer, the American model of self-regulation has resulted in ‘information capitalism,’ which turns basic human activity into a commodity. Can it be left to an ‘information capitalist’ market to satisfactorily resolve privacy issues?

In a 2018 Stuff.co.nz report revealing the use of facial recognition by supermarkets to identify shoplifters, NZSA CEO Gary Morrison was quoted as saying that facial recognition technology was popular internationally and should be expected to become more widely adopted in New Zealand.

“As long as retailers met their obligations to adequately signal it was being used, it should not be a concern to consumers,” he said. “If it’s not used properly, that’s an issue.”

But this raises the question: to what extent are CCTV operators fulfilling their obligations to consult with and notify affected stakeholders, such as an otherwise unsuspecting public?

According to a December 2019 Stuff.co.nz report, New Zealand Police failed to consult Privacy Commissioner John Edwards in relation to a new facial recognition system to be rolled out in 2020. NZ Police had enlisted US company Dataworks Plus to design an upgrade to the Automated Biometric Identification Solution (ABIS) system.

Edwards was reported by Stuff as saying, “I would expect to be consulted on a matter that has such potentially significant privacy concerns, and involves new technology developed in a completely different jurisdiction with a completely different population.”

While the OPC publishes guidelines on the use of CCTV – Privacy and CCTV: A guide to the Privacy Act for businesses, agencies and organisations – which link back to the privacy principles, these are guidelines only.

The Guide, for example, stipulates that organisations using CCTV must – among other things – consult with potentially affected stakeholders prior to deployment, develop a ‘CCTV Policy’, erect signage that describes why the CCTV system is being used, and publish more detailed CCTV notices in newspapers or via the organisation’s website.

To what extent do organisations actually do this? If you were to stroll down Auckland’s Queen Street or along Wellington’s Lambton Quay, how well would the various private and public organisations deploying cameras in these areas have made you aware that your image was being recorded? And how many of those cameras are facial recognition enabled?

I’m guessing you wouldn’t have felt very aware. And this is just consultation and notification. What about the other areas covered by the OPC’s Guide, such as how CCTV data is to be stored, managed, protected and retained, and who should have access to it?

According to a November 2014 post on the New Zealand OPC blog, Paul Chadwick, the first Privacy Commissioner for the Australian state of Victoria, is reported to have stated that there are three things we have long known about surveillance:

  • digital technologies reduce the cost of surveillance and make it easier to undertake surveillance on a mass scale
  • surveillance can serve legitimate purposes but it can also pose serious risks
  • to manage the risks we need a framework of law – confidence in that framework gives legitimacy to the trade-off between privacy and security in democracies.

While the legislation around safe city surveillance technologies remains fuzzy, there is a risk that the tech will be used in ways that breach people’s privacy and run counter to democratic principles.

The smart city, as a civic community in which technological advances enable city services to be more efficient, sustainable and accessible, is a worthy ideal. After all, city councils have a strong mandate and responsibility to operate as efficiently, sustainably and responsively as possible.

However, is it the role of local government to draft and implement legislative frameworks that establish appropriate balances between security and privacy within the context of surveillance technology? 

In the absence of national-level government leadership, some cities, such as San Francisco in the US, have taken it upon themselves to implement controls, such as banning the use of facial recognition by local authorities. But, surely, issues around privacy and the collection of individuals’ surveillance data go well beyond the remit of city hall. 

Ultimately, there are big issues of social fabric and social contract here that – sooner or later – will require the involvement and guidance of a so-far reluctant state.

In the meantime

For those looking to deploy facial recognition, the OPC website features a list of factors that it suggests should be considered carefully:

  • What is the lawful purpose for using the technology? (principle one of the Privacy Act)
  • How will you notify people that you are using the technology? (principle three)
  • Will the technology be used in a way that might be unfair or unreasonably intrusive? (principle four)
  • Will the personal information be stored securely? (principle five)
  • How will you accommodate an individual’s right to access the information about them? (principle six)
  • How will you accommodate an individual’s right to correct information about them, if it is wrong? (principle seven)
  • How will you make sure the information collected is up to date and accurate? (principle eight)
  • How long will you keep the information for? (principle nine)
  • What will be your reasons for disclosing the information? (principle eleven)

More broadly, with the introduction of any new technology, the OPC encourages organisations considering collecting personal information to consider their obligations and to undertake a privacy impact assessment.

The OPC CCTV and Privacy Guide can be viewed/downloaded from the OPC website: https://privacy.org.nz

If you have a news story or would like to pitch an article, get in touch at editor@defsec.net.nz

