Computer Says No: Transparency needed in automated visa decision making

Decision making at the border
People don’t like the idea of a computer saying no, particularly if it leads to the refusal of a visa based on subjective genuineness grounds.

Immigration New Zealand’s pilot data modelling programme was pulled in April amid allegations of racist profiling and ‘pre-crime’ potential, yet the use of IT-supported profiling in immigration is nothing new. What, asks editor Nicholas Dynon, is with all the negativity?

‘Neutral’ discrimination

Within public discourse, ‘discrimination’ is a problematic term referring, as it does, to “the unjust or prejudicial treatment of different categories of people, especially on the grounds of race, age, or sex.” So too in discourse on immigration and border management, where people are rightly sensitised to stories of officials conducting invasive checks or making adverse decisions at the border on the basis of racial profiling.

Yet, in another sense, ‘discrimination’ refers to “recognition and understanding of the difference between one thing and another,” such as “discriminating between right and wrong.” It is in this sense that government immigration programmes tend to be ‘discriminatory’ – and they ought to be.

Territorial and border integrity is the sovereign right of a state and a responsibility of government. In order to manage its borders, a government enacts legislated structures that provide a basis for discriminating between non-citizens to whom it grants entry and those to whom entry is denied.

In New Zealand, regulations and policy cascading from the Immigration Act 2009 provide the legislated basis to discriminate between visa applicants on the basis of various financial, character, health, skills, employment and credibility considerations, which vary according to the type of visa applied for.

Skilled visa categories, for example, discriminate against applicants who are insufficiently educated or experienced, or too old, or who work in an occupation not currently deemed to be ‘in demand’; investment visas discriminate against those who are not fabulously wealthy; and working holiday visas discriminate against those from countries with which New Zealand has no bilateral working holiday agreement.

In determining whether an applicant is a ‘bona fide applicant’ for a visitor visa, for example, the Immigration Instructions require INZ visa officers to take into account the personal circumstances of the applicant, including such things as the strength of any family ties in their home country and New Zealand; the nature of any personal, financial, employment or other commitments in either country; and any circumstances that may discourage the applicant from returning home when their visa expires.

If their home country is in the grip of armed conflict, sustained civil unrest or cataclysmic economic disaster, I’m guessing that that particular applicant, and their fellow countrymen in general, may well have difficulty meeting the ‘bona fide applicant’ requirement due to the existence of “circumstances that may discourage the applicant from returning home when their visa expires.” And with that, we have a ‘profile’ relating to people from ‘Country X’.

According to Immigration Minister Iain Lees-Galloway, the Immigration Act 2009 “recognises that immigration matters inherently involve different treatment on the basis of personal characteristics, but immigration policy development seeks to ensure that any changes are necessary and proportionate.”

These legislative structures and associated review mechanisms give the public a degree of confidence that only the ‘right’ forms of discrimination are applied to visa applicants. But what happens when computers and algorithms start making the decisions?

Technophobia and the Data Modelling Programme

Dystopian science fiction plotlines have fashioned a public wary of a future in which computers are empowered to make decisions that affect people. There was no clearer example of this than last April, when Mr Lees-Galloway suspended an INZ pilot data modelling programme following an exposé by Radio New Zealand and the subsequent public backlash.

According to reports, the programme used a profiling tool that analysed the historical data of around 11,000 illegal immigrants, including their gender, age, country of origin, visa held upon entry to New Zealand, and whether they had been involved with the police, been illegally employed, or used health services to which they were not entitled.

This data was used to forecast the negative impact individual illegal immigrants might be expected to have in future, providing a cost-effective basis upon which INZ could prioritise deportation action.
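
Reports did not describe the model in any detail, so the following is only a minimal sketch of how such a harm-forecasting tool might work in principle. The features, records and choice of classifier are all assumptions made for illustration, not details of the INZ programme.

```python
# Illustrative only: a minimal sketch of a harm-forecasting model of the
# kind reportedly piloted by INZ. All feature names and records below are
# invented; the real programme's data and model were never made public.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical case records (reports describe ~11,000 cases with
# fields such as age, gender, country of origin, visa type, police
# involvement, unlawful employment and ineligible health service use).
cases = pd.DataFrame({
    "age": [34, 27, 45, 19, 52, 23],
    "police_involvement": [1, 0, 1, 0, 1, 0],
    "unlawful_employment": [1, 1, 0, 0, 1, 0],
    "ineligible_health_use": [0, 1, 0, 0, 1, 0],
    # Label: whether the case ultimately generated significant cost/harm.
    "high_impact": [1, 1, 0, 0, 1, 0],
})

X = cases.drop(columns="high_impact")
y = cases["high_impact"]

# Train a simple classifier to forecast likely future impact.
model = LogisticRegression().fit(X, y)

# Score a current case so officers can prioritise deportation action.
# The output is a priority score for a human decision maker, not a decision.
new_case = pd.DataFrame({
    "age": [30],
    "police_involvement": [1],
    "unlawful_employment": [0],
    "ineligible_health_use": [1],
})
score = model.predict_proba(new_case)[0, 1]  # probability of high impact
print(f"Predicted impact score: {score:.2f}")
```

The design point worth noticing is in the final step: the model’s output is a ranking input for a human officer, not an automated enforcement decision.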

The programme’s outing led to a cacophony of criticism: that the profiling tool involved racial profiling, that it utilised a predictive algorithm that would lead to prejudicial enforcement action, and that it breached privacy in its use of individuals’ personal data.

“This approach appears to be another way of reducing migrant numbers,” said New Zealand Association for Migration and Investment chair June Ranson. “An individual will be deported or refused entry due to their background being similar according to computer profiling rather than actual facts.”

It is worth noting that such criticisms tend to overlook two facts: the programme focused only on illegal immigrants already eligible for deportation (not visa applicants or migrants in general), and it was designed to inform human decision making rather than supplant it. These are misconceptions that a good communications strategy could perhaps have dealt with more adequately.

In any case, as Danyl Mclauchlan correctly pointed out in The Spinoff, “humans aren’t actually very good at making evidence-based decisions – that’s why almost every large company and government department in the world is moving towards decision-making processes incorporating computation and statistical modelling.”

He continues, “if you remove statistical models and computational algorithms which reveal discriminatory assumptions or outcomes, you’re not removing discrimination: you’re just making it less transparent.”

Future-present: Electronic processing platforms

Although INZ’s data modelling programme did not involve the meting out of decisions by computers, there are plenty of examples in the immigration world where computers are doing just that.

A number of countries require the passport holders of certain other countries to apply for and hold an Electronic Travel Authority (ETA) to enter, as opposed to a traditional visa or visa waiver arrangement (New Zealand is expected to roll out its first ETAs in 2019). Because the application process is streamlined and online, ETA systems can also make automated, business rules-based decisions to grant visas.

Australia’s decades-old ETA system boasts an auto-grant facility that enables the automatic grant of visas to individuals who meet certain criteria, without the need for manual processing. Where an application does not meet the business rules leading to auto-grant, it is pushed to a human decision maker.
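
Public descriptions of the auto-grant facility do not disclose the actual business rules, but its logic can be pictured as a simple deterministic rules engine. The sketch below is an illustration under assumptions: the rules, thresholds and field names are all invented, not Australia’s real criteria.

```python
# Illustrative rules-based auto-grant triage, loosely modelled on the
# ETA description above. Every rule and threshold here is invented.
from dataclasses import dataclass

@dataclass
class Application:
    passport_country: str
    has_criminal_conviction: bool
    previous_overstay: bool
    passport_valid_months: int

ELIGIBLE_COUNTRIES = {"GBR", "USA", "JPN"}  # hypothetical ETA-eligible list

# Each rule returns True when the application passes that check.
RULES = [
    lambda a: a.passport_country in ELIGIBLE_COUNTRIES,
    lambda a: not a.has_criminal_conviction,
    lambda a: not a.previous_overstay,
    lambda a: a.passport_valid_months >= 6,
]

def triage(application: Application) -> str:
    """Auto-grant only when all rules pass; otherwise refer to a human."""
    if all(rule(application) for rule in RULES):
        return "AUTO_GRANT"
    return "REFER_TO_OFFICER"

print(triage(Application("GBR", False, False, 12)))  # AUTO_GRANT
print(triage(Application("GBR", True, False, 12)))   # REFER_TO_OFFICER
```

The notable design choice is the asymmetry: the system can say ‘yes’ automatically, but a failed rule results in referral to an officer rather than an automated refusal.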

Such systems are developed and often managed by private sector partners, with the government retaining ownership of policy, data, and the business rules that determine how an application is handled.

In 2017, the Australian Government announced it would take this model a step further with a new privatised Global Digital Platform (GDP) designed to make the receipt and processing of visa applications ‘exclusively digital’. Beginning with temporary visas, the platform is expected to use machine learning and robotic process automation to increase the proportion of visa assessments that can be automated over time.

IT News reported on 10 December that the bidding process for the billion-dollar project had narrowed to two contenders: Australian Visa Processing – a consortium consisting of Ellerston Capital, PwC, Qantas Ventures, NAB and Pacific Blue Capital – and a partnership of Australia Post and Accenture.

It is envisaged that the GDP will ultimately handle health checks as well as more complex and subjective assessments relating to character and bona fides. And it is in relation to the latter in particular that the project has courted some controversy.

Echoing the concerns expressed in New Zealand over INZ’s data modelling programme, there is significant public unease around the idea of computers or artificial intelligence making assessments or decisions that are deemed subjective – particularly where they may lead to an adverse decision (rather than a positive auto-grant) against a visa applicant.

Unnecessarily exacerbating these concerns is the paucity of detail provided to the public about such projects, which tend to be shrouded in commercial-in-confidence secrecy (as with the GDP) or kept on a need-to-know basis (as may have been the case with the INZ programme).

People don’t like the idea of a computer saying no, particularly if it leads to the refusal of a visa based on subjective genuineness grounds. It’s dystopian, and it’s dehumanising.

To avoid the costs associated with public fallout, government innovators in this space need to change their disposition from one of opacity to one of transparency – and up their game in terms of credible consultation, communication and stakeholder engagement planning.
