Home Office says it will abandon its racist visa algorithm – after we sued them

Home Office lawyers wrote to us yesterday to respond to the legal challenge we’ve been working on with the Joint Council for the Welfare of Immigrants (JCWI).

We were asking the Court to declare the streaming algorithm unlawful and to order a halt to its use in assessing visa applications.

Before the case could be heard, the Home Office caved in. They’ve agreed that from this Friday, August 7, they will get rid of the ‘streaming algorithm.’ 

Home Secretary Priti Patel has pledged a full review of the system, including for issues of ‘unconscious bias’ and discrimination.

This marks the end of a computer system which had been used for years to process every visa application to the UK. It’s great news, because the algorithm entrenched racism and bias into the visa system.

The Home Office kept a secret list of suspect nationalities automatically given a ‘red’ traffic-light risk score – people of these nationalities were likely to be denied a visa. It had got so bad that academic and nonprofit organisations told us they no longer even tried to have colleagues from certain countries visit the UK to work with them.
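To make that concrete, here is a rough, entirely hypothetical sketch of the kind of rule we are describing. The names, list contents and ratings below are our own placeholders for illustration, not the Home Office’s actual code:

```python
# Hypothetical illustration only - not the Home Office's actual code.
# A "streaming" rule that assigns a traffic-light risk rating to a visa
# application, driven partly by a secret list of "suspect" nationalities.

SUSPECT_NATIONALITIES = {"Country A", "Country B"}  # placeholder list

def stream_application(nationality: str) -> str:
    """Return a traffic-light risk rating for a visa application."""
    if nationality in SUSPECT_NATIONALITIES:
        return "red"    # flagged for intensive scrutiny; refusal far more likely
    return "green"      # routine processing
```

In the real system, applicants streamed into the high-risk category faced exactly that fate: as the cases we saw showed, people of the listed nationalities were likely to be refused.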

We also discovered that the algorithm suffered from “feedback loop” problems known to plague many such automated systems – where past bias and discrimination, fed into a computer program, reinforce future bias and discrimination. Researchers documented this issue with predictive policing systems in the US, and we realised the same problem had crept in here.
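As a crude illustration of that dynamic, here is a toy simulation with made-up numbers, nothing to do with the real system’s internals: if a nationality’s risk score is set from its past refusal rate, and a higher score makes refusal more likely, the two feed on each other.

```python
# Hypothetical toy simulation of a bias feedback loop - not the real system.
# A nationality's "risk score" is derived from its historical refusal rate;
# a higher score then makes refusals more likely, which raises next year's
# refusal rate, which raises the score again.

def refusal_probability(risk_score: float) -> float:
    """Toy model: higher risk scores make refusal more likely."""
    return min(1.0, 0.2 + 0.8 * risk_score)

refusal_rate = 0.3  # assumed starting refusal rate for one nationality
for year in range(5):
    risk_score = refusal_rate          # score is just last year's refusal rate
    refusal_rate = refusal_probability(risk_score)
    print(f"year {year}: risk score {risk_score:.2f}, refusal rate {refusal_rate:.2f}")
```

Even this toy loop drifts steadily towards refusing everyone in the flagged group – the self-reinforcing pattern researchers documented in US predictive policing tools.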

It’s also great news because this was the first successful judicial review of a UK government algorithmic decision-making system.

More and more government departments are talking up the potential for using machine learning and artificial intelligence to aid decisions. Make no mistake: this is where government is heading, from your local council right on up to Number 10.

But at the moment there’s an alarming lack of transparency about where these tools are being used and an even more alarming lack of safeguards to prevent biased and unfair software ruining people’s lives.

There’s been some discussion about correcting for biased algorithms, but nowhere near enough debate about giving the public a say in whether they want government by algorithm in the first place. At Foxglove, we believe in democracy – not opaque and unaccountable technocracy.

Foxglove exists to challenge such abuses of technology. It’s a safe bet that this won’t be the last time we’ll need to challenge a government algorithm in the courts.

Campaigns like this rely on donations. To support Foxglove’s ongoing work, please click the Donate button below.