Data in government: making it fair and legal

“Discrimination via algorithm is no less of an offence than discrimination by a public official.”

Committee on Standards in Public Life, Artificial Intelligence and Public Standards, p26

Last month, the Committee on Standards in Public Life published a report and recommendations to government, “to ensure that high standards of conduct are upheld as technologically assisted decision making is adopted more widely across the public sector.”

Foxglove wants to make sure every public body in the UK uses data, and computer-aided decision-making, in a way that is open, fair, and legal.

We challenge opacity, bias and discrimination. We work alongside those on the receiving end of unjust, computer-aided decision-making, and seek to help them secure justice. So we read this report with great interest. We see it as a starting point for an overdue reckoning in the public sector with the way information about us is gathered and used.

We have a simple aim: to make all uses of data by the UK government, national or local, fair, just, and equal.

This is framed as a report about ‘AI in the public sector’ – but really the principles apply to any public sector use of mass data. The Committee gives the government’s current performance a pretty low score – and rightly so.

Transparency, yes, but with teeth

“The government does not publish any centralised audit identifying and making publicly available the extent of AI use across central government or the wider public sector.”

Committee on Standards in Public Life, Artificial Intelligence and Public Standards, p19

The CSPL recognises that insufficient safeguards are in place to guard against AI and machine learning leading to harmful outcomes in the public sector. It argues that “two areas [of concern] in particular – transparency and data bias – are in need of urgent attention in the form of new regulation and guidance”.

Foxglove sees this as the start of an essential debate we need to have.

The CSPL are right: there is currently a serious lack of transparency about where and how computer-aided decision-making is being used in the public sector. This makes it much harder for organisations independent of government (like Foxglove) to scrutinise the use of AI by government bodies.

It also makes it much harder for an affected individual – a visa applicant, or benefit claimant, or school pupil, for example – to know if, and how, mass data has been involved in a decision with the potential to have a huge impact on their own life.

Equality: spotting and stopping the bias doom loop

“public bodies should always know how their systems are biased and who is most affected by that bias.”

Committee on Standards in Public Life, Artificial Intelligence and Public Standards, p28

This lack of transparency makes it all the more difficult to assess or challenge potential bias in AI-assisted decisions. Because a decision-making algorithm will generally “learn” from historical data, there’s a risk that pre-existing biases in human decision-making will be entrenched and exacerbated.

In addition, there’s a further risk that new forms of bias will be introduced, for example due to the prejudices or blindspots of the programmers who are writing the algorithm.

Our existing legal case challenging the Home Office, over its use of an algorithm to process visitor visa applications, illustrates these problems very clearly.

We’ve had to take legal action to start to gather meaningful information about what the Home Office’s algorithm does and how it has been used – and the Home Office is still refusing to disclose important details.

It’s proven very hard to say if, let alone how, the algorithm has affected the decisions made for individuals we’ve spoken to who have had visitor visa applications rejected, or for businesses, universities or other institutions whose work has been disrupted as a result.

The tool is being deployed by a government department with a poor record of discriminatory behaviour, particularly when it comes to overseeing the immigration system, and there is huge scope for historical biases and prejudices to be coded into any automated system.

Accountability: humans can be unfair, but black box processes make it worse

“There will be more of an incentive for public officials to monitor and check their AI systems if an official has to answer to the public for the outcome of an automated decision.”

Committee on Standards in Public Life, Artificial Intelligence and Public Standards, p20

Foxglove is seeking to use existing laws to challenge the Home Office in the courts over its use of this algorithm.

We think it will always be important for external, independent organisations like Foxglove to play a role in scrutinising and challenging unfair government decision-making.

However, we do also agree with the CSPL that the government needs to act as a matter of urgency to introduce improved governance and regulation of the use of AI in the public sector. If this were done effectively, it should both strengthen legal protections and make legal challenges like the one we are bringing against the Home Office more of a last resort.

We think the CSPL’s recommendations are a helpful starting point, and would urge ministers and parliamentarians to bring forward plans to implement them as soon as possible. However, it remains to be seen whether these recommendations will on their own be sufficient to ensure that AI does not lead to harmful outcomes, or does not make it harder for the public to challenge such outcomes.

Democracy: put the systems to the people, and mark out no-go areas

“AI will create new possibilities in prediction, automation and analysis, so it is important that public sector organisations examine the ethical permissibility of their project before deciding to procure or build an AI system.”

Committee on Standards in Public Life, Artificial Intelligence and Public Standards, p31

It remains to be seen whether the committee’s assessment that “there is nothing inherently new about what is needed to govern and manage AI responsibly” is correct, and whether existing principles and legal protections, if properly applied, will be sufficient.

Our legal challenges, such as our challenge of the Home Office’s use of algorithmic decision-making to stream visa applications, will be an important test of this.

Clearly proper application of existing laws, through the courts and through the actions of regulators such as the ICO and the EHRC, will help. But at present it is too early to say with confidence whether existing laws, such as the Equality Act or the GDPR, provide adequate legal protections, or whether new laws are needed to address the specific challenges of AI.

There is one important omission in the CSPL’s report which we think reflects an ingrained, and flawed, assumption within government.

The CSPL doesn’t acknowledge that certain computer-aided technologies – facial recognition, for example – may be so intrinsically prone to harmful outcomes that it’s simply impossible to mitigate their harm through regulation and governance.

In these cases a moratorium on their use would be a more sensible approach. Such technologies certainly shouldn’t be rolled out and imposed on the public without explicit democratic consent.