Challenging inaccurate decisions of public authorities which fundamentally impact the lives of the British public could soon become harder. The UK government plans to replace the Human Rights Act 1998 with a Modern Bill of Rights. Its package of law reform proposals will make it very hard, and in some cases impossible, for individuals to challenge decisions produced by artificial intelligence decision-making processes in court. While individuals who experience discrimination in their daily lives will be particularly affected, all individuals will face barriers to accessing justice. This development is significant because the UK government adopted a strategic priority in 2017 to create conditions for the growth of the artificial intelligence industry in the United Kingdom. As a follow-up, the Government Digital Service and the Office for Artificial Intelligence published guidance in 2019 informing public authorities how they can embed artificial intelligence technology into the provision of public services. This suggests that public authorities will rely increasingly on artificial intelligence decision-making processes. The Department for Work and Pensions (DWP) is already using artificial intelligence technology to detect which individuals are fraudulently claiming benefits.
There are diverse contexts in which public authorities may use artificial intelligence technology to estimate the likelihood of individuals exhibiting particular behaviour and to reach determinations about them. PredPol markets a product to police forces which predicts the areas in which individuals are likely to commit a crime. Other possible applications include determining how to allocate job training resources to unemployed individuals and deciding whom to admit to a university. There are also discussions about using artificial intelligence technology to make decisions concerning child welfare, such as whether to take a child into care.
The use of artificial intelligence decision-making processes has a track record of producing flawed decisions and of disproportionately disadvantaging individuals who already experience discrimination and exclusion. For instance, in 2020 the Los Angeles police discontinued their deployment of PredPol after it emerged that the system used inconsistent criteria to identify people who were likely to commit a crime. People of colour were particularly affected. The Dutch government resigned in 2021 after it emerged that the tax authorities had wrongfully accused 26,000 parents of fraudulently claiming child benefits. This error led to affected individuals losing their jobs, going bankrupt and divorcing. The tax authority confirmed that it singled out many families for special scrutiny because of their ethnic origin or dual nationality. Currently, people with disabilities in the United Kingdom are raising concerns that the use of artificial intelligence systems to detect fraud results in them being disproportionately targeted as potential benefit fraudsters. More than 700 Post Office branch managers were wrongfully convicted due to accounting errors in the Horizon software.
Algorithmic decision-making is not only of concern because mistakes resulting from the operation of technology can lead to dire outcomes for individuals. In the foreseeable future technology will become part and parcel of the administration of justice. Parliament is currently considering the Judicial Review and Courts Bill. The purpose of this draft legislation is to introduce rules which would enable courts to use technology to facilitate their work (p. 1). Sections 19(1)(b) and 23 empower the Online Procedure Rule Committee to create rules requiring certain aspects of legal proceedings to be conducted by electronic means. Section 16G allows individuals to consent to an automatic online conviction in respect of a summary non-imprisonable offence. In such cases the court will not be involved because the individual will plead guilty and the entire procedure will be completed online (p. 3). One risk is that a malfunctioning system could register a guilty plea when in fact the individual did not plead guilty. In such cases individuals will wish to challenge the outcome.
The proliferation of technology in the administration of justice and other contexts makes it imperative that individuals be able to challenge decisions which have a profound impact on their lives.
One reason why the Modern Bill of Rights creates barriers to accessing justice is that individuals will need to prove that they suffered a “significant disadvantage” before a court can review their application (par. 221-222 at p. 65). The former head of the Government Legal Service, Sir Jonathan Jones QC, said in an interview with Alex Dean that people will find it harder to bring a claim due to this requirement. Consequently, they will be discouraged from bringing claims to court. Similarly, Frances Webber, a lecturer at Warwick University, expressed concern that it will be very expensive for individuals to bring a claim due to having to show “significant disadvantage.” On 13 April 2022 the Joint Committee on Human Rights published a report warning that the proposed Modern Bill of Rights will undermine the protection of fundamental rights (pp. 4-6). Diminished access to justice is one of its concerns (par. 178 at p. 49).
Turning to the technological context, experts, including the United Nations Special Rapporteur on the rights of persons with disabilities Gerard Quinn, have noted that it is extremely difficult to challenge decisions produced by an artificial intelligence decision-making process. These systems are so complex that any individual would struggle to understand how the system produced a particular decision. Currently, interventions which improve a system's capacity to explain its decisions reduce the accuracy of its predictions about an applicant. It is very expensive, and in many cases impossible, to detect all the harmful impacts which the use of these systems has on applicants. Artificial intelligence decision-making processes use thousands of variables as inputs, and each variable correlates to a varying degree with a characteristic protected by anti-discrimination legislation. It will therefore be extremely expensive to carry out scientific research to establish what relationship exists between each input variable and the possession of a protected characteristic. In light of this, individuals will struggle to prove that they suffered a “significant disadvantage” as a result of the automation of the decision-making process. Consequently, they may never reach a stage where a judge can hear their complaint.
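The proxy-variable problem described above can be sketched in a few lines of Python. The scenario, the figures and the variable names below are invented purely for illustration and are not drawn from any real government system: the point is that a decision rule which never receives a protected characteristic as an input can still disadvantage the group holding that characteristic, because an apparently neutral input (here, a postcode district) is statistically associated with it.

```python
# Hypothetical illustration: a "neutral" input variable can act as a proxy
# for a protected characteristic. All numbers here are invented.
import random

random.seed(0)

# Simulate 10,000 applicants: whether each holds a protected characteristic,
# and a postcode district that is statistically associated with it.
applicants = []
for _ in range(10_000):
    protected = random.random() < 0.3          # 30% hold the characteristic
    # District 1 is far more common among those with the characteristic.
    district = 1 if random.random() < (0.8 if protected else 0.2) else 0
    applicants.append((protected, district))

def district_one_rate(group):
    """Share of a group living in district 1 (the 'flagged' district)."""
    districts = [d for p, d in applicants if p == group]
    return sum(districts) / len(districts)

print(f"District-1 rate, protected group:   {district_one_rate(True):.2f}")
print(f"District-1 rate, unprotected group: {district_one_rate(False):.2f}")
# A decision rule keyed on district alone therefore falls far more heavily
# on the protected group, even though the characteristic is never an input.
```

With thousands of such input variables, each weakly or strongly correlated with one or more protected characteristics, establishing the relationship between every input and every characteristic — as a claimant would need to do to evidence a “significant disadvantage” — becomes prohibitively expensive.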
It is ironic that the proposed Bill of Rights takes away existing rights from individuals, given that the stated goal of the policy proposal is to enable the UK to have “a sharper focus on protecting fundamental rights” (par. 8 at p. 6). Policy makers should carefully scrutinise proposals to replace the Human Rights Act 1998 to ensure that they neither take away existing rights nor institute barriers to accessing justice. The Joint Committee on Human Rights made an important contribution in highlighting which provisions of the proposed Modern Bill of Rights are problematic. Its report is, however, silent on the deployment of artificial intelligence decision-making processes. The present discussion highlights the importance of accounting for the particular challenges which the use of technology poses to the protection of fundamental rights. When contemplating law reform, the UK government should therefore specifically consider the impacts which will arise from the interplay of new legislation and new technologies. If the government is to fulfil its goal of strengthening the protection of fundamental rights, it should substantially revise the Modern Bill of Rights. A better approach is to strengthen the existing protections in the Human Rights Act 1998 through amendment instead of replacing the Act with a new legal instrument.
Tetyana (Tanya) Krupiy, Lecturer, Newcastle University.
(Suggested citation: T. Krupiy, ‘The Modern Bill of Rights creates barriers to challenging algorithmic decisions’, U.K. Const. L. Blog (19th April 2022) (available at https://ukconstitutionallaw.org/))