There are many claims made about decision-making in judicial review. The way that judges do decide, or ought to decide, cases is the subject of lively debate within the political, policy, practitioner, and academic spheres. For instance, in recent years there have been prominent warnings of the ‘conceptual growth or overgrowth’ of judicial review and of judicial review making ‘significant inroads into executive discretion.’ Such claims, and the debates which they provoke and influence, have consequences. They can shape how courts approach cases, how litigants litigate, and how the government and Parliament perceive and potentially reform judicial review. It therefore ought to be a cause for concern that this important debate is, and always has been, structurally limited by a ‘deficit of core evidence on how the judiciary actually exercises their discretion in judicial review proceedings.’
In the longer arc of time, the use of empirical data in the field of judicial review is still a new phenomenon. Maurice Sunkin provides an account of this history in an excellent recent chapter, highlighting that the value of empirical evidence in this area was not widely recognised until the turn of the century. The result, he explains, is that there are ‘many assumptions about the place of judicial review in our system that we have only recently been able to test against sound empirically based evidence.’ Much progress has been made in recent decades in introducing empirical evidence into the debate around judicial review; the studies which have taken place have transformed and enhanced judicial review scholarship, provided new insights, and gone some distance towards grounding debates in robust evidence (see here and here for prominent examples). However, this progress has only gone so far. Perhaps most notably, much of the empirical evidence we do have relates to litigation dynamics (e.g. the parties, settlement patterns, and the impact of claims on public bodies), and there is much less on judicial review decision-making patterns.
In the absence of access to robust empirical data on how judicial review cases are being decided, the traditional emphasis on doctrinal studies has remained dominant. Doctrinal approaches have many advantages and doctrinal scholarship is of great value, but their standard use in administrative law is also widely characterised by what Paul Craig has recently described as a ‘twin malaise.’ Craig diagnoses this twin malaise as ‘mining and lumping’ and ‘path dependency.’ ‘Mining and lumping’ refers to the phenomenon whereby propositions are sustained through a process of searching for supporting evidence. This methodological error misrepresents the available data by ignoring evidence which runs contrary to the initial proposition. In simple terms, there is a risk of ‘cherry-picking’ cases that fit one’s preferred interpretation while ignoring those that do not. ‘Path dependency’ refers to the tendency of issues to be represented and analysed only in the ways in which they have traditionally been framed. In the context of judicial review, Craig suggests that path dependency casts the discourse in terms of judicial overreach but fails to explore potential underreach with the same vigour. A tendency to focus on this issue also leads many to ignore the majority of decisions at first instance, where the everyday business of judicial review is conducted.
Frustration with the kinds of tendencies Craig identifies has, for the most part, led judicial review scholars to undertake more systematic studies of judicial review cases in recent years, often in the form of content analysis (for instance, see here and here). These studies generally take a sample of judgments, systematically categorise various aspects of each case (e.g. grounds, outcome, parties), and analyse the whole sample by reference to those categories to generate new insights into trends in decision-making. While these studies cannot remove the foundational disagreements about public law, they have the potential to significantly change the empirical basis of those disagreements and to identify questions of law closer to those arising in the courts day to day. They have provided new insights into the nature of judicial review and facilitated the testing, and sometimes the disproving, of claims made about judicial review decision-making. Nonetheless, they have some built-in limitations: such studies develop our understanding of decision-making in judicial review in a piecemeal way, and there is no comprehensive source of empirical data on decision-making. They are also often concerned with a specific stretch of time or with evaluating a specific claim made about decision-making in judicial review. The result is that this wave of new studies provides important glimpses into the reality of judicial review decision-making without providing a comprehensive picture.
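To make the mechanics of content analysis concrete, the sketch below shows one way a coded sample might be represented and tallied in Python. The schema, ground labels, and citations are entirely hypothetical and illustrative; real codebooks in these studies are far richer.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical coding schema: each judgment is reduced to a few categorical
# variables that can be tallied across the whole sample. All field names,
# ground labels, and citations here are illustrative, not a real codebook.
@dataclass
class CodedJudgment:
    citation: str
    grounds: list
    outcome: str  # e.g. "allowed" or "dismissed"

sample = [
    CodedJudgment("[2019] EWHC 100 (Admin)", ["irrationality"], "dismissed"),
    CodedJudgment("[2019] EWHC 200 (Admin)", ["procedural fairness"], "allowed"),
    CodedJudgment("[2020] EWHC 300 (Admin)",
                  ["irrationality", "legitimate expectation"], "dismissed"),
]

# How often does each ground appear, and how often does it succeed?
ground_counts = Counter(g for j in sample for g in j.grounds)
success_rate = {
    g: sum(1 for j in sample if g in j.grounds and j.outcome == "allowed") / n
    for g, n in ground_counts.items()
}
print(ground_counts)  # e.g. Counter({'irrationality': 2, ...})
print(success_rate)   # e.g. {'irrationality': 0.0, 'procedural fairness': 1.0, ...}
```

The value of this kind of structure is that, once a sample is coded, any claim expressible in the coded categories can be tested over the whole sample rather than over a handful of remembered cases.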
In addition to challenges accessing judgment data, the critical limitation on the advancement of systematic study of judicial review decision-making is analyst capacity. Systematic studies of judicial review are tough work. There are hundreds of judicial review judgments in the Administrative Court each year, and some are very lengthy and complicated. The manual, rigorous analysis of these judgments is slow. Where the work must proceed faster than an individual scholar can manage, extra research support is needed, and that capacity must be funded. Any aspirations for an updated and comprehensive database on judicial review decision-making are far beyond the reach of the current organisational frameworks for conducting research. To make progress, innovative solutions to this problem, which break away from traditional ways of working, are required.
We are currently exploring potential solutions to this challenge. In particular, we are examining how far it might be possible to use programmatic and machine learning techniques to automate the systematic analysis of judicial review judgments. To do so, we are working with a sample of over 5,300 judgments given by the Administrative Court between 1 January 2015 and 31 December 2020, made available to us for the purposes of this research by vLex Justis. The central technical question is: how far will it be possible to automatically collect data from this sample? We know that some degree of automation is possible: we have successfully extracted basic data (such as the names of the parties, the case citation, and the date) during the early stages of the project.
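As a rough illustration of what that basic extraction can look like, here is a minimal rule-based sketch in Python. The header text, field layout, and patterns are assumptions for the purposes of the example, not the project's actual code; real judgment headers vary considerably, which is partly why only simple fields have proven straightforwardly tractable.

```python
import re
from datetime import datetime

# An illustrative (invented) judgment header; real headers vary in format.
HEADER = """Neutral Citation Number: [2019] EWHC 2341 (Admin)
Date: 05/09/2019
Between: R (on the application of Smith) v Secretary of State for Justice"""

# Simple patterns for a neutral citation, a date, and the parties.
CITATION_RE = re.compile(r"\[(\d{4})\]\s+EWHC\s+\d+\s+\(Admin\)")
DATE_RE = re.compile(r"Date:\s*(\d{2}/\d{2}/\d{4})")
PARTIES_RE = re.compile(r"Between:\s*(.+?)\s+v\s+(.+)")

citation = CITATION_RE.search(HEADER)
date = DATE_RE.search(HEADER)
parties = PARTIES_RE.search(HEADER)

print(citation.group(0))                                    # [2019] EWHC 2341 (Admin)
print(datetime.strptime(date.group(1), "%d/%m/%Y").date())  # 2019-09-05
print(parties.groups())                                     # (claimant, defendant)
```

Patterns like these work well where a field follows a predictable convention, such as the neutral citation, and degrade quickly where it does not, which is where machine learning techniques become relevant.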
But we are also confident that we cannot automate this process entirely. There are major barriers to information extraction, such as the heterogeneity of judgment structure across both the Administrative Court and the tribunal system. In practical terms, the question is whether data collection can be automated to the point that a database can be maintained with a limited and sustainable amount of manual intervention. If it can, systematic evidence on judicial review decision-making, of a kind that can be used to quickly test and explore claims, would become much more widely available.
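One way to picture ‘limited and sustainable manual intervention’ is a human-in-the-loop pipeline: automated extractors accept only what they are confident about and route everything else to a manual-review queue. The sketch below illustrates that general pattern; the extractor, threshold, and keyword heuristic are all assumptions of ours, not the project's actual design.

```python
from typing import Callable, List, Optional, Tuple

# Accept an extraction only when the extractor is confident; otherwise
# queue the judgment for a human coder. Threshold and heuristic are
# illustrative assumptions.
REVIEW_THRESHOLD = 0.8

def route(text: str,
          extractor: Callable[[str], Tuple[Optional[str], float]],
          database: List[str], review_queue: List[str]) -> None:
    value, confidence = extractor(text)
    if value is not None and confidence >= REVIEW_THRESHOLD:
        database.append(value)     # accepted automatically
    else:
        review_queue.append(text)  # flagged for manual review

def naive_outcome_extractor(text: str) -> Tuple[Optional[str], float]:
    # Crude heuristic: confident only if exactly one candidate phrase appears.
    hits = [p for p in ("claim is dismissed", "claim is allowed") if p in text]
    return (hits[0], 0.9) if len(hits) == 1 else (None, 0.0)

db, queue = [], []
route("... for these reasons, the claim is dismissed.", naive_outcome_extractor, db, queue)
route("... the parties made further submissions.", naive_outcome_extractor, db, queue)
print(db)          # ['claim is dismissed']
print(len(queue))  # 1
```

The design point is that the sustainability of the database then depends on the size of the review queue, not on the size of the judgment sample.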
This would help to build a more accurate picture of how judicial review decision-making operates in practice, remedying the ‘snapshot’ effect created by existing studies and helping to disrupt the ‘pathologies’ often seen in current doctrinal approaches. It holds the potential to shift important debates within the political, policy, practitioner, and academic spheres onto a more evidence-based footing. Alongside the technical questions, however, we are also exploring equally important questions about the potential ethical and methodological limitations of using such techniques. While our project does not seek to predict case outcomes, there are significant questions about whether and how this kind of analysis should be used and how judgment data ought to be handled. We intend to report our initial findings in 2022.
This project is funded by an ESRC IAA grant and investment by Mishcon de Reya. The underlying dataset was made available by vLex Justis.
Cassandra Somers-Joce is a Research Assistant at the University of York
Daniel Hoadley is Head of Litigation Data at Mishcon de Reya
Editha Nemsic is a Data Scientist at Mishcon de Reya
Dr Joe Tomlinson is Senior Lecturer in Public Law at the University of York
(Suggested citation: C. Somers-Joce, D. Hoadley, E. Nemsic, and J. Tomlinson, ‘Better Evidence of Judicial Review Decision-Making: Exploring the Potential of Machine Learning’, U.K. Const. L. Blog (4th November 2021) (available at https://ukconstitutionallaw.org/))