The use of algorithms in the justice system in England and Wales by Christina Blacklaws

The opportunities and the challenges presented by technology present some of the biggest issues facing our global society. They force us to address deep ethical and jurisprudential questions and spark fierce debate. And nowhere is this debate more important, intense or difficult than around the areas of human rights and the justice system.

On 14th June, the Law Society of England and Wales announced the launch of a ground-breaking initiative: a year-long exploration of the impact of technology and data on human rights and the justice system. The Public Policy Commission, as it is called, will focus on the use of algorithms by various actors within the justice system.

The justice system is a core pillar of the effective operation of a democratic nation. It is vital that justice is dispensed efficiently and effectively, but most importantly that the system retains the trust and confidence of society. While innovation can be expected to push the boundaries of established norms, we should ensure that as a society we understand the choices we make, and that space is made for public discussion of the opportunities and challenges.

This area raises many issues around ethics, practicalities, oversight and commercial sensitivities. In policing, for example, algorithmic data and intelligence analysis is already used in three ways:

  • predictive policing on a macro level incorporating strategic planning, prioritisation and forecasting;
  • operational intelligence linking and evaluation which may include, for instance, crime reduction activities; and
  • decision-making or risk-assessments relating to individuals.

For example, Durham Constabulary have used an artificial intelligence system to inform decisions about whether to keep a suspect in custody. Their Harm Assessment Risk Tool (HART) is one of the first algorithmic models to be deployed by a UK police force in an operational capacity.

After proving effective in reducing the incidence of property crimes such as burglary in California, ‘PredPol’ has been used by Kent Police to conduct their own crime prediction hotspot mapping. After one year of operation, a review carried out by Kent Police in 2014 found that the software produced a hit rate of 11%, making it ‘10 times more likely to predict the location of crime than random patrolling and more than twice as likely to predict crime [than] boxes produced using intelligence-led techniques’.

Some in policing see this as the next big leap in law enforcement, akin to the revolution brought about by advances in DNA analysis. Privacy campaigners see it as the next big battleground for civil liberties, as the state effectively asks for a degree of privacy to be surrendered in return for a promise of greater security.

In addition, this technology has not always worked. The Met used facial recognition at the 2017 Notting Hill carnival, where the system was wrong 98% of the time, falsely telling officers on 102 occasions it had spotted a suspect. Indeed, algorithmic bias will be amongst the many key questions the Commission will explore in detail over the course of the year.

It is vital that we give voice to all those with a stake and then emerge at a point of informed consensus. This is why the Commission will carry out a year-long, multi-disciplinary programme of work to investigate the implications of the use of algorithms in the justice system. I have the privilege of chairing the Commission, along with co-commissioners Sofia Olhede and Sylvie Delacroix. Our final report, which will summarise all evidence gathered, is expected in February next year.

Technology does not only affect the justice system but also the way we work as lawyers. To start exploring this impact, we have launched a new horizon scanning report on Artificial Intelligence (AI) and the Legal Profession.

It explores developments in the use of AI in legal practice, such as document analysis and delivery, legal adviser support and case outcome prediction. It considers the likely implications for legal jobs and the types of legal work, and the impact on fee structures and costs. The report also examines the legal issues arising from the increased use of AI systems in society generally, including issues around transparency, ethics and liability.

Technological advances will change both how we work as lawyers and how the wider legal and justice system operates. We will have many opportunities to explore and challenges to address. It is important to make the most of these fascinating opportunities, while safeguarding against any potential issues they may bring.

 

Christina Blacklaws is the President of the Law Society of England and Wales.


About the Joint Brussels Office

The Law Societies' Brussels Office monitors developments and represents the profession in negotiations with the European institutions.

Find out more about the Joint Brussels Office