The last year has seen a number of important reports touching on the subject of artificial intelligence (AI) and lawyers’ ethics.
One of the most important was published in April of this year when the European Commission launched its ethics guidelines for trustworthy artificial intelligence (https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai). Although it surveyed the entire field of AI and not just its use in legal services, the seven key requirements that it identified are directly applicable to our work: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability.
I comment below on how I believe those seven requirements apply to lawyers. What is interesting is that there is nothing new needed in a lawyer’s professional code, merely application of our existing values to a new set of circumstances.
Human agency and oversight
Lawyers are expected to supervise and take responsibility for the work that goes out in their name. This applies as much to work undertaken by machines as to work undertaken by juniors or paralegals. In order for this to happen, a variety of pre-conditions must be met.
First, lawyers must be trained in new technology, so that they understand what it does. Sometimes this happens – increasingly in the USA – as part of a law degree. Sometimes it happens during the training contract (see the SRA’s report on Technology and Legal Services published late last year (https://www.sra.org.uk/risk/resources/technology-legal-services.page)).
Second, lawyers must keep up to date with developments in technology. This is now specifically referred to in the commentary to Rule 1.1 (which deals with competence) of the American Bar Association’s Model Rules of Professional Conduct, which states that ‘[t]o maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology’ (https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/).
Technical robustness and safety
This is obviously of key importance for lawyers. It arises because of the necessity to keep data safe, and to take steps to guarantee cybersecurity. This means not only complying with certain laws, such as the GDPR, or keeping a firm’s operations out of the reach of criminals, but also implementing one of the core values of lawyers, that of client confidentiality. Without technical robustness and safety, none of these can be assured.
Interestingly, in relation at any rate to robustness and safety, the SRA says: ‘If there is an error or flaw in an AI system run, or provided by, a separate technology company then we are unlikely to take regulatory action where the firm did everything it reasonably could to assure itself that the system was appropriate and to prevent any issues arising’.
Privacy and data governance
This is also covered in part by what I have said above.
The SRA says that firms need to be able to demonstrate that their advice is competent, fair and compliant with other obligations, such as confidentiality and conflict obligations. Without being able to show to clients how automation deals with their data and secrets, firms may be failing to comply with the law and their professional ethical obligations. Firms may also be unaware of biases in their system – see the requirement below.
Diversity, non-discrimination and fairness
This was one of the areas covered by another recent report on AI and lawyers’ ethics: a report on the use of algorithms in the justice system, sponsored by the Law Society and published by its Technology and Law Policy Commission this June (https://www.lawsociety.org.uk/support-services/research-trends/algorithm-use-in-the-criminal-justice-system-report/). Other studies have already shown that there is postcode and racial bias in AI systems used by the courts and others to predict certain outcomes. Information which comes out of an automated system, even an intelligent one, is only as good as the data put into it.
Environmental and societal well-being
These are general considerations, without specific application to lawyers alone. For instance, law firms should consider resource usage and energy consumption in the systems they use, and also ensure that their systems do not adversely affect physical and mental well-being.
Accountability
This complements a number of the requirements already mentioned: lawyers should take responsibility for the AI that they use, should assess any negative impacts it may have and take action to correct them, and should provide redress for those harmed.
The future struggles, I anticipate, will soon take place over ‘Human agency and oversight’ and ‘Accountability’. For instance, must all AI be overseen by a lawyer? (In our jurisdiction, the answer is no, other than for reserved activities.) And who is liable for legal services, perhaps delivered across borders, by an AI system not developed by the provider?
For the record, the European Commission is now undertaking a piloting process of its ethics guidelines, with stakeholders invited to test the list and provide feedback (https://ec.europa.eu/futurium/en/ethics-guidelines-trustworthy-ai/register-piloting-process-0).