The hard stuff

2024 is the year when the use (or abuse) of artificial intelligence (AI) will become a real issue for the legal profession. A sign of the times is that the Bar Council has recently published a document containing what is emphatically not guidance, nor legal advice, which sets out a number of points on the use of AI by barristers. A full copy of the document can be found here: Considerations-when-using-ChatGPT-and-Generative-AI-Software-based-on-large-language-models-January-2024

The internet is full of information and commentary on artificial intelligence: Google tells me that simply entering that term will produce some 3,370,000,000 results. Much of it, however, can be categorised as “white noise”, and much of the commentary will simply be irrelevant to the legal profession. So I have decided, from time to time, to write articles or blog posts on AI and its practical uses for me, and to explain how I am using it for activities related, directly or tangentially, to my work.

At the moment I have acquired monthly subscriptions to ChatGPT Plus and Microsoft Copilot: the former is perhaps the best known of the “chatbots” which have appeared in the last year or so, and the latter is a newer product, released this year, which lives inside the Microsoft Office 365 suite of programs.

It therefore offers the tantalising prospect of integrating directly into my workflow, as I produce Word documents, Excel spreadsheets or PowerPoint presentations. But before we get going on this new enterprise it is worth noting some points from the Bar Council “non-guidance”. As I am going to be using ChatGPT, let us begin with what it has to say about this product:

5. ChatGPT is an advanced LLM AI technology developed by OpenAI. It is based on GPT architecture, which stands for ‘Generative Pre-Trained Transformer’. The latest iteration of ChatGPT at the time of this guidance is GPT-4. Transformer architecture uses mathematical matrices, supplemented by corrective procedures and technologies. The number of parameters used by GPT-4 is thought to be in the many billions.

6. In common with other LLMs (such as Google’s Bard), ChatGPT is trained on huge amounts of data, which is processed through a neural network made up of multiple nodes and layers. These networks continually adjust the way they interpret and make sense of data based on a host of factors, including the results of previous trial and error.

7. Certain consequences inevitably follow from the nature of the technological process that is being carried out. LLM AI systems are not concerned with concepts like ‘truth’ or accuracy.

The constraints and challenges presented by ChatGPT (and similar products) are noted later in the document, and as its potential to “hallucinate” is well known, I shall move on to the issues that arise in respect of legal professional privilege, confidential information and data protection compliance:

19. Be extremely vigilant not to share with a generative LLM system any legally privileged or confidential information (including trade secrets), or any personal data, as the input information provided is likely to be used to generate future outputs and could therefore be publicly shared with other users. Any such sharing of confidential information is likely to be a breach of Core Duty 6 and rule rC15.5 of the Code of Conduct, which could also result in disciplinary proceedings and/or legal liability.

20. Barristers will also need to comply with relevant data protection laws. You should never input any personal data in response to prompts from the system. Note that in March 2023, the Italian Data Protection Authority issued a temporary ban on ChatGPT, largely to investigate whether there was a lack of any legal basis for the collection and processing of any personal data used for training the system, and whether there was a lack of any proper notice to data subjects. Italy, France and Spain are currently investigating OpenAI’s processing of data. Using only synthetic data (that is data that is artificially created) on prompts to the LLM represents one possible way to avoid the risk of falling into breach of the General Data Protection Regulation (EU 2016/679) as retained in English law (UK GDPR).

21. As practitioners will be aware, the regulatory landscape in this area is in a state of flux and it is difficult to predict exactly what the UK position will be. Under the EU AI Act certain uses of AI tools in legal practice are categorised as ‘high-risk’ which triggers heightened regulatory obligations. The UK Government’s White Paper: A pro-innovation approach to AI regulation, published in March 2023, suggests that existing regulators should act in accordance with five principles (similar to the OECD principles on AI although with different wording):
(i) safety, security and robustness;
(ii) appropriate transparency and explainability;
(iii) fairness;
(iv) accountability and governance;
(v) contestability and redress.

22. In the UK, the Information Commissioner has published guidance in relation to the development and use of technologies such as ChatGPT: “Generative AI: eight questions that developers and users need to ask”.

Thus the key point to note at this stage is that one should not input into these publicly available tools what I might describe as “work, work”: that is, the papers relating to a particular case, or the facts of a particular case that I am working on.

Although they offer the prospect of being able to summarise or analyse data, at this time these products should be used only for the non-confidential elements of legal practice, or what might be called the “non work, work”.
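For those who do want to experiment with prompts drawn from real working documents, one practical precaution, in the spirit of the Bar Council’s synthetic-data suggestion, is to strip obvious personal identifiers from text before it ever reaches a public chatbot. The sketch below is my own illustration, not anything prescribed by the Bar Council, and the patterns shown (e-mail addresses and UK-style mobile numbers) are examples only; a real matter would need far more careful review than any regular expression can provide.

```python
import re

# Illustrative patterns only (an assumption for this sketch, not an
# exhaustive list): e-mail addresses and UK-style mobile numbers.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+44\s?7\d{3}|07\d{3})\s?\d{3}\s?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace matched personal identifiers with placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

prompt = "My client (jane.doe@example.com, 07700 900123) disputes the invoice."
print(redact(prompt))
# → My client ([EMAIL], [PHONE]) disputes the invoice.
```

Even with such a filter, names, case facts and anything privileged can still slip through, which is why the safer course remains the one above: keep the “work, work” out of these tools altogether.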

We shall therefore start our digital odyssey next week with a consideration of AI-generated artwork, and consider not only how this can be done, but how the pictures and products might be used in legal practice or legal marketing.
