Artificial Intelligence and the Law

Civilisation is a recent affair.

The last Ice Age ended a mere 12,000 years ago, and what we call “civilisation” was built on a series of embedded technologies: discoveries so general in their application, and so transformative in their spread, that they changed the direction of history.

Fire. Agriculture. Bronze and then iron. Later, the printing press, and still later, electricity. Historians reckon there have been perhaps two dozen such inflection points.

They are arriving faster now, each wave of innovation breaking more quickly upon the last.

Artificial intelligence (AI) is the newest of these technologies. Just as fire and the printing press transformed society, AI is poised to reshape our world in ways we are only beginning to understand. It is already embedded, woven into countless systems and applications, as I have found: half-noticed yet unavoidable. There is no shortage of commentary about its rise, its promise, and its perils: books, papers, websites, films, even the occasional melodrama, for anyone who wants to learn more.

The information problem it immediately presents is not one of scarcity but of glut. There is too much information, too little that is digestible. When I “discovered” AI, I quickly realised that it was going to be important: for our world, and more prosaically for my work. So I set out to learn about it, and then to record what I had discovered in a short book, “Andrew and the Marvellous Analytical Engine”, published in September of this year. What follows is drawn from the book.

What is artificial intelligence?
Artificial intelligence (AI) refers to computer systems built to carry out tasks that usually require human intelligence. These include recognising patterns, solving problems, understanding language, and learning from experience.

At its core is a collection of mathematical, statistical, and computational techniques designed by people to simulate aspects of human reasoning. Popular culture has equated AI with the concept of “thinking machines”: machines which have autonomy and think in the way that a human being does. Films like The Terminator or the reimagined Battlestar Galactica depict machines with desires, instincts, or intent. It is possible that a “thinking machine” might someday be created, but thinking is not what the current generation of machines does.

AI software has no consciousness, no personality, and no independent will. It is software built to execute instructions, using data and algorithms provided by humans.

Even the most advanced language models or image recognisers are highly sophisticated pattern-matching systems, nothing more. Through vast computational power, they are able to analyse data and devise solutions of “best fit” to instructions put to them.

The real purpose of AI is practical. Scientists and engineers have always sought to make machines more useful: to process information, automate work, and extend human capabilities.

In medicine, AI helps diagnose diseases and develop treatments. In agriculture, it boosts yields and reduces waste. In climate science, it sharpens forecasts and guides disaster responses. Computers can scan oceans of data and offer insights at speeds no human can match.

The implications are vast. AI promises access to knowledge, education, and expertise on a global scale. But it also raises questions: privacy, accountability, fairness, and bias.

Integrity too: in what circumstances might it be inappropriate to deploy AI? Responsibility remains with people, not machines. Properly guided, AI is a powerful tool, one that will shape the future not as a robot overlord, but as a servant of human ingenuity.

AI really came to popular consciousness in 2022, with ChatGPT. The technology behind ChatGPT represents some of the most remarkable advances in artificial intelligence, specifically in the field known as natural language processing. At the heart of ChatGPT is a large language model, a sophisticated kind of neural network known for its ability to understand, generate, and reason with human language.

ChatGPT belongs to the family of large language models (LLMs). An LLM earns that title from both its size and the scale of its training.

Fed vast amounts of text (books, articles, websites), the model encodes patterns as billions, sometimes trillions, of parameters, akin to neural connections in the brain. Through pre-training, it learns grammar, facts, idioms, and logic. Fine-tuning follows, with human feedback guiding the model towards safer, more useful answers.
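
For readers curious about the mechanics, the end product of that training can be glimpsed directly. A minimal Python sketch is set out below, using the small, publicly available GPT-2 model through the Hugging Face transformers library as a stand-in (the prompt is purely illustrative, and ChatGPT’s own models are vastly larger and further tuned): given some text, the model assigns a probability to every possible next word, or “token”, and the most likely candidates can be listed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available model as an illustrative stand-in.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The court held that the"          # illustrative prompt
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for every possible next token
probs = torch.softmax(logits, dim=-1)

# Show the five tokens the model considers most likely to come next.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}  {p.item():.1%}")
```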

In 2025, the landscape of AI applications available to individual consumers is astonishing. Today, people can access sophisticated AI for free or at modest subscription rates, with tools ranging from writing assistants to creative design platforms.

In sum, the consumer-facing AI ecosystem in 2025 empowers individuals in ways that only a few years ago would have seemed remarkable. Anyone can now access capabilities once reserved for specialised professionals: from writing and translating to design, automation, and research.

The unifying thread across all these tools is their capacity to boost efficiency: automating repetitive work, sparking creativity, synthesising complex information, and making new skills or insights instantly accessible.

Use of AI also presents a plethora of significant dangers that can affect individuals and society on a daily basis. As AI systems have become more powerful and more widely deployed, misuse and unintended consequences have grown more prominent.

One of the clearest and most striking perils comes in the form of “deepfakes”: AI-generated videos, audio, or images that convincingly imitate real people and events. These creations are not just curiosities or tools for entertainment; in the wrong hands, they can be devastating weapons of manipulation.

As a phenomenon, they illustrate in the most profound way the problem of integrity: the genuine quality (or not) of AI-produced output, and whether it is human-made or machine-made. In this respect, even as the machines evolve, humans still largely prefer to deal with another human being rather than a machine.

Deepfakes may show politicians apparently declaring controversial policies, business leaders admitting to crimes, or ordinary people participating in situations that never occurred. When people cannot trust that what they see and hear actually occurred, the fabric of civil society is strained, and the notion of shared reality is eroded.

On a different level, but continuing the theme, there are some areas, and some documents, where we expect communication to be produced by humans: something as simple and obvious as personal correspondence between friends or spouses. How would the recipient feel to learn that it had been produced by a machine rather than written by hand?

Education, the key to social mobility, faces its own threats. With AI capable of generating essays, coding assignments, reports, and even exam answers indistinguishable from genuine student work, traditional assessment methods are under assault.

Teachers and examiners may struggle to verify the authenticity of student submissions. Standardised testing and coursework risk becoming contests not of knowledge, but of one’s ability to leverage or outwit AI tools, endangering both academic standards and fairness.

One of the more insidious dangers lies in the transmission and entrenchment of discrimination and bias. If AI systems are trained on data that reflects society’s prejudices, be it racial, gender-based, economic, or otherwise, they may produce recommendations, decisions, or outputs that perpetuate those inequalities.

In hiring, lending, policing, or housing, AI models have sometimes made unjust predictions or reinforced discriminatory patterns, not because of malice, but because the data on which they were trained carried such hidden prejudices. This can institutionalise inequitable treatment, all while cloaked in the appearance of algorithmic objectivity.

The problem of poisoned data compounds these issues. Adversaries may deliberately introduce false, misleading, or prejudiced data into the training sets used for AI models. This so-called data poisoning can render models unreliable, dangerous, or biased in targeted ways.

For example, an AI tasked with content moderation can be sabotaged to ignore certain kinds of hate speech, or a medical diagnostic system can be steered to provide incorrect recommendations. The more society entrusts decision-making to AI systems, the greater the harm from even subtle distortions.

All of these dangers are exacerbated by the fact that AI technology is becoming more accessible, cheaper, and easier to deploy. Not only states and corporate actors, but private individuals and small groups can access sophisticated AI tools, sometimes with little oversight.

AI in Legal Practice
Turning now to legal practice, the approach that I have taken to unravelling the potential uses of AI is not to start with the applications or the software and try to find something to do with them.

Instead, a better way is to look first at the sector of legal practice as a whole, and to consider how its nature may be changed by the deployment of AI at scale over the next three to five years. How might my work change as a result of what I can see actually or potentially unfolding? I have chosen to explore some themes below.

Fraud and falsification of evidence
The spread of AI through legal practice has opened new ground for fraud. Deepfake and synthetic media technologies now allow evidence to be fabricated with a degree of realism that would once have seemed impossible. For litigators, any assumption that digital evidence is inherently reliable can no longer stand.

To meet this challenge, it is necessary to understand both the techniques of manipulation and the counter-forensic tools developed to expose them.

At the heart of the problem lie Generative Adversarial Networks (GANs). These pit two neural networks against each other, one creating synthetic content, the other critiquing it, until the fabrications pass for real.
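
To make the adversarial idea concrete, here is a toy sketch in Python using the PyTorch library. It is not any of the tools discussed in this article, and it assumes only that PyTorch is installed: it trains a “forger” network to fabricate numbers resembling a genuine distribution while a “critic” network learns to tell them apart, the same tug-of-war that, at vastly greater scale, produces convincing synthetic faces and voices.

```python
import torch
import torch.nn as nn

def real_data(n):            # "genuine" samples: numbers clustered around 3.0
    return torch.randn(n, 1) * 0.5 + 3.0

def noise(n):                # random seed material for the generator
    return torch.randn(n, 8)

# Generator ("forger"): turns random noise into a fabricated sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator ("critic"): outputs the probability that its input is genuine.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1. Critic's turn: learn to separate genuine samples from fabrications.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2. Forger's turn: learn to produce fabrications the critic accepts as genuine.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, the forger's output drifts towards the genuine cluster (around 3.0).
print(G(noise(1000)).mean().item())
```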

Off-the-shelf applications such as DeepFaceLab make cinema-quality face swaps available to anyone with a laptop. Simpler programs like FaceSwap or SwapMe replace one face with another while preserving natural expressions.

StyleGAN goes further, generating photorealistic human faces that belong to no one, offering fraudsters the possibility of conjuring entirely fictitious witnesses.

Manipulation is not confined to faces. Neural text models can generate contracts or correspondence with fabricated terms or signatures. Voice cloning systems such as Real-Time Voice Cloning or WaveNet reproduce an individual’s speech patterns so convincingly that synthetic admissions or threats may be fabricated with ease.

The sophistication of these methods demands equally sophisticated defences. Intel’s FakeCatcher identifies fakes by analysing minute colour changes in human skin caused by blood circulation, which current deepfakes cannot mimic.

Forensic work also looks to provenance rather than detection alone. Blockchain can provide immutable records of digital content from its creation onwards.

Cryptographic hashing assigns each file a digital fingerprint that will instantly betray later alterations. Metadata analysis can reveal inconsistencies in time stamps or device identifiers, pointing to manipulation.
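
The “digital fingerprint” idea is simple enough to show directly. In the minimal Python sketch below (the filename is hypothetical), a SHA-256 hash is computed when evidence is first received and recomputed later; any alteration to the file, however small, yields an entirely different hash.

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Recorded when the evidence is first received (filename is illustrative only).
original = fingerprint("witness_statement.pdf")

# ... the file is stored, shared, disclosed ...

# Recomputed later: even a single changed byte produces a completely different digest.
if fingerprint("witness_statement.pdf") == original:
    print("File matches the recorded fingerprint.")
else:
    print("File has been altered since the fingerprint was recorded.")
```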

In short, the evidential battleground is shifting. The age of “seeing is believing” is over; in its place comes an era where evidence must be tested not only on what it shows, but on how it is shown to be real.

Time-based billing
For more than half a century the billable hour has reigned supreme in the legal profession of England and Wales. That reign is now drawing to a close. Artificial intelligence has not merely chipped away at inefficiencies; it has upended the economics of practice itself.

What began in the 1960s as a supposedly objective measure of work is becoming untenable in an age where tasks that once occupied a team of juniors for a week can be dispatched by an algorithm. The profession faces one of its greatest upheavals since hourly billing first took root.

The most obvious casualty is document review. Once a reliable generator of hours, it is now performed by AI systems that can process millions of files overnight with accuracy rates surpassing human reviewers.

The Serious Fraud Office has already shown that disclosure can be managed with AI assistance, cutting months of labour down to days while remaining compliant with legal duties.

Legal research, too, has shrunk from days in the library to minutes online, as AI platforms parse legislation, precedent, and commentary at lightning speed. Drafting is under fire too. AI tools now produce contracts, pleadings and correspondence with a consistency and speed that renders the old billable model faintly absurd.

First drafts are created in a fraction of the time, errors are fewer, and clients are unlikely to pay for afternoons spent labouring over work a machine could finish before lunch. From routine emails to complex transactional structures, AI has become a silent partner in the production of legal documents.

Litigation support has followed the same path, with systems able to model case strategies, predict judicial leanings, and generate arguments from vast databases of decisions. Once clients know that this capability exists, they will resist subsidising the inefficiency of hourly billing.

The paradox is sharpest for firms that have invested heavily in AI: the better their systems perform, the fewer hours they can justify billing, and the more strained their revenue model becomes. They may become leaner and more efficient, yet worse off financially in consequence.

The solution, already embraced by many, is value-based billing. Firms are moving, or will have to move, to fixed fees, capped arrangements, subscriptions, and outcome-driven pricing.

Firm culture must also change. Billable hour targets have been the yardstick of performance and promotion for decades; replacing them with metrics based on value will demand a wholesale rethinking of evaluation, compensation, and partnership structures.

Professional negligence
AI is now reshaping professional standards in legal practice across England and Wales, and in doing so has created an interesting dilemma. Lawyers may face negligence claims both for using AI unwisely and for failing to use it when it would have improved the quality of their work.

The Master of the Rolls, Sir Geoffrey Vos, captured the dilemma neatly in a speech to the Professional Negligence Bar Association: lawyers are “damned if they do and damned if they don’t.”

A solicitor who ignores technology that improves research coverage or analytical accuracy may one day be judged as failing to meet the reasonable standard of competence.

A doctor who fails to use AI diagnostic tools may in time find that the omission is regarded as negligent. Lawyers are no different.

As AI systems prove their capacity to sift disclosure, locate authorities, or spot inconsistencies at scale, the expectation will grow that competent practitioners will deploy them. The refusal to do so may itself become actionable.

The practical message for professional indemnity lawyers is that the duty now runs both ways. The legal profession must understand what AI can and cannot do, must have systems in place to verify its outputs, and must be transparent with clients about how it is employed.

As consequential steps, lawyers must invest in training and infrastructure, acquire technical competence while maintaining traditional skills, and grapple with ethical questions around informed client consent.

Legal research
AI has begun to change legal research more profoundly than any other area of work in England and Wales. AI tools now do more than list cases.

They can track judicial trends, identifying, for instance, that courts are increasingly ready to grant interim injunctions in data protection disputes, or that proportionality in disclosure is being applied with greater rigour.

Verification remains critical. AI outputs must be cross-checked against established sources such as Westlaw or Lexis. In practice the technology serves as a powerful starting point, generating avenues for research that are then tested through traditional methods. Used this way, the hybrid approach combines AI’s speed with human accuracy.

The reason for caution is clear. AI hallucinations, where a model produces false case references, are now a clear problem. They come with plausible citations, authentic-sounding judicial language, and neatly drafted reasoning that appears tailor-made to the case at hand.

A practitioner under time pressure, or working outside their usual field, may not notice until too late. General-purpose chatbots are particularly treacherous: trained on the internet at large, they lack any sense of legal hierarchy and will cheerfully “cite” a fictitious House of Lords case that overturns binding Court of Appeal authority.

The future of legal research will not be machine-only nor human-only, but a blend. AI will supply speed, scope, and pattern recognition. Lawyers will supply verification, discernment, and judgment. The profession’s task is to harness the power of the new technology without abandoning the standards that give legal advice its authority.

Conclusions
AI is beginning to transform legal practice in England and Wales in ways that are both practical and profound. Rather than focusing on hype, the key to understanding its impact lies in identifying everyday legal problems and asking how AI can address them.

At the same time as promising tremendous benefits, AI introduces risks: deepfakes and synthetic media challenge assumptions about the reliability of digital evidence, requiring new forensic tools, provenance checks, and judicial protocols to test authenticity.

The billable hour is also under pressure. As AI automates research, drafting, and disclosure, clients resist paying for time rather than value. Firms and barristers alike are moving toward value-based fees, though cultural and operational hurdles remain.

Alongside these economic shifts, AI is reshaping professional standards: negligence may arise both from over-reliance on flawed AI and from failure to use it when it would enhance competence.

In terms of legal research, AI promises speed and efficiency but demands verification, oversight, and ethical vigilance. The challenge for lawyers is to harness these tools responsibly while adapting business models and professional duties.

“Andrew and the Marvellous Analytical Engine” is available for purchase at www.amazon.co.uk: £9.99 for the Kindle version and £19.99 for the print version.

A version of this article first appeared in Litigation Funding magazine.
