InnConversation

Jacob Turner

Barrister, Fountain Court Chambers
"AI’s role in law isn’t something for the future - it’s already here."
April 23, 2025

Jacob Turner is a barrister at Fountain Court Chambers, author of Robot Rules, and one of the leading voices in the rapidly evolving world of AI law. From representing countries such as Venezuela and Argentina in major international cases to securing the release of a dog named Achilles from police custody, his career has been anything but conventional. Known for his profound knowledge of AI and its impact on the legal profession, Jacob is at the cutting edge of discussions around ethics, regulation, and the future of law. In this InnConversation, he pulls back the curtain on the fascinating intersection of artificial intelligence and law, and explains why the courtroom of tomorrow is already being written today.

Authors
Ellie Hecht
ellie@innlegal.co.uk
Simon Spence
simon@innlegal.co.uk

AI is shaking up the legal world - what’s the biggest shift you see coming for chambers and law firms?

The biggest shift for law firms will likely be a move away from the billable hour model, as AI is increasingly able to undertake repetitive tasks at scale. For chambers, it will be the realisation that barristers do not have a monopoly on tasks such as legal research or even drafting. I think the impact of AI will be slightly different for law firms than for chambers. In very broad terms, law firms’ transactional work is more amenable to AI-driven automation than litigation work, because transactions tend to involve a large number of repeatable tasks with slight variations. By contrast, litigation tends to involve a more disparate set of tasks and capabilities, and turns more on unpredictable events. As a result of both the larger market for transactional support and the more readily commoditisable nature of the tasks involved, legal technology for law firms is currently more advanced and effective than for chambers. That said, the technology is ever improving, and it can’t be assumed that litigation technology will continue to lag behind in the long term.

Legal research used to mean hours in the library - now AI can do it in seconds. Will we ever trust it enough to replace human judgment?

Many lawyers are already using AI for legal research, and I predict the wider legal industry will become increasingly trusting of the technology. Just as we are comfortable using simple word searches rather than reading through enormous volumes of text, I think we will become increasingly comfortable using both classification and generative AI to analyse and summarise large volumes of information. There is no inherent reason why such technology should be worse than a human. We are not there yet, but at some point in the near future I anticipate AI’s legal research skills exceeding those of even the best human lawyers on demonstrable metrics, such as accuracy and speed. At that point it would be foolish to trust human judgment over the AI.

Should clients be told when AI has played a role in their legal services? Or is that just the new normal?

In most circumstances it will not be necessary to inform clients that AI has been used. Lawyers do not generally need to disclose to clients which technologies or tools they are using for ‘back-end’ tasks. For instance, a lawyer does not usually need to declare to a client that they have used Google search to find information, so long as the information in question is correct and verifiable. When a senior lawyer delegates certain tasks to a junior one, for example creating the first draft of a document, the senior lawyer nonetheless retains overall responsibility for the work. Ultimately, lawyers owe professional duties to their clients to satisfy themselves that their output is of an acceptable quality and has been checked for accuracy. That applies just as much to AI output as to output generated in any other way.

Can you imagine a courtroom where AI-generated arguments are submitted and debated? Would that be a legal revolution or a dystopian nightmare?

Not only can I imagine such a situation, it is already happening. There are various infamous stories of lawyers using AI to generate false legal precedents, but there are no doubt many other instances, which have not made the press, where lawyers have successfully used AI to locate authorities and to generate arguments that have been used in Court. Indeed, it is no longer just text being generated: recently a litigant in the US used AI to generate a video of an attorney speaking on his behalf. The Court quickly realised what was happening and asked why permission had not been sought, but the possibility remains, and it is fascinating. The one key thing to remember with AI is that the technology is not getting worse; it will only get better. The use of AI in litigation is already a legal revolution, but whether it is a positive or negative development is in our hands to control. If properly regulated, it could lead to massively improved access to justice.

AI is writing contracts, drafting legal opinions, and assisting with case law - how do we make sure ethics keep up with the speed of technology?

Regulators can play a significant role in both shaping and encouraging the adoption of AI. One of the main barriers at the moment is uncertainty as to the regulatory and ethical implications of AI use in the legal industry. The Bar Standards Board, SRA and Law Society have already begun publishing guidance on how the technology can be adopted in conformity with professional obligations. In due course the professional obligations, and likely also the CPR, will need to be updated to keep pace with these technological changes.

In Robot Rules, you explore the ethical challenges posed by AI. Which do you believe is the most urgent ethical concern that we need to address when regulating artificial intelligence?

To date there has been insufficient focus on responsibility for harm caused by AI. The problem is a difficult one because we have never before needed to deal with a technology which can take its own decisions. Clearer articulation is needed of the duties on the developers as well as the deployers of AI. Without knowing the ambit of these duties it will be impossible to work out whether they have been breached and therefore whether liability arises.

You propose a framework for regulating AI. How do you ensure that your approach strikes the right balance between encouraging innovation and safeguarding against potential risks?

It is often assumed, especially in the field of AI, that regulation and innovation are in tension. They are not. Properly designed and implemented, a stable regulatory scheme can provide those developing AI with the confidence and stability necessary to invest and ultimately to innovate. We have seen this, for example, in the pharmaceuticals industry in the UK, which is heavily regulated but also highly successful commercially. Rather than oscillating between calling for greater AI safety and unleashed AI creativity, it would be more sensible to lay down a principled framework for responsible innovation and enshrine this in legislation. More detailed guidance can be made and updated over time by agencies with delegated powers, so as to ensure the more granular requirements remain up to date.

In the book, you discuss the issue of legal responsibility in the context of AI. How do you think the law should address accountability when autonomous machines cause harm or make decisions independently?

There are two main issues to overcome when determining responsibility for harm caused by AI: duty and causation. As to duty, we need to be clearer about what is expected from the developers and deployers of AI at each stage of the value chain. Emerging global regulatory norms, such as those set out in the EU AI Act for High Risk AI, are likely to help shape expectations in this regard. As to causation, this is currently an ongoing problem at the level of scientific proof. As things stand, it can be hard to determine which actor is factually responsible for harm caused by AI in circumstances where a system can learn and adapt in unpredictable ways based on multiple inputs. One solution might be to establish a form of ‘strict’ liability along the lines of product liability. The EU has already expanded the Product Liability Directive to include AI, and it would be sensible for the UK to do the same.

Robot Rules compares various global AI regulations. Which country’s approach do you find most effective, and what aspects of their regulations should be adopted by other nations?

I think it is important to be clear about what is meant by ‘effective’. I’ll assume it means that the law is clear, can be followed, and achieves its aim (usually public protection) with minimal disruption to innovation and minimal other negative externalities. It is too early to comment on the empirical effectiveness of specific regimes because very few countries have enacted AI-specific legislation, and even where it has been enacted (the main example being the EU AI Act), for the most part it has not yet come into force. The EU AI Act is helpful in parts, in particular the detailed expectations laid down for ‘High Risk’ AI developers and deployers. These are already helping to set a global baseline for AI obligations, as can be seen from South Korea’s recent adoption of similar legislation. Other aspects of the EU’s legislation, especially the prohibitions on certain AI, I consider to have been poorly designed and likely to lead to unintended consequences, such as widely adopted and helpful technologies being banned.

In addition to your work on AI, you also represent various sovereign states, such as Venezuela, Libya, Argentina and India. What unites sovereign and AI work?

What interests me about both fields is that they sit at the intersection of law and politics. I have always enjoyed the interaction between legal problems and the wider societal debates of which they form part. Working for sovereign states always brings with it a public angle, and likewise working in AI policy and regulation requires a consideration not just of an individual legal question but its broader impact. What I love about sovereign disputes is that they are invariably interesting and significant. My sovereign cases have ranged from freeing the flagship of Argentina’s navy from the port of Accra in 2013, where it had been arrested by a US vulture fund, to litigating in the English Courts as to which group was the proper government of Venezuela and hence controlled $2bn worth of the country’s gold bullion held in the Bank of England.

Have you seen any trends in sovereign litigation?

One of the trends I have noticed in recent years is the gradual and continuing encroachment on sovereign immunity by the English Courts. On the one hand, this perhaps represents a greater emphasis on the individual economic and human rights of claimant parties. On the other hand, the English Courts are straying into territory which has historically been the preserve of inter-governmental relations, and weakening the rights of nations which represent the interests of many millions of people.

What was your first job before becoming a barrister?

I worked as a speechwriter to an Ambassador at the United Nations in New York.

Most interesting person you’ve represented?

I recently acted in a judicial review of the Hertfordshire police force, to free a dog called Achilles that we said had been wrongly imprisoned by them in a dog pen. After we had succeeded in obtaining urgent interim relief requiring the police to return Achilles to his owner, I played “Who Let the Dogs Out?” on repeat for several days.

If you could have any superpower to help you in your job, what would it be and why?

Unlimited time.

Finally, our previous guest asks
William Peake
Global Managing Partner, Harneys
Buy Nvidia stock! I could see it was an early leader in AI chip development from around 2015, when I started to become interested in the area, but instead of investing I foolishly decided to write a book about AI. Buying shares would have been more efficient.
