#25 What Happens When a Law Firm Uses AI Tools On Real Legal Work?
This week, I summarize and give my takeaways regarding an AI study conducted by an international law firm.
AI will change the world, but how will it change M&A? I want to focus on AI’s impact on M&A in this newsletter. I am not an expert on either M&A or AI, but I want to learn about both topics and how they intersect. I thought there might be others in my situation (or people who are experts in one field or the other) who would find information on M&A and AI helpful in their careers, so I created this newsletter to track and share what I learn.
Law Firm Conducts Firm-Wide Study on AI Use
Ashurst, a UK-based law firm, recently conducted a firm-wide study on how legal AI tools impact their practice. Here is a link to a summary of the study written by Bloomberg Law. For those with some time on their hands, here is the full article on the firm’s website. I’ll give a short summary of the important points and then provide my takeaways.
The Study
The firm allowed its lawyers and staff to use three AI tools to test their effect on its practice. The firm did not disclose which tools were used but said that the lawyers used a general-purpose model, a legal-specific model, and a model narrowly tailored to a particular task. Understandably, the firm did not allow the lawyers to use client data with the models; risking client data in a study like this would be hard to justify, though the results would have been even more interesting if real client data had been used! In any case, the study was both quantitative and qualitative and spanned Ashurst's practice areas and offices. There are four major findings from the study, listed below.
Speed and Efficiency
Summary of findings:
The study found that lawyers using AI can “draft UK corporate filings requiring review and extraction from the company articles of association” with roughly an 80% time saving, and can generate industry reports based on public filings in approximately 60% less time than a lawyer without AI. The participants noted that AI was most helpful for creative tasks, like creating a blog post, social media post, or other marketing material. Additionally, participants noted that using AI to create a first draft made it easier to pass the document to the next reviewer. However, the process was much slower if the AI-produced draft was incorrect.
My Takeaways:
First of all, saving over half the time on these tasks is impressive, and an 80% saving is extremely impressive. Second, this is a great example of how AI can complete low-level tasks, freeing up the user to do more high-level work. Filling out corporate filings by pulling information from corporate documents is not the flashiest work, and some may say it is a little boring. Based on the study, AI is already primed to take over this task, allowing the user to review the AI’s work for accuracy and completeness and then move on to a more substantive task. Lastly, I think this finding is a good representation of where AI technology stands today. AI cannot yet take over high-level tasks, like deal strategy or drafting complex documents (hence the study’s finding that when AI is wrong, it takes the user more time than normal to complete the task), but it is certainly good at extracting information from one document and filling in the blanks of another!
Legal Quality
Summary of findings:
The study set out to assess the “legal quality” of AI’s outputs. The firm recognized that “legal quality” is hard to evaluate quantitatively; instead, the “legal quality” of a particular piece of work product depends on the experience, expertise, and general perspective of the lawyer reviewing it.
My Takeaways:
This is related to my last point, above. It also raises a good question about how we judge AI’s legal outputs. Lawyers “practice” law, which suggests there is no one correct way to “do” law, and lawyers differ on what is acceptable based on their experience and expertise. I agree with the study on this point. How can we objectively judge an AI’s output when lawyers’ outputs also differ? Of course, some things can be judged objectively, like legal accuracy,1 acceptable industry formatting, etc. But the point still stands: two lawyers can differ on certain points, and both can be correct!
Miscellaneous Tasks
Summary of findings:
The firm also studied AI’s impact on miscellaneous work tasks like notetaking, editing, and acting as an assistant. The study concluded that AI was very valuable to the participants for highly tedious tasks, like notetaking during a meeting. The participants stated that AI freed them up during meetings, allowing them to focus on higher-value work, like communicating with a client or thinking about strategy.
My Takeaways:
Today’s AI is best at exactly these tasks: summarizing a meeting, acting as an enhanced “ctrl-f,” and analyzing documents. While simple, these abilities meaningfully enhance effectiveness. There is a lot of hype surrounding AI’s legal capabilities, but no tool has lived up to it yet. Meanwhile, AI is quietly giving users highly accurate capabilities that save significant time. While the hype is interesting, there should be more focus on AI’s current use cases, and I think this study does a good job of highlighting them.
When AI Should be Used in Law
Summary of findings:
The study referred to this question as the “jagged technological frontier,” meaning the uneven divide between tasks AI performs well and tasks it performs poorly. We have talked about this issue before, and I concluded that AI is generally good at creative tasks but not so good when precise language is required. The study points out that reserving AI only for tasks it is already good at will not advance the adoption of AI in the law. For that reason, participants were encouraged to use AI on difficult tasks, challenging both the AI system and the user to think of creative applications of the technology to legal work.
My Takeaways:
I had never heard of the “jagged frontier,” but I think the study is correct despite what I said above—lawyers must push the boundaries of AI in law. It will make both the AI and the user uncomfortable, but it’s necessary to move the technology—and the profession—forward. AI should be used when it’s easy and accurate (like taking meeting notes or enhanced “ctrl-f”) and when it’s difficult (like drafting contract provisions).
Conclusion
This study sets a great example for firms looking to test AI. It even provides some tips on conducting intra-firm tests (see the link above). I enjoyed reading the study because I think it gives good insight into AI’s current legal capabilities. I don’t have any real legal problems to solve at the moment, so it is nice to see that someone else is testing AI on real-life legal work.
About me
My name is Parker Lawter, and I am a law student pursuing a career as an M&A lawyer. I am in my last semester of law school, and with some extra time on my hands, I decided to create this newsletter. I hope it is informative and helpful to anyone who reads it! I am not an expert at either M&A or AI, but I am actively pursuing knowledge in both areas, and this newsletter is a part of that pursuit. I hope you’ll join me!
Follow me on LinkedIn: www.linkedin.com/in/parker-w-lawter-58a6a41b
All views expressed are my own!
1 There is still some grey area here, but I am referring to the accuracy of basic legal facts/concepts.