Impact of AI on the legal market

Author: Tom Armstrong, Business Development Manager.

AI. It’s the buzzword that most organisations are talking about, and some are fearing. Generative AI is a form of machine learning that can produce text, video, images and an increasing number of other types of content. This type of AI especially has been discussed everywhere from conversations around dinner tables all the way to Number 10, most recently at the Global AI Summit that took place on the 1st and 2nd of November. It is a constantly evolving conversation, owing to the technology’s fast-moving nature. U.S. attorneys Steven Schwartz and Peter LoDuca, along with their law firm Levidow, Levidow & Oberman, were recently fined for using ChatGPT to create a court filing whose legal brief included fake citations and referred to six fictitious cases, and since then there has been yet another surge in the AI conversation.

Whenever the debate turns to AI’s impact on lawyers, their roles, or the legal sector in general, there seem to be two clear sides: the accepters and the doomsayers. On one side, the argument is that AI will never be able to replace a well-trained lawyer; on the other, that AI will lead to the downfall of the industry.

In Thomson Reuters’ (TR) recent Future of Professionals report, 19% of professionals stated that widespread loss of jobs was their biggest concern. Although I believe job losses may not be as widespread as some are suggesting, there are still understandable concerns over the applications of generative AI. If the Steven Schwartz and Peter LoDuca example of ChatGPT misuse is anything to go by, one main concern is how AI can be correctly and responsibly applied within the legal market to make lawyers more efficient whilst maintaining standards. In the TR report, 25% of professionals stated that a compromise in accuracy, not the prospect of being replaced, was their biggest concern. Another accuracy concern, similar to the ChatGPT mishap, is that clients may begin researching their own answers. The risk is that, by taking action independently in this way, they may not fact-check the results they find and may become misinformed due to their lack of specialist understanding of the subject matter.

The security implications of publicly hosting data within ChatGPT are yet another area of worry, and one that extends to all industries, not just legal. As all conversation logs are tracked, data inputted into the application is neither protected nor confidential. Similarly, if the content it outputs reproduces existing copyrighted material, it could open the door to individuals pursuing action for infringement of their rights, although OpenAI has recently announced it will offer to cover the legal fees for any copyright suits that ensue from ChatGPT’s usage.

Regarding concerns over job losses, it is mostly a question of whether AI has the power to make certain roles redundant, and whether it will offer the opportunity for new specialised roles within the industry. Admittedly, AI can take away a lot of the labour-intensive work, which may put junior and support staff roles at risk. Currently, there is no proof that AI can make rational decisions, which has been the basis for many cases against it. Furthermore, the implementation of new technology will require onboarding, specialist training and monitoring of its output, which could drive new roles, or allow new skill sets to be developed amongst existing staff.

AI and its applications aren’t all doom and gloom, however, with the majority of individuals across multiple reports expecting a net positive improvement within their organisation post-implementation. From a contract and e-discovery point of view, purpose-built generative AI tools can greatly increase productivity through fast automation and analysis of contracts, and through keyword searching in e-discovery. Within the in-house realm, this could offer a cost-effective way to meet rising demands for efficiency amidst squeezed budgets. This purpose-built software isn’t doing anything that solicitors or paralegals cannot do themselves; however, it greatly increases the speed at which they can complete the work. Ben Allgrove, an attorney and chief innovation officer at international law firm Baker McKenzie, told Built In: “Being a lawyer is more than just fulfilling a task, it’s not the law that gets the deal across the line, it’s the negotiating skills. It’s the ability to read the person on the other side of the table.” This highlights the continuing need for human lawyers in the industry, and the need to develop people-first, soft-skill-heavy lawyers. Organisations like the O Shaped Lawyer, which has helped organisations like BARBRI instil a soft-skill approach to solicitor training in our Prep for Practice course (the SQE route’s answer to the lack of a PSC requirement), will be key in pushing the market to future-proof the legal sector amidst the introduction of new technologies.

Many of the generative AI tools available offer the potential for increased profitability for legal teams due to their low cost, an enticing proposition for many. As technology is implemented to increase the efficiency of legal work, the time and cost spent on a task can be reduced, allowing in-house teams especially to take on more work and save costs on external counsel. According to the TR report, 60% of respondents estimated that a greater proportion of work will be taken on in-house in the next five years.

To summarise, it is still unclear what the full impact of AI on the legal market will be. I believe there will be a clearer picture in the next 10 years, as the boom in generative AI-based tools being released onto the market slows, and firms and in-house teams learn how best to implement them. The trends suggest an overall net positive impact; however, there are clear warning signs which should be considered. With the outcomes of the Global AI Summit focusing on AI safety, including the inception of a new AI Safety Institute, there is a chance that developments in AI will be limited to ensure a ‘regulatory approach to maximise the benefits and take[s] into account the risks associated with AI’. This could further organisations’ ability to properly assess the need for, usage of, and benefits and risks of AI tools before they are fully implemented within legal teams. The institute, alongside these understood warning signs, should allow individuals to properly analyse their need for certain tools and how to implement them correctly, helping to prevent users from jumping into a new system without undertaking the necessary due diligence.
