By: Bobby Steinbach
About every two to three months, a gentleman calls me and says “hello,” and then he pauses, laughs, and says “Whew! I thought you hung up on me.” At this point, I do hang up because once again, I have been fooled by an AI-generated robocall.
I point this out not because I enjoy admitting that I fell for it, but because AI programs grow more sophisticated every day – more intelligence than artifice – and the scope of their capabilities keeps expanding. It can be tempting for law firms to dive headfirst into this new world, and they should: a legal tech company found that for firms that used AI, “there [were] marked gains in productivity and growth, with legal professionals working 40% more cases and producing 70% more billable hours in comparison to 2016. They [collected] 72% more over the same period.” In short, there is a strong financial impetus for law firms to adopt AI tools.
But the truth is that it’s easy to find yourself in hot water if you don’t proceed with some caution. So let’s break down some of the more pressing concerns for law firms looking to increase their revenue, productivity, and online reach with generative AI tools.
The beauty of most generative AI programs is that they’re free and open to the public – anyone can sign up and start using them, and truly open-source models even let anyone modify and contribute to the underlying technology. That accessibility is a big part of why ChatGPT took off: people around the world jumped in to “talk” to the program. But no law firm should be doing client work on free, public programs. Instead, firms (and legal marketing companies) should invest in their own private AI architecture. Why?
Just remember: Even private programs can “hallucinate,” as Google explains, so firms will need to spend time training their models. A personal injury firm, for example, will need to load pictures of healthy brains as well as injured or damaged ones. Criminal defense firms that handle sensitive cases must be especially careful when loading photographic evidence into their datasets. Still, this should be a relatively easy issue to fix, so long as firms dedicate time and resources to training the model from the start.
There is a lot of chatter when it comes to licensing, but it mostly breaks down into one of two categories:
So, why do law firms need to think about this? Because even if you are using generative AI to create something new, that “new” wasn’t born in a vacuum. The models had to learn from something. There are, per Forbes, many lawsuits making their way through the justice system, so it may be weeks, months, or even years before we have a definitive answer about usage rights.
The good news is that the blueprint for voluntary licensing already exists. In the publishing world, both JSTOR and Elsevier “have either offered or entered into licensing agreements that allow for text and data mining (TDM) or other generative AI training uses,” and Getty Images is on board too. As such, law firms interested in generative AI may get more protection by using these companies’ datasets to train their own models.
Just remember: Lawyers are held to a higher code of ethics than other professionals. There is always a chance that what your program generates will be protected under copyright law – if not now, then in the future. Whether your marketing team is in-house or an outside agency, make sure they’re following best practices, too. Otherwise, you could find yourself at the wrong end of a cease-and-desist or even a lawsuit.
Generative AI programs can do extraordinary things:
There are several programs to choose from. Harvey, Lexis+ AI, and Thomson Reuters all offer generative AI “personal assistants” that can do everything from document review to discovery requests to filing your documents – and those are just some of the bigger names.
Generative AI can feel like magic, but it’s important to remember that it’s a tool, not a solution. Law firms should be cognizant of their programs’ limitations and have an internal set of rules and guidelines about what is (or is not) acceptable.
Just remember: An AI-generated summary still needs to be fact-checked, lest you end up facing fines – or worse. Using a chatbot for intakes is great, but clients want to hear real people on the phone and see real faces in the office (or at least over a video conference). Pictures and videos should be reviewed to ensure they’re in line with the firm’s persona. And young lawyers still benefit from mentorship, which no AI program can provide.
Not everyone loves AI. That same legal tech company found that only 32% of law firm clients were “more likely to agree that AI’s benefits outweigh the costs” and that “AI will improve the quality of legal services.” Clients may push back if they think their lawyers are “cheating” by using an AI program. They may also push back if a firm’s overhead costs plummet while its billing rates remain the same.
This is why firms should be upfront with their potential and existing clients about how they’re using AI and the steps they’re taking to protect their clients’ best interests. It should be a part of every client contract, and firms should offer clients the opportunity to opt out if they do not want their data used in the system.
Just remember: Clients aren’t the only ones who are wary of AI. Some courts have strict disclosure requirements about how attorneys can or cannot use AI tools in the courtroom or their filings. Law firms need to ensure that their work follows the rules of the court – no matter what those rules are.
Law firms that are new to the AI game – or still a little afraid of it – should ease their way in. Here are a few steps we recommend to our clients to help them feel more confident:
MeanPug Digital offers a full suite of marketing services for law firms across the country. We understand the importance of compliance, and we help our clients utilize generative AI tools to their advantage. Contact us anytime for a free comprehensive audit of your online and offline marketing strategy.