In this article, I propose a useful way for the legal profession to think about AI. I describe how the technology works and how lawyers currently tend to classify material, before arguing that AI should be treated as a “tertiary” source of information. 

Why Lawyers Should Care 

I was (and still partially am) a skeptic. I’d heard that AI will replace junior talent, or that AI won’t replace lawyers but will replace lawyers who don’t use it. The CEO of Google DeepMind (a Nobel laureate) believes AI could help “cure all disease” in ten years. Still, the technology’s inability to identify its sources, or worse, its tendency to cite made-up ones, was an obvious deterrent. 

The nuanced view taken by the State Bar of Texas’s Taskforce for Responsible AI in its Year-End Report is that AI “is a rapidly emerging and potentially disruptive technology that presents attorneys and judges with risks and opportunities.” According to the taskforce, nearly a quarter of judges are already testing out AI. There are anecdotal examples of its use to review the record for particular concepts and to brainstorm oral argument questions. 

Consistent with this view, the State’s Professional Ethics Committee provided AI-related guidance in Opinion 705, noting that lawyers should not “unnecessarily retreat” from new technology that may save a client time and money, but should also appreciate the risks of relaying hallucinated answers and exposing confidential information. Some AI platforms store user inputs and share them with third parties, and some states are requiring the use of disclaimers. 

Thus, there is room for us to be both bullish and squeamish about AI. Either way, it is penetrating the industry. Being able to recognize both its shortcomings and its efficiencies can help us better serve our clients, our teams, and even our courts. 

Finally, aside from these practical considerations, there are also philosophical questions on which the profession can exercise leadership, such as whether government regulation of the inputs and outputs of AI is a free speech issue. Nonpartisan think tanks are already surveying American and global audiences about this issue. 

How AI Technology Works

There is an important distinction to be made between integrated AI that is already in platforms we use, such as autocorrect, search algorithms, and website chatbots, and generative AI that creates content like photographs (including deepfakes) and sentences. The large language models that generate sentences are generally what I mean by “AI” in this article. 

Large language models can review massive amounts of information across a database and synthesize it in an impressively organized manner, aggregating and distilling concepts. Popular platforms include Claude by Anthropic and ChatGPT by OpenAI. 

However, large language models are fundamentally designed to predict the next word based on statistical patterns. The technology may invoke an awkward word, cite a broken link, and fail to hedge while doing so. Its training data won’t be current, and likely won’t be precise on a novel or unique issue. 
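For readers curious about what “predicting the next word based on patterns” means in practice, here is a toy sketch: a bigram model, a drastically simplified stand-in for a real large language model, that picks the next word purely from word-frequency patterns in a small sample text. The sample sentences and function names are invented for illustration only.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (invented for illustration).
training_text = (
    "the court held that the statute applies "
    "the court found that the statute controls"
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most common next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("that"))   # "the" always follows "that" in the sample
print(predict_next("held"))   # "that"
```

The sketch has no understanding of law or language; it simply echoes frequencies. Real models operate at vastly greater scale and sophistication, but the same core limitation follows: a confident-sounding prediction is not a verified citation.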

AI as a Tertiary Source

In our professional context, we already tend to think of material as a primary or secondary source. Primary sources represent the rules we must follow: cases, statutes, and regulations, i.e., the heart of the matter. Distinguishable cases permit comparison, meaning that a decision from a state-level appellate court in 1975, for example, may carry less weight than a 2005 Supreme Court decision. That is among the interesting work of a lawyer. 

The objective of a secondary source is commonly to lead you to your primary source. In fact, many law professors recommend starting your research with secondary sources, such as treatises or articles. The source material can also add unique value, such as providing expert analysis, identifying cross-jurisdictional patterns, and condensing information into accessible takeaways. Notably, secondary sources can be evaluated by their authorship and peer-review processes. 

The concept of AI as a “tertiary” source thus provides a useful framework for understanding and leveraging it. This framework accommodates the common objections to AI. AI is based on statistical pattern recognition and, therefore, should not be relied upon as final authority. It has no authors or peer-review processes. There are open questions about whether anyone can be held accountable for its algorithmic outputs, and whether anyone should be. 

But AI can at times be a tool to lead you to the right ideas and sources. It can refresh your recollection of popular treatises, listing Wright & Miller and Moore’s on federal practice, Chemerinsky for constitutional law, and Nimmer for copyright, when prompted. It can be a first step in the brainstorming process, democratizing that process for professionals who don’t have the privilege of working on collaborative teams (or at least, not at all times). 

And much unlike a proposal that would overhaul our entire workplace, this is simple habit stacking. Next time you have a general question for your search engine, you can try an AI-powered alternative. And rather than merely asking the question, you can add context, like “I am a lawyer in Texas… Provide the response in bullet points with links.” But don’t take its word as gospel. And avoid saying please and thanks – apparently such niceties are costing the providers millions in electricity!