Lawyers can and should use generative AI tools, particularly generative pre-trained transformers (GPTs). When they do, they can and should use prompt engineering to make the best use of these tools.
With a GPT, the lawyer provides inputs, or prompts, directed at the user’s goals. The basic interface allows a user to type in a query and receive a response. After submission, the prompt is tokenized, interpreted through the model’s parameters and weights, and transformed into output tokens, which are ultimately converted into a text “response.” The parameters enable a GPT to “learn” and adjust during training to craft better responses, so, roughly speaking, the more parameters, the “smarter” the GPT.
A series of prompts and outputs is generally referred to as a chat or a session. Usually, the GPT “forgets” your specific chat as soon as you leave it or begin a new one. Some chats can be customized with user-selected or user-crafted guidelines, as with Claude’s Projects feature.
And GPT sessions are typically rate-limited through a token usage cap. Token expenditures vary, but generally the more prompts, the faster the limit is met. And the refresh period differs between platforms. There is nothing more frustrating than reaching a token limit during a research session.
Based on these technical limits, the lawyer must make the most of a session. A lawyer should do the following:
1) Have a project goal in mind, whether general or specific. Writing blind wastes time.
2) Have a logical outline for reaching that goal with the GPT. Not knowing the basics of the problem means the lawyer will spin her wheels with the GPT, which can never be “smart” in any way familiar to us as humans.
3) Craft the prompts to reach that resolution within the rate limits. Proficiency here can result in efficiency for your client.
Those are the basic elements of prompt engineering: The lawyer guides the GPT to provide user-suitable outputs. But remember PICNIC: problem in chair, not in computer. If the GPT generates poor results, it is often due to poor engineering.
Many kinds of prompt engineering can help a GPT do good work.
The lawyer can set up session parameters to keep the GPT economical in its outputs. Many GPTs allow users to instruct the GPT to keep responses brief, even to adhere to a certain word count. And many also allow users to guide the “style” of the output, which could range from abrasive to academic. Given examples, a GPT can do its best to emulate the “sound” of a given writer. I have trained Claude on briefs from superior lawyer-writers like David Gunn, Murray Fogler, and Chad Baruch to rewrite parts of my work product in more measured tones. Most GPTs also note in the output, by default, what they transformed, and they may suggest changes to make the user’s writing sound more like the preferred writer’s.
The lawyer should also refer the GPT to previous inputs through signposts like, “As we just discussed above…” or “Expand on your previous point five regarding ABC.” Re-copying or restating the input will run up limits and frustrate the average user. This is a key tool for efficiency, and many GPTs by default ask whether the user wants to elaborate on a given point in the most recent output.
The lawyer will engineer better outputs by using more precise verbiage, providing applicable context to the GPT, specifying a desired output format, and using language that emphasizes key points.
Consider a sanctions request. The lawyer may provide key terms and definitions from cases, like “just” and “not excessive” and what “harassing” means; advise the GPT that the session involves application of Chapter 10 of the Civil Practice and Remedies Code with copies of Chapter 10 and helpful Chapter 10 authorities; explain that the project is to produce an issue, rule, application, and conclusion memo for a motion; describe the issue leading to the sanctions request and why sanctions are warranted or not; and instruct the GPT to emphasize that the requested sanction meets the Texas standards for imposing sanctions.
Or the lawyer might use the GPT to review a previously prepared motion to help guide the drafting of a related order that includes the necessary findings. On the other hand, a respondent’s GPT exercise might glom onto the missing language, or at least make the case that it is missing, in order to guide a timely objection.
But the GPT’s outputs will almost always require tweaking. That is expected: The goal is not to get a perfect output to copy on the first go, but to get outputs one can refine. In a recent effort to craft an order under Rule 13 of the Texas Rules of Civil Procedure (“Effect of Signing Pleadings, Motions and Other Papers; Sanctions”), I noticed the findings of good cause were missing from the output, despite my prompt including a copy of Rule 13 (which sets out the requirement). I did not ask it to rewrite the output from scratch, as if it could “guess” at the error; instead, I lifted the parts I liked, copied them into a new prompt, and instructed the GPT to generate Rule 13 findings as set out in the motion I had included with the prompt, along with a summary of testimony from the hearing. It accomplished this just fine, and so with only a couple of prompts I had a halfway decent draft of a Rule 13 order, which I could later perfect before filing it. Using a GPT as a rough drafter and virtual law clerk saved a lot of time.
But not every GPT is fit for the task. Mixing and matching tools will make the modern practitioner better suited for modern challenges.
As always, one should look before taking the leap. At minimum, a lawyer must be wary of what he types into a given AI tool and what he copies from one. The State Bar of Texas just issued Ethics Opinion 705,1 which is loaded with warnings for the careless practitioner about violating Rules 1.01 and 1.05 of the Texas Disciplinary Rules of Professional Conduct.
Do read Opinion 705, along with Chase Hardy and Matthew J. Hill’s March 2025 Texas Bar Journal article2 on the use of large language models in e-discovery; it is good to see our community embrace the tools at our disposal.
ENDNOTES
1 Tex. Comm. on Prof’l Ethics, Op. 705 (2025).
2 Chase Hardy & Matthew J. Hill, Large Language Models, 88 Tex. B.J. 3 (2025).

