The Augmented Lawyer: Drafting Tips As AI Alters Admin Law

By Karineh Khachatourian (December 22, 2025)

This article is part of a monthly column that examines emerging artificial intelligence legal trends and strategies for responsible, high-impact adoption. In this installment, we discuss how the use of AI is requiring updates to the administrative side of law.

More attorneys are using artificial intelligence tools to save time every day.

Indeed, as a Dec. 9 Financial Times article noted, “Law firms are relying increasingly on AI to accelerate mundane tasks and stay a step ahead of their adversaries. The fast-moving technology is helping lawyers analyse judges’ opinions, zero in on evidence in a mountain of data, draft documents, and unlock a host of other efficiencies.”[1]

Of course, the technology introduces new risks of its own. The Financial Times article noted that attorneys “worry about jeopardising confidentiality, but accuracy is seen as a greater risk.”

As generative AI becomes embedded in mainstream legal practice, its influence is expanding far beyond drafting and research. AI is now reshaping the administration of law itself, from how experts document and validate their work, to how produced litigation documents are handled and how joint defense teams operate.

This shift demands a new level of contractual clarity and operational discipline. Established legal instruments like expert engagement letters, joint defense agreements and case protective orders were never drafted with generative AI in mind. They generally do not address modern concerns, such as: How was AI used? Who verified the work? Was client data exposed to a model?

Below is an analysis of the major pressure points, along with practical steps for legal teams preparing for the administrative realignment that AI requires.

Expert documentation: AI creates a new layer of verification.

Experts increasingly rely on AI tools to accelerate calculations, summarize data or generate first-pass narrative explanations. But these efficiencies create novel obligations and challenges.

For instance, an expert might use a large language model to produce an initial damages estimate, but later discover that the model relied on unstated assumptions or embedded datasets that cannot be independently verified.

If an expert uses an AI model to help craft findings, opposing counsel may argue that the output is partially authored by an external system. That raises the question: Whose work is it?

Validation also increases the expert’s burden. Courts expect experts to independently verify any AI-assisted output, but traditional engagement letters rarely require disclosure or certification of such use.

Practical Steps and Pointers

Make clear in the contract whether you will allow experts to use AI and under what circumstances.

Require experts to represent whether and how AI tools will be used, including model version and data sources. An example clause could be:

Expert may use generative AI tools solely for drafting assistance and data summarization, provided the tools operate within a closed, enterprise environment approved in writing by Counsel. Expert may not input confidential client information into any AI system that trains or retains user data. Any other use of AI technologies is expressly prohibited unless pre-approved by Counsel.

Build in verification certifications.

Experts must confirm that conclusions are independently validated. For example, you might say:

Expert certifies that all opinions and conclusions provided in this engagement are the result of Expert’s independent professional judgment, and that any AI-assisted outputs have been personally reviewed, audited, and validated for accuracy, reliability, and consistency with underlying data.

Clarify the ownership and authorship of the final work product.

The following is one possible provision:

All deliverables, including reports, summaries, analyses, and exhibits, shall be deemed authored solely by Expert, regardless of AI assistance.

Add provisions that specify whether AI-generated intermediate materials are retained or destroyed.

A clause might read as follows:

Expert shall not retain any AI-generated drafts, prompts, interaction logs, or intermediate analytical outputs beyond the final deliverable unless Counsel provides written instructions to preserve such materials. Upon completion of the engagement, Expert shall certify the deletion of all intermediate AI-generated materials unless preservation is required by court order.

Establish a standard for prompt hygiene.

These standards must ensure that no confidential client data is exposed to public models. An example provision might be as follows:

Expert agrees to use prompt hygiene protocols, including refraining from entering any client names, case facts, financial data, personal information, discovery documents, or privileged communications into public or consumer-grade AI systems.

Prompts shall be structured using generalized or anonymized placeholders (e.g., ‘Company A,’ ‘Dataset X’) when interacting with approved AI tools.

Joint defense and collaboration: Joint defense agreement assumptions no longer hold.

Joint defense groups operate on trust and the assumption that all parties control their information flows. Generative AI complicates this in several ways, including:

  • Potential cross-party contamination, wherein one party’s careless use of AI could inadvertently expose another party’s privileged materials; and
  • Undefined risk allocation, as joint defense agreements rarely allocate responsibility for AI-related breaches.

Practical Steps and Pointers

Add provisions that require parties to disclose the AI systems used in the collaboration.

Proposed language could read as follows:

Prior to use, each Party shall disclose to all other Parties any generative AI or machine-learning systems to be used in connection with the review, analysis, or handling of Joint Defense Materials. Such disclosure shall include the system name, version, hosting environment, and whether inputs are stored, logged, or used for model training or improvement.

Create shared AI usage protocols.

Consider adding language like the following:

The Parties agree to follow a unified AI Usage Protocol specifying approved tools, permissible workflows, prompt hygiene standards, required anonymization methods, and documentation guidelines. No Party may use AI tools outside this protocol without written consent from all signatories.

Clarify that AI-generated summaries are covered under the same privilege protections as original materials.

Consider incorporating model language, such as:

All AI-generated outputs — including summaries, classifications, embeddings, annotations, or derivative analyses — shall be treated as Joint Defense Materials and afforded the same privilege and confidentiality protections as the underlying documents from which they were derived.

Include a mutual prohibition on public model ingestion of shared information.

It may be helpful to add language, such as:

No Party shall upload, input, or otherwise expose any Joint Defense Materials to any generative AI system that trains on user data, retains prompts, or operates in a public or consumer environment. Only closed, non-training, enterprise-grade systems approved by all Parties may be used.

Protective orders: AI creates new discovery categories.

Protective orders have long been the backbone of discovery, but they rarely contemplate generative AI. The challenges are threefold:

  • If AI is used to summarize, classify or annotate protected data, it may be unclear whether those derivatives are also protected.
  • Parties may need explicit language preventing models from training on protected content.
  • AI tools may blur traditional distinctions between reviewers, processors and systems.

Practical Steps and Pointers

Define “protected material” to include AI-generated derivatives, embeddings and metadata.

To address the issue, consider language such as the following:

‘Protected Material’ includes all documents, data, and information designated Confidential or Highly Confidential under this Order, as well as any derivatives created through the use of artificial intelligence tools, including but not limited to summaries, classifications, vector embeddings, annotations, metadata, or other machine-generated representations derived from such material.

Prohibit the use of protected information in any model training or model improvement processes.

Consider adding language like the following:

No Party or Vendor may use Protected Material, in whole or in part, to train, fine-tune, benchmark, or otherwise improve any machine-learning or generative AI model. Protected Material shall only be used for litigation purposes as expressly permitted by this Order and shall not be stored or processed in any environment that uses such data for model training or analytics unrelated to this litigation.

Require detailed disclosure of AI tools that are used in the litigation workflow.

Example language could include:

Prior to the use of any AI-assisted review, analysis, or document-processing technology, the Party employing such tools shall disclose to all other Parties the name, version, hosting environment, functionality, and data-handling characteristics of each AI system, including whether inputs, outputs, or logs are stored, retained, or shared with any third party.

Clarify obligations around retention versus deletion of AI-generated work product.

One possible formulation is as follows:

AI-generated materials — such as summaries, embeddings, model outputs, and automated categorizations — shall be retained or deleted consistent with the obligations imposed under this Order. Unless required for evidentiary purposes or directed by the Court, such materials must be deleted upon completion of litigation. Parties shall document and certify post-litigation deletion of all AI-generated derivatives containing Protected Material.

Add requirements for technical access controls on any AI-assisted review systems.

Consider language like the following:

Any AI-assisted review platform used to process Protected Material must incorporate industry-standard technical safeguards, including role-based access controls, encryption of data in transit and at rest, audit logging of user activity, and segmentation to prevent unauthorized access by third parties or system administrators. Protected Material may only be accessed by individuals authorized under this Order, even when displayed or manipulated within an AI-enabled interface.

Conclusion

Generative AI is no longer just a tool for writing briefs or analyzing data. It is reshaping the administrative architecture of the legal system itself. Expert documentation, protective orders and collaborative frameworks all require modernization.

Law firms and in-house teams can lead this transition by rewriting templates, implementing operational safeguards, and educating experts and vendors.

Those who act early will reduce risk, streamline litigation and build client trust. Those who delay will confront avoidable disputes, patchwork protections and unpredictable judicial responses.

In the end, the organizations that adapt today will define the standards that everyone else will follow tomorrow.


Karineh Khachatourian is a managing partner at KXT Law.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.

[1] https://www.ft.com/content/423e9bc4-227d-4bd4-87de-401132eb415a