By Karineh Khachatourian and Trevor Giampaoli (August 21, 2025)
This article is part of a monthly column that examines emerging artificial intelligence legal trends and strategies for responsible, high-impact adoption. In this installment, we discuss how law firms and corporate legal departments can ensure AI and confidentiality work together defensibly.
In the wake of growing concerns over how AI conversations are stored and potentially accessed, legal departments across industries are undergoing a strategic transformation.
The shift follows recent scrutiny around OpenAI Inc.’s data retention practices, where users of tools like ChatGPT learned that their seemingly ephemeral queries might be stored, analyzed or even subject to discovery in litigation.
More specifically, in In re: OpenAI Inc. Copyright Infringement Litigation, the U.S. District Court for the Southern District of New York ordered OpenAI in May to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis … whether such data might be deleted at a user’s request or because of ‘numerous privacy laws and regulations.’”[1] Though OpenAI asked the court to lift the order, citing privacy concerns, it lost that bid in late June.
As such, the discoverability of AI prompts, regardless of any prior privacy commitments, is a growing concern affecting legal organizations’ internal AI policies and decision-making.
In light of this, what was once viewed as a breakthrough in efficiency is now being reevaluated through the lens of confidentiality, compliance and legal risk. And the convenience of open-access generative AI tools is now giving way to a new phase of cautious, policy-driven adoption.
Generative AI tools like ChatGPT, Claude and others have been increasingly integrated into the legal workflow for research, drafting, contract management and communications. But the lack of clarity around how user data is stored, and whether it can be retrieved or exposed, has triggered a pivotal realization: AI tools may not be confidential by default. Or, even if they are, confidentiality may nonetheless be disregarded under certain circumstances.
In response, the legal industry has evolved from curiosity, which produced early overreliance, to caution, which has brought responsive guardrails and deliberate integration. That caution has, in turn, driven the development of internal AI use policies and outside counsel guidelines designed to mitigate the risks of early overreliance.
With this shift toward caution, AI use policies and outside counsel guidelines may seek to limit liability through measures such as:
- Restricting the types of data permissible for use in AI tools;
- Classifying data into tiers and only allowing certain tiers of data to be used in AI tools;
- Limiting the type of work for which AI can be used;
- Prohibiting certain types of AI tools or workflows;
- Prohibiting retention of prompts; and
- Implementing required training before allowing AI use.
That said, each individual law firm and legal department deals with its own clients, data and unique circumstances. As such, respective AI use policies will not be identical, and each legal organization must assess its needs and create an AI use policy accordingly.
In addressing the growing concerns, the most significant shift has been architectural. Rather than relying on open cloud-based AI services, some law firms and corporate legal departments might consider investing in proprietary, on-premises or private cloud AI tools. These deployments allow organizations to retain full control over their data, with no transmission to external servers.
Some large firms and legal departments have even built bespoke AI solutions, trained on internal precedents and case data, with strict integrated data governance. However, one size does not fit all, and each organization must decide for itself at what point it makes strategic and economic sense to build its own solution in-house, rather than adopt an existing software product.
Under this new lens, demand is soaring for AI vendors that offer data sovereignty — the ability to keep data within specific jurisdictions and ensure it doesn’t flow into training models or third-party analytics pipelines.
Legal departments and law firms can consider adding stringent requirements to their vendor selection processes, with questions focusing on:
- Where data is stored;
- Identification of third-party companies responsible for data storage;
- How long data is stored;
- Whether data is used for training;
- How it is encrypted, anonymized and accessed;
- What happens upon contract termination;
- The ability to conduct third-party AI audits testing, e.g., security, ethics and the presence of hallucinations; and
- Confirmation of zero data retention agreements with large language model providers.
This list is in no way exhaustive, but it marks a broader trend toward vendor risk assessments becoming AI-specific. And while not every legal organization may require the same vendor obligations, it is nonetheless crucial for each organization to determine what it needs from its vendors in terms of confidentiality and security.
With this heightened focus on data retention, law firms and corporate legal departments might consider revising contract language to explicitly address AI usage, both in how they use AI and how their vendors or third parties might. Clauses can prohibit the use of public AI tools with confidential or proprietary data, and mandate disclosure if AI tools are used in any deliverable.
Furthermore, clauses may even limit the location in which AI tools store or process data, or go as far as prohibiting the use of AI as a whole.
However, as the benefits of AI adoption have continued to expand, blanket bans on AI are far less prevalent. Clauses instead focus on the safe adoption of AI, both in-house and as a requirement for any outside counsel.
As such, companies are scrutinizing how external counsel might be leveraging AI in their own workflows. If a company is trusting outside counsel with its confidential and privileged data, it needs to ensure that outside counsel will only work with AI tools or vendors that will continue to protect such data.
As Sona Sulakian, CEO and co-founder of Pincites, explains, “Don’t let vendors blur the line” between anonymized and de-identified data. She continues, “Anonymized means irreversible,” whereas “de-identified means reversible with effort.” Rather than “assume stronger privacy protections than they’re getting,” she says, AI customers of all types should (1) “define ‘anonymous’ or ‘de-identified’” in their contracts, (2) “ask how the vendor anonymizes, not just if,” and (3) if applicable, ensure that de-identified data has proper “safeguards, purpose limits, [and] re-ID prohibitions.”
Adding to the confusion, the California Consumer Privacy Act defines de-identified data similarly to anonymized data, making it all the more important for organizations to understand the distinction.
Perhaps the most fundamental change is cultural. Legal professionals, whether lawyers or nonlawyers, can be trained to treat AI interactions as public-facing, more akin to posting on social media than to using internal software. By thinking of AI tools and platforms this way, lawyers and staff can appreciate the potential risks of AI use and adoption through everyday examples, without having to cut AI out of their work entirely. This means:
- Never inputting confidential information into public AI tools;
- Treating all AI queries and AI-provided responses as potentially discoverable; and
- Documenting AI-assisted work, such as the prompts used, for transparency and audit purposes.
Legal departments can even consider issuing guidance akin to AI hygiene protocols, where users must mask identifying details and avoid case-specific content unless working within approved, private AI environments.
The goal is not to discourage AI use, but to ensure it is done responsibly. Proper training sets the expectation that every interaction with AI should reflect the same care and professionalism applied to any public or client-facing communication.
As AI continues to reshape legal workflows, the early thrill of experimentation may give way to a more mature, governance-driven approach. The legal industry’s core principles — confidentiality, privilege and risk management — are now shaping how and where AI is deployed.
And as the plethora of AI cases currently being litigated continues to progress toward resolution, expect to see continued expansion in the area of AI governance. The reality is that AI is not going away. Our job now is to make confidentiality and AI work together defensibly.
Karineh Khachatourian is a managing partner and Trevor Giampaoli is an associate at KXT Law.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] In Re: OpenAI, Inc. Copyright Infringement Litigation, Case No. 1:25-md-03143-SHSOTW (S.D.N.Y. May 13, 2025).