By Karineh Khachatourian and Catie Cambridge (February 27, 2026)
This article is part of a monthly column that examines emerging artificial intelligence legal trends and strategies for responsible, high-impact adoption. In this installment, we discuss how legal departments can push back on unrealistic “do more with less” mandates driven by perceived AI efficiencies, while positioning themselves as indispensable partners in responsible innovation.
Artificial intelligence has quickly become the corporate world’s favorite efficiency narrative. From sales forecasting to software development, AI is pitched as the tool that will allow organizations to move faster, leaner and cheaper.
Indeed, a CNBC article published on Jan. 20 pointed to research that found that “AI was seen as a significant contributing factor to nearly 55,000 layoffs in the U.S. in 2025.” And, the article noted, “employee concerns about job loss due to AI have skyrocketed from 28% in 2024 to 40% in 2026.”[1]
In-house legal departments, particularly small and midsize teams, are increasingly caught in the crosshairs of that narrative.
As companies invest heavily in AI-driven tools, many legal leaders are encountering a familiar but newly amplified refrain from executives: If AI makes lawyers faster, shouldn’t legal be able to handle more work internally?
The resulting dynamic, wherein perceived AI-driven efficiency leads to unrealistic workload expectations, can be described as the AI efficiency trap. Left unaddressed, it can distort leadership’s understanding of legal capacity and create unsustainable operating models for in-house departments.
For general counsel and legal managers, the challenge is to reframe what AI does and does not change about legal work, and thereby counter executives’ impulse to cut teams and resources.
Each of the myths below reflects a common executive assumption that legal teams must proactively debunk using concrete explanations and practical education strategies.
1. “Executives don’t need to consult their lawyers anymore; they can get answers directly from AI.”
What Legal Teams Should Explain to Executives
Legal teams should begin by stressing that AI outputs are probabilistic, not authoritative. While AI tools may generate confident-sounding legal answers and can be helpful as starting points, they do not guarantee accuracy, timeliness or jurisdictional precision. Legal teams should make clear to executives that AI produces suggestions, not legal conclusions, and that an AI tool’s purported certainty often masks underlying errors or missing nuance.
Legal should also explain that using AI tools can raise privilege and confidentiality risks. Communications with AI platforms may not be protected by attorney-client privilege, meaning sensitive facts or strategic discussions shared with AI tools may inadvertently waive protections that would otherwise apply. Finally, legal teams should emphasize that legal advice is inseparable from company-specific context. In-house counsel translate law into business decisions based on the company’s risk tolerance, regulatory posture, commercial objectives and history. AI tools lack this institutional understanding and cannot assume accountability for outcomes.
Practical Steps Legal Can Take
- Proactively advise executives about when AI-generated outputs may be relied upon, and when formal legal review is required.
- Develop simple internal guidance clarifying that AI tools supplement, but do not replace, legal judgment.
- Educate business leaders on privilege risks associated with unsupervised AI use.
- Position legal review as risk ownership, not mere error correction.
2. “AI makes lawyers faster, so we can do more with the same team.”
What Legal Teams Should Explain to Executives
Legal teams should explain that speed is not the same as capacity. Even if AI accelerates drafting or research, lawyers must still review outputs, identify risks, align stakeholders and ensure consistency with company positions. These judgment-intensive steps do not shrink proportionally with faster first drafts.
Legal should also point out that AI increases complexity at the same time it increases speed. AI adoption itself introduces new regulatory obligations, novel contract terms, intellectual property considerations and data protection risks — expanding the scope of legal oversight rather than reducing it. Legal work is not becoming simpler; it is becoming broader and more interconnected.
Legal teams must help executives understand that perceived efficiency often results in invisible workload expansion, including more ad hoc questions, shorter turnaround time expectations and increased cognitive load — none of which shows up in productivity metrics.
Practical Steps Legal Can Take
- Reframe efficiency gains as opportunities to improve quality, foresight and risk prevention.
- Track and surface hidden work created by AI adoption, e.g., consults, escalations and reviews.
- Push back on volume expansion by tying capacity discussions to risk exposure, not task counts.
3. “AI governance is a future problem; product and revenue come first.”
What Legal Teams Should Explain to Executives
Legal teams should explain that delaying AI governance creates immediate commercial friction, not future efficiency. Without clear governance positions, sales and procurement teams may struggle to answer diligence questions, slowing deals and escalating issues unnecessarily.
Legal should stress that lapses in governance compound quickly. Retroactively imposing controls is far more disruptive and costly than establishing lightweight frameworks early.
Critically, legal teams must educate executives that today’s enterprise buyers are explicitly embedding AI governance and responsible AI criteria into requests for proposals, vendor vetting and procurement workflows, requiring vendors to demonstrate compliance with governance, risk and transparency standards as a condition of selection.
Weak or vague responses on these topics increasingly lead to lost deals, protracted negotiations or more onerous contractual requirements.[2]
Practical Steps Legal Can Take
- Position AI governance as a revenue enabler and deal accelerator.
- Develop standardized diligence responses early.
- Implement lightweight governance frameworks that scale with growth.
- Use customer demands as evidence when advocating for early legal involvement.
4. “If legal pushes back, they’re anti-innovation.”
What Legal Teams Should Explain to Executives
Legal teams should reframe pushback as risk management in service of innovation, not resistance to it. Thoughtful legal oversight allows AI initiatives to scale sustainably, preventing regulatory shutdowns, reputational damage and customer distrust — thereby preserving the organization’s ability to innovate over the long term.
Legal teams should help executives understand that clear legal guardrails enable faster execution. Teams move more quickly when expectations are defined and escalation paths are predictable.
Practical Steps Legal Can Take
- Align legal concerns explicitly with business objectives.
- Offer solutions; don’t just identify risks.
- Frame guardrails as tools for speed, not obstacles.
Escaping the AI Efficiency Trap
Escaping the AI efficiency trap requires more than passive disagreement. It requires in-house legal teams to actively educate executives, reframe assumptions and tie legal judgment to business outcomes.
By doing so, legal departments can push back on unrealistic “do more with less” mandates, while positioning themselves as indispensable partners in responsible innovation. AI can be a force multiplier for legal teams, but only when its limitations are understood as clearly as its advantages.
Karineh Khachatourian is a managing partner at KXT Law.
Catie Cambridge is a strategic adviser at Docsum Inc. She previously served as chief legal officer at Allvue Systems.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] https://www.cnbc.com/2026/01/20/ai-impacting-labor-market-like-a-tsunami-as-layoff-fears-mount.html.
[2] See, e.g., White House Office of Management and Budget (OMB) Memorandum M-25-22 (“Driving Efficient Acquisition of Artificial Intelligence in Government”), issued in April 2025; Article 25 of EU AI Act & Model Clauses; Microsoft: Enterprise AI Services Code of Conduct (2026); Salesforce: Global Supplier Code of Conduct (updated June 2025).
