Womble Perspectives
The Hidden Legal Risks of Using AI Tools in Sensitive Matters
Welcome to Womble Perspectives, where we explore a wide range of topics, from the latest legal updates to industry trends to the business of law. Our team of lawyers, professionals and occasional outside guests will take you through the most pressing issues facing businesses today and provide practical and actionable advice to help you navigate the ever-changing legal landscape.
With a focus on innovation, collaboration and client service, we are committed to delivering exceptional value to our clients and to the communities we serve. And now, our latest episode.
Host 1:
Hello and welcome back to the show. Today we’re digging into a potentially groundbreaking legal ruling about AI. It’s the first time a federal court has weighed in on whether chats with an AI tool like Claude or ChatGPT can be considered privileged. Spoiler: the court said no. And this could have implications for businesses and legal teams.
Host 2:
Right. On February 17, 2026, Judge Rakoff from the Southern District of New York ruled that a criminal defendant’s AI chats were not protected by attorney‑client privilege or the work-product doctrine. And this wasn’t some fringe scenario. These were 31 conversations the defendant had with Claude, which the FBI seized during his arrest. The government asked to use them, and the court said, “Sure, go ahead.”
Host 1:
So here’s what happened: the defendant, Bradley Heppner, used Claude to basically workshop ideas and information before meeting with his lawyers. His team later claimed those chats should be privileged. But the court said no, and for some pretty straightforward reasons.
Host 2:
First, the court pointed out that Claude is not an attorney. Privilege requires a communication between a client and an actual lawyer, or someone acting under a lawyer’s direction. AI doesn’t have fiduciary duties. It doesn’t face disciplinary boards. It’s software. So right out of the gate, privilege just doesn’t fit.
Host 1:
But even beyond that, the court said there was no reasonable expectation of confidentiality. Anthropic’s terms and conditions say they collect user data and can share it with third parties, including government regulators. So the judge basically said: If the tool’s own terms tell you your information isn’t private, you can’t claim it was private. Which makes sense, even if it’s a little uncomfortable.
Host 2:
Yeah, that part hit hard because it applies to so many AI tools out there. Lots of companies train their models using user inputs or send data through third‑party processors. So unless you’re using an enterprise version with strict confidentiality controls, you’re kind of broadcasting your thoughts into the AI universe. Not ideal when you’re prepping for a meeting with your criminal defense team.
Host 1:
The last privilege factor was whether he was seeking legal advice. And the court said: nope. Heppner wasn’t using Claude under the direction of counsel, and Claude itself actually told the government, “I’m not a lawyer and can’t give legal advice.” So the court treated the chats like personal notes, except less protected, because AI companies can access them.
Host 2:
Right, and that’s the part that legal teams everywhere should be paying attention to. People often treat AI tools like a brainstorming buddy or a sounding board. But the court is basically saying: this is not a private notebook, and it definitely isn’t your attorney. If you type something sensitive into an AI tool, it could end up in discovery.
Host 1:
So privilege didn’t apply. But what about the work‑product doctrine? That usually protects materials created for litigation. Heppner’s team argued for that too, but the court didn’t buy it. They said the documents weren’t created by or at the direction of a lawyer, so they don’t qualify.
Host 2:
And even more importantly, the chats didn’t reflect the attorneys’ strategy. The court acknowledged the AI chats might have influenced the lawyers later, but since the lawyers weren’t involved when Heppner made them, the protection didn’t attach. That’s a pretty strict line, but it’s consistent with how courts view work product.
Host 1:
Alright, let’s talk real-world impact. This ruling should make companies rethink how employees use AI tools, especially when there’s potential litigation. If someone casually plugs sensitive details into a public AI tool, they may unintentionally waive privilege. And that’s a risk most organizations are not prepared for.
Host 2:
Exactly. Legal teams need to get ahead of this by educating business stakeholders. Marketing teams, operations folks, customer service—everyone needs clear guidance. Companies should develop policies that steer people toward approved internal AI tools that guarantee confidentiality. And anything connected to legal issues should go through lawyers first, full stop.
Host 1:
The big takeaway? AI is incredibly powerful, but it’s not a lawyer and it’s definitely not a safe space unless your company has built one intentionally. Courts are starting to draw lines, and this one is a big early marker for how privilege applies in the age of AI. So if you're dealing with anything remotely sensitive, loop in your legal team before you start typing prompts.
Host 2:
Exactly. Think before you prompt. And if you’re a business leader or part of an internal legal or marketing team, use this case as a conversation starter. Policies need to catch up with behavior, and this ruling is a bright, flashing warning sign. Thanks for listening, and we’ll see you next time.
Thank you for listening to Womble Perspectives. If you want to learn more about the topics discussed in this episode, please visit the show notes, where you can find links to related resources mentioned today. The show notes also have more information about our attorneys who provided today’s insights, including ways to reach out to them.
Don't forget to subscribe via your podcast player of choice so that you never miss an episode. Thank you again for listening.