
This Call May Be Recorded. And That Barely Covers It

Most contact center AI deployments are operating well beyond what your disclosure language describes. The litigation is just the most visible symptom.


I don't often lead with this as a lawyer: the legal risk isn't what concerns me most.

What concerns me is the gap between what customers think is happening when they call your company and what is actually happening. In many organizations I've seen, that gap is wider than anyone has formally acknowledged, and it's been growing quietly for years, one AI deployment at a time.

Disclosure Language Was Written for a Different World

When companies first began recording calls, the disclosure requirement made sense. A human agent was listening. Maybe a supervisor. Maybe a QA team sampled a small fraction of calls each month. "This call may be recorded for quality assurance purposes" described the reality accurately enough.

That world is gone. The disclosure language remains.

Today, when a customer calls your contact center, their words may be captured, transcribed, scored for sentiment, analyzed for churn risk, flagged for compliance, fed into a workforce management model, and retained in a data warehouse that informs decisions they'll never know about. All of that can happen, and in many organizations is happening, under legal cover written for a world where "listening" meant one person in a headset.

Most organizations haven't directly addressed this gap. I'm not sure most have even looked at it.


Three Layers. One Disclosure.

I've found it useful to think about organizational listening in three distinct layers.

The first is human listening: an agent hears a customer, responds, resolves, or doesn't. The interaction ends. This is what customers picture when they accept the disclosure. The second is machine listening: AI analyzes tone, flags intent, scores quality, surfaces coaching insights, and identifies escalation patterns in real time. The conversation is being parsed, not just heard. The third is organizational listening: aggregated conversation data informs decisions about staffing models, product gaps, pricing strategy, and customer segmentation, sometimes weeks or months after the call ends.

Most organizations operate responsibly at each layer. The breakdown isn't in what they're doing. It's in what they're explaining. Disclosures written for the first layer are being applied to all three, and the customers on the other end of those calls don’t always understand.

The Litigation Is a Signal, Not the Story

The primary claim in most cases involving contact center AI is that it constitutes an unauthorized wiretap. The legal theory isn't novel; similar claims have been made about chatbots and website tracking technologies. What these cases consistently surface is confusion.

Customers who didn't understand who was listening, or how their words would be used after the call ended, or that their conversation was flowing through multiple vendor systems before it was done.

When customer confusion is that predictable, you're not looking at a disclosure problem—you're looking at a design problem. The system works perfectly on the inside and is nearly invisible from the outside.


Your Vendors Are Your Blind Spot

From a customer's perspective, there is no vendor stack. There is the company they called.

If a large language model transcribes their conversation, a separate analytics platform scores it, and a workforce management tool references it six weeks later during a staffing review, the customer still sees one brand. One relationship. One party they trusted with what they said when they were frustrated, confused, or trying to resolve something that mattered to them.

When something goes wrong, the accountability falls on that brand. Not on the vendors, not on the contracts, not on the data processing agreements.

Most vendor contracts I've reviewed were written before the current AI stack was assembled. The language describes a simpler world. The disclosure given to customers describes an even simpler one. Neither reflects what's actually happening to that conversation once it leaves the interface.

What Good Disclosure Actually Signals

I want to push back on something I hear often in legal circles: that disclosure is a defensive tool, that it exists to limit liability, that technically adequate language is good enough.

That framing produces exactly the disclosure language you'd expect, and customers read it accordingly. Fine print designed to protect the company.

Disclosure written from a different premise—one that treats customers as people who deserve to understand what they're consenting to—does something different. It signals that you've thought carefully about the distance between your capability and your transparency, that you've made intentional choices rather than operationally convenient ones.

In my experience, that kind of disclosure also tends to be shorter. Clearer. Because when you've actually worked through what you're doing and why, you don't need much language to describe it.


Close the Gap Before It Closes You

AI will continue to expand what companies can learn from customer conversations. That's a genuine opportunity. But embedded in most deployment decisions is a quiet assumption: that the existing disclosure covers it, that customers understand, that the gap between what is happening and what was explained is small enough to manage.

That assumption has a shelf life. In many organizations, it has already expired.

Updating the disclosure language is the easy part. The harder work is mapping what's actually happening in customer conversations across every layer and every vendor relationship that touches them, then deliberately deciding what customers should know. Most companies haven't done that work yet.

The ones that do won't just reduce legal exposure. They'll build something more durable: credibility that comes from being as trustworthy as they say they are.


Laurence Denny

CLO, 8x8

Laurence Denny is Chief Legal Officer at 8x8, overseeing the company’s global legal, privacy, compliance, and cybersecurity functions, along with a cross-functional global telecommunications group. He focuses on governance and risk decisions that help communications and customer experience leaders move fast without compromising trust. Larry has held senior legal roles at 8x8, Extreme Networks, and TiVo, and began his career at Gibson, Dunn and Morrison Foerster. He earned his J.D. from Columbia Law School.
