Analysts React to Another Troubling Contact Center AI Lawsuit: “We’re Going to See More!”

Another court case warns contact center providers of the dangers of using their customers' data to train AI


Published: May 5, 2025

Charlie Mitchell

In March, a lawsuit rocked the CX sphere, alleging that prominent CCaaS provider Genesys recorded domestic violence hotline interactions without permission.

The filing also accused the vendor of mining those communications to “improve its own services.”

A similar lawsuit, filed against Patagonia last year, alleged that its vendor – Talkdesk – had recorded calls and used that data to train its AI models.

While a California judge promptly threw that case out, prominent analysts warned that the lawsuit wouldn’t be a one-off.

After all, many CCaaS providers promote their AI by stating: “We’ve analyzed millions of calls. That’s why our AI is the best.”

When offered the retort that those are – in fact – their customers’ calls, many would reply: “They come through our cloud and, because we anonymize them, it’s fine to use them for training.”

However, that’s not necessarily how the law or their customers – or their customers’ customers – will see it.

Liz Miller, VP & Principal Analyst at Constellation Research, made this point during a recent Big CX News update.

Yet, she also highlighted that – in this particular case – the allegations suggest there were no clear warning signs: no notification that the call might be recorded and used for AI training.

That underscores the need for Genesys – and its contact center competitors – to give customers more guidance on how they utilize AI technologies. Miller stated:

Too often, the end customer – the technology buyer – doesn’t know what they don’t know. And this is a wake-up call for vendors to share best practices more proactively.

As such, this lawsuit should be a call to action for many CCaaS players. Even if customers are implementing seemingly low-risk, commonplace technologies – like an AI-routing engine (as in this case) – they still have responsibilities.

Is Rapid AI Innovation Driving Missteps?

As the market becomes much more crowded, many CCaaS stalwarts – like Genesys – are innovating at an unprecedented pace to stay ahead. That’s likely to drive missteps and miscommunications.

Making this point, Zeus Kerravala, Principal Analyst at ZK Research, said: “I doubt any of them avoided using customer data entirely, intentionally or not.

“I don’t think Genesys specifically targeted that customer or that hotline; something slipped through the cracks. But there’s no question we’re going to see more of these (lawsuits).”

One company that has been vocal about never using customer data in the development of its AI models is Enghouse Interactive. Yet, that stance is a rarity.

Zoom is another example. As Kerravala noted, the contact center disruptor initially said it would use its customers’ data, but only with their permission. Zoom then “got such negative backlash they repealed and said we’re just not going to use it at all.”

If, as Kerravala predicts, more cases like this come to the fore, that could prove a winning strategy, even if it’s not what Zoom initially intended.

This Is Far from the First AI Fail…

As noted, the rollout of AI hasn’t necessarily kept pace with the ethical responsibility that should come with it. Indeed, this lawsuit is the latest example in a string of fiascos.

From an airline bot botching its handling of bereavement fares to an eating disorder helpline deploying AI that told people to eat mac and cheese to feel better, there have been many notable failures.

Commenting on this trend, Keith Kirkpatrick, Research Director at The Futurum Group, said:

When deploying AI, the initial focus is on traditional metrics – i.e., ‘let’s handle more cases’ – as opposed to how AI can be used to improve experiences and bottom-line outcomes.

That focus is perhaps driving these mistakes.

Consider the Genesys lawsuit. According to the filing, its customer wanted AI to collect information before the conversation to route people more effectively.

That’s a great use of AI, given the nature of the domestic violence hotline, which needs to prioritize people in crisis.

However, a responsibility comes with that, and it needs to be communicated properly.

There must also be the option to opt out. The absence of that option shows that much of the contact center industry still hasn’t learned that lesson from decades of frustrating IVRs.

Noting this, Miller added: “We still don’t quite understand that the ‘rage scream’ is our fault. We still want to blame the customer for just not having patience.

“But, I think the difference here is that – in the age of AI – there are no second chances… We, as humans, are more open to giving that human agent a second chance. If AI (messes up), you’re done. You had one chance, and I’m not coming back.”

The Critical Takeaway

Here’s the reality for vendors and tech buyers: you’re not just going to lose customers now; you’re going to get sued.

It used to be a reputational risk. Now it’s a legal one, too. Miller summarized:

So, now’s the time to bring acceptable use guidelines and your attorneys into the conversation. That’s the reality of where we’re at.

Yet, because providers have rushed past that conversation, they haven’t shared many best practices with the industry.

“Virtual agents often don’t disclose that they’re bots,” added Kerravala. “Should they? Probably—but we haven’t even finished that conversation for IVRs after decades.”

Ultimately, this lawsuit is a new warning, and the industry has to get its act together.

