
The Evolving Legal Landscape for AI in your Contact Center

AI is a key driver of innovation in the Contact Center as a Service (CCaaS) space, significantly improving agent efficiency and customer experience.  However, when the rate of technology change outpaces the applicable legal frameworks, uncertainties and opportunities for disputes arise.

In this 11-minute podcast, LB3 partner Laura McDonald joins Tony Mangino for an update on AI-driven legal developments in the CCaaS space and to discuss strategies to mitigate the legal risks inherent in the technology.

If you would like to learn more about our experience in this space, please visit our Network Services Transactions and Information Technology Advisory Services webpages.


Follow us on LinkedIn: TC2 & LB3 

The Evolving Legal Landscape for AI in your Contact Center

Hello, today is Thursday, October 9th, 2025. I’m Tony Mangino from TC2 and this is Staying Connected.

I’m joined today by Laura McDonald, a partner at LB3, to give us an update on legal developments that impact the use of AI in contact centers. Laura joined me about a year ago to discuss this topic, and I invited her back to ask whether there have been changes in the legal landscape that enterprises should know about. Laura, thanks for coming back.

Laura: Tony, thank you for having me; it is always a pleasure to join you on Staying Connected. Last year, we talked about the efficiencies and advantages that AI brings to the contact center, and, as is often the case, the technology was developing faster than the legal framework, which creates uncertainties and opportunities for disputes. On that front, little has changed, but I’m happy to give a brief recap of what we are seeing this year.

Tony: Can you start by giving us a quick refresh on how AI is being used by enterprises in contact centers?

Laura: Absolutely. AI is making a big impact, both on efficiency and ultimately on cost and customer satisfaction. Various companies offer cloud-based solutions that make the agent and customer process more efficient. For example, AI allows agents to access information and assess customer questions in real time; it helps route inquiries to the right agents, identify the issue at hand, and reduce wait times and agent and customer frustration. It can provide automated, intelligent responses and information without the need for live agent intervention. It is being used to make real-time transcriptions, to identify trends, and to authenticate calls and deter fraud. By using AI for real-time sentiment analysis, agents can respond more artfully to customer concerns and determine whether a call should be transferred. Moreover, by using AI to address routine issues, agents have more time to deal with complex ones.
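To make that last point concrete, here is a minimal, vendor-neutral sketch in Python of sentiment-based escalation. Everything in it (the Utterance class, the route function, the threshold) is a hypothetical illustration rather than any CCaaS provider’s actual API; the idea is simply that when a caller’s rolling sentiment drops below a threshold, the contact is handed to a live agent.

```python
# Hypothetical sketch of sentiment-based routing; names and thresholds are
# illustrative assumptions, not a real CCaaS API.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    sentiment: float  # -1.0 (very negative) .. 1.0 (very positive)

def route(history: list[Utterance], escalate_below: float = -0.4) -> str:
    """Send the contact to a live agent when rolling sentiment drops
    below the escalation threshold; otherwise stay with the AI assistant."""
    if not history:
        return "ai_assistant"
    rolling = sum(u.sentiment for u in history) / len(history)
    return "human_agent" if rolling < escalate_below else "ai_assistant"

# Example: a caller whose frustration is building gets transferred.
history = [Utterance("Where is my refund?", -0.3),
           Utterance("I've asked three times already!", -0.8)]
print(route(history))  # -> human_agent
```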

Tony:  That sounds great – what’s the catch? Are there potential pitfalls that enterprises need to watch for?

Laura: Any technology migration has risks. Companies often focus on the transition and operational risks, but we also need to think about potential legal risks. It is easier if a product and the laws that govern it are time-tested and clear, but that is not the case with the use of AI in contact centers. So enterprises must anticipate, understand, and manage potential legal risks in a gray area, which is harder.

Tony:  So, there are no laws in the US governing the use of AI in contact centers? 

Laura: Legislation in the US directly addressing AI is still in its infancy, but the lack of comprehensive and specific AI legislation in the United States does not mean that everything is fair game. When there are no specific rules, companies, litigants, and judges look to what is already in place that might apply. For example, there are laws governing recording calls, privacy, and consent, such as the California Invasion of Privacy Act (CIPA), the Federal Wiretap Act, HIPAA for health data, and the Illinois Biometric Information Privacy Act (BIPA). Plaintiffs’ lawyers are creative, and even though these laws weren’t written with AI in mind, plaintiffs are applying them to contact center AI solutions. In addition, there are legal principles (often referred to as common law) that create obligations and rights, such as implied or actual contracts and liability based on negligent acts. Thus, a litigant might claim your AI agent created a contract, promising a benefit you did not mean to offer, or that they were harmed by the negligent actions of your AI agent, say by providing incorrect directions that led to an injury.

Tony:  So, are you seeing litigation involving the use of AI in contact centers?

Laura: Yes, and most of them I would lump into two categories. The first is the AI platform hallucinating or providing misleading information, like the Air Canada chatbot we discussed last year that gave bad information to a customer (and ended poorly for Air Canada). The second is claims that the use of AI technology to authenticate callers, or to intercept, record, and/or respond to customers’ calls or chats, violated the customers’ privacy or state laws. Basically, the customers claim they had inadequate notice and/or did not provide (or have the opportunity to provide) appropriate consent.

Tony: So, can you give us some real-life examples?

Laura: Yes. BIPA and CIPA are examples of statutes that were not drafted specifically to address AI in contact centers but are being used to bring claims regardless. For example, a case was filed in Illinois under BIPA claiming that John Hancock’s call center’s use of Amazon Connect and Pindrop’s biometric voiceprint authentication was unlawful, and another complaint, filed against Nuance, was also based on BIPA. Both have been dismissed without prejudice, but that means the plaintiffs (or someone else using similar arguments) can refile.

A number of cases have been filed in California under CIPA, such as one against Genesys involving a crisis hotline. The complaint claims that Genesys’s Cloud CX overhears and collects private details from callers, records their conversations, and uses the information for its own purposes (including training its systems to improve its own services) without notifying the end user and obtaining appropriate consent, in violation of the federal wiretap laws and CIPA. Genesys has filed a motion to dismiss, which will be heard later this month.

Tony: It sounds like there is a lot to think about. What do you recommend for enterprise customers who are considering taking advantage of AI as part of their contact centers?

Laura: Sure, I’d group my recommendations into three categories:

  • Knowledge: As a start, map every AI application in use. What type of data is collected, stored, and shared, and by whom, where, and how? Is it sensitive? Is it used only within your enterprise, or is it helping train broader models, such as those of the CX provider or its agents? Keep abreast of what is happening.
  • Protections: Build robust contracts that clarify data usage and protections with your vendors and any involved third parties. Obtain explicit customer consent, and keep thorough records. One-size-fits-all consent banners are not enough; be specific about what data is collected and how it is used. Also, monitor responses so that your AI solution is not hallucinating or creating potential contracts, and so that if it is, you can catch and resolve the issue quickly. Consider adding clear and visible caveats that providing information about benefits is not an offer of those benefits; any benefit must be part of an agreed-upon contract or validated by a real person.
  • Plan B: If a customer declines consent, what’s your alternative? Can they interact without AI? Be sure this is spelled out: if someone opts out, your system, scripts, and agents should have a non-AI path (see the sketch after this list). Make sure the end user knows they have that option and that you can honor it. Retaining that ability may undermine some of the cost and operational efficiencies, but it may save a lot of money and time if you are sued.
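To make the consent and fallback recommendations concrete, here is a minimal sketch in Python. All of the names (ConsentStore, handle_contact, the feature list) are hypothetical illustrations, not any CCaaS provider’s actual API. The points it shows are that consent is recorded per feature with a timestamp, and that the opt-out check happens before any AI feature touches the interaction, with a non-AI path always available.

```python
# Hypothetical sketch: gate AI features behind explicit, recorded consent
# and keep a non-AI fallback path. Names are illustrative, not a real API.
from datetime import datetime, timezone

class ConsentStore:
    """Thorough record-keeping: who consented to which feature, and when."""
    def __init__(self):
        self._records = {}

    def record(self, caller_id: str, feature: str, granted: bool) -> None:
        self._records[(caller_id, feature)] = {
            "granted": granted,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    def has_consent(self, caller_id: str, feature: str) -> bool:
        rec = self._records.get((caller_id, feature))
        return bool(rec and rec["granted"])

def handle_contact(caller_id: str, consents: ConsentStore) -> str:
    # Consent is checked per feature; a one-size-fits-all banner is not enough.
    ai_features = ["recording", "transcription", "voiceprint_auth", "ai_responses"]
    if all(consents.has_consent(caller_id, f) for f in ai_features):
        return "ai_assisted_path"
    # Plan B: the caller declined (or was never asked), so use the non-AI path.
    return "live_agent_no_ai_path"

consents = ConsentStore()
consents.record("caller-123", "recording", True)  # consented to recording only
print(handle_contact("caller-123", consents))     # -> live_agent_no_ai_path
```

The design choice worth noting is the default: unless every required consent is on record, the contact falls through to the non-AI path, which mirrors the point that an opt-out has to be honored end to end.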

Tony:  In your experience, what can a company do to keep up in this fast-moving legal and technical environment?

Laura: The best-prepared companies have legal, compliance, IT, and ethics teams working together to review systems regularly. Be proactive: review technology changes, lawsuits, and legislation as they arise, adapt your policies to reflect the resulting guidance, and vet high-risk changes. In addition, ensure that agents and support staff know how the AI works, what its limitations are, and how to respond to customer concerns. Update your interactive voice response and chatbot scripts as features, litigation, and laws evolve. What protected you a year ago might not do so today. In short, your compliance reviews need to be ongoing.

Tony: Sounds like companies should take advantage of AI, but should do so thoughtfully. No one wants to spend money fighting a class action.

Laura: Yes, being careful doesn’t give you a risk-free experience, but it does mitigate the risk.

Tony: Laura, thanks for the update. If you have any questions, would like to learn more about how to mitigate the legal risks of AI associated with CCaaS solutions, or would like to discuss best practices for sourcing CCaaS, please contact Laura, me, or any of our LB3 and TC2 colleagues by giving us a call or shooting us an email.

You can also stay current by subscribing to Staying Connected and by checking out our websites, and finally, I’d strongly recommend that you take a minute to follow us on LinkedIn.