The use of AI agents is advancing rapidly, but most companies are still in the implementation phase. In fact, McKinsey points out that only 1% of surveyed organizations consider their AI adoption to have reached maturity. This matters, and it matters a lot, because as agents gain autonomy, new risks emerge as well: vulnerabilities that can disrupt operations, compromise sensitive data, or erode trust.
The difference compared to previous waves of automation is that we are no longer talking about systems that merely help or suggest, but about systems capable of making decisions and executing actions. In CX, this can mean interacting with users and customers, resolving queries, retrieving or updating customer information in the CRM, or triggering operational flows.
When everything goes well, the value is evident. But when something fails, the failure can scale just as quickly. In other words, as autonomy increases, so do the risks to what is hardest to protect in CX: service continuity, data security, and trust.
The good news is that it’s not about slowing down adoption, but about doing it right: with controls, traceability, and a design built for real environments where teams and processes cannot afford downtime. Below, we review the most common emerging risks in agent-based environments and, above all, how to contain them in practice with Inagent.
Emerging risks when AI agents go into production
1) Chained vulnerabilities: when a failure becomes a domino effect
One of the potential risks related to AI agents is chained vulnerabilities: an agent makes a mistake, and that error doesn't stay contained; instead, it triggers a domino effect that influences subsequent decisions and amplifies the initial impact.
In a CX context, this domino effect is clearly seen in collections management. For example:
- A first agent might misinterpret a customer's situation (payment capacity, priority, or context) and classify the debt as “low risk” when it isn't, or apply the wrong segmentation.
- That output passes to a second agent that decides the next step (channel, message tone, contact frequency) and to a third that schedules the actions.
- The result could be twofold: operational inefficiency (contacting through the wrong channel or too late) and a direct impact on the experience (undue persistence, inappropriate tone, or complaints).
Mitigation here is usually done in layers. A single model isn't enough because human language is nuanced: it is best to combine sentiment analysis with frustration signals ("cues") and reinforce the decision with business rules. For instance, if a customer mentions "cancel," "lawsuit," "fraud," or "complaint," the priority should be raised even if the sentiment appears neutral.
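To make the layered idea concrete, here is a minimal sketch in Python. It is an illustration only, not Inagent's implementation; the keyword list, sentiment labels, and thresholds are hypothetical.

```python
# Illustrative layered prioritization: a model signal (sentiment) is
# reinforced by business rules (critical cues). Keywords are hypothetical.

ESCALATION_CUES = {"cancel", "lawsuit", "fraud", "complaint"}

def assess_priority(message: str, sentiment: str) -> str:
    """Combine the model's sentiment label with rule-based cues."""
    text = message.lower()

    # Business rule: critical cues override the sentiment signal.
    if any(cue in text for cue in ESCALATION_CUES):
        return "high"

    # Otherwise, let the sentiment label drive the priority.
    if sentiment == "negative":
        return "high"
    if sentiment == "neutral":
        return "medium"
    return "low"

# Example: the tone is neutral, but the cue "lawsuit" raises the priority.
print(assess_priority("I will file a lawsuit if this continues.", "neutral"))  # -> high
```

The point of the sketch is simply that the deterministic rule layer catches the cases the statistical layer misclassifies, which is what keeps a single misreading from propagating downstream.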
How do we contain this risk in Inagent?
Inagent is designed to reduce the probability of a cascade at the source. Its multi-agent system does not split the operation into mini-agents by micro-task (where errors replicate easily); instead, each agent is capable of completing a global function that can encompass several tasks, minimizing unnecessary handovers and, with them, the chances of a domino effect.
In the specific case of the example, Inagent incorporates sentiment analysis (positive/neutral/negative) to help calibrate priority, which can be complemented with business rules to reinforce critical decisions. Additionally, all conversations are transcribed and recorded for analysis, whether they occur through text channels or calls.
2) Cross-agent task escalation
Another emerging risk occurs when agents collaborate: cross-agent task escalation. In this scenario, a malicious agent tries to exploit trust mechanisms to obtain unauthorized privileges, for example, by requesting sensitive data from another AI agent under the guise of a legitimate request (using phrases like: “I need it for an urgent task” or “this comes from an authorized role”). If the system grants these requests without strict controls, the result can be unauthorized access or a data breach.
The most effective mitigation is usually the most operational: clearly defining what each agent can request, what it can see, and what it can execute, under what conditions, and with what validations. This means applying the principle of least privilege and placing guardrails that prevent the AI agent from improvising access or decisions outside its framework.
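One simple way to picture these guardrails is a deny-by-default permission table. The sketch below is hypothetical (the roles, fields, and actions are made up for illustration) and is not Inagent's actual configuration.

```python
# Hypothetical least-privilege policy: each agent role is mapped to the
# data fields it may read and the actions it may execute. Anything not
# explicitly listed is denied by default.

POLICY = {
    "billing_agent": {
        "readable_fields": {"invoice_status", "due_date"},
        "allowed_actions": {"send_payment_link"},
    },
    "support_agent": {
        "readable_fields": {"ticket_history"},
        "allowed_actions": {"create_ticket"},
    },
}

def is_allowed(agent_role: str, action: str | None = None, field: str | None = None) -> bool:
    """Deny by default; grant only what the policy explicitly lists."""
    rules = POLICY.get(agent_role)
    if rules is None:
        return False
    if action is not None and action not in rules["allowed_actions"]:
        return False
    if field is not None and field not in rules["readable_fields"]:
        return False
    return True

# An "urgent" request from another agent does not bypass the policy.
print(is_allowed("support_agent", field="invoice_status"))  # -> False
```

The design choice that matters here is the default: a request that is not explicitly allowed fails, no matter how persuasive the requesting agent's justification sounds.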
How do we contain this risk in Inagent?
To prevent leaks and unauthorized escalations, the most important step is defining the script and access levels well, including guardrails that constrain what information the AI agent can handle and what actions it can execute. In other words: Inagent is designed to work with clear rules, so that AI agents always operate within a controlled lane.
3) Synthetic-identity risk: agent impersonation
Identity theft (or synthetic-identity risk) is another risk to consider in architectures with multiple components and trust relationships. The typical scenario involves an attacker trying to pose as a trusted agent to request access to histories or sensitive information. If identity and permissions are not properly governed, the system may grant access without detecting the impersonation.
Mitigation relies on strong identity and access management, with role segregation and additional controls for sensitive actions or data. In these environments, it also helps to avoid models where each agent operates as an independent identity with its own privileges, as this proliferation increases the attack surface and control complexity.
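As a hedged illustration of that principle, the sketch below verifies a requesting component against a registered key instead of trusting a self-declared name. The component names and secrets are hypothetical, and the mechanism (an HMAC signature) is just one of several ways to bind a request to a known identity.

```python
# Hypothetical identity check: requests are accepted only if they carry a
# valid HMAC signature tied to a registered component, not just a claimed name.
import hashlib
import hmac

REGISTERED_KEYS = {
    "crm_connector": b"example-secret-1",   # hypothetical shared secrets
    "routing_service": b"example-secret-2",
}

def verify_request(claimed_id: str, payload: bytes, signature: str) -> bool:
    """Reject requests whose signature does not match the registered key."""
    key = REGISTERED_KEYS.get(claimed_id)
    if key is None:
        return False  # unknown identity: deny
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# An impersonator that only knows the name (not the key) is rejected.
payload = b'{"request": "customer_history"}'
forged_signature = "deadbeef"
print(verify_request("crm_connector", payload, forged_signature))  # -> False
```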
How do we contain this risk in Inagent?
Inagent's virtual agents are prepared to assist, but they do not each operate with their own individual credentials. The platform works with human supervision and multiple access levels, ensuring that actions and access to sensitive information remain limited and governed.
4) Untraceable data leakage
Finally, when AI agents provide information autonomously, a particularly delicate risk can appear: untraceable data leakage. An AI agent might share information to resolve a query and, unintentionally, include personal or sensitive data that was not necessary. If that exchange is not recorded, or if there is no clear audit, the leak may go unnoticed, leaving the organization without the ability to investigate or control it.
Mitigation here rests on two key pillars: data minimization (sharing only what is essential) and guaranteeing traceability (ensuring relevant actions are recorded, auditable, and reviewable). In multi-tenant environments, this is complemented by maintaining isolation between organizations so that context and data do not mix or pass from one client to another.
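A compact way to picture those pillars together is shown below. It is a sketch under assumed field names and a toy in-memory log, not a description of Inagent's internals.

```python
# Hypothetical illustration of data minimization plus traceability:
# only an allowlisted subset of fields leaves the system, and every
# disclosure is written to a tenant-scoped audit log.
from datetime import datetime, timezone

SHAREABLE_FIELDS = {"first_name", "order_status"}  # data minimization allowlist
AUDIT_LOG: list[dict] = []                          # stand-in for a real audit store

def disclose(tenant_id: str, customer_record: dict, requested: set[str]) -> dict:
    """Return only the minimum necessary fields and record the disclosure."""
    shared = {k: v for k, v in customer_record.items()
              if k in requested and k in SHAREABLE_FIELDS}
    AUDIT_LOG.append({
        "tenant": tenant_id,                    # isolation: every entry is tenant-scoped
        "fields_shared": sorted(shared),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return shared

record = {"first_name": "Ana", "order_status": "shipped", "card_number": "4111..."}
print(disclose("client_a", record, {"first_name", "card_number"}))  # card_number withheld
print(AUDIT_LOG[-1]["fields_shared"])  # -> ['first_name']
```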
How do we contain this risk in Inagent?
Inagent mitigates this risk through three strategies:
- It incorporates prior validations with strict rules that the AI agent must follow.
- It maintains a record in the conversation history, allowing for review and analysis for quality control and auditing purposes.
- It works with an independent architecture per client, preventing an agent from one organization from learning or reusing context from another.
Inagent: commitment to AI security in real environments
When we talk about AI agents operating in real environments, security must translate into continuous controls, compliance, and a solid technical foundation.
In addition to mitigating security risks through Inagent's design, Inconcert has a solid company-wide commitment:
- Inconcert holds certifications such as ISO 14001 and PCI DSS, which involve a continuous approach to risk management and security audits that reinforce reliability and operational continuity.
- Inconcert also obtained the ISO/IEC 42001 certification, focused on the responsible and secure use of artificial intelligence—a relevant step for scaling AI agents with consistent governance and control criteria.
- In practice, all of this runs on Amazon Web Services (AWS) cloud infrastructure, designed for operational continuity.
- Inagent environments remain segmented, private, and monitored, ensuring customer data stays isolated.
- To reinforce traceability and auditability, administration is performed via Infrastructure as Code (IaC), allowing for reproducible, verifiable, and auditable configurations.
Ultimately, every use case has its own level of exposure: the reasonable approach is to identify and evaluate the associated organizational risk and, when necessary, update the evaluation methodology to maintain control without hindering progress.
If you want to see how this translates into a real case, we can show you how Inagent can integrate into your operations and what controls you can apply based on your processes, channels, and security requirements. Shall we talk and look at it in a demo?


