Kumar Mehta, Founder and Chief Development Officer, Versa.
As enterprises rapidly embed large language models (LLMs) into products, workflows and customer-facing systems, a new category of risk is emerging.
Attackers are now trying to corrupt enterprise models, degrade their behavior and induce dangerous or misleading outputs. This is called model poisoning.
Traditional attacks try to break into systems. Model poisoning changes how systems behave after they are trusted. A compromised model does not trigger the same alarms as a breach. It continues operating normally while introducing risk into decisions and customer interactions.
Understanding of this risk has evolved over the last few years. In 2023, researchers at Mithril Security demonstrated “PoisonGPT,” a surgically modified open-source model that passed standard benchmarks while spreading targeted disinformation.
Then, in early 2024, researchers at JFrog identified roughly 100 models on Hugging Face carrying malicious code capable of executing arbitrary commands.
Anthropic’s “Sleeper Agents” research similarly showed that backdoors trained into a model can survive standard safety-tuning procedures.
These are early warning signs of what could happen when models enter the enterprise through a supply chain the enterprise does not fully control.
The Attack That Alters Behavior, Not Access
Many security programs focus on who can access a model endpoint, but that is only part of the problem when it comes to model poisoning attacks.
If the model’s behavior can be manipulated—through poisoned training data, compromised fine-tunes, tampered embeddings or malicious “updates” in the model supply chain—then correct access control still yields incorrect outcomes.
A poisoned model can pass benchmarks, behave normally and degrade only under specific triggers. From the outside, it appears intact. Model poisoning behaves less like a break-in and more like a Trojan Horse embedded in the supply chain. You did not lose the keys. You lost the ability to trust what the locks are protecting.
For enterprises deploying AI into customer-facing decisions, two assurances are needed:
1. The interaction is safe at runtime.
2. The model remains trustworthy over time.
The impact extends beyond data exposure to revenue, compliance and brand trust.
Defense In Depth For The AI Era
There is no single control for model poisoning, so mature programs layer defenses across the model lifecycle. A few key strategies include:
• Provenance And Vetting At Intake: Treat models like third-party software. Track origin and updates. Use trusted sources.
• Data Governance At Training: Poisoning often begins in data. Apply production-level controls to training and fine-tuning data.
• Pre-Deployment Evaluation: Benchmarks won’t expose real attacks, so red teaming is required.
• Runtime Controls: Insert a semantic inspection layer between applications and models to evaluate prompts, responses and tool use (a minimal sketch of this pattern follows this list).
• Behavioral Observability: A poisoned model often reveals itself only under specific triggers, so anomaly detection at the behavioral level—not just the traffic level—matters. Baselining how models normally respond is still a nascent discipline, as is watching for drift over time, but it is where much of the tooling investment is heading.
• Action-Level Controls: The consequences of a bad output depend on what the system is allowed to do with it. Least-privilege access for AI-initiated actions, human approval for high-impact steps and narrow tool permissions limit how far any single incorrect output can travel.
• Governance And Accountability: Define policies for model approval, vendor review and incident response.
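To make the runtime and action-level bullets concrete, here is a minimal sketch of an inspection wrapper around a model call. The deny patterns, tool names and the `model_call` callable are illustrative assumptions, not any vendor's actual API; a production system would rely on trained classifiers and a policy engine rather than a handful of regular expressions.

```python
import re

# Illustrative inbound deny patterns and a least-privilege tool allowlist.
PROMPT_DENY_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal.*system prompt", re.IGNORECASE),
]
ALLOWED_TOOLS = {"search_kb", "create_ticket"}   # narrow tool permissions
HIGH_IMPACT_TOOLS = {"create_ticket"}            # require human approval


def inspect_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the inbound policy checks."""
    return not any(p.search(prompt) for p in PROMPT_DENY_PATTERNS)


def inspect_tool_call(tool_name: str, approved_by_human: bool) -> bool:
    """Allow only whitelisted tools; gate high-impact tools on approval."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    if tool_name in HIGH_IMPACT_TOOLS and not approved_by_human:
        return False
    return True


def guarded_completion(model_call, prompt: str) -> str:
    """Wrap a model call with pre- and post-inspection.

    `model_call` is any callable that takes a prompt and returns text;
    it stands in for whatever LLM client the application already uses.
    """
    if not inspect_prompt(prompt):
        return "Request blocked by policy."
    response = model_call(prompt)
    # Outbound check: withhold responses that appear to leak credentials.
    if re.search(r"(api[_-]?key|password)\s*[:=]", response, re.IGNORECASE):
        return "Response withheld pending review."
    return response
```

The design point is less about the specific checks and more about the placement: every prompt, response and tool call passes through a layer the enterprise controls, regardless of which model sits behind it.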
Why Governance Becomes The Control Plane
Reducing risks like model poisoning or manipulation depends more on operational discipline than on any single technology.
One effective strategy is centralizing model access through an LLM proxy or model gateway, so that embedding calls, fine-tune jobs and inference traffic flow through a controlled layer.
This can help support unified logging, consistent policy, key rotation and rapid revocation when something looks wrong. It also creates a natural place to monitor prompt traffic for the patterns that tend to precede manipulation.
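As an illustration of the gateway pattern, the sketch below centralizes routing, logging, key rotation and revocation behind a single function. The registry structure and function names are hypothetical; a real deployment would back this with a secrets manager and an API gateway rather than in-memory state.

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm_gateway")

# Hypothetical registry of models the gateway may route to. Revoking a
# model becomes a single state change, not a hunt across applications.
MODEL_REGISTRY = {
    "support-assistant-v3": {"revoked": False, "api_key": "rotate-me"},
}


def rotate_key(model_id: str, new_key: str) -> None:
    """Rotate the upstream credential for one model in one place."""
    MODEL_REGISTRY[model_id]["api_key"] = new_key


def revoke(model_id: str) -> None:
    """Stop all traffic to a model that looks compromised."""
    MODEL_REGISTRY[model_id]["revoked"] = True


def route(model_id: str, prompt: str, model_call) -> str:
    """Single choke point for inference traffic: policy, logging, revocation.

    `model_call` stands in for the real client call to the upstream model.
    """
    entry = MODEL_REGISTRY.get(model_id)
    if entry is None or entry["revoked"]:
        raise PermissionError(f"Model {model_id} is not approved for use.")
    request_id = str(uuid.uuid4())
    log.info("request=%s model=%s prompt_chars=%d", request_id, model_id, len(prompt))
    started = time.monotonic()
    response = model_call(prompt, api_key=entry["api_key"])
    log.info("request=%s latency_ms=%.0f", request_id, (time.monotonic() - started) * 1000)
    return response
```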
External context—retrieved documents, tool outputs, third-party data—also deserves the same scrutiny any other untrusted input receives. Tool outputs can be inspected and redacted before reaching the model, and high-risk requests can be routed through approvals or challenges.
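A minimal sketch of that sanitization step is shown below. The patterns are stand-ins for the classifiers and provenance checks a production pipeline would use; the point is that retrieved text is cleaned before it ever reaches the model context.

```python
import re

# Illustrative patterns only: instruction-like content that suggests prompt
# injection, and credential formats that should never reach the model.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|the above) instructions", re.IGNORECASE),
    re.compile(r"you are now\b", re.IGNORECASE),
]
SECRET_PATTERN = re.compile(r"\b(sk-[A-Za-z0-9]{16,}|AKIA[0-9A-Z]{16})\b")


def sanitize_external_context(text: str) -> str:
    """Treat retrieved documents and tool outputs as untrusted input."""
    for pattern in INJECTION_PATTERNS:
        text = pattern.sub("[removed: instruction-like content]", text)
    return SECRET_PATTERN.sub("[redacted credential]", text)
```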
Finally, model updates and fine-tunes benefit from the same rigor that has become standard for software releases: versioning, staged rollout, automated testing and the ability to roll back quickly. Limiting what the system is allowed to do (approved tools, least-privilege access, write approvals) can turn a bad output into a contained event rather than an escalating one.
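The staged-rollout idea can be sketched just as simply: a small, adjustable share of traffic goes to the new model version, and rollback is a single state change. The version names and traffic share below are illustrative assumptions, and in practice this state would live in a model registry or deployment system rather than in process memory.

```python
import random

STABLE_VERSION = "assistant-v12"
CANARY_VERSION = "assistant-v13"
CANARY_TRAFFIC_SHARE = 0.05  # 5% of requests during staged rollout


def pick_model_version() -> str:
    """Route a small slice of traffic to the new fine-tune before full rollout."""
    if CANARY_VERSION and random.random() < CANARY_TRAFFIC_SHARE:
        return CANARY_VERSION
    return STABLE_VERSION


def roll_back() -> None:
    """Drop the canary immediately if evaluations or monitoring flag a problem."""
    global CANARY_VERSION
    CANARY_VERSION = None
```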
What This Means For Boards And Executive Leadership
The first major AI compromise to reach the front page is unlikely to resemble a traditional cyber breach. Instead, it will appear as an AI system giving dangerously incorrect guidance, leaking confidential information or triggering actions it was never intended to perform.
Even a small amount of injected bad data could have large consequences. For instance, a study by researchers at New York University found that if medical misinformation accounted for only 0.001% of the training data in an LLM, the LLM would be compromised.
Organizations that wait until tools and standards fully mature may find themselves responding after a public failure rather than preventing one. A more practical approach is to begin layering controls now. The key questions are:
• How are prompts, responses and model traffic inspected and secured?
• What do we know about where our models came from and how they have been modified?
• What guardrails exist around tool access, parameter use and action approval?
• What policies govern the data that enters and exits the model context?
• How would we detect behavioral drift, and how quickly could we revoke or roll back?
• Critically: Who is accountable when the system fails despite those controls?
Model poisoning is not a future problem. It is an early signal of how AI systems can fail at scale. Addressing it now will not only help to avoid risk, but it will also help to build the trust required to deploy AI broadly and confidently.