Michelle Drolet is CEO of Towerwall, a specialized cybersecurity firm focused on proactive cyber preparedness and compliance services.
Cybersecurity has generally kept pace with the growing sophistication of threats. It has evolved from firewall and antivirus to next-gen firewalls, comprehensive endpoint security suites and intelligence-driven approaches such as EDR, XDR and MDR—solutions based on the assumption that systems behave predictably and that risks can be mapped, monitored and mitigated within specific guardrails.
But even the best security was not designed for systems that think.
AI spending in 2026 is expected to reach $2.5 trillion. This adoption of AI is disrupting the cybersecurity space. We are witnessing rapid AI development with systems powered by large language models (LLMs) and autonomous agents not only executing instructions but also interpreting, generating and making decisions. While there is little doubt that these systems are beneficial, they are also equally difficult to secure.
Not Just New Threats, But A Security Reset
Traditional security systems, like firewalls, endpoint, EDR and SIEM platforms, identify patterns and signatures and rely on known behaviors to target abnormalities. They are built to observe as many signals as they can, to detect and control risk.
But AI doesn’t follow these rules; the attacks levelled at AI environments reflect this fact. A data poisoning attack targets the training process of an AI system by manipulating input data, ultimately shaping what the model learns. Prompt injection attacks alter how the LLM interprets instructions and can manipulate its behavior.
Couple these attacks with the complex dynamics introduced into the environment with agentic AI, and you are addressing a very different type of risk. Here, inputs can carry hidden instructions, and the outcomes are not always predictable. Even visibility breaks down in this model. Security tools can monitor infrastructure, endpoints and network activity. But they struggle to interpret model behavior, decision pathways or the subtle ways in which inputs influence outputs.
A dangerous illusion is created in which systems appear to be functioning normally, even when a subtle yet dangerous manipulation is taking place in the background.
The Shift Toward ‘Secure By Design’
A post-deployment security framework is not going to cut it for organizations. This is where a “secure by design” approach enters the fray, embedding security at every layer where the system learns, evolves or acts.
At the data layer, the priority is to ensure training datasets are not manipulated in any way. After the model is trained, risks emerge in the form of biased outputs, hidden vulnerabilities and the kind of behavior that the model was not trained for. These risks can be addressed with input and output filtering, strict access controls, human oversight for risky actions and continuous monitoring.
At the interface layer, where prompts and APIs come into play, interactions must be continuously validated, filtered and interpreted in real time to prevent misuse. Finally, action must be governed by permissions, boundaries and clear escalation paths to ensure the system operates safely as it adapts and evolves.
While these layers loosely map to accepted principles of confidentiality, integrity and availability (CIA), they also allow organizations to secure data interpretation, the beating heart of AI systems.
A New Security Capability Stack Driven By ‘Understanding’
While traditional security is all about enforcing control, AI security is about building a solid understanding of the behavior of AI systems.
Reliance on predefined rules must end, and organizations must move toward dynamic validation that continuously tests how these systems behave under different conditions. Techniques like adversarial testing and AI-specific red teaming become critical, and these should be ongoing exercises.
Second, access control should transition to context control. Traditional access boundaries cannot account for AI systems operating across unstructured data sources, embeddings and vector databases. Security needs to control what information the model can use, not just who can access it.
Third, monitoring should also include behavior. Keep eyes open for specific patterns in inputs, outputs and corresponding decision flow. This will help you identify model drift, which is a change in the model’s response over time.
Finally, responding to harmful outputs or unexpected behavior should happen at machine speed, meaning automated safeguards must kick into gear upon drift detection.
Organizations using AI and automation extensively reduced the cost of a data breach by $1.9 million, says IBM. The implication is clear. Organizations adapting their security models to AI realities are safer and more resilient.
A Combination Of Framework And Strategy
Lucky for us, frameworks are leading the charge against AI risk. OWASP Top 10 for LLMs highlights emerging risk. MITRE ATLAS maps adversarial tactics across the AI attack life cycle. The NIST AI Risk Management Framework offers a structured approach to aligning governance, risk and compliance. These frameworks provide useful guidance to stay one step ahead of AI attacks, but they are not enough.
Organizations need a secure AI checklist, but they also need a shift in mindset. There must be a collaboration between security teams, data scientists, engineers and business leaders. Full control of AI systems may not always be possible; there will be some uncertainty. Design the security framework accordingly. These assumptions should encourage organizations to manage AI uncertainty with continuous governance, clearly defined ownership and a clear definition of which risks are acceptable and which are not.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

