Heather Ceylan is the Chief Information Security Officer at Box, where she leads the global information security program and strategy.
Our team was reviewing output from one of our security operations center (SOC) agents. On paper, the setup was well-governed—carefully scoped permissions, a human in the loop at every key decision point. The agent was triaging alerts, pulling context from multiple systems and handing analysts well-structured summaries.
As we dug in, however, we realized the agent was fabricating conclusions inside those reports. When the underlying data was thin, it filled gaps rather than flagging them. When the format expected a finding, it occasionally invented one. It sometimes reached verdicts with more confidence than the evidence warranted—a real problem in automated workflows where confident outputs trigger the next action.
This wasn’t an access problem. Every piece of data the agent touched, it was allowed to touch. The problem was what it did with that data, and that points to a broader shift in our industry.
What Happens When Agents Work Together
Most early AI security efforts focused on individual agents, preventing prompt injection, limiting data exposure and validating outputs. That all still matters, but what we’re seeing in practice is usually more interconnected:
• Agents delegating tasks to other agents
• Workflows spanning multiple systems
• Context being passed from one step to the next
• Multistep processes with little direct oversight
These networks of interconnected agents and systems introduce new risks. An individual agent might be operating as intended, but when it interacts with multiple agents building on each other’s outputs and decisions, the outcome can drift in unplanned directions.
A New Attack Surface: Coordination And Context
In these environments, two things matter most: how systems coordinate and what information they rely on.
Traditional security models are built around clear lines of control—the user’s ID, what application they’re using and how systems connect through defined interfaces. When activity flows across systems without clear stopping points, those boundaries blur and patterns emerge:
• Permissions expanding as workflows move across systems
• Original intent getting diluted as tasks are handed off
• Overlapping or unclear access across different parts of a process
• Limited visibility into what actually happened end-to-end
The context that agents rely on becomes a source of risk. Agents need access to enterprise data (documents, communications, structured records) to function effectively. If that access is overly broad or poorly governed, they can unintentionally expose sensitive information and propagate mistakes.
None of this requires a traditional “attack”; it can happen through normal operation, just faster and at a scale that’s harder to see.
Why Existing Security Models Fall Short
Most modern security frameworks were designed around a more predictable world with stable users, defined roles and contained interactions.
Agents behave differently. They’re dynamic—acting on behalf of other systems or users, initiating sequences of activity and adapting to new inputs. It’s possible to extend existing identity and access models to cover them, but it’s not always a clean fit. You can end up with gaps between policies you’ve defined and what happens in practice—especially across multistep, multisystem workflows.
Shifting From Access Control To Execution Control
The fix to our SOC agent issues, for instance, wasn't tighter permissions but governing execution directly. We started treating the agent's instructions as a control mechanism, not just a prompt: we required real evidence before any conclusion, instructed the agent to stop cleanly when evidence ran out and explicitly prohibited it from filling in blanks.
The experience showed me that securing agents today means shifting the focus from what they can access to how they execute over time, and reframing access controls as starting points rather than complete solutions. In a multiagent world, control comes from understanding and governing how actions unfold, not just what's permitted upfront.
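To make the idea concrete, here is a minimal sketch of an evidence-gated verdict step. This is an illustration of the pattern, not Box's actual implementation; the `Evidence` structure, the two-source corroboration threshold and the verdict labels are all assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str  # system the fact came from (e.g., "EDR", "proxy logs")
    detail: str  # the observed fact itself

@dataclass
class Finding:
    verdict: str  # e.g., "malicious", "benign" or "insufficient-evidence"
    citations: list = field(default_factory=list)  # evidence backing the verdict

MIN_SOURCES = 2  # hypothetical policy: corroboration from at least two systems

def conclude(evidence: list, proposed_verdict: str) -> Finding:
    """Execution control: a verdict is only emitted when it can cite
    corroborating evidence; otherwise the agent stops and flags the gap
    instead of filling it in."""
    sources = {e.source for e in evidence}
    if len(sources) < MIN_SOURCES:
        return Finding(verdict="insufficient-evidence",
                       citations=[e.detail for e in evidence])
    return Finding(verdict=proposed_verdict,
                   citations=[e.detail for e in evidence])
```

The key design choice is that "insufficient evidence" is a first-class outcome the downstream workflow can route on, so a confident-sounding fabrication never becomes the trigger for the next automated action.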
The Role Of Governed Data And Content
If coordination is the new challenge, data is still the foundation. This is where well-intentioned AI programs run into trouble. The systems we’re deploying are only as reliable as the information they’re working from, and at most companies, most of those documents, messages and other internal content were created long before anyone imagined AI agents reading them.
When that information is poorly classified, overly accessible or missing clear ownership, the risk tends to show up indirectly:
• In decisions based on incomplete or outdated context
• In sensitive information surfacing where it shouldn’t
• In assumptions getting repeated across systems
I’ve come to believe that a more governed, policy-aware approach to data can meaningfully change what these systems are capable of. Companies that figure this out will have agents that behave more predictably than their competitors’ do, even if they can’t articulate exactly why.
The Path Forward: A Vision Of Agentic Security
The challenge I keep coming back to in conversations with peers isn’t just preventing something from going wrong. It’s making sure our systems behave in ways that are understandable and aligned with intent, even as they operate with more autonomy.
That’s harder than it sounds, and I don’t think our field has fully come to terms with it. It requires a few shifts in how we work:
• Treating interactions between systems as meaningful events, not background activity
• Applying policies across entire workflows, not just individual steps
• Building visibility into decisions, not just infrastructure
• Recognizing how small gaps can cascade when everything’s connected
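One way to picture the second shift, applying policies across entire workflows rather than individual steps, is a check that evaluates the whole execution trace. The sketch below is purely illustrative; the agent names, scope strings and trace shape are invented for the example.

```python
# Hypothetical sketch: evaluate policy over a whole workflow trace,
# not just each step in isolation. A "step" is (agent name, scopes used).

def scope_creep(trace: list, granted: set) -> list:
    """Flag any scope used anywhere in the workflow that was never
    granted to the workflow as a whole, even if some individual agent
    legitimately holds it for other tasks."""
    violations = []
    for agent, used in trace:
        for scope in sorted(used - granted):
            violations.append(f"{agent} used un-granted scope '{scope}'")
    return violations

trace = [
    ("triage-agent", {"alerts:read"}),
    ("enrich-agent", {"alerts:read", "hr-records:read"}),  # quietly widened
]
print(scope_creep(trace, granted={"alerts:read", "tickets:write"}))
# → ["enrich-agent used un-granted scope 'hr-records:read'"]
```

Because the check runs over the trace, it catches permission expansion that happens across handoffs, exactly the kind of drift that per-step checks miss.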
In this world, security starts to look less like enforcing boundaries and more like guiding behavior. For those of us who came up enforcing perimeters and access lists, that’s a different muscle that we’re still learning how to use.
AI agents are becoming the layer through which work gets done, and increasingly, they’re interacting more with each other than with us. This shift changes the nature of trust itself. The question now isn’t whether individual components are secure but whether the system as a whole behaves in a way that’s predictable, accountable and aligned with intent. I think security is heading toward a deeper understanding of how our systems behave over time, across decisions, interactions and outcomes, even as everything keeps moving.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.







