
13 Apr 2026
Returning from RSA Conference this year, one thing was immediately clear: AI agents are rapidly becoming the next major shift in how work gets done, and cybersecurity is right at the centre of that transformation. Across the conference, vendors and practitioners alike were exploring how autonomous or semi-autonomous agents will operate within enterprise environments, making decisions and taking actions at scale. While the opportunity is significant, there remains a noticeable gap between vision and practical application. The messaging around agentic AI is everywhere, but real-world use cases and operational models are still emerging. The hype cycle may already be near its peak, but the underlying challenge is real. As developments like OpenClaw demonstrate, the impact of AI-driven operations is not theoretical: it is rapidly approaching, and organisations will need to be ready to manage it.

Amid this wave of AI-driven innovation, one of the more grounded and consistent themes was the resurgence of identity as the central control plane. As organisations begin to deploy AI agents that can act, decide, and execute, the question of what those agents are allowed to access and do becomes critical.
The landscape is shifting from solutions that primarily provide visibility to those that can also enable real-time enforcement, ensuring that access decisions are not just monitored, but actively controlled. At the same time, how organisations define and secure privileged access is evolving — vaulting alone is no longer sufficient to protect privileged accounts in dynamic, distributed environments. Identity governance, adaptive access controls, and continuous verification are becoming essential. Organisations that get ahead of AI agent governance, policy, and security will be best positioned to keep pace with the speed of change in an AI-augmented workforce.
Another strong theme was the growing focus on supply chain risk, combined with a clear reminder that fundamentals still matter. Organisations are increasingly aware that their security posture extends beyond internal controls to a complex ecosystem of third-party providers, software dependencies, and external integrations. Gaining visibility across this landscape is becoming a board-level priority, but visibility alone is not enough.
The conversations at RSAC consistently reinforced that strong security outcomes are still driven by disciplined execution of core principles: asset visibility, vulnerability management, identity hygiene, and effective detection and response. In many cases, organisations are not failing for lack of advanced technology, but because of gaps in consistently applying these fundamentals across their extended environment. And so another year at RSAC comes to a close, one that highlighted a market at an interesting inflection point.

On one hand, AI agents are set to redefine how organisations operate, introducing new levels of scale, speed, and complexity. On the other, the path to securing that future is not purely through new innovation, but through applying existing security principles more effectively and consistently.
The organisations that will succeed are those that can balance both, embracing AI-driven change while strengthening identity, visibility, and control across their environments. For partners and customers alike, the opportunity lies in cutting through the noise and translating this shift into practical outcomes: enabling governance over AI agents, reducing supply chain risk, and executing on the fundamentals that underpin resilient security programmes.
