Why Access Management Is Breaking in the Age of Agentic AI

Apr 14, 2025

Pratyus Patnaik

Over the past year, the conversation around AI in the enterprise has shifted. We’ve moved beyond LLM chatbots and copilots. The frontier now is Agentic AI—autonomous software agents that can take action on behalf of users, across systems, without constant human oversight.

It’s a huge leap forward in capability—and it’s bringing about a reckoning with one of the most fundamental pillars of enterprise security: Access Management.

The Legacy Model: Built for Humans

Enterprise software architecture has always assumed a human is involved in actions and decision-making – a concept known as "human in the loop." Identity governance (IGA), privileged access management (PAM), ticketing workflows, and approval chains are all built around people initiating, reviewing, and authorizing actions.

Even as automation increased, that logic held. Whether it's running a CI/CD job, triggering a database migration, or approving temporary access through a workflow tool, there has always been a human accountable for the action.

But this model is cracking under the weight of AI agents that can chain together actions across multiple systems, interpret high-level goals and autonomously decide how to achieve them, and operate 24/7, in real time and at scale. Suddenly, we’re facing questions that current systems aren’t equipped to answer.

Three Shifts Required for Agent-Aware Access Control

To support a world of autonomous agents operating safely and responsibly inside enterprise environments, access management needs to change fundamentally. We need to be able to answer: How does an AI agent determine what actions to take? Whose authority is it acting under—and when? What does “least privilege” even mean when the actor is non-human, and context is constantly shifting? We see three key shifts:

1. Fine-Grained Controls at the Data and Action Layers

Most access systems today are designed around high-level roles (e.g., admin, developer, analyst) and broad permissions. But agents need granular, scoped permissions tied not just to who they are acting for, but to what they are trying to do.

That means:

  • Contextual constraints (e.g., time-bound, resource-specific, task-limited)

  • Action-level gating (e.g., allow read access, but only for inference—not export or replication)

  • Real-time policy evaluation that adapts as the task evolves
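As an illustration, a scoped grant combining these constraints could be sketched as follows. This is a minimal, hypothetical example; the `ScopedGrant` type and its fields are assumptions for illustration, not any existing product's API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class ScopedGrant:
    """A hypothetical permission scoped to one resource, a few actions, and a deadline."""
    resource: str                # resource-specific: exactly one dataset
    actions: frozenset           # action-level gating: e.g. {"read"} but not "export"
    expires_at: datetime         # time-bound constraint

    def is_allowed(self, resource: str, action: str, now: datetime) -> bool:
        # Evaluated on every request, so the answer can change as time passes
        # or as the grant is replaced mid-task.
        return (
            resource == self.resource
            and action in self.actions
            and now < self.expires_at
        )


now = datetime.now(timezone.utc)
grant = ScopedGrant(
    resource="warehouse/customers",
    actions=frozenset({"read"}),
    expires_at=now + timedelta(minutes=15),
)

print(grant.is_allowed("warehouse/customers", "read", now))    # True
print(grant.is_allowed("warehouse/customers", "export", now))  # False: action not granted
```

The key difference from a static role is that nothing here is permanent: the grant names one resource, an explicit action set, and an expiry, and the check is re-run per request.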

2. Context-Aware, Task-Specific Permissions

Rather than assigning static roles, we need systems that can grant dynamic, just-in-time permissions based on a combination of the task the agent is performing, the user or system it is acting on behalf of, and the environment or risk level (e.g., production vs. staging).

This moves us from identity-based access control (IBAC) to intent- and context-aware access control.
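A just-in-time grant of this kind might look like the sketch below, where the permission set is derived from the task, the principal being acted for, and the environment. The policy rules and permission strings are illustrative assumptions, not a real policy language.

```python
def grant_permissions(task: str, on_behalf_of: str, environment: str) -> set[str]:
    """Derive a minimal, task-specific permission set at request time.

    In a real system, `on_behalf_of` would also be checked so the agent
    never exceeds the delegating user's own permissions.
    """
    perms: set[str] = set()
    if task == "summarize-tickets":
        perms.add("tickets:read")
    elif task == "scale-environment":
        perms.add("infra:read")
        # Environment acts as a risk signal: no writes in production
        # without an additional approval step.
        if environment != "production":
            perms.add("infra:write")
    return perms


print(grant_permissions("summarize-tickets", "alice", "production"))
print(grant_permissions("scale-environment", "deploy-bot", "staging"))
print(grant_permissions("scale-environment", "deploy-bot", "production"))
```

The same agent asking for the same task gets different permissions depending on context, which is exactly what a static role assignment cannot express.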

3. New Permission Models Built for Agents, Not Humans

Most fundamentally, we need new models for how agents gain, hold, and relinquish authority. This includes:

  • Delegation frameworks: How and when can users or services delegate authority to an agent?

  • Trust boundaries: How do we sandbox agent capabilities within clear operational limits?

  • Auditability: How do we ensure every agent action is explainable, traceable, and revocable?

We need to build the equivalent of IAM and RBAC for a world where the primary actor is no longer a person, but a piece of software.
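To make the three requirements above concrete, here is one possible shape for delegation, trust boundaries, and auditability together. The `Delegation` class, scope strings, and audit record fields are hypothetical, a sketch of the model rather than a vendor implementation.

```python
import uuid
from datetime import datetime, timezone


class Delegation:
    """A hypothetical record of a user delegating a bounded capability set to an agent."""

    def __init__(self, delegator: str, agent: str, scopes: set[str]):
        self.id = str(uuid.uuid4())
        self.delegator = delegator   # whose authority the agent acts under
        self.agent = agent
        self.scopes = scopes         # trust boundary: an explicit capability list
        self.revoked = False         # revocable: authority can be withdrawn at any time


audit_log: list[dict] = []


def perform(delegation: Delegation, action: str) -> bool:
    allowed = (not delegation.revoked) and action in delegation.scopes
    # Every attempt, allowed or not, is recorded and traceable
    # back to both the agent and the delegating user.
    audit_log.append({
        "delegation_id": delegation.id,
        "delegator": delegation.delegator,
        "agent": delegation.agent,
        "action": action,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return allowed


d = Delegation("alice", "triage-agent", {"alerts:read", "alerts:resolve"})
perform(d, "alerts:read")       # allowed: within the delegated scopes
d.revoked = True
perform(d, "alerts:resolve")    # denied: authority was revoked
```

Note that the audit trail answers the accountability question directly: every action maps to a delegation, and every delegation maps to a human delegator.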

Why This Matters Now

This isn’t a future problem. Enterprises are already experimenting with agents that handle knowledge base lookups and customer response drafting, security triage and alert resolution, infrastructure automation and environment scaling, and compliance checks and remediation workflows. In each of these cases, agents are making decisions and taking actions that would have previously required a human.

Unless we rethink access management, we’re either going to over-permission these agents (and increase risk), or block their utility altogether.

A Foundational Challenge for Safe and Scalable AI

At Natoma, we believe agent-aware access control is one of the foundational problems to solve if AI is going to operate safely, responsibly, and at scale in the enterprise.

We’re actively working on this challenge: designing systems that make it possible to define, enforce, and audit permissions for autonomous agents across hybrid enterprise environments.

We’d love to hear how you’re approaching it too. Let’s start the conversation.
