AI Agents Need Their Own Identity. The Standards Don't Exist Yet.

Aaron Levie had dinner with 20+ enterprise AI and IT leaders this week and posted a thread about what he heard. Most of the bullets were predictable if you’ve been paying attention: agents are the big thing, governance is hard, interoperability matters, token budgets are becoming real OpEx.

But one bullet should have been the whole conversation: identity.

His question: can the agent have access to everything you have? In a world of dozens of agents working on your behalf, how do you manage partitioned levels of access to your information?

The more companies I work with on this, the clearer the pattern gets: most enterprises are bolting agents onto identity infrastructure that was never designed for non-human actors. In healthcare and life sciences, where regulations like HIPAA, 21 CFR Part 11, and GxP all require clear attribution of who did what and why, that gap is already causing problems - problems teams are patching around today because no real solution exists yet.

Today you have two choices. Both of them are bad.

When you deploy an AI agent that needs to interact with enterprise systems - your document management platform, your case management tool, your CRM - you have two options for how it authenticates.

Option 1: Service accounts. A dedicated account for the agent with its own credentials, separate from any human user. Service accounts have been around forever. The agent gets its own username, its own API key, its own permissions. You can scope them narrowly - the agent that creates support tickets only gets access to that one queue. Fine-grained, least-privilege, exactly what security teams want.

But the drawbacks are significant. The more fine-grained you make them, the more you need. One per agent, per system, per scope. And because they’re not tied to people, they accumulate. Nobody offboards a service account when the developer who created it moves to another team. (MIT Technology Review just coined the term “zombie agents” for this exact pattern.)

Worse, you lose the thread on who’s responsible. Most agents act at the direction of a human. But if the agent authenticates as “service-account-47”, the audit trail says “service-account-47 modified 200 CRM records at 3am.” It doesn’t say who asked for it or why. In regulated industries where you need to demonstrate that a qualified human authorized an action, that’s a problem.

Option 2: User accounts (OAuth delegation). The agent authenticates as you. It gets your OAuth token, your permissions, your access. From the system’s perspective, there’s no difference between you opening a document and the agent opening it.

This solves the accountability problem - the audit trail shows your name. But it creates new problems.

We like to think user permissions are narrowly scoped. In practice, they almost never are. You have read access to folders you’ve never opened. You have admin rights from a project two years ago that nobody revoked. This is normal. It’s the accumulated cruft of organizational life, and for a human it’s mostly harmless because there are limits to what you can control and still enable people to do their jobs. Like it or not, individual human judgment is the final safety net most of the time.

An agent doesn’t have that restraint. If the token says it can read 200,000 documents, it will read as many as the workflow calls for.

I see this on my own computer every day. I don’t want the AI agents I use to have the same access to my workstation as I do. But if I constrain them too much, the utility drops to the point where I might as well do the work myself. That tension doesn’t go away.

And user-account delegation breaks down completely for agents that run on schedules or in response to triggers. An agent that updates your CRM every night at midnight isn’t operating at anyone’s direction in that moment. Nobody told it to go update 200 records. It just did. Saying “Bob authorized this” when Bob was asleep doesn’t feel right. So now you’re back to needing a service account, and you’ve inherited all the problems from Option 1.

The model we actually need already exists. It’s in your calendar app.

Of all places, I think Outlook and other calendaring systems figured this out years ago.

When you delegate your calendar to an executive assistant in Outlook, something specific happens. The assistant can view your schedule, accept meetings, and send invitations. But Outlook doesn’t pretend the assistant is you. The meeting invite shows up as “sent by Sarah on behalf of Mike.” Both identities are preserved. The recipient knows who actually performed the action, and they know under whose authority it was done.

An agent acting on my behalf should work the same way. When it creates a case, the audit trail should read “ticket-creation-agent, acting on behalf of Mike, created case X in queue Y at timestamp Z.” Not “Mike created case X” (that’s impersonation). Not “service-account-47 created case X” (that’s attribution loss). Both identities, explicit, traceable.
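To make the dual-identity idea concrete, here’s a minimal sketch of an audit record that preserves both identities. The field names (`actor`, `on_behalf_of`) are illustrative, not from any standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor: str          # the agent that actually performed the action
    on_behalf_of: str   # the human under whose authority it acted
    action: str
    target: str
    timestamp: str

def record(actor: str, principal: str, action: str, target: str) -> AuditEvent:
    """Log an action so neither identity is lost."""
    return AuditEvent(
        actor=actor,
        on_behalf_of=principal,
        action=action,
        target=target,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

event = record("ticket-creation-agent", "mike", "create_case", "queue-Y/case-X")
print(f"{event.actor}, acting on behalf of {event.on_behalf_of}, "
      f"performed {event.action} on {event.target} at {event.timestamp}")
```

The point is structural: the log line is derivable because both identities are first-class fields, not because someone remembered to write a descriptive comment.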

But delegation alone isn’t enough. We need session-scoped restrictions.

I think most people working on this problem haven’t gone far enough yet.

Delegation with attribution tells you who’s responsible. It doesn’t limit what the agent can do within that delegation. And I know humans too well to believe organizations are going to start under-provisioning user accounts. That’s never going to happen. The political cost of telling a VP they can’t access something is always higher than the security risk of over-provisioning. That’s been true for 30 years and AI agents aren’t going to change it.

So given that user accounts will remain over-provisioned, there needs to be a notion of short-lived, task-scoped permissions that sit on top of the user’s existing access.

As a human, I might have read, write, and delete access to both our document management system and our case management tool. But when I send an agent to read a pile of clinical documents and write a summary to a project workspace, I don’t want the agent to have my full access. I want temporary restrictions: read from folder A, write to folder B, nothing else. Not for security theater, but because it prevents the agent from being “helpful” in ways I didn’t ask for.

Without session scoping, an agent drafting a weekly summary might helpfully pull in notes from a sensitive HR review because it technically has access and the content seemed relevant. The agent isn’t malicious. It’s doing exactly what broad permissions allow. But nobody intended for it to read those notes, and now that information has been surfaced in a document that might get shared.

Session scoping means: for this task, you can see these things and write to those things. When the session ends, the constrained scope expires. The user’s underlying access is never modified. The agent just doesn’t get to use all of it.
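As a sketch of what enforcement could look like: grant the session only the intersection of what the task requests and what the user holds, attach an expiry, and check agent actions against the session rather than the account. All names and structures here are hypothetical:

```python
import time

# Hypothetical standing permissions for a user - deliberately over-provisioned,
# as real accounts tend to be.
USER_PERMS = {
    ("read", "folder-A"), ("write", "folder-A"), ("delete", "folder-A"),
    ("read", "folder-B"), ("write", "folder-B"),
    ("read", "hr-reviews"),  # access the task never needs
}

def open_session(requested: set, user_perms: set, ttl_s: int = 900) -> dict:
    """Grant only the intersection of what the task asks for and what the
    user actually holds - never more than either. User perms are untouched."""
    return {
        "scope": requested & user_perms,
        "expires_at": time.time() + ttl_s,
    }

def allowed(session: dict, action: str, resource: str) -> bool:
    """Check an agent action against the session, not the user account."""
    if time.time() >= session["expires_at"]:
        return False
    return (action, resource) in session["scope"]

session = open_session({("read", "folder-A"), ("write", "folder-B")}, USER_PERMS)
print(allowed(session, "read", "folder-A"))    # in task scope: permitted
print(allowed(session, "read", "hr-reviews"))  # user has it; session doesn't
```

The key design property: `open_session` can only narrow, never widen. The HR notes stay readable by the human and invisible to the agent for the duration of the task.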

There’s no standard for this yet. But the pieces are forming.

I went looking for an existing standard that covers what I’m describing - delegated identity plus session-scoped permissions plus cross-system audit trails - and it doesn’t yet exist as a coherent whole. However, the IETF is working on pieces of it.

draft-oauth-ai-agents-on-behalf-of-user extends OAuth 2.0 with a way to encode both the user and the agent in a single token - an act (actor) claim for the agent and an obo (on-behalf-of) claim for the user. That’s the Outlook delegation model, formalized for OAuth. It’s still a draft, but it’s the right shape.
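A token payload in that shape might look something like the following. The claim names follow the article’s reading of the draft; treat the issuer, audience, and subject values as placeholders:

```python
import time

# Sketch of a token payload encoding both parties, per the draft's approach.
payload = {
    "iss": "https://idp.example.com",          # hypothetical issuer
    "aud": "case-management-api",              # hypothetical audience
    "act": {"sub": "ticket-creation-agent"},   # the agent (actor)
    "obo": {"sub": "mike@example.com"},        # the user it acts for
    "exp": int(time.time()) + 300,             # short-lived by design
}

def audit_line(p: dict) -> str:
    """Render both identities from a single token - no attribution loss."""
    return f'{p["act"]["sub"]} acting on behalf of {p["obo"]["sub"]}'

print(audit_line(payload))
```

Every downstream system that validates this token can reconstruct the full delegation chain from the token alone, which is exactly what service accounts and plain OAuth delegation each lose half of.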

draft-klrc-aiagent-auth is broader - it composes WIMSE for workload identity, SPIFFE for ephemeral credentials, and OAuth for authorization into a framework for agent auth. It recommends “transaction tokens” - short-lived, scoped JWTs that assert both identity and authorization context for a specific operation. That’s close to session scoping, though framed at the individual transaction level rather than the task level.
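To show the mechanics, here is a minimal, stdlib-only sketch of minting and verifying a short-lived, scoped JWT in the spirit of transaction tokens. Claim names are illustrative, and a real deployment would use asymmetric keys and a proper JOSE library rather than a shared HMAC secret:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # illustrative only

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_txn_token(agent: str, user: str, scope: list, ttl_s: int = 60) -> str:
    """Mint a short-lived JWT asserting identity and authorization
    context for one operation."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({
        "sub": agent,
        "obo": user,          # illustrative on-behalf-of claim
        "scope": scope,
        "exp": int(time.time()) + ttl_s,
    }).encode())
    sig = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(),
                        hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token: str) -> dict:
    """Check the signature and expiry before honoring any claims."""
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(),
                             hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if time.time() >= claims["exp"]:
        raise ValueError("token expired")
    return claims

token = mint_txn_token("crm-sync-agent", "bob@example.com", ["crm:update"])
claims = verify(token)
print(claims["sub"], claims["scope"])
```

The notable property is the default posture: the token dies in sixty seconds and authorizes one narrow scope, so a leaked or misused credential has a small blast radius, which is the opposite of a long-lived service-account key.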

The building blocks exist. But right now they’re scattered across half a dozen specs, and the enterprises actually deploying agents are mostly winging it with whichever option was fastest during the pilot.

Levie’s right that identity is emerging as a big topic. I’d go further: nothing else works until identity works. The governance, the interoperability, the cost management - all of it depends on knowing which agent did what, under whose authority, and whether it should have been allowed to.