For the first time in enterprise history, humans are no longer the only actors in the workflow. We now work alongside AI co-pilots that write code, autonomous assistants that open tickets and bots that triage incidents. The identity model that once assumed “a user is a person” is now broken by design.
The question isn’t whether AI will join the workforce. It already has. The real question is: When a human and a machine share the same outcome, who owns accountability?
We’ve entered an era where an “employee” is no longer a single entity. It is a distributed identity surface made up of a person, their delegated agents, their embedded automations and the systems acting on their behalf. The old idea that access is tied to a single user record collapses when the user is now a network of connected identities.
And governance hasn’t caught up.
The Human Is No Longer The Only Actor
I recently read a paper posing a simple but provocative question: If AI writes the code and that code creates a security flaw, who is accountable? The lawyer? The engineer? The AI vendor? The company that deployed it? In practice, there’s only ever one answer: the CISO. Accountability hasn’t shifted. The actors have.
I have bots in my own workflows that pull data from governance platforms, perform initial triage on alerts, cross-reference past incident patterns, create tickets based on predefined triggers and notify me only when the machine cannot decide.
That agent is not “me.” It has its own identity, its own access and its own permission boundaries. It’s not a tool I run. It’s a system that runs alongside me. It does work I don’t want my team wasting time on, and it does it without fatigue, context switching or the need to sleep.
That is the inevitable direction for every enterprise function, from HR and engineering, through to finance, sales and compliance. Customer service has already gone first. Development is next. Security won’t be far behind. Soon, the question will no longer be whether a human made a decision, but which part of the human-machine blend did.
The Identity Problem Before The Legal Problem
In most organizations, machine agents are currently treated like shortcuts: “Just generate an API key, let it run, and we’ll sort out governance later.” But later never comes. The machine identity becomes a permanent fixture with privileges no human would ever have been granted. And because “it’s not a person,” no one assigns ownership, rotates credentials or audits activity.
That’s how you get incidents where a bot pushes code into production or an AI assistant exposes data: not because they were malicious, but because they were unbounded by identity design.
We are at risk of re-creating the worst mistakes of early cloud adoption: rapid deployment that leads to long-term governance debt. AI agents aren’t the threat. The absence of identity architecture for them is.
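The missing identity architecture can be made concrete. Below is a minimal sketch (all names and fields are hypothetical, not any particular platform's schema) of what a governed machine-identity record might carry: an accountable human owner, explicit scopes, and rotation and expiry dates that are checked rather than forgotten.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical machine-identity record: the governance fields that get
# skipped when "just generate an API key" becomes a permanent fixture.
@dataclass
class MachineIdentity:
    agent_id: str
    owner: str                 # the accountable human; never blank
    scopes: list[str]          # explicit permissions, not inherited ones
    credential_rotated: date   # last rotation, checked on a schedule
    expires: date              # every machine identity ends by default

    def is_governed(self, max_credential_age_days: int = 90) -> bool:
        """Governed only if it has an owner, has not expired, and its
        credentials are younger than the rotation window."""
        age = (date.today() - self.credential_rotated).days
        return (bool(self.owner)
                and age <= max_credential_age_days
                and date.today() <= self.expires)

bot = MachineIdentity(
    agent_id="triage-bot-01",
    owner="",                  # nobody assigned ownership...
    scopes=["tickets:create", "alerts:read"],
    credential_rotated=date.today() - timedelta(days=400),  # ...or rotated keys
    expires=date(2099, 1, 1),
)
print(bot.is_governed())  # False: no owner, stale credentials
```

The check itself is trivial; the point is that none of these fields exist unless someone decides, at creation time, that machine identities deserve them.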
We Must Secure Delegation
The traditional model of cybersecurity was simple: Authenticate the user, authorize the action, log the event. That works when one person equals one digital identity. It fails completely when:
- A human delegates 40% of their tasks to software.
- A bot executes actions inside multiple systems with no session context.
- An AI assistant requests data not because it needs it, but because it was trained to be “helpful.”
- A machine account persists after the employee who depended on it leaves the business.
We’ve reached the point where access does not equal intent. Humans think in purpose. Machines operate in permission. That gap is where risk now lives.
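One way to close that gap is to make intent part of the authorization decision itself. The sketch below (a hypothetical policy model, not an existing framework) allows an action only when the identity holds the permission and the request declares a purpose the permission was granted for.

```python
# Sketch of purpose-bound authorization: holding a permission is not
# enough; the declared intent must match what the grant was issued for.
PERMISSIONS = {
    "refund-bot": {"payments:refund"},
}
GRANTED_PURPOSES = {
    ("refund-bot", "payments:refund"): {"customer_dispute"},
}

def authorized(identity: str, permission: str, declared_purpose: str) -> bool:
    # Traditional check: does the identity hold the permission at all?
    if permission not in PERMISSIONS.get(identity, set()):
        return False
    # The added check: was the permission granted for this intent?
    allowed = GRANTED_PURPOSES.get((identity, permission), set())
    return declared_purpose in allowed

print(authorized("refund-bot", "payments:refund", "customer_dispute"))  # True
print(authorized("refund-bot", "payments:refund", "bulk_cleanup"))      # False
```

A human reviewer would reject the second request instinctively; a machine never will unless purpose is encoded into the policy it evaluates.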
The Coming Wave: Autonomous Decisioning
Right now, most AI agents are allowed to read, summarize, suggest, correlate or route. But the direction of travel is obvious: They will soon approve, modify, deploy, escalate and remediate.
Customer service AI is already refunding transactions without human review. DevOps pipelines are already promoting builds based on automated risk scoring. Threat response systems are already isolating endpoints without waiting for analyst confirmation.
No board is ready for the moment a bot performs an irreversible business action that was not explicitly authorized by a human, but was technically permitted by its identity. We will tolerate this risk until the first catastrophic example forces reform, just as it has in every previous technology cycle.
The New Identity Model: Human, Augmented, Autonomous
Instead of a single access profile per employee, we now need layered identity states:
- The Human User: accountable entity
- The Augmented Layer: co-pilots, plug-ins and extensions acting with the user
- The Autonomous Layer: agents acting without the user
Each requires different controls, logging, ownership, revocation logic and blast radius assumptions. If that sounds unfamiliar, it’s because we’re still governing a machine-augmented workforce with pre-AI assumptions.
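The three layers above could be modeled explicitly, each with its own control defaults. The values below are illustrative assumptions, not a standard: the point is that session binding, credential lifetime and kill-switch requirements differ by layer, rather than one profile covering everything.

```python
from dataclasses import dataclass
from enum import Enum

class IdentityLayer(Enum):
    HUMAN = "human"            # the accountable entity
    AUGMENTED = "augmented"    # co-pilots acting WITH the user
    AUTONOMOUS = "autonomous"  # agents acting WITHOUT the user

@dataclass(frozen=True)
class LayerControls:
    requires_human_session: bool   # must a person be present?
    max_credential_hours: int      # how long may credentials live?
    kill_switch_required: bool     # must it be revocable in seconds?

# Illustrative defaults (assumptions for the sketch, not prescriptions):
CONTROLS = {
    IdentityLayer.HUMAN: LayerControls(True, 12, False),
    # Augmented credentials die with the human session they assist.
    IdentityLayer.AUGMENTED: LayerControls(True, 1, True),
    # Autonomous agents run unattended, so they must be tightly revocable.
    IdentityLayer.AUTONOMOUS: LayerControls(False, 24, True),
}
```

Tagging every action with its layer also answers the attribution question that follows: logs then record not just which credential acted, but which layer of the blend it belonged to.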
The question that matters in a breach is no longer “Which credential was used?” but “Was the action executed by the person, the machine assisting them or the machine substituting for them?”
Until we can answer that reliably, attribution, accountability and legal responsibility will remain blurred.
The Strategic Shift CISOs Must Lead
Security is no longer just about preventing threats. It is about designing identity frameworks for a workforce in which humans are no longer the only workers. That means asking different questions:
- If the AI agent goes wrong, can we shut it down in seconds?
- Do we know which systems it can touch, or only which ones it “should”?
- If the employee leaves, does the machine identity they relied on retire too?
- Can we prove who was responsible, not just who was authenticated?
If we can’t answer those questions now, we won’t be able to answer them during litigation, regulatory review or incident response.
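Two of those questions, shutdown and leaver-driven retirement, reduce to the same mechanism: an offboarding step that walks a registry of machine identities and deactivates everything a departing employee owned. A minimal sketch, assuming a hypothetical in-memory registry:

```python
# Sketch of leaver-driven revocation: when the accountable human leaves,
# every machine identity they owned retires with them.
REGISTRY = [
    {"agent_id": "triage-bot-01", "owner": "alice", "active": True},
    {"agent_id": "report-bot-02", "owner": "bob",   "active": True},
]

def offboard(owner: str) -> list[str]:
    """Deactivate all machine identities owned by a departing employee
    and return their IDs for the audit trail."""
    retired = []
    for identity in REGISTRY:
        if identity["owner"] == owner and identity["active"]:
            identity["active"] = False
            retired.append(identity["agent_id"])
    return retired

print(offboard("alice"))  # ['triage-bot-01']
```

None of this works, of course, unless every machine identity has an owner recorded in the first place, which is where the governance debt described earlier begins.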
When every employee is a system, and every system can behave like an employee, the future of cybersecurity won’t be defined by how well we secure people or machines, but how well we secure the space between them.