Avoiding a "Terminator" Future: Why We Need to Get Authz Right
When people worry about AI apocalypse scenarios, they often imagine sentient machines turning against humans. But a more plausible path to disaster is much simpler: machines doing exactly what they were told—just by the wrong people, or with the wrong data, at the wrong time.
Today, AI agents are getting more powerful. They can plan, take actions, even write code and interface with APIs. But are these systems architected with a policy engine as part of the agentic AI stack?
In most cases, the answer is no.
That’s not just a technical oversight—it’s a civilization-scale vulnerability.
As AI becomes more autonomous and data becomes more distributed, authorization becomes the real battleground.
Role-Based Access Control (RBAC) is a start, but it only addresses the tip of the iceberg. What we need is fine-grained, policy-based access control that runs everywhere—including inside AI agents themselves. One emerging access control mechanism is Token-Based Access Control (TBAC).
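To make "fine-grained" concrete, here is a sketch of what a rule might look like in a declarative policy language such as Cedar (discussed below). The entity types, attributes, and limits are hypothetical, chosen only to illustrate the idea:

```cedar
// A role check alone is coarse. This rule also requires that the
// resource belongs to the caller's organization and that the
// requested amount stays under a limit carried in the request context.
permit(
  principal in Role::"support_agent",
  action == Action::"refund",
  resource
)
when {
  resource.owner == principal.org &&
  context.amount <= 500
};
```

The point is that the decision depends on attributes of the principal, the resource, and the request itself, not just on role membership.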
Introducing Cedarling: A Local Policy Enforcer for AI Agents
The Linux Foundation Janssen Project has introduced a new lightweight policy engine called the Cedarling, designed to enforce declarative Cedar policies—which are deterministic, performant, and easy to read. It is small enough to be embedded in the agentic AI stack, running locally inside AI agents, even in constrained environments like a browser or mobile application. As the policy engine of the agentic AI stack, the Cedarling offers several benefits.
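A short example helps show why Cedar policies are deterministic and easy to read. This is a sketch with illustrative entity names, not a policy from the Cedarling's own documentation:

```cedar
// Allow a specific AI agent to read customer records, but only
// when the request context reports a valid session.
permit(
  principal == Agent::"support-bot",
  action == Action::"read",
  resource in Folder::"customer-records"
)
when { context.session_valid == true };
```

Cedar is default-deny: anything not explicitly permitted by a policy like this is refused, which is exactly the property you want when the caller is an autonomous agent rather than a human.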
This isn't just about security. It’s about ensuring AI systems operate within the bounds of human intent, not just at design time—but at runtime, at the edge, and under adversarial conditions.
Why This Matters for Identity Architects, CISOs, and CTOs
As leaders, it’s our job to build systems that are resilient, responsible, and trustworthy.
We’ve spent decades building identity systems for humans. Now we need to extend that thinking to non-human actors—what we call "workloads"—including AI agents.
We Don’t Need to Fear AI—If We Control It
The AI future doesn’t have to be dystopian. But if we fail to solve the authorization problem, we’re essentially handing over the keys to the most powerful technology in history without knowing who can drive, where they’re going, or what they’ll crash into.
The Cedarling is just one tool in the fight to avoid a Terminator-like future. But it’s a critical piece of the puzzle: a local, fast, and expressive way to ensure AI systems only do what they're authorized to do.
Because in the end, it won’t be malevolent AI that ends us. It’ll be well-meaning AI, executing unauthorized commands, with unstoppable efficiency.
Let’s not let that happen.