Avoiding a "Terminator" Future—Why we need to get authz right.

When people worry about AI apocalypse scenarios, they often imagine sentient machines turning against humans. But a more plausible path to disaster is much simpler: machines doing exactly what they were told—just by the wrong people, or with the wrong data, at the wrong time.

Today, AI agents are getting more powerful. They can plan, take actions, write code, and interface with APIs. But few of these systems are architected with a policy engine as part of the agentic AI stack.

  • Can your AI agents distinguish between users who are allowed to take actions and those who aren’t?
  • Can they obtain properly authorized data for training?
  • Can you centrally manage and audit policy—even when the agent runs at the edge or goes offline?

In most cases, the answer is no.

That’s not just a technical oversight—it’s a civilization-scale vulnerability.

As AI becomes more autonomous and data becomes more distributed, authorization becomes the real battleground.

  • Who can access or instruct AI agents?
  • What actions can those agents take?
  • Which data is available for inference and training?
  • Where and when can this happen?

Role-Based Access Control (RBAC) is a start, but it addresses only the tip of the iceberg. What we need is fine-grained, policy-based access control that runs everywhere—including inside AI agents themselves. One emerging mechanism is token-based access control (TBAC); read this article for more on TBAC.
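
To make that concrete, here is a sketch of the kind of rule a fine-grained policy language like Cedar (introduced in the next section) can express. The entity types and attribute names are illustrative assumptions, not a published schema:

    // Illustrative sketch: entity types and attributes are assumptions.
    // A role check alone would end at the "in Role" test; the when-clause
    // adds the fine-grained conditions that RBAC cannot express.
    permit (
      principal in Role::"AgentOperator",
      action == Action::"ExecuteTask",
      resource in AgentCapability::"ReadOnly"
    )
    when { context.mfa == true && context.network == "corporate" };

Cedar is deny-by-default: any request that no permit statement matches is refused, and an explicit forbid always overrides a permit.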

Introducing Cedarling: A Local Policy Enforcer for AI Agents

The Linux Foundation Janssen Project has introduced a new lightweight policy engine called "The Cedarling," designed to enforce declarative Cedar policies, which are deterministic, performant, and easy to read. It is small enough to be embedded in the agentic AI stack, running locally inside AI agents, even in constrained environments like a browser or mobile application. As the policy engine of the agentic AI stack, the Cedarling offers several benefits (illustrated in the sketch after this list):

  • Authorization decisions in under 50 microseconds, including JWT token validation
  • Claim extraction from JWT tokens
  • Caching and streaming of decision logs for central analysis
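
Below is a minimal Python sketch of what embedding such a decision point inside an agent might look like. The cedarling.authorize call and its parameters are illustrative assumptions for this article, not the project's actual binding API; consult the Janssen Project documentation for the real interface.

    # Minimal sketch: gate an agent's action on a local policy decision.
    # NOTE: `cedarling.authorize` and its parameters are illustrative
    # assumptions, not the Cedarling's actual binding API.

    def handle_agent_action(cedarling, access_token, action, resource):
        """Ask the embedded policy engine before the agent acts."""
        decision = cedarling.authorize(
            tokens={"access_token": access_token},  # JWT is validated locally
            action=action,                          # e.g. 'Action::"ExecuteTask"'
            resource=resource,                      # entity the agent wants to touch
            context={"network": "corporate", "mfa": True},
        )
        if not decision.allowed:                    # deny by default
            raise PermissionError(f"Policy denied {action}")
        return decision                             # stream the log for central analysis

Because the decision is made in-process, the agent can keep enforcing policy even when it is disconnected from any central server.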

This isn't just about security. It’s about ensuring AI systems operate within the bounds of human intent, not just at design time—but at runtime, at the edge, and under adversarial conditions.

Why This Matters for Identity Architects, CISOs, and CTOs

As leaders, it’s our job to build systems that are resilient, responsible, and trustworthy.

We’ve spent decades building identity systems for humans. Now we need to extend that thinking to the non-humans (what we call "workloads"), including AI agents.

  • For Identity Architects: The Cedarling lets you extend policy-based authorization beyond traditional apps into semi-autonomous agents and databases. You can use one policy syntax for browser applications, native applications, API gateways, IDPs, backends, and databases. And maybe even PAM systems...
  • For CISOs: You gain enforceable controls over the capabilities of your AI systems, including what data they can see. You can detect attacks using the audit log of access decisions.
  • For CTOs: You get a scalable, decentralized model for security that works across cloud, edge, and on-device AI—without brittle service calls or centralized chokepoints.
  • For Developers: Free, open source libraries for JavaScript, Python, Go, and Java help you move your authorization code into a declarative syntax and validate JWT tokens.

We Don’t Need to Fear AI—If We Control It

The AI future doesn’t have to be dystopian. But if we fail to solve the authorization problem, we’re essentially giving the keys to the most powerful technology in history—without knowing who can drive, where they’re going, or what they’ll crash into.

The Cedarling is just one tool in the fight to avoid a Terminator-like future. But it’s a critical piece of the puzzle: a local, fast, and expressive way to ensure AI systems only do what they're authorized to do.

Because in the end, it won’t be malevolent AI that ends us. It’ll be well-meaning AI, executing unauthorized commands, with unstoppable efficiency.

Let’s not let that happen.

Steve Giovannetti

CTO and Founder, Hub City Media, Inc.

sudo play global thermonuclear war

Owen Rubel - API EXPERT

Original Amazon Alumni (95-98) / Verifiable creator of API Chaining(R)

There is a lot more to security than just authorization: CORS, RBAC/ABAC, etc.

Tom Jones

Security hardware and software architect

Just don't let any AI have admin access, or it will be game over. Once they have it, you won't be able to get it back.
