Just-in-Time Authorization: Securing the Non-Human Internet


In recent months, we’ve been writing extensively about some of the exciting possibilities offered by artificial intelligence and the agentic consumption of APIs, from new routes to monetization via AI through to more efficient workflows. But there are downsides to consider here, too.

Large language models (LLMs) have a habit of disregarding the API contract, which is just one example of the risks posed by handing off control to agents or other AI-driven tools. At our Platform Summit 2025, Curity's Jacob Ideskog spoke extensively about several measures API developers can (and should) take to mitigate some of these risks.

“When we use the term non-human identities, we usually think of AI,” he says. “But it’s bigger than that. We have service accounts calling certain things, machine-to-machine communications, IoT, and so on. All of a sudden, we’re waking up and realizing we need to spend some time on this.”

Below, we’ll look at how to adapt OAuth for use in conjunction with non-human consumers. We’ll also cover a few actionable tips and strategies for keeping your APIs safe in the age of agents, bots, and scrapers, and consider what the future of this space might look like.

This post was inspired by a presentation given by Jacob Ideskog, CTO, Curity, at Platform Summit 2025.

The Rise (and Risks) of Non-Human Consumption

Using the example of the 2022 Optus data breach, in which an unauthenticated API endpoint exposed 10 million customer records and led to an $11 million fine, Ideskog highlights a worrying truth: many APIs, especially older ones, are uncatalogued, unmonitored, public-facing, have little or no access control, and have simply been forgotten about.

He talks about three common non-human entities that are ripe for abuse in such environments:

  • Ghosts: Forgotten identities or accounts, often with broad scopes, that were never removed.
  • Zombies: Excessive token lifetimes, credentials, or permissions that should have expired.
  • Robots: AI agents, serverless functions, and so on, that act with impunity beyond their intent.

The OAuth standard has historically relied on the provision of long-lived access tokens, human-centric consent flows, manual revocation, and static scopes. None of these ideas translates well to non-human consumption. Built for users, apps, and well-established integrations, common deployments often fail to account for those entities Ideskog describes.

Although OAuth 2.0 remains a powerful framework, it's worth bearing in mind that it was released almost a decade and a half ago, long before most API developers considered the possibility that their services would be consumed by bots as well as humans.

The good news is that we can tweak many of its principles for non-human consumption.

From OAuth to Just-in-Time Access

“What is an agent?” Ideskog asks. “Why do we even call it an agent? Well, the reason is that we’re giving it agency, empowering it to do things, make its own choices, and take actions for themselves. How do we delegate access to something that can take actions for itself?”

“That’s what OAuth is all about,” he continues. “We call OAuth an authorization protocol, but it’s not — it’s a delegation protocol.” A critical component of OAuth 2.0 is the principle of least privilege. While least privilege is well-suited to human consumers, the way it’s typically enforced may not go far enough for non-human consumers.

Least privilege means granting no more access than is required, which is a good thing, but it still leaves standing access in place even when it isn't needed at all. Let's contrast this with the concept of zero standing privilege, which offers no ongoing access rights as its status quo, and is one of the key principles behind just-in-time authorization (sometimes abbreviated as JIT).

As the name suggests, just-in-time authorization grants access at the exact moment it’s needed. Tokens are short-lived, bound to specific operations, and granted on-demand rather than on a “just in case” basis. When the relevant task is completed, access is revoked.

However, it’s still not a silver bullet for non-human consumers.

Ideskog observes that it can be hard to implement at scale, not least because it benefits from having a human in the loop. That may not always be ideal in the case of AI agents, for instance, where you likely want to automate as much of the process as possible.

Intent, Scopes, and Rich Authorization Requests

When we create an access model, we can use scopes to explain user attributes or business permissions to help determine API access. “This sort of raw access model is powerful,” says Ideskog, “because it creates an abstraction between the endpoints or the APIs, the application, and it’s explainable to me as a human.”

But he points out that, when it comes to non-human consumers like AI agents, we should be focusing on intent rather than access or permissions. The question should be less “how much access does this consumer need?” and more “why does the consumer need to access this?”

As Ideskog points out, intent is difficult to model with scopes. "If you try, you'll end up with hundreds or even thousands of scopes." A possible alternative is Rich Authorization Requests (RAR), which lets clients request fine-grained permissions during an authorization request. (We can also, he suggests, lean on AI to create structured output in the form of RARs at scale.)

OAuth 2.1 co-author Torsten Lodderstedt outlines how RAR introduces the request parameter authorization_details, which carries "an array of JSON objects, each of them containing the authorization data for a certain API or resource," giving us a new tool to model access based on intent.
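
To make that concrete, here is a sketch of what an authorization request using authorization_details (per RFC 9396, which defines RAR) might look like. The payment-initiation shape mirrors the style of example used in the spec; the authorization server URL, client_id, and resource location are placeholders.

```python
import json
from urllib.parse import urlencode

# Sketch of a Rich Authorization Request (RFC 9396). The authorization_details
# parameter carries an array of JSON objects; "type" is required, while fields
# like "actions" and "locations" let the client express *intent* rather than a
# coarse scope. URLs and identifiers below are placeholders.

authorization_details = [
    {
        "type": "payment_initiation",          # required: the kind of access
        "actions": ["initiate"],               # the specific operation intended
        "locations": ["https://api.example.com/payments"],
        "instructedAmount": {"currency": "EUR", "amount": "123.50"},
    }
]

params = {
    "response_type": "code",
    "client_id": "agent-client",
    "authorization_details": json.dumps(authorization_details),
}

request_url = "https://as.example.com/authorize?" + urlencode(params)
print(request_url)
```

Compare this to a scope like `payments`: the RAR version states what the consumer intends to do, where, and within what limits, which is exactly the granularity an agent-driven request needs.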

Ideskog points out, however, that knowing the intent of an agent (and modeling for it) isn't the end of the story. We also need to be able to see what agents are doing, ideally in real time, and verify that they're acting as they're supposed to. Observability practices like real-time monitoring, logging, and tracing can be useful for that.

He also highlights how OpenID’s Shared Signals Framework (SSF) could be useful here, providing “protected, secure webhooks to communicate security alerts and status changes of users,” as well as data sharing schemas, privacy recommendations and protocols, and other tools to help mitigate breaches.

But it’s a long game, he clarifies, because “all systems need to start sharing these signals for us to be able to compare — we can’t make phone calls until the systems have phones! — so we need to build in emitting events in more places than we probably currently do.”
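
The signals the SSF transmits take the form of Security Event Tokens (SETs, defined in RFC 8417). As a rough sketch, here is the kind of claims payload a transmitter might emit for a CAEP "session revoked" event; in practice this would be signed as a JWT, and the issuer, audience, and subject identifier below are placeholders.

```python
import json
import time

# Sketch of a Security Event Token (SET, RFC 8417) payload of the kind the
# Shared Signals Framework carries between systems. Only the claims are
# built here; a real transmitter would sign this as a JWT. Issuer, audience,
# jti, and subject id are placeholder values.

set_claims = {
    "iss": "https://transmitter.example.com",
    "aud": "https://receiver.example.com",
    "iat": int(time.time()),
    "jti": "756E69717565206964656E746966696572",
    "events": {
        # CAEP "session revoked": tells receivers to stop honoring this
        # subject's sessions immediately.
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {"format": "opaque", "id": "agent-42"},
            "event_timestamp": int(time.time()),
        }
    },
}

print(json.dumps(set_claims, indent=2))
```

This is the "phone" in Ideskog's analogy: once systems emit events like this one, a receiver can revoke a compromised agent's access everywhere within moments instead of waiting for tokens to expire.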

Considering Identity and Access Management Post-AI

You don’t need to look far to find stories about AI-powered services leaking secrets, oversharing, or disregarding soft limitations. While the knee-jerk reaction might be to batten down the hatches, locking down everything isn’t necessarily the right approach, though it’s one we’ll probably see quite a bit of while people get a handle on all of this.

Instead, we should be asking who (or what) is accessing our services, and why. In essence, intent matters more than access. We should also ask how long they need it for. That’s where the whole “just in time” aspect comes into play. In terms of best practices, Ideskog suggests the following:

  • Make sure all your systems have identities.
  • Model access for non-humans (and map it to something humans can understand).
  • Design for intent, because that’s how we tell AI to do things.
  • Monitor intent versus behavior wherever possible.
  • Audit your services, keeping an eye out for those ghosts and zombies.
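
The "monitor intent versus behavior" tip can be sketched as a simple check: compare the operations a consumer declared at authorization time against what it actually calls, and flag any drift. This is an illustrative toy, with made-up operation names, rather than a production monitoring design.

```python
# Hypothetical sketch of intent-versus-behavior monitoring: the declared
# intent comes from the authorization request (e.g. RAR details), the
# observed calls from access logs or traces. Names are illustrative.

declared_intent = {"orders:read", "invoices:read"}

observed_calls = ["orders:read", "orders:read", "invoices:read", "orders:write"]

drift = [op for op in observed_calls if op not in declared_intent]
if drift:
    print(f"ALERT: consumer exceeded declared intent: {sorted(set(drift))}")
```

In a real deployment this comparison would run continuously against traces or gateway logs, and an alert like the one above might trigger immediate token revocation, closing the loop back to just-in-time access.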

As for all those signals, which depend on robust observability and effective management of Agent2Agent (A2A) communication, the Model Context Protocol (MCP), and so on: API developers won't necessarily have the level of control over them that they might want. Beyond the precautions outlined above, the best they can do is hope that the proliferation of non-human consumption catalyzes the shift.

The good news is that, given the rapid growth of said consumption, it seems almost certain that more and more developers will be building and connecting services with non-humans in mind. We should expect more signals, more zero standing privilege, and the exorcism of more ghosts!

AI Summary

This article examines how Just-in-Time authorization can help secure APIs as non-human consumers like AI agents, bots, and automated systems become more prevalent.

  • Non-human consumption introduces new security risks, including forgotten service accounts, excessive token lifetimes, and autonomous agents acting beyond their intended scope.
  • Traditional OAuth deployments rely on long-lived tokens, static scopes, and human-centric consent models that do not translate well to non-human identities.
  • Just-in-Time authorization applies zero standing privilege by granting short-lived, task-specific access only at the moment it is needed, reducing unnecessary exposure.
  • Modeling intent is more effective than modeling raw access for AI agents, and Rich Authorization Requests provide a structured way to express fine-grained intent during authorization.
  • Observability and shared security signals are critical to ensuring agents behave as expected, but widespread adoption across systems is required for these mechanisms to be effective.

Intended for API architects, security engineers, and platform teams designing identity, access control, and authorization strategies for AI-driven and non-human API consumers.