Token Design for a Better API Architecture

Little details like tokens can sometimes help structure complex API architectures. In this piece we’re going to look at different architectures, and ultimately see how a better way to design tokens can lead to a more performant result.

Consider the role of tokens within two facets of API design: access control and data stability. Every time we design an API we must manage access control and provide data stability. These two pillars can be broken down in the following way:

  • Access control
    • User identity (Authentication)
    • Permission (Authorization)
  • Data stability
    • Accurate documentation
    • Consistent response format (API versioning)
    • Service uptime and availability

The maxim here is to:

Design access control in a way that facilitates data stability.

With this in mind we can now review three archetypal API architectures.

1: The Happy Accident

This is the most common design, which is in fact no design at all. Basically if you have a website, you have an API! With just a simple cURL request and some scraping, you can get the data you need directly from the website.
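To make the “happy accident” concrete, here is a minimal scraping sketch using only Python’s standard library. The page markup and the `price` field are hypothetical; a real scraper would first fetch the HTML over the network with cURL or `urllib`.

```python
from html.parser import HTMLParser

# A hypothetical product page; hard-coded here so the sketch stays
# self-contained (a real scraper would fetch this over HTTP).
PAGE = '<html><body><span class="price">19.99</span></body></html>'

class PriceScraper(HTMLParser):
    """Collects the text of every <span class="price"> element."""
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self._in_price = True

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data)
            self._in_price = False

scraper = PriceScraper()
scraper.feed(PAGE)
print(scraper.prices)  # -> ['19.99']
```

Note how tightly the scraper is coupled to the page’s markup: rename that `class` attribute and the “API” silently returns nothing, which is exactly the stability problem discussed below.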

Here is a very in-depth article on how to get started with web scraping, with several traps and pitfalls explained in detail. Though this approach seems simple and quick, we must be very careful with unhappy accidents. In fact, there are two main problems with this “design”:

  1. Data format changes
  2. Working with private data (user & pass)

The first point is a problem of data stability: any change to the website’s implementation can suddenly break our API. The second point, though it can usually be worked around, makes the architecture much more complicated. When dealing with authentication (usually a username/email and password), simple web scraping won’t immediately work; we’ll have to deal with signup or login pages first (and hope there’s no CAPTCHA!).

2: The Front Desk

The front desk design represents a gateway model architecture. That is, all API requests must go through the gateway first, which then calls the actual service requested. The gateway usually manages authentication, rate limiting, and possibly other concerns as well (e.g. analytics, performance monitoring, etc.). This is sometimes dubbed Gateway-as-a-Service, and there are quite a few providers on the web. One interesting example is Tyk, which is written in Go and completely open source.
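As a rough illustration of the front-desk pattern, here is a toy gateway sketch in Python. The API keys, rate limit, and service names are invented for the example; a real gateway such as Tyk handles far more (TLS termination, quotas, analytics, and so on).

```python
RATE_LIMIT = 5  # requests allowed per key; illustrative value

class Gateway:
    """Minimal front desk: authenticates, rate-limits, then forwards."""
    def __init__(self, services, api_keys):
        self.services = services  # name -> backend handler
        self.api_keys = api_keys  # set of valid keys
        self.counts = {}          # per-key request counter

    def handle(self, api_key, service, request):
        if api_key not in self.api_keys:            # authentication
            return 401, "unauthorized"
        self.counts[api_key] = self.counts.get(api_key, 0) + 1
        if self.counts[api_key] > RATE_LIMIT:       # rate limiting
            return 429, "too many requests"
        if service not in self.services:
            return 404, "unknown service"
        return 200, self.services[service](request)  # forward to backend

gw = Gateway(services={"echo": lambda r: r}, api_keys={"key-1"})
print(gw.handle("key-1", "echo", "hi"))   # -> (200, 'hi')
print(gw.handle("bad-key", "echo", "hi")) # -> (401, 'unauthorized')
```

Every request pays the gateway’s toll before reaching a service, which is precisely the single-point-of-failure trade-off listed below.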

Like any architecture, using a gateway approach has both good and bad properties.

Pros:
– Easy to iterate on
– Easy to develop quickly
– Shortens go-to-market time

Cons:
– Single point of failure
– Potentially costly
– Rigid architectural constraints

Watch R. Kevin Nelson discuss this topic at a Nordic APIs event

3: Metropolis

A service-oriented architecture (SOA) is an architectural pattern in computer software design in which application components provide services to other components via a communications protocol, typically over a network. The principles of service-orientation are independent of any vendor, product or technology.


This third API architectural type is the Service-Oriented Architecture, or SOA. With SOA, each service handles its own authentication, rate limiting, API credentials, and so on. We can think of this architecture as giving each service its own front desk.

The idea behind Metropolis is that if each service can handle the gateway’s responsibilities, including its own authentication, then API requests can hit the services directly (both internally and externally).

Here’s where token design comes in handy to help the services handle their own authentication. But what exactly is authentication? When we validate a request we generally want to answer these two questions:

  1. Can this user perform this action?
  2. Can this app perform this action on this user’s behalf?
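Those two questions might translate to code as follows. This is a hedged sketch assuming a simple permission/scope model; the field names (`permissions`, `scopes`) and the example data are hypothetical.

```python
def user_can(user, action):
    """Question 1: can this user perform this action?"""
    return action in user["permissions"]

def app_can(grant, user, action):
    """Question 2: can this app perform this action on the user's behalf?

    The app needs both the delegated scope and a user who is
    allowed to perform the action in the first place.
    """
    return action in grant["scopes"] and user_can(user, action)

alice = {"name": "alice", "permissions": {"read", "write"}}
grant = {"app": "photo-app", "scopes": {"read"}}  # user granted read only

print(user_can(alice, "write"))        # -> True
print(app_can(grant, alice, "read"))   # -> True
print(app_can(grant, alice, "write"))  # -> False (scope not granted)
```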

But there’s more to it than that. Check out our eBook dedicated to API Security to understand the subtleties behind authentication, authorization, delegation, and federation.


Token Design

Let’s start by reviewing what a token is. Here’s the Oxford Dictionary definition:

Token – A thing serving as a visible or tangible representation of a fact, quality, feeling, etc.

In the realm of API design, a token is a simple string that represents a user we have previously validated. Now, there are two ways we can go about actually using a token. We’ll start with the old way.

The Old Way:

This would be a typical way to handle tokens for an API, and if you’ve worked with any kind of API you’re probably familiar with this design:

  1. Generate random session or token key
  2. Store payload data in a datastore
  3. Use session or token key to lookup the payload
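The old way might be sketched like this, with an in-memory dict standing in for the datastore (a real deployment would use something like Redis; that choice is an assumption of the sketch):

```python
import secrets

SESSIONS = {}  # stands in for a shared datastore such as Redis

def create_session(user_id):
    token = secrets.token_urlsafe(32)        # 1. generate a random token key
    SESSIONS[token] = {"user_id": user_id}   # 2. store the payload
    return token

def authenticate(token):
    return SESSIONS.get(token)               # 3. lookup on *every* request

token = create_session(42)
print(authenticate(token))          # -> {'user_id': 42}
print(authenticate("forged-token")) # -> None
```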

Although this design may sound familiar, there’s an obvious problem with it: we need to validate the token on each request, which means sending it and then performing a datastore lookup every time. This is essentially going to the front desk for each request. It works, but it’s probably not the most efficient approach. How can we remove this bottleneck and arrive at a more performant design?

With this in mind we can look at how token design and cryptography can lead us to a better architecture.

The Better Way:

We can use encryption to have a token design that avoids database lookups entirely. Let’s start with the complete process and then we’ll review it in detail:

  1. Minimize payload data. The payload should only contain minimal information, like user id, timestamps, and possibly other identification codes.
  2. Serialize the payload.
  3. Sign the payload. There are different ways to go about this, but it’s typically done with HMAC.
  4. Encrypt payload + signature combo
  5. Base64 encode the encryption result
  6. The encoded result is your token!
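The six steps above can be sketched as follows, using only Python’s standard library. The key values and payload field names are assumptions, and the HMAC-based counter-mode keystream is a toy stand-in for real encryption; a production service would use a vetted AEAD cipher such as AES-GCM from a proper crypto library.

```python
import base64
import hashlib
import hmac
import json
import os
import time

# Shared secrets between internal services; hard-coded for the sketch.
MAC_KEY = b"demo-mac-key"
ENC_KEY = b"demo-enc-key"

def _keystream(key, nonce, length):
    # Toy HMAC-SHA256 counter-mode keystream standing in for a real
    # cipher; production code should use an AEAD such as AES-GCM.
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hmac.new(key, nonce + counter.to_bytes(4, "big"),
                               hashlib.sha256).digest())
        counter += 1
    return b"".join(blocks)[:length]

def generate_token(user_id):
    # 1-2. minimal payload, serialized to JSON bytes
    payload = json.dumps({"uid": user_id, "iat": int(time.time())}).encode()
    # 3. sign the payload with HMAC
    sig = hmac.new(MAC_KEY, payload, hashlib.sha256).digest()
    # 4. encrypt the payload + signature combo
    plaintext = payload + sig
    nonce = os.urandom(16)
    stream = _keystream(ENC_KEY, nonce, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    # 5-6. base64-encode; the encoded result is the token
    return base64.urlsafe_b64encode(nonce + ciphertext).decode()

def parse_token(token):
    raw = base64.urlsafe_b64decode(token)
    nonce, ciphertext = raw[:16], raw[16:]
    stream = _keystream(ENC_KEY, nonce, len(ciphertext))
    plaintext = bytes(a ^ b for a, b in zip(ciphertext, stream))
    payload, sig = plaintext[:-32], plaintext[-32:]
    expected = hmac.new(MAC_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid token signature")
    return json.loads(payload)

token = generate_token(42)
print(parse_token(token)["uid"])  # -> 42
```

Note that `parse_token` never touches a database: everything it needs is inside the token itself, plus the two shared keys.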

For all this we basically just need a library with generate-token and parse-token methods. There are three obvious benefits to this design:

  • No database lookups
  • No storage requirements for most tokens
  • Constant-time token parsing

And there are only two requirements to be able to parse the tokens correctly:

  • Shared encryption and MAC keys between internal services
  • Properly implemented crypto libraries

Revocation & Logout

One small issue is that revocation and logout become problematic with this approach. Since we haven’t stored the token anywhere, how do we go about revoking its permission?

To answer that let’s get some details about the data on a typical payload:

  • User identification
    • ID, name, avatarURL
  • Token metadata
    • issuedAt, expires, sessionID

A token like this has roughly the following characteristics:
– Around 220 bytes in size
– The payload data is encrypted
– Two secret keys are required to parse or generate the tokens. This protects against both spoofing and data leakage.

Now back to logging out. For most APIs, revocation is an edge case rather than a common task, so it matters little that with this design the revocation process is a bit slower than usual. In brief, we’ll have to use a back-channel (e.g. a message queue) that informs all our services that a specific token has been revoked. Each service must then record in memory (or in a fast cache like Memcached or Redis) that the token has been revoked. It’s in fact much easier and faster to store only the revoked tokens rather than every single one. This way, instead of checking whether a token is valid, we just check whether its payload has been de-authorized.

The process for token permission revocation and logout is therefore:

  1. Propagate the de-authorization to nodes through a back-channel
  2. Wait for propagation to finish before responding
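The revocation flow can be sketched as follows. The classes stand in for real services and a real message queue (e.g. Redis pub/sub); those infrastructure choices, and the `sessionID` field name, are assumptions of this illustration.

```python
class Service:
    """A node that parses tokens itself and keeps a small revocation set."""
    def __init__(self):
        self.revoked = set()  # in memory; could be Memcached/Redis

    def is_authorized(self, payload):
        # Only de-authorized sessions are stored, so the common case
        # (a valid token) costs one set-membership check.
        return payload["sessionID"] not in self.revoked

class BackChannel:
    """Stand-in for a message queue broadcasting revocations."""
    def __init__(self, services):
        self.services = services

    def revoke(self, session_id):
        for service in self.services:      # 1. propagate to all nodes
            service.revoked.add(session_id)
        return True                        # 2. respond once propagation is done

services = [Service(), Service()]
bus = BackChannel(services)
payload = {"uid": 42, "sessionID": "abc"}

print(all(s.is_authorized(payload) for s in services))  # -> True
bus.revoke("abc")
print(any(s.is_authorized(payload) for s in services))  # -> False
```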


Carefully designing our approach to tokens can have significant effects on our overall API architecture. We started by seeing how any website can become an API through simple web scraping (the happy accident). We then reviewed how a front-desk architecture works as a gateway, along with its pros and cons. Finally, we moved to a true Service-Oriented Architecture, and saw how a different way of designing tokens using encryption techniques can lead to a design completely free of database lookups.