Introducing The API Security Maturity Model
Kristopher Sandoval
February 25, 2020

When a user utilizes a service, that user must first attest that they are who they say they are. In most use cases, they must then confirm that they can do what they're trying to do. For many users, this process is relatively opaque and might seem to happen magically behind the scenes. Implementing such a system, however, has produced an ever-changing and evolving security landscape that requires specific modes of authentication and authorization.

Thus, the question becomes apparent: how can we encapsulate information about the user, their rights, or their origin in a useful way? Wouldn't it be great if an API could know who you are, whether to trust you, and whether you can do what you claim to be able to do?

These ideas underpin the API Security Maturity Model, a new way to gauge the security of your API. Today, we're going to explore each layer of this model to see how and why security experts advocate for a better identity-driven API platform.

Jacob Ideskog, VP at Curity, unveiled the API Security Maturity Model at the 2019 Platform Summit. Watch his full presentation here.

Review: Richardson Maturity Model

Ideskog's idea arose as a corollary to the Richardson Maturity Model. As such, before we dive into the API Security Maturity Model, let's first take a look at its inspiration.

Leonard Richardson created the Richardson Maturity Model to reflect and describe the reality of the RESTful API space. An API can be REST-like while not being truly RESTful; accordingly, REST compliance is not binary, but rather a series of levels of increasing compliance.

Level 0: The Richardson Model has four levels, though the first is considered "level zero," as it reflects absolutely no REST compliance.
This level of maturity represents an API that doesn't make use of hypermedia URIs and often has a single URI and a single method for calling it.

Level 1: As things get more complicated, we enter level one, where we start to see multiple URIs with simple interactions. The critical point to remember here is that these multiple URIs are still used with simple verbs.

Level 2: Level two is a significant evolution in that it boasts both multiple URIs and multiple ways of interacting with that data. APIs at this level are far more complex than at previous levels because they use HTTP verbs to expose resources, allowing complex manipulation.

Level 3: APIs at the highest level of maturity are truly "REST" in that they employ Hypermedia as the Engine of Application State (HATEOAS). This results in a highly complex and powerful API, boasting multiple URIs, the verbs to interact with them, and the power of hypermedia.

Read our coverage on the Richardson Maturity Model here.

The API Security Maturity Model

The interesting thing about the Richardson Maturity Model is that it's not merely a set of different states of compliance: it represents cumulative upgrades from level to level, where each new step builds on previous gains. In the same way, the API Security Maturity Model describes API security in ever-increasing levels of security, complexity, and efficiency, moving from the lowest maturity to the highest. Consider it a playbook for progressing toward a secure deployment.

With this in mind, let's take a look at the levels of the API Security Maturity Model, starting with the lowest maturity and moving toward the highest.

Level 0 – API Keys and Basic Authentication

This level is the starting point for most security implementations: APIs that rely on API keys and Basic Authentication. This stage is quite basic.
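In practice, Level 0 access tends to take one of two shapes: a static key in a custom header, or Basic Authentication credentials. The following is a minimal sketch of both patterns; the endpoint, key, and credentials are hypothetical, chosen purely for illustration:

```python
import base64

# Hypothetical credentials for illustration only.
API_KEY = "sk_live_abc123"
USER, PASSWORD = "alice", "s3cret"

# Pattern 1: an API key passed as a header. Possession of the key
# is the only check the server performs.
key_headers = {"X-API-Key": API_KEY}

# Pattern 2: HTTP Basic Authentication. The username and password are
# base64-encoded (encoded, not encrypted, so TLS is essential).
credentials = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
basic_headers = {"Authorization": f"Basic {credentials}"}

print(basic_headers["Authorization"])  # Basic YWxpY2U6czNjcmV0
```

Note that in both patterns the server learns only that the caller possesses a secret, which is exactly the weakness the next paragraphs describe.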
Authentication at this level is based upon the notion that whoever holds the key must be its rightful owner, and thus their activity is valid. This "authentication" is then carried forward to other endpoints and APIs in a trusted network, with the vital data carried along that path.

As we've previously described, API keys ≠ security. This type of protection is fundamentally insecure. All an API key does is confirm that whoever is holding it can do what that key allows. It does nothing to ensure that the person holding the key is meant to have it, is using it appropriately, or even that the key was legitimately issued and is still valid. There is also the concern that the user isn't bound to the requested resource: the key could come from almost anywhere, as long as it's trusted, and that chain of authentication could, in theory, extend anywhere within the network of trust.

There's an even more severe problem with this level of maturity: it only provides authentication. Authentication says that you are who you say you are (or, at the very least, that you hold something which implies you are who you claim to be). What it does not do is prove that you have the right to make that claim and to access the resources that person may access. That would be authorization, which is fundamentally different from authentication. To have authorization, we need a more complex system.

Understand the differences between Authentication, Authorization, Federation, and Delegation.

Level 1 – Token-Based Authentication

Token-based authentication is a more complex system and represents a different level of security maturity. Tokens are used here to establish that whoever holds the token is who they say they are. In the wild, this is often built into a sort of quasi-authorization system, as holding a token can be treated as proof of both who the person is and what their intent in holding that token is.
Tokens are like an identification card. A card may not say you can do something, but because you hold it, some infer that you are trustworthy enough: after all, you have identified yourself through a secure means, so you must be reliable!

A good practical way of thinking about this level of maturity is to frame it in terms of a realistic workflow. Let's say you're a news publisher. You have an internal organization of writers, editors, and so on who write articles and work on reports. These authors log in to their workstations and push their content to an application, which then presents the content to the external viewership.

In this case, you have several tokens working in concert. The authors use their tokens to push content forward and to attribute that content to themselves. The readers likewise have their own tokens, which allow them to access the application and leave comments that are, in turn, attributed to their profiles. If they are premium subscribers, they might even have different tokens that allow them different access patterns through a quasi-authentication scheme.

This level has its problems, of course. Authentication tokens, even though they are often used as a sort of authorization scheme in the wild, are meant only for authentication. Because of that, the quasi-authorization they provide comes from both a supposition of intent ("wow, this person has this token, they must be trustworthy enough to do this thing!") and a complex mix of conditional statements and fuzzy logic.

It should also be noted that using authentication tokens as a form of authorization is often highly insecure due to the way tokens get distributed. Machines can get tokens very easily, and if a computer can do it, any malicious actor can do it.
When tokens are easy to get, your authorization scheme depends almost entirely on a system that is spoofable, corruptible, and, frankly, being used for the exact opposite of its intended purpose.

Also read: Assisted Token Flow: The Answer to OAuth Integration in Single Page Applications

Level 2 – Token-Based Authorization

Token-based authorization is a bit like our previous level, but the focus shifts to authorization. At this level, we're no longer answering the question "who are you?" but rather "what can you do?". An excellent way to think of this is to imagine a vast castle with secured gates. To enter the castle, you can provide your identity; in other words, you can authenticate that you are who you say you are. While that might get you through the gates, how does anyone know you have the right to sell goods? To sell products, you might need a seal from the king that says you are allowed to engage in commerce; in other words, you'd need authorization stating that you are allowed to do something. Your first token said who you are, and your second token said what you can do.

To take our example of authors and readers to another level, we can look at the ability to consume versus the ability to publish. While authentication tokens allowed us to attribute content to a specific user, we also need a mechanism to ensure that only authors can post content. This is where authorization comes in: when authors push their content to the application for consumption, the system needs to be able to ensure that the content came from a trusted source and was authored by someone with the right to upload it.

This access can be controlled quite granularly using a solution like OAuth: scopes can govern permissions across a token's lifespan and purpose, expiry can be set to ensure that tokens "age out" of use, and so on.
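An authorization check of this kind can be sketched as follows. The token payload below is hypothetical, but the field names (`scope` as a space-delimited list, `exp` as an expiry timestamp) follow common OAuth and JWT conventions:

```python
import time

def authorize(token: dict, required_scope: str) -> bool:
    """Check an OAuth-style access token payload for a scope and expiry.

    `token` stands in for an already-validated access token; a real
    resource server would first verify its signature or introspect it.
    """
    if token.get("exp", 0) < time.time():
        return False  # the token has "aged out" of use
    granted = token.get("scope", "").split()
    return required_scope in granted

access_token = {
    "sub": "author-42",
    "scope": "article:read article:publish",
    "exp": time.time() + 3600,  # valid for one more hour
}

print(authorize(access_token, "article:publish"))  # True
print(authorize(access_token, "admin:delete"))     # False
```

Unlike the authentication check at Level 1, the answer here varies per action: the same caller can be permitted to publish articles yet denied administrative operations.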
In this way, while authentication is very much a "yes or no" proposition (you either are who you say you are, or you're not), authorization can be a much more variable sliding scale of applicability.

While this might seem like a perfect fix for the security concerns of the previous levels, there are a few significant reasons why it is still not enough. First and foremost, we must ask ourselves one question: whom do we trust? These systems are designed to be authoritative, and as such, the token systems behind them must be impervious and trustworthy for us to consider their tokens as evidence.

Additionally, we must ask how data is handled in transit. These tokens get passed forward, and as they do, they collect more and more data. Accordingly, we must ask what data is being added, and by whom. If we can't know for sure that the information we're handling is the same as when it was issued, we lose a significant amount of trust in the data itself.

Read more on the Curity blog: The API Security Maturity Model

Level 3 – Centralized Trust Using Claims

Claims are a significant missing piece throughout all of our security layers, principally because we have been trying to add security in the wrong place. It's one thing for a user to claim to be who they are, or to claim to have certain rights, but how do we trust that what they say is true? More importantly, how do we trust whoever issued the evidence they are presenting?

That's the fundamental question here: whom do we trust? Do we trust the caller? Do we trust the API gateway? How about the token issuer? Trust secures us, but it also opens us up to possible attacks. What can we do to fix this?

Claims are the fix, because they don't simply tell you something about the subject; they give you context and the ability to verify that information.
A claim can reference two core types of attributes: Context Attributes tell us about the situation when a token was issued, and Subject Attributes tell us about the entity that received the token. To verify that these attributes are true, we trust an Asserting Party. As an example, let's say we want a token proving that Nordic APIs has published a post. We can look at the attributes:

Attribute:
publisher: Nordic_APIs_Author1
publish_date: 12/1/2019

To instill better security, we can express this information in a claims token as such:

Claim:
Nordic APIs says: The publisher is Nordic_APIs_Author1.

In this way, we not only state the information we need to state, we also specify who is attesting that the information is true. In practice, this workflow relies heavily on signing and verification. When a Requesting Party requests a token from the Issuing Authority, that Authority returns the requested information signed with its Private Key. When the Requesting Party wants to verify this information, it can simply use the Public Key to confirm that the token was signed before it was handed off.

More to the point, encoding and encapsulating data in this way also allows us to fold each layer's functionality into a single source with contextualized information. While the token is granted significant trust due to its signed nature, the contextual metadata (and the attestations from the Issuing Authority) lets us know who requested the information and what they are allowed to see.

Claims also solve the concern of data being added in transit.
Because the encoded information is signed and controlled by the Issuing Authority, nothing can be added in transit without the Issuing Authority's involvement; in this way, the source of the information can be directly controlled.

Also read: Exploring OAuth.tools, The World's First OAuth Playground

Conclusion

Security is not a "one size fits all" equation, but the fundamental requirements of the system are nonetheless quite universal. The need to prove that people are who they say they are, the need to control access, and so on are all fundamental to the modern web and the systems that drive it. Accordingly, choosing the correct approach for your given security flow is paramount to successful communication.

What do you think about this approach? Do you find claims to be the promised security solution we've been waiting for? Let us know in the comments below!