Beware OAuth Misconfigurations to Protect Your Web APIs

When we discuss security failures, we often think of them as a single isolated problem — after all, a lock either works or is broken, right? The simple fact is that’s not true — just as a bad lock is only one aspect of securing a physical environment, a cybersecurity posture depends on many independent factors working in tandem.

With web APIs, one common source of such a break is OAuth misconfiguration, especially in the form of code and flow misuse. Below, we’ll discuss OAuth misconfigurations, looking at a practical example and explaining how to avoid this potential security flaw.

The Nature of Security

When discussing security in a technical sense, it’s essential to understand that solving one risk doesn’t guarantee holistic protection. Security for live services should provide the best possible protection based on our collective knowledge, tools, and assumptions.

Imagine we’re creating a service that requires OAuth for external account connections to our API. Ideally, we’d have a perfectly secure system: physically secure servers, flawless code, and a clear OAuth flow. But reality differs, and many variables affect actual security. For example, a server’s location may make it more susceptible to cyber warfare, and the need for encryption poses its own questions: which algorithm, how many iterations, and how much security is necessary? Human error is another factor: systems are created by people, and even minor errors can compromise security.

This brings us to the issue of OAuth misconfigurations. Fixing an insecure system requires detailed attention to each component, as a single security weakness or misconfiguration can have significant ripple effects on web security. Below, we’ll look at a real-world security posture to showcase the potential of OAuth misconfiguration failures.

A Real-Life Case

One of the more high-profile examples in recent years involved a massively popular travel website, which attracts users with simple, easy-to-understand flows for booking trips. That flow, however, contained three distinct security issues that API security researchers at Salt Labs chained together to fully take over an account.

How were they able to do this, and what lessons in configuration can we learn from this case?

The First Weakness: No Unique Path

The first big misconfiguration was the failure to enforce a unique path in the site’s Facebook connection flow. The site allows users to click a “Log in with Facebook” button to connect their account to Facebook, collecting their trips and details in a single location. By probing how the site connected with Facebook, Salt Labs discovered that they could arbitrarily change the redirect_uri portion of the OAuth link in the Facebook exchange, sending the authorization code to a different path than was intended.

In essence, this issue derives from the fact that any path, rather than a predetermined path, was acceptable to the system. Without any sort of validation or rules-based filtering, the flow simply accepted the path. This would be bad on its own, but without a way to do arbitrary redirection, the threat was, in theory, limited.
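The fix for this class of issue is to accept only exact, pre-registered redirect URIs. Here is a minimal sketch of such a check on an authorization server; the client ID and registered URI set are illustrative, not taken from the case above:

```python
# Sketch: exact-match redirect_uri validation on the authorization server.
# The client IDs and registered URIs below are illustrative assumptions.
from urllib.parse import urlsplit

REGISTERED_REDIRECT_URIS = {
    "client-123": {"https://app.example.com/oauth/callback"},
}

def is_valid_redirect_uri(client_id: str, redirect_uri: str) -> bool:
    """Accept only a pre-registered redirect_uri -- no prefix or path
    matching, which is what left the flow above open to abuse."""
    allowed = REGISTERED_REDIRECT_URIS.get(client_id, set())
    # Reject anything that is not byte-for-byte identical to a registered URI.
    if redirect_uri not in allowed:
        return False
    # Defense in depth: never allow non-HTTPS callbacks.
    return urlsplit(redirect_uri).scheme == "https"
```

Exact string comparison is deliberate: prefix or wildcard matching reintroduces the “any path is acceptable” behavior that made this attack possible.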

The Second Weakness: Arbitrary Redirection

Upon further inspection, Salt Labs discovered that by using the “add a display name” URL in the Account Dashboard as a base for an attack, an arbitrary path could be added to the state variable. This variable is delivered as a base64-encoded JSON string, meaning an attacker could take a domain they control (one posing as a valid domain in the flow) and encode it into the state value of the original link.
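To illustrate, here is a minimal sketch of decoding such a base64 JSON state value and why any redirect target inside it must be validated against an allowlist. The field name (result_uri) and the allowlisted host are hypothetical, not taken from the actual site:

```python
# Sketch: a redirect target smuggled inside a base64-encoded JSON "state"
# value, and a check that refuses hosts outside an allowlist.
# The "result_uri" field name and allowed host are illustrative assumptions.
import base64
import json
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"www.example-travel-site.com"}  # illustrative allowlist

def safe_state_redirect(state_b64: str):
    """Decode the state blob; return its redirect target only if the
    host is allowlisted, otherwise return None and refuse to redirect."""
    payload = json.loads(base64.urlsafe_b64decode(state_b64))
    target = payload.get("result_uri", "")
    if urlsplit(target).netloc in ALLOWED_HOSTS:
        return target
    return None

# An attacker-supplied state pointing at their own domain is rejected:
evil = base64.urlsafe_b64encode(
    json.dumps({"result_uri": "https://attacker.example/collect"}).encode()
).decode()
```

Without the allowlist check, the decoded result_uri would be followed blindly, which is exactly the weakness described above.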

By doing this and providing the new link to the user, a victim would see a link that appeared perfectly valid and used the expected format. Because of how the system works internally, no particular flags would be raised, and the attack could, in theory, be used to fully take over an account. To do this, however, the attacker would need to obtain the code in a way that exposes the data response to them. Unfortunately for the attacker, the redirection only feeds hash fragments to the browser, so there’s no way to capture this data simply through the user clicking the link. To do that, they’d have to be able to change the response type.

Of course, since the attacker owns the link being sent to the user, and Facebook allows the response type to be changed from “code” to “code,token”, this is easily done. However, when Salt Labs deployed this attack, it was stopped in its tracks. Facebook had a trick up its sleeve to deal with this kind of attack: it required that the resultant redirect_uri match both the end state and the initial OAuth event. Simple enough, right? Attack thwarted!
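The consistency check that stopped this attack can be sketched in a few lines: the token exchange must present the same redirect_uri that was bound to the code when it was issued. The in-memory store and function names below are illustrative assumptions, not Facebook’s actual implementation:

```python
# Sketch: bind each authorization code to the redirect_uri used when it
# was issued, and refuse the token exchange on any mismatch.
# The in-memory dict and function names are illustrative assumptions.

issued_codes = {}  # code -> redirect_uri used in the authorization request

def issue_code(code: str, redirect_uri: str) -> None:
    """Record the redirect_uri that started this authorization event."""
    issued_codes[code] = redirect_uri

def exchange_code(code: str, redirect_uri: str) -> bool:
    """Allow the exchange only if redirect_uri matches the one bound to
    the code at issuance. The code is burned (single use) either way."""
    original = issued_codes.pop(code, None)
    return original is not None and original == redirect_uri
```

Because the code is consumed on the first attempt, a mismatched exchange also invalidates it, so an attacker cannot retry with a corrected redirect_uri.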


The Third Weakness: Mobile Application Implementation

The real crux of this weakness came to light when Salt Labs examined how OAuth was configured in the mobile application context. The big difference from the web experience was that a second step was performed with the code: a reroute occurred in which the code would pass from Chrome to the site, then to the mobile app, then back to the site. In doing this, a new redirect_uri is created using the result_uri of the mobile code flow.

This simple misconfiguration (inheriting the redirect_uri in one step while depending on origination in another) blew the gates wide open. By controlling the result_uri through the code Salt Labs had already secured, any arbitrary entry could be sent to a user and validated both with the site and with Facebook. From that point, the request is approved, and the account is taken over entirely.
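The contrast between the flawed derivation and a safe one fits in a few lines. This sketch uses hypothetical names (result_uri, the registered URI constant); the flawed version inherits whatever the request supplies, while the safe version ignores request input entirely:

```python
# Sketch: deriving redirect_uri from a request parameter vs. using the
# registered value. All names here are illustrative assumptions.

REGISTERED_MOBILE_REDIRECT = "https://mobile.example-travel-site.com/oauth/return"

def build_redirect_uri_unsafe(params: dict) -> str:
    # Flawed: inherits whatever result_uri the (possibly attacker-
    # controlled) request supplies, recreating the misconfiguration above.
    return params.get("result_uri", REGISTERED_MOBILE_REDIRECT)

def build_redirect_uri_safe(params: dict) -> str:
    # Safe: request input never influences the redirect target;
    # always use the pre-registered value.
    return REGISTERED_MOBILE_REDIRECT
```

The safe version discards a request parameter that looks useful, and that is the point: redirect targets should come from registration, never from anything an attacker can place in a link.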

Lessons Learned From Salt Labs

This real-life case study strongly validates two points about OAuth configuration security and security postures in general.

Firstly, the misconfiguration here is so minor that most would miss it. Nevertheless, depending on the origin for one step and then allowing arbitrary changes to that flow, even with the best intentions, uses a part of OAuth in a way it was not designed to be used.

Developers should look at their implementation and ensure that when they use OAuth, they use it correctly. OAuth is often the “keys” to the kingdom, and allowing one system to rule over the entirety of the process with a single line of code fundamentally breaks any secure posture that could be gained.

Secondly, this case shows that any posture, no matter how well thought out, can fall victim to simple human error. Systems must be tested as if you were an attacker. There’s a reason ethical hackers exist, and penetration testing of the mobile application should have exposed this weakness early on. The fact that it was not discovered indicates a general need for further testing of the base assumptions in the codebase.

Developers should do all they can to test their systems in less-than-ideal circumstances. No security system will be perfect, but more to the point, no end-user environment will always be perfect. Assuming that mobile codes will stay confined to mobile flows, that network packets won’t get dropped (or sniffed, for that matter), and so on leads to assumptions that result in dramatic insecurity.


Security is hard to get right, and more often than not, it’s a numbers game. Organizations have to be extraordinarily lucky to never be attacked, and if they are attacked, they had better hope their configurations are set up correctly. Spending some time going through OAuth flows and configurations and testing the perceived reality against the actual reality is paramount to any secure posture. It should be a key step in testing — and validating — any secure approach.