Top OAuth Client Vulnerabilities

This article highlights some common OAuth vulnerabilities found in web and mobile apps in 2021, along with some mitigations to improve security.

Implementing web and mobile clients can be challenging, since there are quite a few factors to deal with in addition to security work, primarily around user experience and reliability:

  • Redirecting the system browser to sign the user in
  • Dealing with additional endpoints, error conditions, and token expiry
  • Dealing with navigation, page reloads, and multi-tab browsing
  • Supporting multiple authentication options

Some solutions to usability problems can make security worse, whereas the mitigations in this article improve security without sacrificing usability or reliability.

Web Clients

Web clients are often the hardest to secure perfectly due to having to run JavaScript code in a browser, which is a limited (and sometimes hostile) environment for executing code.

1. Insecure Redirects

The parameters you send in an OpenID Connect authorization redirect are directly related to security, so use the latest recommendations to prevent possible vulnerabilities.

The standard parameters should use Proof Key for Code Exchange (PKCE), as recognized by the code_challenge field, and also should include a random unguessable state parameter:

client_id=my_client
redirect_uri=https://www.example.com
response_type=code
scope=openid
state=dMl9MPiqtX_LHL9HddqFqxRIRS4FUnbPTzB6cVszcZ4
code_challenge=9a7TInbDR662OL0zPZ6kgNDNRFHisi0r1G6qaLczpfw
code_challenge_method=S256

A sound library will supply these values correctly, though in high-security apps, you may be concerned about a Man in the Browser (MITB) Attack tampering with these values.

An emerging standard for confidential clients is to use JAR / PAR / JARM so that verifiable JWTs (or references) are used in requests and responses, as a protection against tampering.

2. Response Interception

OAuth responses are returned on a Redirect URI, and should always be received on an HTTPS URL that references an owned domain name:

https://www.example.com?code=cdjn238023r&state=h802efh02r

During development, it might be convenient to configure wildcard URLs so that multiple developers can log in using their local computer’s hostname:

*
https://*.mydomain.com

Instead, you should configure an absolute URL for all deployed systems and avoid Open Redirectors. These could allow an attacker to receive the response, then potentially swap the authorization code for tokens.

3. Cross Site Request Forgery (CSRF)

CSRF occurs when an attacker tricks a browser into performing an unwanted action on behalf of an authenticated user. An OpenID Connect web client must ensure that an attacker who somehow obtains an authorization code cannot inject it into a victim's session like this:

https://www.example.com?code=cwe89n8c20e30e8ytfr&state=h802efh02efgwerfgr

The standard solution is to validate received ‘state’ parameters:

  • The redirect sends an unguessable value, and the web app stores it.
  • The response has to contain the same value, or the web app rejects it.
  • For further details, see the IETF docs on OAuth Cross-Site Request Forgery.
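These checks can be sketched as follows (illustrative function names; the stored state would have been written to session storage before the redirect):

```javascript
// Sketch: validate the state before accepting the authorization code
function validateAuthorizationResponse(queryString, storedState) {
    const params = new URLSearchParams(queryString);
    const state = params.get('state');

    // Reject any response whose state does not match the stored value
    if (!state || state !== storedState) {
        throw new Error('Invalid state parameter: possible CSRF attack');
    }
    return params.get('code');
}
```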

4. Leaked Tokens in the Browser History

Avoid solutions that return tokens directly in browser response URLs, such as the deprecated Implicit Flow. These may include tokens in the browser history and have many security risks:

https://www.example.com#token=cdjn238023r&state=h802efh02r

5. Stolen Authorization Codes

If an attacker somehow steals an authorization code, they must not be able to send it to the Authorization Server’s token endpoint to receive tokens.

Using Proof Key for Code Exchange ensures this since the code_verifier field is unguessable and could not be supplied by an attacker. You should also ensure that the web client uses a client secret, which is usually a simple string value, though Mutual TLS is also possible:

grant_type: authorization_code
client_id: my_client
client_secret: cer23u0n4iuhyte
code: cdjn238023r
redirect_uri: https://www.example.com
code_verifier: 3GI6Tlm93c8Am0TcZkb4BkZh2eCycIuG4OmoWSUl-h0
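A confidential client's backend might build and send this grant along these lines (a sketch; the token endpoint URL is illustrative, and the secret should come from secure configuration rather than code):

```javascript
// Sketch: build the authorization code grant as form parameters
function buildTokenRequest(code, codeVerifier, clientSecret) {
    return new URLSearchParams({
        grant_type: 'authorization_code',
        client_id: 'my_client',
        client_secret: clientSecret,
        code,
        redirect_uri: 'https://www.example.com',
        code_verifier: codeVerifier,
    });
}

// Sketch: POST the grant to the token endpoint and return the token response
async function exchangeCodeForTokens(code, codeVerifier, clientSecret) {
    const response = await fetch('https://login.example.com/oauth/token', {
        method: 'POST',
        headers: { 'content-type': 'application/x-www-form-urlencoded' },
        body: buildTokenRequest(code, codeVerifier, clientSecret),
    });
    if (!response.ok) {
        throw new Error(`Token request failed: ${response.status}`);
    }
    return response.json();
}
```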

Once the code is exchanged for tokens, a web client will either use access tokens in the browser to call APIs or use secure cookies to call APIs. Of these options, the former has the larger attack surface.

6. Cross Site Scripting

After user authentication, if your Web UI has Cross Site Scripting (XSS) vulnerabilities, then any malicious code that is allowed to execute can potentially call your APIs, regardless of whether you are using access tokens or secure cookies:

// Malicious code that runs in the app's origin can attach the CSRF
// token and cookie credentials just like genuine code
const options = {
    method: 'POST',
    credentials: 'include',
    headers: {
        'x-csrf': readCsrfToken()
    }
};
const response = await fetch(apiUrl, options);

The mitigation is to take XSS risks very seriously and follow the OWASP guidance to prevent it as part of your security development lifecycle.

7. Stolen API Credentials

Calling APIs with access tokens is done by JavaScript code, which opens up more attack vectors for malicious code. Therefore OAuth for Browser-Based Apps recommends the use of a Backend for Frontend (BFF) to manage tokens:

  • The BFF can manage a client secret when getting tokens.
  • An HTTP only / SameSite=strict cookie is used as an API credential from the browser.
  • The secure cookie can contain tokens if it is strongly encrypted.
  • The secure cookies are ‘first party’ and work well in usability terms.
  • JavaScript or the browser has no way to access the tokens directly.
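The resulting cookie properties can be sketched as a Set-Cookie header (the cookie name and path are illustrative, and the value is assumed to be the already encrypted token):

```javascript
// Sketch: the Set-Cookie header a BFF might issue after getting tokens
function buildAuthCookie(name, encryptedToken) {
    return [
        `${name}=${encryptedToken}`,
        'HttpOnly',          // not readable by JavaScript
        'Secure',            // only sent over HTTPS
        'SameSite=Strict',   // not sent on cross-site requests
        'Path=/api',         // only sent to API routes
    ].join('; ');
}
```

The browser then attaches this first party cookie automatically when the SPA calls its APIs, and script code never sees the token.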

BFF is often misunderstood and can be implemented fairly easily by plugging in a stateless BFF API developed by security experts. When done well, this will improve and simplify the web architecture.

8. Insecure Token Storage

If implementing OAuth solely in the browser, via JavaScript code in a Single Page Application (SPA), it is a challenge to deal with the following security concerns properly:

  • Storing access tokens in local storage is considered insecure.
  • Using refresh tokens in the browser is considered insecure.

If using tokens in the browser, then follow these guidelines, and understand that no secure storage is available:

  • Do not return refresh tokens to the browser.
  • Keep the access token short-lived, perhaps with a lifetime of 15 minutes.
  • Aim to store the token only in memory.
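In-memory storage can be implemented with a private closure along these lines (a sketch; names are illustrative):

```javascript
// Sketch: hold the access token in a private closure, so that it is not
// reachable via web storage or global variables
function createTokenStore() {
    let accessToken = null;
    return {
        set(token) { accessToken = token; },
        get() { return accessToken; },
        clear() { accessToken = null; },
    };
}

const tokenStore = createTokenStore();
```

The token is then lost on a page reload or in a new browser tab, so the app must also be able to renew it silently.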

9. Token Information Disclosure

If you return an ID token to the browser, aim to keep it confidential and avoid including Personally Identifiable Information (PII) such as user names and email addresses.

Consider using minimal ID tokens and getting user information via an API request instead, which is the option used in a Backend for Frontend solution.

10. Credential Interception

If tokens are stored in browser memory within a private closure, then malicious code cannot read them directly. It is still possible that they can be intercepted, however, as in the following ‘monkey patching’ technique to steal an access token while it is being sent:

// Replace the real setRequestHeader so that every outgoing header,
// including the access token, also passes through the attacker's code
const original = XMLHttpRequest.prototype.setRequestHeader;
XMLHttpRequest.prototype.setRequestHeader = function (key, value) {
    maliciousCode(key, value);
    original.call(this, key, value);
};

One mitigation to this problem is to store tokens isolated from the rest of the app in a Web Worker, in which case this particular attack does not work. Again though, the preferred option is to use a Backend for Frontend since malicious code cannot intercept HTTP only cookies.

11. Credential Export

If a token is intercepted, then it can potentially be sent to a malicious site for a more concerted attack on your APIs. This can be mitigated by using recommended security headers, including a Content Security Policy that prevents malicious code from sending data to untrusted domains:

content-security-policy:
    default-src 'none';
    connect-src 'self' https://api.example.com;
    img-src 'self';
    script-src 'self';
    style-src 'self';

Although this is recommended, it is possible that the user or a browser plugin could disable the CSP. Again, a Backend for Frontend solution is preferred since there is no way for malicious JavaScript code to get a secure cookie to send out of the browser.

12. Session Hijacking

When implementing an OAuth solution solely in JavaScript, actions such as a user reloading the page, or opening a new browser tab, are tricky to manage.

The traditional method of dealing with this in an SPA was to perform a browser redirect on a hidden iframe that sends the SSO cookie to get a new access token. Recent browser restrictions may prevent this from working, since the SSO cookie is considered third party and dropped.

Malicious code can potentially abuse this technique by running the code for a login button on a hidden iframe. This could enable messages to be sent automatically, such as an authorization redirect to return an authorization code, or an authorization code grant to swap the code for tokens.

This is a form of Session Hijacking, and you should ensure that this does not return tokens to an attacker. In a Backend for Frontend solution, these actions would only rewrite secure cookies, and an attacker could not exploit the login result.

By default, your Authorization Server should reject calls from iframes with an X-Frame-Options: DENY response header. It is good practice to leave this configuration in place unless there is a good reason to change it.

Mobile Clients

Mobile clients have their own security challenges, due to the risk of devices being stolen and the attack vectors of the mobile operating system.

13. Password Disclosure

Some mobile apps use the deprecated Resource Owner Password Grant to authenticate users and get tokens via a web view. This is considered poor both in terms of security and usability:

  • It can be dangerous for the app to gain access to an end user’s password, especially since the same password may grant access to other resources for the user.
  • The Client ID and Client Secret used with the password grant can be reverse-engineered.
  • OIDC Providers are recommended to block logins from web views by checking the user agent string.
  • Once an app uses the password grant, it is stuck with password-based logins as the only option, and those with better usability, such as WebAuthn, cannot be integrated.

Instead, use a standard option such as AppAuth, whose security is designed explicitly for the mobile case and which provides the best security and extensibility options.

14. Stolen Redirect Scheme

The easiest AppAuth solution to implement is for the app to register a Custom Scheme URL of the following form to receive the authorization response:

com.mycompany.myapp:/callback

Mobile operating systems do not prevent a malicious app from registering the same scheme, which could result in that app receiving the authorization code, then swapping it for tokens. The standard protection against this risk is again to use Proof Key for Code Exchange.

15. Mobile App Impersonation

Use of PKCE does not prevent a malicious app from reverse engineering a genuine app or its HTTP requests to find the Client ID and Redirect URI, then triggering its own complete flow:

  • Authorization redirect, returning an authorization code
  • Authorization code grant, to swap the code for tokens

For this reason, financial-grade recommendations for mobile apps recommend the use of HTTPS Redirect URIs. Security features of the underlying operating system can then be used to guarantee that only genuine ‘attested’ apps can receive OAuth redirect responses:

  • At installation time, the mobile app’s digital signature is verified against an online public key that can only be published by the domain owner.
  • If verification fails, the app's scheme registration will fail, as it would for a malicious app trying to use the HTTPS scheme.

For an iOS app, this requires extra infrastructure to support Universal Links, whereas Android apps use App Links in an equivalent manner.

16. Lack of Client Proof

As for web apps, mobile apps are public clients by default, which both limits the security options and increases the scope for impersonation by malicious parties:

  • At the time of authentication, the Authorization Server cannot verify the client’s identity.

With AppAuth, it is recommended to also use Dynamic Client Registration (DCR) when a user first runs the app, which turns the app into a confidential client:

  • Each instance of the mobile app then gets its own Client ID and Client Secret.
  • The unique secret must be used by the app whenever it retrieves tokens.
  • Invalid authentication patterns are then easier to spot from audit logs.
  • Improved security options such as PAR / JAR / JARM are also possible.

It is also possible to prove an app’s identity before allowing it to attempt to authenticate users. This is possible when a Hypermedia Authentication API approach is used.

Conclusion

Implementing secure web and mobile clients involves some subtle risks, though OAuth has many good mitigations that have been thought through by the experts. For further resources, check out the OAuth Best Current Practices, and Top OAuth API Vulnerabilities.