Leveraging the Encrypted Token Pattern

Overview

Download the code on GitHub

CSRF attacks leverage a user’s authenticated state to invoke malicious requests, generally with the purpose of manipulating data. There are two established approaches designed to prevent such attacks:

  1. Synchronizer Token Pattern
  2. Double-Submit Cookie Pattern

For more information on these, please visit the following resource:

https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet

Both approaches succeed in preventing CSRF attacks, while introducing architectural and security consequences. Below is a brief synopsis.

Synchronizer Token Pattern

This pattern is recommended by OWASP as the method of choice for preventing CSRF attacks, and is leveraged by CSRFGuard. While it successfully prevents CSRF attacks, it introduces an architectural concern: the framework requires session state on web servers. This incurs two issues:

  1. Session state costs memory
  2. Sessions result in an imbalance in load distribution across web servers

While a session generally costs a nominal amount of memory, heavy user load can increase that memory footprint significantly. In general, it is best practice to avoid sessions. More importantly, if a user has an active session on a specific web server, load balancers will generally route that user’s subsequent requests to that same server instead of distributing requests evenly. This results in over-utilization of that server and potential under-utilization of adjacent servers. This behaviour (sticky sessions) can generally be disabled on load balancers; however, doing so results in sessions being created on more than one web server for a given user. That in turn causes synchronization issues, and requires a session-management tool to avoid losing cached data across web servers.
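
The session requirement above can be sketched as follows. This is a minimal, illustrative implementation; the function names and in-memory store are my own, not CSRFGuard’s:

```python
import secrets

# Hypothetical in-memory session store; in practice this is the web
# framework's server-side session, i.e. the state this pattern requires.
sessions = {}

def issue_token(session_id):
    """Generate a random token, store it in the user's session, and return
    it for embedding in a hidden form field."""
    token = secrets.token_urlsafe(32)
    sessions[session_id] = token
    return token

def validate_token(session_id, submitted_token):
    """Reject the request unless the submitted token matches the one held
    in session state for this user."""
    expected = sessions.get(session_id)
    return expected is not None and secrets.compare_digest(expected, submitted_token)
```

The `sessions` dictionary is exactly the per-user server state that causes the memory and load-balancing concerns described above.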

Double-Submit Cookie Pattern

This pattern is a more lightweight implementation of CSRF protection. While relatively new and generally considered somewhat untested (in my opinion it’s just as effective as the Synchronizer Token Pattern; the arguments against it are weak at best), it achieves protection while avoiding server-side state. Like the Synchronizer Token Pattern, however, it carries design and security consequences:

  1. Cookies cannot be tagged as HttpOnly
  2. Potential XSS vulnerabilities in subdomains can introduce poisoned cookies to parent domains

Cookies that contain sensitive server metadata, such as session cookies, should be tagged as HttpOnly. This prevents client-side scripts from reading values from the cookie, adding a layer of protection. Because this pattern requires a client-side script to read the token from the cookie and apply it to an HTTP header, the cookie cannot be tagged HttpOnly, introducing a potential security concern.
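
A minimal sketch of the server-side half of this pattern, assuming the client script has already copied the cookie value into a request header (the names are illustrative):

```python
import secrets

def issue_cookie_token():
    """Token issued once per session and set as a cookie; because client
    script must read it back, the cookie cannot be HttpOnly."""
    return secrets.token_urlsafe(32)

def validate_double_submit(cookie_token, header_token):
    """Stateless server-side check: the token echoed in the request header
    (copied from the cookie by client-side script) must match the cookie
    value. An attacker cannot read or set the victim's cookie cross-origin,
    so they cannot produce a matching pair."""
    if not cookie_token or not header_token:
        return False
    return secrets.compare_digest(cookie_token, header_token)
```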

Leveraging this pattern requires that all software in our suite of applications is fully XSS-resistant. If an application in a subdomain below our application domain is compromised by an XSS attack, an attacker could introduce a poisoned cookie on that site which would be valid in the parent domain, allowing the attacker to circumvent our CSRF protection framework.

Conclusion

Both methods of protection introduce design and potential security consequences. As a result, I’ve created a new pattern, the Encrypted Token Pattern, to address these concerns.

Encrypted Token Pattern

This pattern addresses the shortfalls of both the Synchronizer Token Pattern and the Double-Submit Cookie Pattern as follows:

  • It does not require server-state
  • It does not require cookies
  • It does not require two tokens
  • It does not require any effort on the client-side other than including the token in HTTP requests
  • It does not require any other application in a subdomain to be XSS-proof

The Encrypted Token Pattern is described here.
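
To make the flow concrete, here is a minimal sketch. The pattern as described uses symmetric encryption (Rijndael/AES); since Python’s standard library has no AES, this sketch substitutes an HMAC-signed payload, which demonstrates the same mint-and-validate flow over the Nonce, UserId, and Timestamp claims but does not hide them. A real implementation would encrypt the claims instead; all names here are illustrative assumptions, not ARMOR’s API:

```python
import base64
import hashlib
import hmac
import json
import os
import time

# Hypothetical server-side secret; only the server ever holds it.
SECRET_KEY = os.urandom(32)

def create_token(user_id):
    """Mint a token carrying the UserId, Timestamp, and Nonce claims."""
    claims = {
        "uid": user_id,
        "ts": time.time(),
        "nonce": base64.urlsafe_b64encode(os.urandom(16)).decode(),
    }
    payload = json.dumps(claims).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def validate_token(token, user_id, max_age_seconds=600):
    """Verify the token's integrity, then check its claims against the
    authenticated user and a freshness window."""
    try:
        p64, s64 = token.split(".")
        payload = base64.urlsafe_b64decode(p64)
        sig = base64.urlsafe_b64decode(s64)
    except Exception:
        return False  # malformed token
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return claims["uid"] == user_id and time.time() - claims["ts"] <= max_age_seconds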

Summary

The Encrypted Token Pattern solves the shortfalls of other CSRF protection patterns and allows us greater control over CSRF-defense, without introducing new security concerns or architectural problems.

Check out this post for a simple walkthrough outlining the steps involved in leveraging ARMOR to protect your application against CSRF attacks.


14 thoughts on “Leveraging the Encrypted Token Pattern”

  1. Bryan Cheng

    Hello,

    Great article!

    I’m interested in trying out this approach in an application I’m developing, but I have a simple implementation question. Your diagram references the use of a nonce which gets encrypted into the token. Is it absolutely necessary that each nonce is used for only a single token, and if so, does this require state on the API provider (e.g., storing some sort of rotating table of used nonce values)? Or is the nonce just meant to add randomness?

    Thanks!

    1. Paul Mooney Post author

      Hi Bryan,

      Thanks for your post. Great question; yes, the nonce value adds randomness so that an attacker can’t identify associations between specific keys and credentials. However, during encryption you ideally assign what’s called an Initialisation Vector to the values you’d like to encrypt. This is typically a 128-bit array appended to the byte array you’re encrypting, which guarantees randomness in the ciphertext. I added the Nonce Claim to the object itself to add another level of randomness, though it’s not necessary to comply with AES standards.

      I have a working implementation called ARMOR on GitHub, which comes with a sample .NET application and might save you some time. In this project, I leverage the RNGCryptoServiceProvider class to generate the Nonce value. I’ve stress-tested this over 10,000,000 iterations and confirmed that each generated random number is unique, removing the need to store and rotate generated values, as it’s highly unlikely that the same number will be repeated.
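
As an illustrative aside, the Python counterpart of this approach is the `secrets` module; the sketch below is my own, not part of ARMOR:

```python
import secrets

def generate_nonce(size=16):
    """Draw `size` bytes from the OS CSPRNG, analogous to .NET's
    RNGCryptoServiceProvider. At 128 bits of entropy per nonce,
    collisions are vanishingly unlikely, which is why there is no need
    to store and rotate previously issued values."""
    return secrets.token_bytes(size)
```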

  2. Mikko Rantalainen

    Could you elaborate on why you think the Encrypted Token pattern does not need subdomains to be XSS-proof? The token must be embedded in a hidden form field, and as such, an attacker using an XSS vulnerability can e.g. set document.domain to the top-level domain, open a new window, and load a page with a valid token. The attacker can then read the source of the page (this is allowed because both the script and the window source share the same document.domain) and extract the correct CSRF token. After extracting it, the attacker can execute a CSRF attack. Perhaps I’m missing something, but this seems to have the same security as the Double-Submit Cookie pattern.

    1. Paul Mooney Post author

      Hi Mikko, thanks for your post.

      Typically, with the Double-Submit approach, we are required to store the token in a cookie, which is potentially exposed through XSS vulnerabilities in subdomains. As you quite rightly point out, storing the token in a hidden field within the HTML itself also introduces such vulnerabilities. One of the merits of the Encrypted Token pattern is that we are not restricted to storing the token in a cookie, and as such, we have a much greater degree of flexibility in how we mitigate the risk of exposure.

      I don’t think any such pattern will fully secure data stored in HTML if security vulnerabilities exist in subdomains; however, using the Encrypted Token pattern, we have a broader spectrum of options with which to mitigate that risk. For example, as an alternative to storing the token in a hidden field, we could store it in a JavaScript variable in an embedded JS file. This, of course, is still subject to XSS vulnerabilities in subdomains, but if we obfuscate the JS file, it may introduce enough complexity to prevent an attacker from determining the token value. It certainly offers a much smaller attack surface than any Double-Submit alternative, particularly for purely or mostly AJAX-driven applications, where it’s not necessary to store the token in a hidden field at all (using .setRequestHeader in ajaxManager.js). Interested in your thoughts on this. Thanks.

    1. Paul Mooney Post author

      Hi Gili,

      Thanks for the comment. The purpose of ARMOR and the Encrypted Token pattern is to protect against traditional CSRF attacks; that is, to prevent an attacker from initiating actions on behalf of a logged-in user. Login CSRF is a different, but related, vulnerability. This document outlines the differences in detail (page 4). Login CSRF vulnerabilities are largely site-specific. The short answer is that you need to secure the anonymous login form in order to protect against Login CSRF. Using the Encrypted Token pattern, you could establish a pre-session token and validate it during login requests. The link above provides some other defence mechanisms. The main problem is determining the origin of the login request. The general recommendation is to check the Referer header, which poses problems of its own; e.g., attacks initiated behind firewalls, and the potential unreliability of the Referer header in certain scenarios. Again, all of this depends largely on the nature of your site. If your site does not allow automatic login, then Login CSRF attacks won’t affect you.

  3. Max

    Hi Paul,

    Instead of creating a nonce and encrypting it with the timestamp and user ID, which then needs to be decrypted on the next request, why not keep the expiration time in the database and not encrypt the nonce at all, since it is already a unique string? This way, when the next request comes in from the client, you challenge the authentication token by looking it up in the database. If the entry is in the database and has not expired, you don’t even need to send it back; you just increment the timestamp in the database to ensure the user has another 10 minutes of work, if you decide to log them out automatically after 10 minutes of inactivity. If the user logs off with their next request, the user’s token record will be deleted from the database, and if someone then makes a new request with the old authorization token, the challenge will fail. Sorry, I am new to this, but just a thought. Nice article nevertheless.

    Max

    1. Paul Mooney Post author

      Hi Max, thanks for your comment. The purpose of the nonce is to provide an extended level of “randomness” to the encrypted token. This ensures that tokens will never be encrypted to the same value for the same user, preventing attackers from associating tokens with specific users.

      My concerns with storing token metadata in a database are that:

      • The database itself becomes a single point of failure, should an attacker gain access
      • Reading from the database during every HTTP request requires an extra I/O operation, which adds overhead
      • If the database fails for any reason, users will be locked out of the application

      I very much appreciate your input. The point of this blog is to initiate this type of discussion, and I’d welcome any further ideas.

  4. cjlarose

    I’m surprised that your post did not include a comparison of the Encrypted Token Pattern against JWTs[1]. It seems like JWTs solve many of the same problems when signed, and are almost identical to the Encrypted Token Pattern when combined with encryption through the JWE specification. Could you elaborate on some of the differences?

    I realize now that this post was written in September 2013, and it’s possible that JWTs weren’t very popular at the time of writing. Do you think that JWTs make your recommendation obsolete?

    [1]: http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html

    1. Paul Mooney Post author

      No, the design is not rendered obsolete by the JWT structure. I chose JWT because it’s lightweight, and JSON lends itself well to the web. JWT, though, is a standard for structuring data, not a means of preventing CSRF attacks. The Encrypted Token Pattern allows you to store any metadata you like in the token, and requires only a small number of fields (Nonce, UserId, Timestamp). It also does not specify the mode of encryption (though I recommend Rijndael at the time of writing).
      All three CSRF prevention methods require a token of some description, so the emergence of JWE, etc., adds strength to the pattern at the very least.

  5. B

    So now the whole operation depends on the secret key, used for symmetric encryption and decryption in steps 4 and 10, remaining secret. Is it possible to remove this attack vector? There could be a developer on a team who has access to the source code, knows the secret key, and leaves the team or company. That person can now forge valid tokens that the server will successfully validate, and the whole encrypted token is moot.

    1. Paul Mooney Post author

      Generally, key-generation is considered out-of-band in terms of cryptography-based solutions. If a key becomes compromised, then the entire system fails, as you suggest. However, this is true of any system. I suggest limiting exposure insofar as possible. For example; implement a black-box key-generation service, that cannot be interfered with manually (e.g., disgruntled employee), and design your system so that it retrieves keys directly from this service. Rotating those keys at short intervals reduces the scope of attack, as keys will only be valid for short periods of time.
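
A rough sketch of that rotation scheme (the names, the two-key window, and the use of HMAC as the keyed operation are all illustrative assumptions, not ARMOR’s API):

```python
import hashlib
import hmac
import os

# Hypothetical key pair held by a key service: the active key plus the one
# it replaced.
current_key = os.urandom(32)
previous_key = os.urandom(32)

def sign(payload, key):
    """Stand-in for whatever operation the key protects (HMAC here)."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def validate(payload, signature):
    """Accept tokens minted under the current key or the one it replaced,
    so a rotation does not invalidate requests already in flight."""
    return any(hmac.compare_digest(signature, sign(payload, k))
               for k in (current_key, previous_key))

def rotate():
    """Retire the oldest key and mint a fresh one at a short interval."""
    global current_key, previous_key
    previous_key, current_key = current_key, os.urandom(32)
```

After two rotations a token minted under the original key no longer validates, which bounds how long a leaked key remains useful.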

