Leveraging the Encrypted Token Pattern


Download the code on GitHub

CSRF attacks leverage a user’s authenticated state to invoke malicious requests, generally with the purpose of manipulating data. There are two established approaches designed to prevent such attacks:

  1. Synchronizer Token Pattern
  2. Double-Submit Cookie Pattern

Both approaches succeed in preventing CSRF attacks, while introducing architectural and security consequences. Below is a brief synopsis.

Synchronizer Token Pattern

This pattern is recommended by owasp.org as the method of choice in preventing CSRF attacks, and is leveraged by CSRFGuard. While successfully preventing CSRF attacks, it introduces an architectural concern, in that the framework requires session state on web servers. This incurs two issues:

  1. Session-state costs memory
  2. Sessions result in an imbalance in terms of load distribution across web servers

While a session generally costs a nominal amount of memory, heavy user load can increase that memory footprint dramatically. In general, it is best practice to avoid sessions where possible. More importantly, if a user has an active session on a specific web server, load balancers will generally route that user’s subsequent requests to that same server (sticky sessions) instead of distributing requests evenly. This results in over-utilization of that server and potential under-utilization of adjacent servers. Sticky sessions can generally be disabled on load balancers; however, doing so results in sessions for a single user being created on more than one web server. This causes synchronization issues, and requires a session-management tool to avoid loss of cached data across web servers.

Double-Submit Cookie Pattern

This pattern is a more lightweight implementation of CSRF-protection. While relatively new and generally considered somewhat untested (it’s just as effective as the Synchronizer Token Pattern in my opinion; the arguments against it are weak at best), it achieves protection while avoiding the use of state. The implementation of this pattern, like the Synchronizer Token Pattern, produces design and security consequences:

  1. Cookies cannot be flagged as HttpOnly
  2. Potential XSS vulnerabilities in subdomains can introduce poisoned cookies into parent domains

Cookies that contain sensitive server metadata, such as session cookies, should be flagged as HttpOnly. This prevents client-side scripts from reading values from the cookie, adding a layer of protection. Given that this pattern requires client-side scripts to read the token from the cookie and apply it to the HTTP header, we cannot flag the cookie as HttpOnly, introducing a potential security concern.

Leveraging this pattern requires that every application in our suite is fully XSS-resistant. If an application in a subdomain, below our application domain, is compromised through an XSS attack, an attacker could introduce a poisoned cookie to that site which would be valid in our parent domain, allowing the attacker to circumvent our CSRF-protection framework.


Both methods of protection introduce design and potential security consequences. As a result, I’ve created a new pattern, the Encrypted Token Pattern, to address these concerns.

Encrypted Token Pattern

This pattern addresses the shortfalls of both the Synchronizer Token Pattern and the Double-Submit Cookie Pattern as follows:

  • It does not require server-state
  • It does not require cookies
  • It does not require two tokens
  • It does not require any effort on the client-side other than including the token in HTTP requests
  • It does not require any other application in a subdomain to be XSS-proof

The Encrypted Token Pattern is described here.


The Encrypted Token Pattern solves the shortfalls of other CSRF protection patterns and allows us greater control over CSRF-defense, without introducing new security concerns or architectural problems.
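Since the full description and diagram live behind the link above, here is a minimal, illustrative sketch of the token lifecycle the pattern describes: the server builds a token from a UserId, a Timestamp, and a Nonce claim, seals it with a key only the server knows, and validates those claims on every incoming request. This sketch is Python rather than ARMOR's .NET, all function names are hypothetical, and it substitutes an HMAC seal for the AES/Rijndael encryption the pattern actually specifies (Python's standard library has no AES); HMAC preserves the forgery-resistance of the flow, though unlike encryption it does not hide the claims from the client.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Per-process key for illustration only; in practice the key comes from
# secure key management, as discussed in the comments below.
SERVER_KEY = secrets.token_bytes(32)

def create_token(user_id, key=SERVER_KEY):
    """Build a token from the three claims the pattern requires."""
    claims = {
        "userId": user_id,
        "timestamp": int(time.time()),
        "nonce": secrets.token_hex(16),  # adds entropy; see the comments below
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    # HMAC stands in for the AES encryption ARMOR uses.
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def validate_token(token, expected_user_id, max_age_seconds=1800, key=SERVER_KEY):
    """Verify integrity, then check the UserId and Timestamp claims."""
    payload, _, sig = token.partition(".")
    expected_sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, sig):
        return False  # forged or tampered-with token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["userId"] != expected_user_id:
        return False  # token was issued to a different user
    return time.time() <= claims["timestamp"] + max_age_seconds
```

Because the token is self-contained and validated purely from the key, no session or database lookup is required, which is what makes the pattern stateless.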

Check out this post for a simple walkthrough outlining the steps involved in leveraging ARMOR to protect your application against CSRF attacks.

29 thoughts on “Leveraging the Encrypted Token Pattern”

  1. Bryan Cheng


    Great article!

    I’m interested in trying out this approach in an application I’m developing, but I have a simple implementation question. Your diagram references the use of a nonce which gets encrypted into the token. Is it absolutely necessary that each nonce only gets used for a single token, and if so, does this require state creation in the API provider (e.g., storing some sort of rotating table of used nonce values)? Or is the nonce just meant to add randomness?


    1. Paul Mooney Post author

      Hi Bryan,

      Thanks for your post. Great question; yes, the nonce value adds randomness so that an attacker can’t identify associations between specific keys and credentials. However, during encryption you ideally assign what’s called an Initialisation Vector to the values you’d like to encrypt. This is a 128-bit (as a rule of thumb) array appended to the byte array you’re encrypting, which guarantees randomness. I added the Nonce claim to the object itself to add another level of randomness, though it’s not necessary to comply with AES standards.

      I have a working implementation called ARMOR on GitHub, which comes with a sample .NET application and might save you some time. In this project, I leverage the RNGCryptoServiceProvider class to generate the Nonce value. I’ve stress-tested this over 10,000,000 iterations and confirmed that each generated random number is unique, removing the need to store and rotate each generated value, as it’s highly unlikely that the same number will be repeated.
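As a rough Python analogue of the reply above (hypothetical helper names; the `secrets` module draws on the OS CSPRNG, much as .NET's RNGCryptoServiceProvider does):

```python
import secrets

def generate_iv():
    """A 128-bit initialisation vector, the rule-of-thumb size mentioned above."""
    return secrets.token_bytes(16)

def generate_nonce():
    """The Nonce claim embedded in the token itself, for a further layer of
    randomness; drawn from the OS cryptographically secure RNG."""
    return secrets.token_hex(16)
```

A small-scale version of the uniqueness check described above is simply to generate a batch of nonces and confirm there are no duplicates.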

  2. Mikko Rantalainen

    Could you elaborate on why you think the Encrypted Token Pattern does not need subdomains to be XSS-proof? The token must be embedded in a hidden form field and, as such, an attacker using an XSS vulnerability can e.g. set document.domain to the top-level domain, open a new window and load a page with a valid token. The attacker can then read the source of the page (this is allowed because both the script and the window source are from the same document.domain) and extract the correct CSRF token. After extracting the token, the attacker can then execute a CSRF attack. Perhaps I’m missing something, but this seems to have the same security as the Double-Submit Cookie pattern.

    1. Paul Mooney Post author

      Hi Mikko, thanks for your post.
      Typically, with the Double-Submit approach, we are required to store the token in a cookie, which is potentially exposed through XSS vulnerabilities in sub-domains. As you quite rightly pointed out, storing the token in a hidden field within the HTML itself also introduces such vulnerabilities. One of the merits of the Encrypted Token Pattern is that we are not restricted to storing the token in a cookie, and as such are allowed a much greater degree of flexibility in how we mitigate the risk of exposure. I don’t think that any such pattern will fully secure data stored in HTML if security vulnerabilities exist in sub-domains; however, using the Encrypted Token Pattern, we have a broader spectrum with which to mitigate that risk. For example, as an alternative to storing the token in a hidden field, we could store it in a JavaScript variable in an embedded JS file. This, of course, is still subject to XSS vulnerabilities in sub-domains, but if we obfuscate the JS file it may introduce enough complexity to prevent an attacker from determining the token value. Certainly it offers a much smaller attack surface than any Double-Submit alternative, particularly in purely or mostly AJAX-driven applications, where it’s not necessary to store the token in a hidden field at all (using .setRequestHeader in ajaxManager.js). Interested in your thoughts on this. Thanks.

    1. Paul Mooney Post author

      Hi Gili,

      Thanks for the comment. The purpose of ARMOR and the Encrypted Token Pattern is to protect against traditional CSRF attacks; that is, to prevent an attacker from initiating actions on behalf of a logged-in user. Login CSRF is a different, but related, vulnerability. This document outlines the differences in detail (page 4). Login CSRF vulnerabilities are largely site-specific. The short answer is that you need to secure an anonymous login form in order to protect against Login CSRF. Using the Encrypted Token Pattern, you could establish a pre-session token and validate it during login requests. The link above provides some other defence mechanisms. The main problem is in determining the origin of the login request. The general recommendation is to include the Referer header, which poses problems of its own; e.g., attacks initiated behind firewalls, and the potential unreliability of the Referer header in certain scenarios. Again, all of this depends largely on the nature of your site. If your site does not allow automatic login, then Login CSRF attacks won’t affect you.

  3. Max

    Hi Paul,

    Instead of creating a nonce and encrypting it with the timestamp and user ID, which then needs to be decrypted on the next processed request, why not keep the expiration time in the database and not encrypt the nonce at all, since it is already a unique string? This way, when the next request comes in from the client, you challenge the authentication token by looking it up in the database. If the entry is in the database and it has not expired, then you don’t even need to send it back; you just increment the timestamp in the database to ensure the person has another 10 minutes of work, if you decide to log him out automatically after 10 minutes of inactivity. If the user logs off with his next request, then the user’s token record will be deleted from the DB, and if someone were to make a new request with the old authorization token, the challenge would fail. Sorry, I am new to this, but it’s just a thought. Nice article nevertheless.


    1. Paul Mooney Post author

      Hi Max, thanks for your comment. The purpose of the nonce is to provide an extended level of “randomness” to the encrypted token. This ensures that tokens will never be encrypted to the same value for the same user, preventing attackers from associating tokens with specific users.

      My concerns with storing token metadata in a database are that:

      • The database itself becomes a single point-of-failure, should an attacker gain access
      • Reading from the database during every HTTP request requires an extra IO operation which will add overhead
      • If the database fails for any reason, users will be locked out of the application

      I very much appreciate your input. The point of this blog is to initiate this type of discussion, and I’d welcome any further ideas.

  4. cjlarose

    I’m surprised that your post did not include a comparison of the Encrypted Token Pattern against JWTs[1]. It seems like JWTs solve many of the same problems when signed, and are almost identical to the Encrypted Token Pattern when combined with encryption through the JWE specification. Could you elaborate on some of the differences?

    I realize now that this post was written in September 2013, and it’s possible that JWTs weren’t very popular at the time of writing; do you think that JWTs make your recommendation obsolete?

    [1]: http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html

    1. Paul Mooney Post author

      No, the design is not rendered obsolete by the JWT structure. I chose JWT because it’s lightweight, and JSON lends itself well to the web. JWT, though, is a standard for structuring data, and not a means to prevent CSRF attacks. The Encrypted Token Pattern allows you to store any metadata you like in the token, and requires only a small number of fields (Nonce, UserId, Timestamp). It also does not specify the mode of encryption (though I recommend Rijndael at the time of writing).
      All 3 CSRF prevention methods require a token of some description, so the emergence of JWE, etc., adds strength to the pattern at the very least.

  5. B

    So now the whole operation depends on the secret key, used for symmetric encryption and decryption in steps 4 and 10, remaining secret. Is it possible to remove this attack vector? There could be a developer on a team who has access to the source code and knows this secret key, and who leaves the team or company. Now that person can encrypt valid tokens that the server will successfully validate, and the whole encrypted token is moot.

    1. Paul Mooney Post author

      Generally, key-generation is considered out-of-band in terms of cryptography-based solutions. If a key becomes compromised, then the entire system fails, as you suggest. However, this is true of any system. I suggest limiting exposure insofar as possible. For example; implement a black-box key-generation service, that cannot be interfered with manually (e.g., disgruntled employee), and design your system so that it retrieves keys directly from this service. Rotating those keys at short intervals reduces the scope of attack, as keys will only be valid for short periods of time.

  6. Nikhil

    A few queries, since we are attempting a practical implementation in Java. We cannot use the ARMOR framework.
    1) What does the user ID refer to? Is it the session ID, which also travels inside a cookie along with the request? Or does it refer to the actual username used during authentication? In the diagram at step 11, does this user ID also need to be validated?
    2) How should the timestamp be verified? If server A, generating the token, uses present time T1, and server B receives the request at present time T2, does expiration happen only when (T1 + 30 mins) < T2?

    1. Paul Mooney Post author

      Hi Nikhil, thanks for your comment.

      1) The UserId refers to the actual username in your application. It’s not related to session at all, so that ARMOR can operate in stateless environments.

      2) The timestamp can be verified in any manner you see fit. Implementing a short timestamp value limits exposure by reducing the likelihood that the same token may be used again, in a replay attack for example. A longer timestamp could potentially introduce vulnerabilities, given that the token would be valid for an extended period of time, although this is an edge case. I would take care not to implement a timestamp that is too short, to account for network latency. For example, if the timestamp is set at 1 second, and your round trip in terms of HTTP response takes 2 seconds, the token will be invalidated. Based on your example, I’m assuming that you plan on implementing a 30-minute timestamp. In that case, the token would be considered expired if T2 > (T1 + 30 minutes).
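The expiry rule described in this reply reduces to a one-line comparison (an illustrative Python sketch, not part of ARMOR):

```python
import time

TOKEN_LIFETIME_SECONDS = 30 * 60  # the 30-minute window from the example above

def is_expired(issued_at, now=None, lifetime_seconds=TOKEN_LIFETIME_SECONDS):
    """A token issued at T1 (issued_at) is expired at T2 (now) when T2 > T1 + lifetime."""
    now = time.time() if now is None else now
    return now > issued_at + lifetime_seconds
```

Note that this comparison assumes the clocks of server A and server B are reasonably synchronized; clock skew effectively shortens or lengthens the window.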

      I hope that this clarifies things for you. Incidentally, I’m happy to contribute to a Java implementation of ARMOR.

      1. Nikhil

        1) So at step 11, username must also be validated, which is not shown in the diagram. Right?
        2) And as per my best understanding, the timeout needs to be considerably long (equivalent to normal session timeouts), because in genuine cases the application should prevent frequent logouts, where a valid user needs to submit actual data using forms.

      2. Paul Mooney Post author

        1) Yes, exactly. I’ll update the diagram to include this step.
        2) Yes, if you’re leveraging session-based authentication, then your ARMOR Token lifetime should be equivalent to, if not slightly greater than your session lifetime, to prevent premature logout.

      3. Gatsby

        Hi, how would you really protect yourself against replay attacks?

        While implementing the pattern, I came to the conclusion that this pattern does not protect you from such attacks. Unless I’m missing something, of course. And if so, I would be more than happy to stand corrected by the designer of the pattern.

        By giving an “expiry date” in seconds in the form of a timestamp, it doesn’t mean you’re protected against replay attacks. If you allow, for example, 10 seconds as a threshold, every request within this time frame will result in a successful request. So with this pattern, an attacker could, for example, make you pay twice. Note also that with any time-based claims the user experience is at stake, because if I am a user who opens a page, fills in some form fields, then goes to get a coffee and comes back after 1 minute, I would get an error and would have to start all over again.

        So unless the designer has found a _stateless_ solution to this, the pattern advertised here doesn’t protect you from replay attacks. This should also be reflected in the official OWASP CSRF prevention cheat sheet, because the information is incorrect. – https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet

      4. Paul Mooney Post author

        A random “nonce” value is included in the message. You can check the nonce on message arrival and determine whether it has been replayed or not by retaining a server-side nonce cache. This is not covered in the design (of an anti-CSRF pattern, FAIA), and is left up to the reader. I’ll follow up with a post on this in future to avoid confusion.
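A server-side nonce cache of the kind just described could be sketched as follows (an illustrative Python sketch, not part of ARMOR; the cache is, by definition, state the server must hold). Entries older than the token lifetime can be evicted, since expired tokens are already rejected by the timestamp check:

```python
import time

class NonceCache:
    """Tracks nonces already seen, to detect replayed tokens."""

    def __init__(self, lifetime_seconds=1800):
        self.lifetime = lifetime_seconds
        self._seen = {}  # nonce -> time first seen

    def check_and_store(self, nonce, now=None):
        """Return True the first time a nonce is seen, False on any replay."""
        now = time.time() if now is None else now
        # Evict entries old enough that their tokens have expired anyway.
        self._seen = {n: t for n, t in self._seen.items()
                      if now - t <= self.lifetime}
        if nonce in self._seen:
            return False  # replayed token
        self._seen[nonce] = now
        return True
```

In a multi-server deployment this cache would need to be shared (or the check performed against a common store), which is the trade-off the follow-up comments discuss.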

      5. Gatsby

        So the claim “It does not require server-state” is not true. If you require a cache of some sort, you rely on server state. This should be amended, especially on the OWASP page where the Encrypted Token Pattern is mentioned and described.

      6. Paul Mooney Post author

        None of the official mechanisms designed to block CSRF explicitly takes replay attacks into account; replay exists outside the problem scope.

      7. Gatsby

        I disagree 🙂
        A replay attack is just a repetition of the problem CSRF protection mechanisms try to solve, which is part of the problem scope: preventing an action to be performed on behalf of others, without their explicit consent/awareness.

        I’m marking the point just so that people reading will be aware of the security concern the pattern inevitably introduces when used (even though the summary above says otherwise): that of replay attacks.

  7. Nikhil

    At step 11, after username and timestamp validation, does the nonce also need to be verified (for existence and length, maybe)? Does any abuse-case scenario exist if the nonce is not validated?

    1. Paul Mooney Post author

      No, the nonce does not need to be validated (unless you have a specific requirement to do so). The nonce exists simply to provide a suitable level of entropy to the token structure, so that a MITM or packet-sniffer cannot accurately graph batches of tokens and derive similarities. Essentially, the nonce guarantees a greater degree of uniqueness.

    2. Paul Mooney Post author

      As a matter of interest, what drove your decision to implement the Encrypted Token Pattern, versus alternatives? I’m excited to see a Java implementation.

      1. Nikhil

        Transparency to the user and no client-side storage made me shortlist the Encrypted Token Pattern. Currently, the Java implementation is just in PoC mode, with all kinds of security tests ongoing. Only a future enterprise-level implementation will tell how successful the Encrypted Token Pattern is.

      2. Paul Mooney Post author

        Hi Nikhil, I’m just checking in to see how your implementation is going. I spoke about the Encrypted Token Pattern at an OWASP event and received some interest in a Java version. Are you interested in sharing?
