Implementing the Encrypted Token Pattern by Leveraging ARMOR

Overview

Download the code on GitHub

A.R.M.O.R (Advanced Resilient Mode of Recognition) is a C# Framework designed to protect your ASP.NET web applications against CSRF attacks. This article explains how you can leverage ARMOR across an application built on MVC, Web API, or a combination of both. For more information on ARMOR or the Encrypted Token Pattern, please read this article.

The ARMOR WebFramework

The ARMOR WebFramework is a toolkit that provides the boilerplate components needed to plug ARMOR into ASP.NET: custom Authorization attributes, Filters, and a range of tools for Header-parsing and the other mechanisms necessary to secure your application. You can download the code here.

Securing Your Application with ARMOR

Download the NuGet package Daishi.Armor.WebFramework:

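Assuming the standard NuGet Package Manager Console, installation is a single command:

Install-Package Daishi.Armor.WebFramework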

Once installed, you will have full access to the ARMOR API. Implementing the API is simple, as follows.

Add the appropriate Application Configuration settings

Add the following markup to your Application Configuration file:


<add key="IsArmed" value="true" /></b>
<add key="ArmorEncryptionKey" value="{Encryption Key}"/></b>
<add key="ArmorHashKey" value="{Hashing Key}"/></b>
<add key="ArmorTimeout" value="1200000"/></b>

The keys are as follows:

  • IsArmed – Toggle feature to turn ARMOR on or off at an application-wide level
  • ArmorEncryptionKey – The key that ARMOR will use to encrypt its Tokens
  • ArmorHashKey – The key that ARMOR will use to generate a Token Hash
  • ArmorTimeout – The time in milliseconds for which ARMOR Tokens remain valid after generation (the 1200000 above equals 20 minutes)

That’s it, we’re done with configuration. You can generate keys using the RNGCryptoServiceProvider class in .NET, as follows:


using System.Security.Cryptography;

byte[] encryptionKey = new byte[32];
byte[] hashingKey = new byte[32];

// Fill both keys with cryptographically strong random bytes
using (var provider = new RNGCryptoServiceProvider()) {
    provider.GetBytes(encryptionKey);
    provider.GetBytes(hashingKey);
}

// Base64-encode the keys for storage in the Application Configuration file
string encryptionKeySetting = Convert.ToBase64String(encryptionKey);
string hashingKeySetting = Convert.ToBase64String(hashingKey);

Both key values are 256 bits in length and are stored in the Application Configuration file in Base64-encoded format.

Hook the ARMOR Filter to your application

There are two main components to consider in the ARMOR WebFramework:

  • Authorization Filter – validates the incoming ARMOR Token
  • Fortification Filter – refreshes and reissues a new ARMOR Token

The Authorization filter reads the ARMOR Token from the HttpRequest Header and validates it against the logged-in user. You can authenticate the user in any fashion you like; ARMOR assumes that your user’s Claims are loaded into the current Thread at the point of validation.
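
For example, here is a minimal sketch of the state ARMOR expects to find at validation time; how you populate the Claims, and the values below, are entirely up to your authentication mechanism:

// Illustrative only: load the authenticated user's Claims onto the current Thread
var claims = new[] { new Claim("UserId", "12345") };
Thread.CurrentPrincipal = new ClaimsPrincipal(new ClaimsIdentity(claims, "ApplicationCookie"));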

Given that MVC and Web API use different assemblies, the ARMOR Framework provides parallel components that cater for both.

In terms of Authorization:

  • MvcArmorAuthorizeAttribute
  • WebApiArmorAuthorizeAttribute

And Fortification:

  • MvcArmorFortifyFilter
  • WebApiArmorFortifyFilter

Generally speaking, it’s best to refresh the incoming Token on every request, whether or not that request validates the Token; GET requests in particular. Otherwise, the Token may expire unless the user issues a POST, PUT, or DELETE request within the Token’s lifetime.

To do this, simply register the appropriate ARMOR Fortification mechanism in your application.

For MVC controllers:


// Typically found in App_Start/FilterConfig.cs
public static void RegisterGlobalFilters(GlobalFilterCollection filters) {
    filters.Add(new HandleErrorAttribute());
    filters.Add(new HandleExceptionFilter("", "Error"));
    filters.Add(new MvcArmorFortifyFilter());
}

As you can see, the MvcArmorFortifyFilter is now registered in MVC.

For Web API Controllers, add the following to the WebApiConfig.cs file:

config.Filters.Add(new WebApiArmorFortifyFilter());
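
For context, a hedged sketch of where that line typically sits; the surrounding class is standard Web API configuration boilerplate, not part of ARMOR:

public static class WebApiConfig {
    public static void Register(HttpConfiguration config) {
        // Register the ARMOR Fortification filter globally
        config.Filters.Add(new WebApiArmorFortifyFilter());

        // ... existing route and formatter configuration
    }
}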

Now, each HttpResponse issued by your application will contain a custom ARMOR Header containing a new ARMOR Token for use with the next HttpRequest:

[Image: the HTTP response Headers, including the custom ARMOR Header and its new Token]

Decorate your POST, PUT and DELETE endpoints with ARMOR

In an MVC Controller simply decorate your endpoints as follows:

[MvcArmorAuthorize]

And in Web API Controllers:

[WebApiArmorAuthorize]
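
For illustration, here is a hedged sketch of a protected Web API endpoint (the controller, action and Order model are hypothetical):

public class OrdersController : ApiController {
    [WebApiArmorAuthorize]
    [HttpPost]
    public IHttpActionResult Create(Order order) {
        // Reached only if the incoming ARMOR Token validated successfully
        return Ok();
    }
}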

Integrate your application’s authentication mechanism

Your application presumably has a method of authentication. ARMOR operates on the basis of Claims, and provides default Claim-parsing implementations, derived from the IdentityReader class, in the following classes:

  • MvcIdentityReader
  • WebApiIdentityReader

Both classes return an enumerated list of Claim objects containing a UserId Claim. In the case of MVC, the Claim is derived from the intrinsic ASP.NET Identity.Name property, assuming the user is already authenticated. In the case of Web API, it is assumed that you leverage an instance of ClaimsIdentity as your default IPrincipal object, and that user metadata is stored in Claims held within that ClaimsIdentity; as such, the WebApiIdentityReader simply extracts the UserId Claim. UserId and Timestamp are the only default Claims in an ArmorToken and are loaded upon creation.

If your application leverages a different authentication mechanism, you can simply derive from the default IdentityReader class with your own implementation and extract your logged in user’s metadata, injecting it into Claims necessary for ARMOR to manage. Here is the default Web API implementation. As you can see, the code is very straightforward:

public override bool TryRead(out IEnumerable<Claim> identity) {
    var claims = new List<Claim>();
    identity = claims;

    // 'principal' is the current IPrincipal; ARMOR expects a ClaimsIdentity
    var claimsIdentity = principal.Identity as ClaimsIdentity;
    if (claimsIdentity == null) return false;

    // Extract the UserId Claim; fail fast if it's absent
    var subClaim = claimsIdentity.Claims.SingleOrDefault(c => c.Type.Equals("UserId"));
    if (subClaim == null) return false;

    claims.Add(subClaim);
    return true;
}

ARMOR downcasts the intrinsic HTTP IPrincipal.Identity object to an instance of ClaimsIdentity and extracts the UserId Claim. Deriving from the IdentityReader base class allows you to implement your own mechanism to build Claims. It’s worth noting that you can store as many Claims as you like in an ARMOR Token; ARMOR will decrypt and deserialise your Claims so that they can be read on the return journey from the UI back to the server.
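
As a hedged sketch, mirroring the shape of the Web API implementation above (the exact IdentityReader contract may differ slightly), a custom reader might inject whatever Claims your authentication mechanism exposes:

public class CustomIdentityReader : IdentityReader {
    public override bool TryRead(out IEnumerable<Claim> identity) {
        var claims = new List<Claim>();
        identity = claims;

        // Illustrative values; pull these from your own authentication mechanism
        claims.Add(new Claim("UserId", "12345"));
        claims.Add(new Claim("Role", "Administrator"));

        return true;
    }
}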

Adding the ARMOR UI Components

The ARMOR WebFramework contains a JavaScript file as follows:

var ajaxManager = ajaxManager || {
    setHeader: function(armorToken) {
        $.ajaxSetup({
            beforeSend: function(xhr, settings) {
                if (settings.type !== "GET") {
                    xhr.setRequestHeader("Authorization", "ARMOR " + armorToken);
                }
            }
        });
    }
};

The purpose of this code is to detect the HttpRequest type, and apply an ARMOR Authorization Header for POST, PUT and DELETE requests. You can leverage this on each page of your application (or in the default Layout page) as follows:

<script>
    $(document).ready(function () {
        ajaxManager.setHeader($("#armorToken").val());
    });
    $(document).ajaxSuccess(function (event, xhr, settings) {
        var armorToken = xhr.getResponseHeader("ARMOR") || $("#armorToken").val();
        ajaxManager.setHeader(armorToken);
    });
</script>

As you can see, the UI contains a hidden field called “armorToken”. This field needs to be populated with an ArmorToken when the page is initially served. The following code in the ARMOR API itself facilitates this:

var nonceGenerator = new NonceGenerator();
nonceGenerator.Execute();

// Read the Base64-encoded keys from configuration
var encryptionKey = Convert.FromBase64String(ConfigurationManager.AppSettings["ArmorEncryptionKey"]);
var hashingKey = Convert.FromBase64String(ConfigurationManager.AppSettings["ArmorHashKey"]);

// Build, encrypt and hash the initial Token for the logged-in user
var armorToken = new ArmorToken(User.Identity.Name, "MyApp", nonceGenerator.Nonce);
var armorTokenConstructor = new ArmorTokenConstructor();
var standardSecureArmorTokenBuilder = new StandardSecureArmorTokenBuilder(armorToken, encryptionKey, hashingKey);
var generateSecureArmorToken = new GenerateSecureArmorToken(armorTokenConstructor, standardSecureArmorTokenBuilder);
generateSecureArmorToken.Execute();

ViewBag.ArmorToken = generateSecureArmorToken.SecureArmorToken;

Here we generate the initial ARMOR Token to be served when the application loads. This Token will be leveraged by the first AJAX request and refreshed on each subsequent request. The Token is loaded into the ViewBag object and absorbed by the associated View:

<div><input id="armorToken" type="hidden" value=@ViewBag.ArmorToken /></div>

Now your AJAX requests are decorated with the ARMOR Authorization Header:

[Image: an AJAX request carrying the ARMOR Authorization Header]

Summary

Now that you’ve implemented the ARMOR WebFramework, each POST, PUT and DELETE request will carry a Rijndael-encrypted, SHA256-hashed ARMOR Token. The server validates the Token before handling any POST, PUT, or DELETE request decorated with the appropriate attribute, and refreshes it after each request completes. The simple UI components attach new ARMOR Tokens to outgoing requests and read ARMOR Tokens on incoming responses. ARMOR is designed to work seamlessly with your current authentication mechanism to protect your application from CSRF attacks.


Providing Core Functionality Lacking in Entity Framework

Overview

Download the code on GitHub

Install with NuGet.

Entity Framework provides a very versatile data-persistence API but, like any ORM, it’s not without drawbacks. Recently, during a performance review of a .NET-based web application, I noticed several issues attributable to EF.

EF doesn’t provide batch-processing support. Let’s say I have classes as follows:

class Person {
    public string FirstName { get; set; }
    public string Surname { get; set; }
}

class Team {
    public List<Person> Members { get; set; }
}

class MyClass {
    public void Run() {
        var team = new Team {
            Members = new List<Person> {
                new Person {FirstName = "Paul", Surname = "Mooney"},
                new Person {FirstName = "Some", Surname = "OtherGuy"}
            }
        };
    }
}

Entity Framework will invoke a separate SQL command to insert each Team Member, so teams with thousands of Members incur a round-trip to the DB per Member. The solution is provided in the SQLBatchBuilder class, which allows you to combine such commands into a single command, persisting your data while minimising the number of round-trips to one. The framework also contains a handy LINQ-style SQL command builder, adhering to the functional programming model.

Consider the above example. Let’s say we wanted to create a Team object, which consists of a single entry in a SQL table. The generated SQL would look something like this:

insert into dbo.team (id, name) values (1, 'My Team');

Nothing unusual here. But what if the team has 50 Members? That’s 50 round-trips to the DB to create each Member, plus 1 round-trip to create the Team. This is quite excessive.

What if we were building an application that saved customer invoices, or somebody’s Facebook friends? This could potentially result in thousands of round-trips to the DB for a single unit-of-work.

SQLBuilder provides a simple, but effective solution to this:

class Person {
    public string FirstName { get; set; }
    public string Surname { get; set; }
}

class Team {
    public List<Person> Members { get; set; }
}

class MyClass {
    public void Run() {
        var team = new Team {
            Members = new List<Person> {
                new Person {FirstName = "Paul", Surname = "Mooney"},
                new Person {FirstName = "Some", Surname = "OtherGuy"}
            }
        };

        var builders = new List<SQLBuilder>();

        // Build a single insertion statement per Member
        foreach (var member in team.Members) {
            var sqlBuilder = new SQLBuilder(connectionString, SQLCommandType.Scalar);
            sqlBuilder.Insert(@"dbo.member", new List<string> {@"FirstName", @"Surname"}, member.FirstName, member.Surname);
            builders.Add(sqlBuilder);
        }

        // Combine all statements into one command and execute in a single round-trip
        var sqlBatchBuilder = new SQLBatchBuilder(connectionString, SQLCommandType.Scalar, builders);
        sqlBatchBuilder.Execute();

        Console.WriteLine(sqlBatchBuilder.Result);
    }
}

In the above sample, we initialise a list of SQLBuilders. Each builder encapsulates a single insertion statement associated with each Member. Once we’ve looped through all members, we attach the list of SQLBuilders to a SQLBatchBuilder instance, which parses each SQLBuilder’s raw SQL and formats a single command comprised of each insertion statement. This is then executed as a single transaction, resulting in a single round-trip.
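
The resulting batch looks roughly like the following (illustrative; the exact SQL that SQLBatchBuilder emits may differ):

insert into dbo.member (FirstName, Surname) values ('Paul', 'Mooney');
insert into dbo.member (FirstName, Surname) values ('Some', 'OtherGuy');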

The SQLBuilder class also supports the broader SQL language, allowing clauses, parameters, etc., to be applied:

builder = new SQLBuilder(connectionString, SQLCommandType.Reader);

builder.Select(@"Member.FirstName", @"Member.Surname")
       .From(@"Member")
       .InnerJoin(@"Team", @"Member", @"Member.TeamId", @"Team.TeamId")
       .Where(@"Member", @"Surname")
       .EqualTo(@"Mooney");

builder.Execute();

More updates to follow, including an in-depth analysis into the LINQ-style SQLBuilder.


Leveraging the Encrypted Token Pattern

Overview

Download the code on GitHub

CSRF attacks involve leveraging a user’s authenticated state in order to perform malicious actions, with the general purpose of manipulating data. There are two established approaches designed to prevent such attacks:

  1. Synchronizer Token Pattern
  2. Double-Submit Cookie Pattern

For more information on these, please visit the following resource:

https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet

Both approaches succeed in preventing CSRF attacks, while introducing architectural and security consequences. Below is a brief synopsis.

Synchronizer Token Pattern

This pattern is recommended by owasp.org as the method of choice in preventing CSRF attacks, and is leveraged by CSRFGuard. While successfully preventing CSRF attacks, it introduces an architectural concern, in that the framework requires session state on web servers. This incurs two issues:

  1. Session-state costs memory
  2. Sessions result in an imbalance in terms of load distribution across web servers

While sessions generally cost a nominal amount of memory, significant user-load can dramatically increase that memory footprint. In general, it is best practice to avoid sessions. More importantly, if a user has an active session on a specific web server, load-balancers will generally route that user’s subsequent requests to that same server instead of distributing requests evenly, resulting in over-utilization of that server and potential under-utilization of adjacent servers. This behaviour can generally be disabled on load-balancers; however, doing so results in sessions being created on more than one web server for a given user, which causes synchronization issues and requires a session-management tool to avoid losing cached data across web servers.

Double-Submit Cookie Pattern

This pattern is a more lightweight implementation of CSRF-protection. While relatively new and generally considered somewhat untested (it’s just as effective as the Synchronizer Token Pattern in my opinion; the arguments against it are weak at best), it achieves protection while avoiding the use of state. The implementation of this pattern, like the Synchronizer Token Pattern, produces design and security consequences:

  1. Cookies cannot be tagged as HTTPONLY
  2. Potential XSS vulnerabilities in subdomains can introduce poisoned cookies in upper domains

Cookies that contain sensitive server metadata, such as session cookies, should be tagged as HTTPONLY. This prevents client-side scripts from reading values from the cookie, adding a layer of protection. Given that this pattern requires client-side scripts to read the token from the cookie and apply it to the HTTP header, we cannot tag the cookie as HTTPONLY, introducing a potential security concern.
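
For reference, tagging a cookie as HTTPONLY in ASP.NET is a one-liner (the cookie name and value here are illustrative):

var sessionCookie = new HttpCookie("SessionId", "abc123") {
    HttpOnly = true // client-side scripts can no longer read this cookie
};
Response.Cookies.Add(sessionCookie);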

Leveraging this pattern requires that all software in our suite of applications is fully XSS-resistant. If an application in a subdomain below our application domain is compromised within the context of an XSS attack, an attacker could potentially introduce a poisoned cookie to that site that would be valid in our upper domain, allowing the attacker to circumvent our CSRF-protection framework.

Conclusion

Both methods of protection introduce design and potential security consequences. As a result, I’ve created a new pattern, the Encrypted Token Pattern, to address these concerns.

Encrypted Token Pattern

This pattern addresses the shortfalls of both the Synchronizer Token Pattern and the Double-Submit Cookie Pattern as follows:

  • It does not require server-state
  • It does not require cookies
  • It does not require two tokens
  • It does not require any effort on the client-side other than including the token in HTTP requests
  • It does not require any other application in a subdomain to be XSS-proof

The Encrypted Token Pattern is described here.

Summary

The Encrypted Token Pattern solves the shortfalls of other CSRF protection patterns and allows us greater control over CSRF-defense, without introducing new security concerns or architectural problems.

Check out this post for a simple walkthrough outlining the steps involved in leveraging ARMOR to protect your application against CSRF attacks.


JSON Parsing Using JsonTextReader

Download the code on GitHub
Install with NuGet.

JSON.net is the de facto standard for ASP.NET JSON parsing. Recently I began performance-tuning an ASP.NET Web API application. Most of the work involved identifying bottlenecks and resolving them by leveraging the async and await operators in C# 5, optimising IIS thread-management, and so on; then I started looking at deserialisation.

James Newton-King mentions that the fastest possible method of deserialising JSON is to leverage the JsonTextReader. Before I talk about that, let’s look at how a typical implementation works:

var proxy = WebRequest.Create("http://somefeed.com");
var response = proxy.GetResponse();
var stream = response.GetResponseStream();

Note that we’re pulling the request back as an IO.Stream, rather than a string. The problem with caching the response in a string is that in .NET, any object larger than 85KB is automatically assigned to the Large Object Heap. These objects require the Garbage Collector to suspend all threads in IIS in order to destroy them, which has major implications from a performance perspective. If the returned feed is reasonably large, and you cache it in a string, you’ll potentially introduce significant overhead in your web application. Caching to an IO.Stream avoids this issue, because the feed will be chunked and read in smaller portions as you parse it.

Now, let’s say our feed returns a list of people in JSON format:

[
    {
        "firstName": "Paul",
        "surname": "Mooney"
    },
    {
        "firstName": "Some",
        "surname": "OtherGuy"
    }
]

We can parse this with the following:

var result = JsonConvert.DeserializeObject<List<Person>>(new StreamReader(stream).ReadToEnd());

Assuming that we have a C# class as follows:

class Person {
    public string FirstName { get; set; }
    public string Surname { get; set; }
}

JSON.net deserialises this under the hood using Reflection, a technique that involves reading the class’s metadata and mapping corresponding JSON tags to each property, which is costly from a performance perspective. Another downside is that if our JSON objects are embedded in parent objects from another proprietary system, or OData for example, the above method will fail on the basis that the JSON tags don’t match. In other words, our JSON feed needs to match our C# class verbatim.

JSON.net provides a handy mechanism to overcome this: Object Parsing. Instead of using reflection to automatically construct and bind our C# classes, we can parse the entire feed to a JObject, and then drill into this using LINQ, for example, to draw out the desired classes:

var json = JObject.Parse(reader.ReadToEnd());

var results = json["results"]
    .SelectMany(s => s["content"])
    .Select(person => new Person {
        FirstName = person["firstName"].ToString(),
        Surname = person["surname"].ToString()
    });

Very neat. The problem with this is that we need to parse the entire feed to draw back a subset of data. Consider that if the feed is quite large, we will end up parsing much more than we need.

To go back to my original point, the quickest method of parsing JSON using JSON.net is to use the JsonTextReader. Below is a class I’ve put together that reads from a JSON feed and parses only the metadata that we require, ignoring the rest of the feed, without using Reflection:

public abstract class JsonParser<TParsable> where TParsable : class, new() {
    private readonly Stream json;
    private readonly string jsonPropertyName;

    public List<TParsable> Result { get; private set; }

    protected JsonParser(Stream json, string jsonPropertyName) {
        this.json = json;
        this.jsonPropertyName = jsonPropertyName;

        Result = new List<TParsable>();
    }

    protected abstract void Build(TParsable parsable, JsonTextReader reader);

    protected virtual bool IsBuilt(TParsable parsable, JsonTextReader reader) {
        return reader.TokenType.Equals(JsonToken.None);
    }

    public void Parse() {
        using (var streamReader = new StreamReader(json)) {
            using (var jsonReader = new JsonTextReader(streamReader)) {
                do {
                    // Skip forward until we reach the property that marks a target object
                    jsonReader.Read();
                    if (jsonReader.Value == null || !jsonReader.Value.Equals(jsonPropertyName)) continue;

                    var parsable = new TParsable();

                    // Advance to the first property inside the target object
                    do {
                        jsonReader.Read();
                    } while (!jsonReader.TokenType.Equals(JsonToken.PropertyName) && !jsonReader.TokenType.Equals(JsonToken.None));

                    // Hand tokens to the concrete implementation until the object is fully built
                    do {
                        Build(parsable, jsonReader);
                        jsonReader.Read();
                    } while (!IsBuilt(parsable, jsonReader));

                    Result.Add(parsable);
                } while (!jsonReader.TokenType.Equals(JsonToken.None));
            }
        }
    }
}

This class is an implementation of the Builder pattern.

In order to consume it, you need only extend the class with a concrete implementation:

public class PersonParser : JsonParser<Person> {
    public PersonParser(Stream json, string jsonPropertyName) : base(json, jsonPropertyName) { }

    protected override void Build(Person parsable, JsonTextReader reader) {
        if (reader.Value.Equals("firstName")) {
            reader.Read();
            parsable.FirstName = (string)reader.Value;
        }
        else if (reader.Value.Equals("surname")) {
            reader.Read();
            parsable.Surname = (string)reader.Value;
        }
    }

    protected override bool IsBuilt(Person parsable, JsonTextReader reader) {
        var isBuilt = parsable.FirstName != null && parsable.Surname != null;
        return isBuilt || base.IsBuilt(parsable, reader);
    }
}

Here, we’re overriding two methods: Build and IsBuilt. The first tells the class how to map JSON tags to our C# object; the second, how to determine when our object is fully built.

I’ve stress-tested this: the worst-case result was 18.75 times faster than the alternative methods above; the best case was 45.6 times faster, regardless of the size of the JSON feed returned (in my case, large – about 450KB).

Leveraging this across applications can massively reduce thread-consumption and overhead for each feed.

The JsonParser class accepts two parameters. The first is the JSON stream returned from the feed, deliberately in Stream format for performance reasons: Streams are chunked by default, so we read them one section at a time, whereas strings consume memory equivalent in size to the feed itself, potentially ending up in the Large Object Heap. The second is the jsonPropertyName, which tells the parser which property marks each serialised JSON object to target.
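
Putting it together, here’s a minimal usage sketch. It assumes a hypothetical feed in which each Person object is the value of a "person" property; pass whatever property name marks the target objects in your own feed:

var request = WebRequest.Create("http://somefeed.com");

using (var response = request.GetResponse())
using (var stream = response.GetResponseStream()) {
    // "person" is the JSON property wrapping each serialised Person in this hypothetical feed
    var parser = new PersonParser(stream, "person");
    parser.Parse();

    foreach (var person in parser.Result) {
        Console.WriteLine("{0} {1}", person.FirstName, person.Surname);
    }
}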

These classes are still in POC stage. I’ll be adding more functionality over the next few days. Any feedback welcome.
