HSTS Supercookies with ASP.NET

HSTS, or HTTP Strict Transport Security, is a feature of modern browsers that ensures your connection to a participating site is secure. It is designed to prevent, for example, man-in-the-middle attacks, in which you request a secure resource, such as https://mybank.com, and are redirected by a malicious third party over a non-secure connection to http://mybank.com. Note the missing "s" in the second URL scheme.

How HSTS Works

Browsers typically solve this problem by storing security preferences in a small data structure. In its simplest form, this is a key-value pair index, where the key is the resource URL and the value is a boolean variable indicating whether or not the connection to the associated resource should be established in a secure manner:

_____________________
 google.com    | 1 |
 bing.com      | 1 |
 apple.com     | 1 |
_____________________

Note that the above example indicates that all requests to google.com, bing.com, and apple.com should be made in a secure manner, over HTTPS. We can infer, then, that entries that do not exist in the HSTS database allow non-secure connections. If we were to view this in tabular format, it would resemble the following, where entries for both http://yahoo.com and http://wordpress.com do not exist in the browser's HSTS database:

_____________________
 google.com    | 1 |
 yahoo.com     | 0 |
 bing.com      | 1 |
 wordpress.com | 0 |
 apple.com     | 1 |
_____________________

When read as a single value, the complete sequence of boolean values for this table reads as “10101”. It is therefore possible to leverage this table to store arbitrary binary values.
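To illustrate the idea, here is a minimal C# sketch – not part of the browser or of the sample code – that models the HSTS table as an ordered list of entries and reads its boolean values as a single binary string:

using System;
using System.Collections.Generic;
using System.Linq;

class HstsTableDemo
{
    static void Main()
    {
        // A simplified, ordered model of the browser's HSTS database.
        var hstsTable = new List<KeyValuePair<string, bool>>
        {
            new KeyValuePair<string, bool>("google.com", true),
            new KeyValuePair<string, bool>("yahoo.com", false),
            new KeyValuePair<string, bool>("bing.com", true),
            new KeyValuePair<string, bool>("wordpress.com", false),
            new KeyValuePair<string, bool>("apple.com", true)
        };

        // Read the table as a single binary value.
        var bits = string.Concat(hstsTable.Select(entry => entry.Value ? "1" : "0"));
        Console.WriteLine(bits); // 10101
    }
}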

However, it is the sites themselves that determine whether or not they should be accessed over a secure connection. This is achieved by returning an HTTP 301 response to requests established over non-secure channels. The HTTP response also includes a reference to the secure URL. The browser will honour this response by redirecting to the secure URL, which returns the following HTTP header:

Strict-Transport-Security: max-age=31536000

Note that the above max-age parameter may be set as required; the above is simply an example.

The browser, upon receiving this response, will add an entry to its HSTS database, indicating that all future requests should be established over a secure channel.

How to Hack HSTS

In order to “save” a binary value to the HSTS database, we need to control the URL entries that will reside within the database. Let’s assume that I own the following 4 domains:

1.supercookies.com
2.supercookies.com
3.supercookies.com
4.supercookies.com

I configure each of these sites to indicate that connections should only be established over secure channels. Imagine, then, that I create a website that contains a JavaScript file that generates a random 4-digit binary value – in this case, "1010".

In order to “save” this value, my JavaScript file should contain a function that connects to both 1.supercookies.com and 3.supercookies.com. This will create the following entries in the HSTS database:

_____________________________
 1.supercookies.com    | 1 |
 3.supercookies.com    | 1 |
_____________________________

We can infer from this that, taking the two other domains into account, our view of each domain expressed in tabular format would resemble the following:

_____________________________
 1.supercookies.com    | 1 |
 2.supercookies.com    | 0 |
 3.supercookies.com    | 1 |
 4.supercookies.com    | 0 |
_____________________________

In other words, implementing a custom endpoint in each domain that simply returns a boolean value indicating whether or not the inbound HTTP request is secure will tell us whether or not there is an entry in the browser's HSTS database for that domain. For example, if we invoke a connection to http://1.supercookies.com (note the non-secure HTTP scheme), then we would expect the browser to force a redirect to the secure equivalent of that URL (https://1.supercookies.com). Thus, if our endpoint returns a positive boolean, we can infer that this domain is present in the browser's HSTS database. Otherwise, the domain is not present, and our endpoint will return a negative boolean. By establishing connections to each domain, we can build a series of boolean values; in this case, "1010".
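To sketch the read process in C# (the sample code performs this in JavaScript within the browser, since it is the browser's HSTS database being read; the /api/hsts/read route below is hypothetical), a client would probe each domain over plain HTTP and assemble the bits from the responses:

using System;
using System.Net.Http;
using System.Text;

class HstsReader
{
    static void Main()
    {
        var bits = new StringBuilder();

        using (var client = new HttpClient())
        {
            for (var i = 1; i <= 4; i++)
            {
                // Request over plain HTTP; an HSTS entry causes an upgrade to HTTPS.
                var json = client.GetStringAsync(
                    "http://" + i + ".supercookies.com/api/hsts/read").Result;

                // The endpoint returns { "IsSet": true } when the request arrived over HTTPS.
                bits.Append(json.Contains("true") ? "1" : "0");
            }
        }

        Console.WriteLine(bits.ToString()); // e.g. "1010"
    }
}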

Practical Example with ASP.NET Web API

Add the following ASP.NET Web API Controller method to write an entry to the HSTS database for the domain that hosts your ASP.NET application:

public HttpResponseMessage Write()
{
    HttpResponseMessage response;

    if (Request.RequestUri.Scheme.Equals("https"))
    {
        // Secure request: instruct the browser to record an HSTS entry.
        response = Request.CreateResponse(HttpStatusCode.NoContent);
        response.Headers.Add("Strict-Transport-Security", "max-age=31536000");

        return response;
    }

    // Non-secure request: redirect (301) to the HTTPS equivalent of the URL.
    // Replace the scheme prefix only, rather than every "http" in the URL.
    response = Request.CreateResponse(HttpStatusCode.MovedPermanently);
    response.Headers.Location = new Uri(Request.RequestUri.AbsoluteUri.Replace("http://", "https://"));

    return response;
}

In simple terms, the above method returns an HTTP 301 that instructs the browser to redirect to the secure equivalent of the origin URL. Upon redirecting, the browser receives the HSTS header, which results in an entry in the HSTS database for the domain that hosts your ASP.NET application.

Add the following method in order to read the HSTS entry (if present) for the domain that hosts your ASP.NET application:

public class HSTSResponse
{
    public bool IsSet { get; set; }
}

public HSTSResponse Read()
{
    // A secure inbound request implies that the browser upgraded the
    // connection, i.e., an HSTS entry exists for this domain.
    if (Request.RequestUri.Scheme.Equals("https"))
    {
        return new HSTSResponse
        {
            IsSet = true
        };
    }
    return new HSTSResponse();
}

This method returns a positive boolean value if the inbound HTTP request is secure, implying that the upstream browser contains an entry in its HSTS database for the domain that hosts your ASP.NET application.

Generating Tracking IDs


It is not necessary to compile or run the source code – simply browse to the included index.html file in order to demonstrate the process. You can, of course, run the application locally if you wish.

The complete code leverages 4 external websites, as per the above example, in order to generate a binary value and indirectly store it in the HSTS database. Leveraging 4 external websites yields a total of 2^4, or 16, possible unique values – hardly enough to constitute a unique tracking mechanism. However, consider that if we own 32 external domains, we can generate 2^32 – over 4 billion – unique tracking IDs using this method. Note that the tracking ID in the sample code is rendered as Base-36 for legibility.
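For illustration, a minimal C# sketch of rendering a binary tracking ID as Base-36, assuming the bit string fits in a long and using the conventional 0-9, a-z alphabet:

using System;
using System.Text;

class TrackingIdDemo
{
    const string Alphabet = "0123456789abcdefghijklmnopqrstuvwxyz";

    static string ToBase36(long value)
    {
        if (value == 0) return "0";
        var builder = new StringBuilder();
        while (value > 0)
        {
            builder.Insert(0, Alphabet[(int)(value % 36)]);
            value /= 36;
        }
        return builder.ToString();
    }

    static void Main()
    {
        // A 32-bit binary value assembled from 32 domains.
        var bits = "10101010101010101010101010101010";
        var trackingId = Convert.ToInt64(bits, 2);
        Console.WriteLine(ToBase36(trackingId)); // the same ID, rendered as Base-36
    }
}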

Why use the HSTS database as a storage mechanism?

Cookies can be removed, edited, and faked. Leveraging the HSTS database as a storage mechanism potentially reduces the possibility that your tracking ID will be deleted. While this style of design is generally considered unscrupulous, the purpose of this post is to educate; whether or not this mechanism should be implemented in the wild is a matter of opinion that I leave up to the reader.


Building Stateful Services with Azure Service Fabric

Azure Service Fabric offers two modes of operation: stateful and stateless. Both implementations allow for microservice-style application development. This tutorial focuses on building a simple, stateful microservice; that is, a microservice that maintains a degree of state between calls such that underlying objects retain their properties’ state after a client applies changes to the object.

Why Stateful

Stateful services provide the best of both worlds in terms of scalability and reliability. Azure Service Fabric encapsulates the scaling process to a great degree, by handling virtualisation and hardware-provisioning. The framework also handles load-balancing, failover, and state-synchronisation. The following is a step-by-step guide to implementing the simplest stateful service possible with Azure Service Fabric. The guide assumes that you have completed the steps involved in configuring your development environment.

1. Create a new Azure Service Fabric project

Create a new Azure Service Fabric project called “MyApplication”, selecting the Stateful Reliable Actor type from the list of available templates:

Stateful Reliable Actor Template

2. Add Custom Method Stubs

Locate the ISimpleActor interface and note the existing method stubs. This interface and its associated implementation are created automatically and include two built-in method stubs. Add the following stubs so that the interface includes a "Name" property accessor and modifier:

    public interface ISimpleActor : IActor
    {
        Task<int> GetCountAsync();

        Task SetCountAsync(int count);

        Task<string> GetNameAsync();

        Task SetNameAsync(string name);
    }

Note that all method stubs are asynchronous by default. We have added two new methods: a simple name property accessor and modifier.

3. Add Name Property to State

Locate the ActorState class and modify it so that the class includes a "Name" property:

        [DataContract]
        internal sealed class ActorState
        {
            [DataMember]
            public int Count { get; set; }

            [DataMember]
            public string Name { get; set; }

            public override string ToString()
            {
                // Include both state properties for easier diagnostics.
                return string.Format(CultureInfo.InvariantCulture, "SimpleActor.ActorState[Count = {0}, Name = {1}]", Count, Name);
            }
        }

4. Add Custom Methods

Add the following implementation methods to the SimpleActor class so that it satisfies the ISimpleActor interface:

        public Task<string> GetNameAsync()
        {
            return Task.FromResult(State.Name);
        }

        public Task SetNameAsync(string name)
        {
            State.Name = name;
            return Task.FromResult(true);
        }

5. Manage in Service Fabric Explorer

Deploy the application using Visual Studio:

Build -> Deploy Solution

Note that the application is now running locally in Service Fabric Explorer:

Azure Service Fabric Cluster Manager

6. Create Proxy Client

Add a new Console Application to the solution and install the Microsoft.ServiceFabric.Services NuGet package:

Azure Service Fabric NuGet Package

Note: You may need to change the Target Framework property to 4.5.1, and also the Platform Target to X64:

Modify Target Framework

Modify Platform Target

Finally, include a reference to the SimpleActor.Interfaces project, and add the following to the Main method in the Program class:

            var actorId = ActorId.NewId();

            var simpleActor =
                ActorProxy.Create<ISimpleActor>(actorId, "fabric:/MyApplication");

            // Block until the write completes, so that the subsequent read
            // is guaranteed to observe the new value.
            simpleActor.SetNameAsync("Bob").Wait();
            var name = simpleActor.GetNameAsync();

            Console.WriteLine("Hello, " + name.Result);
            Console.ReadLine();

Summary

Run the program and observe that the text “Hello, Bob” is printed to the command window. The Name property retains its state across calls. Subsequent accessor calls to the SimpleActorService will return the value “Bob”, unless that value is explicitly modified. Note that the value is synchronised across all running instances and nodes.
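To see this persistence at work, a brief sketch (reusing the actorId from the listing above): create a second proxy against the same ActorId and read the value back without setting it first.

            // Re-attach to the same actor; a new proxy with the same ActorId
            // resolves to the same state on the service side.
            var sameActor = ActorProxy.Create<ISimpleActor>(actorId, "fabric:/MyApplication");
            Console.WriteLine("Hello again, " + sameActor.GetNameAsync().Result); // prints "Bob"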


Leveraging Azure Service Bus with C#

Microsoft Azure offers Azure Service Bus as a means of leveraging the Decoupled Middleware design pattern, among other things, in your application. This post outlines a step-by-step guide to implementation, assuming that you have already established an Azure account and have initialised an associated Service Bus.

Start with the Abstraction

This library abstracts the concept of a Service Bus to a level that is not restricted to MS Azure alone. The ServiceBus and ServiceBusAdapter base classes offer any Service Bus implementation the means to establish an associated implementation in this library. That said, the library explicitly implements concrete classes that are specific to MS Azure Service Bus.

The MS Azure Service Bus

The MSAzureServiceBus class provides a succinct means of interfacing with an MS Azure Service Bus, consuming messages as they arrive. Upon initialisation, MSAzureServiceBus requires that a delegate be established that determines the behaviour to invoke when new messages arrive. Very simply, this functionality is exposed as follows:

Incoming Message-handling

public override event EventHandler<MessageReceivedEventArgs<BrokeredMessage>> MessageReceived;

private void OnMessageReceived(MessageReceivedEventArgs<BrokeredMessage> e) {
    var handler = MessageReceived;
    if (handler != null) handler(this, e);
}

Duplicate Message-handling

Similarly, behaviour applying to duplicate messages, that is, messages that have already been processed by MSAzureServiceBus, can also be established:

public override event EventHandler<MessageReceivedEventArgs<BrokeredMessage>> DuplicateMessageReceived;

private void OnDuplicateMessageReceived(MessageReceivedEventArgs<BrokeredMessage> e) {
    var handler = DuplicateMessageReceived;
    if (handler != null) handler(this, e);
}

Receiving Messages Explicitly

Bootstrapping delegates aside, MSAzureServiceBus provides a method designed to retrieve the next available message from the MS Service Bus. This method may be invoked on demand, or as part of a continuous loop, polling the MS Service Bus and consuming new messages immediately after they become available.

        protected override void ReceiveNextMessage(string publisherName, TimeSpan timeout, bool autoAcknowledge) {
            var message = serviceBusAdapter.ReceiveNextMessage(publisherName, timeout, autoAcknowledge);
            if (message == null) return;

            // True when this message ID has not been processed before.
            var isValidMessage = messageValidator.ValidateMessageId(message.MessageId);

            if (isValidMessage) {
                messageValidator.AddMessageIdToCache(message.MessageId);
                OnMessageReceived(new BrokeredMessageReceivedEventArgs(message));
            }
            else {
                OnDuplicateMessageReceived(new BrokeredMessageReceivedEventArgs(message));
            }
        }
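The library's StartListening and StopListening methods, used in the Practical Example below, presumably wrap ReceiveNextMessage in just such a loop. A sketch of that pattern, assuming a simple _listening flag:

        private volatile bool _listening;

        public void StartListening(string publisherName, TimeSpan timeout, bool autoAcknowledge) {
            _listening = true;
            // Poll on a background thread so that the caller is not blocked.
            Task.Run(() => {
                while (_listening)
                    ReceiveNextMessage(publisherName, timeout, autoAcknowledge);
            });
        }

        public void StopListening() {
            _listening = false;
        }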

The MS Azure ServiceBus Adapter

The MSAzureServiceBusAdapter class is a Bridge that encapsulates the underlying mechanisms required to establish a connection to MS Azure Service Bus, and to send and receive messages. Let's consider the functionality in that order:

Initialising a Connection

Firstly, we must establish a NamespaceManager based on an appropriate connection string associated with our MS Azure Service Bus account:

            var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
            _namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

Now we return a reference to a desired Topic, creating the Topic if it does not already exist:

                _topic = !_namespaceManager.TopicExists(topicName) ?
                _namespaceManager.CreateTopic(topicName) : _namespaceManager.GetTopic(topicName);

Lastly, we create a Subscription to the Topic, if one does not already exist:

                if (!_namespaceManager.SubscriptionExists(_topic.Path, subscriptionName))
                _namespaceManager.CreateSubscription(_topic.Path, subscriptionName);

The Complete Listing

        public override void Initialise(string topicName) {
            var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
            _namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

            _topic = !_namespaceManager.TopicExists(topicName) ?
                _namespaceManager.CreateTopic(topicName) : _namespaceManager.GetTopic(topicName);

            if (!_namespaceManager.SubscriptionExists(_topic.Path, subscriptionName))
                _namespaceManager.CreateSubscription(_topic.Path, subscriptionName);

            _isInitialised = true;
        }

It’s worth noting that all methods pertaining to MSAzureServiceBusAdapter will implicitly invoke the Initialise method if a connection to MS Azure Service Bus has not already been established.

Sending Messages

This library offers the means to send messages in the form of BrokeredMessage objects to MS Azure Service Bus. Firstly, we must establish a connection, if one does not already exist:

if (!_isInitialised) Initialise(topicName);

Finally, initialise a TopicClient, if one has not already been established, and simply send the message as-is, in BrokeredMessage format:

            if (_topicClient == null)
                _topicClient = TopicClient.Create(topicName);
            _topicClient.Send(message);

The Complete Listing

        public override void SendMessage(string topicName, BrokeredMessage message) {
            if (!_isInitialised) Initialise(topicName);

            if (_topicClient == null)
                _topicClient = TopicClient.Create(topicName);
            _topicClient.Send(message);
        }
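Usage is then a single call (a sketch; "TestTopic" matches the Topic used in the Practical Example below):

            var adapter = new MSAzureServiceBusAdapter();

            // Initialise is invoked implicitly before the message is published to the Topic.
            adapter.SendMessage("TestTopic", new BrokeredMessage("Hello, Service Bus!"));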

Receiving Messages

Messages are consumed from MS Azure Service Bus in a serial manner, one after the other. Once again, we must initially establish a connection, if one does not already exist:

            if (!_isInitialised)
                Initialise(topicName);

Next, we initialise a SubscriptionClient, if one has not already been established, and declare a BrokeredMessage variable, the method's return type:

            if (_subscriptionClient == null)
                _subscriptionClient = SubscriptionClient.Create(topicName, subscriptionName);

            BrokeredMessage message = null;

Next, we return the next available message, or null, if there are no available messages:

                message = _subscriptionClient.Receive(timeout);
                if (message == null)
                    return null;

Note that this method defines an "autoAcknowledge" parameter. If true, consumption of the message is acknowledged immediately by calling Complete; if false, the message is returned to the caller without acknowledgement:

                if (!autoAcknowledge) return message;
                message.Complete();

Finally, we return or abandon the message, depending on whether or not an Exception occurred:

            catch (Exception) {
                if (message != null) message.Abandon();
                throw;
            }
            return message;

The Complete Listing

        public override BrokeredMessage ReceiveNextMessage(string topicName, TimeSpan timeout, bool autoAcknowledge = false) {
            if (!_isInitialised)
                Initialise(topicName);

            if (_subscriptionClient == null)
                _subscriptionClient = SubscriptionClient.Create(topicName, subscriptionName);

            BrokeredMessage message = null;

            try {
                message = _subscriptionClient.Receive(timeout);
                if (message == null)
                    return null;
                if (!autoAcknowledge) return message;
                message.Complete();
            }
            catch (Exception) {
                if (message != null) message.Abandon();
                throw;
            }
            return message;
        }

A Practical Example

Let’s build a small Console Application to demonstrate the concept. Our application will interface with MS Azure Service Bus and continuously poll for messages until the application terminates:

            var serviceBus = new MSAzureServiceBus(new MSAzureServiceBusAdapter(), new MessageValidator());
            serviceBus.MessageReceived += serviceBus_MessageReceived;

            private static void serviceBus_MessageReceived(object sender, MessageReceivedEventArgs<BrokeredMessage> e) {
                Console.WriteLine(e.Message.MessageId);
            }

Message Validation

Notice the MessageValidator instance in the above code snippet. Let’s pause for a moment and consider the mechanics.

Messages contain message identifiers in GUID format. Our application retains an index of these identifiers. Incoming messages are validated by comparing the incoming message ID to the IDs stored within the index. If a match is found, the message is determined to be a duplicate, and appropriate action can be taken.

Here we can see that our inbound message IDs are stored in a simple HashSet of strings. Incidentally, we leverage a HashSet here to achieve constant, or O(1), time complexity. Essentially, the time taken to perform a lookup will remain constant (external factors such as garbage collection aside) regardless of the HashSet's size:

private readonly HashSet<string> _cache = new HashSet<string>();

public IEnumerable<string> Cache { get { return _cache; } }

Newly added message IDs are formatted to remove all hyphens, if any exist, so that the same standard is applied to message IDs regardless of format; the same normalisation is applied when validating:

        public void AddMessageIdToCache(string messageId) {
            // Strip hyphens so that all cached IDs share the same format.
            _cache.Add(messageId.Replace("-", string.Empty));
        }

        public bool ValidateMessageId(string messageId) {
            // True when the (normalised) ID has not been seen before.
            return !_cache.Contains(messageId.Replace("-", string.Empty));
        }
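A quick usage sketch of the validator, with a hypothetical message ID:

            var validator = new MessageValidator();
            var messageId = Guid.NewGuid().ToString();

            Console.WriteLine(validator.ValidateMessageId(messageId)); // True: not yet seen

            validator.AddMessageIdToCache(messageId);
            Console.WriteLine(validator.ValidateMessageId(messageId)); // False: a duplicate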

Once initialised, the application will continuously poll MS Azure Service Bus until the return key is pressed:

            serviceBus.StartListening("TestTopic", new TimeSpan(0, 0, 1), true);
            Console.WriteLine("Listening to the Service Bus. Press any key to quit...");

            Console.ReadLine();
            serviceBus.StopListening();

            Console.WriteLine("Disconnecting...");

The Complete Listing

    internal class Program {
        private static void Main(string[] args) {
            var serviceBus = new MSAzureServiceBus(new MSAzureServiceBusAdapter(), new MessageValidator());
            serviceBus.MessageReceived += serviceBus_MessageReceived;

            serviceBus.StartListening("TestTopic", new TimeSpan(0, 0, 1), true);
            Console.WriteLine("Listening to the Service Bus. Press any key to quit...");

            Console.ReadLine();
            serviceBus.StopListening();

            Console.WriteLine("Disconnecting...");
        }

        private static void serviceBus_MessageReceived(object sender, MessageReceivedEventArgs<BrokeredMessage> e) {
            Console.WriteLine(e.Message.MessageId);
        }
    }

Simply add a new message to your MS Azure Service Bus instance. The application will consume the message and display the message ID on-screen.


Building a Highly Available, Durable in-memory Cache

Overview

Caching strategies have become an integral component in today’s software applications. Distributed computing has resulted in caching strategies that have grown quite complex. Coupled with Cloud computing, caching has become something of a dark art. Let’s walk through the rationale behind a cache, the mechanisms that drive it, and how to achieve a highly available, durable cache, without persisting to disk.

Why We Need a Cache

Providing fast data-access

Data stores are growing larger and more distributed. Caches provide fast read capability and enhanced performance compared to reading from disk. Data distributed across multiple hardware stacks and multiple geographic locations can be centralised at locations geographically close to application users.

Absorbing traffic surges

Sudden bursts in traffic can cause contention in terms of data-persistence. Storing data in memory removes the overhead involved in disk I/O operations, easing the burden on network resources and application threads.

Augmenting NoSQL

NoSQL has gained traction to the extent that it is now pervasive. Many NoSQL offerings, such as Couchbase, implement an eventual-consistency model; essentially, data will persist to disk at some point after a write operation is invoked. This is an effective big-data management strategy; however, it introduces potential pitfalls on the consuming application's side. Consider an operation originating from an application that expects data to be written immediately. The application may not have the luxury of waiting until the data eventually persists. Caching the data ensures almost immediate availability.
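To make the idea concrete, consider a minimal write-through sketch in C# – the ICache and IDocumentStore interfaces here are hypothetical, not tied to any specific vendor:

    public interface ICache {
        void Put(string key, string value);
        string Get(string key);
    }

    public interface IDocumentStore {
        // Returns before the write is durable (eventual consistency).
        void WriteAsync(string key, string value);
    }

    public class WriteThroughRepository {
        private readonly ICache _cache;
        private readonly IDocumentStore _store;

        public WriteThroughRepository(ICache cache, IDocumentStore store) {
            _cache = cache;
            _store = store;
        }

        public void Save(string key, string value) {
            _cache.Put(key, value);        // Immediately readable from the cache...
            _store.WriteAsync(key, value); // ...while the store persists eventually.
        }

        public string Load(string key) {
            // Serve reads from the cache; a fuller version would fall back
            // to the data-store on a cache miss.
            return _cache.Get(key);
        }
    }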

Another common design in NoSQL technology is to direct both reads and writes associated with the same data segment to the node on which that data segment resides. This minimises node-hopping and ensures efficient data-flow. Caching can further augment this process by providing a layer of cached metadata in front of the data-store, reducing the NoSQL data-store's requirement to manage traffic and minimising resource-consumption. The following design illustrates the basic structure of a managed cache in a hosted environment using Aerospike – a flash-optimised, in-memory database:

Distributed Cache

High Availability and the Cloud

High availability is a principle applied to hosted solutions, ensuring that the system will remain online, even if only partly, regardless of failure. Failure encompasses not just hardware or software failure, such as disk failure or out-of-memory exceptions, but also controlled failure, such as machine maintenance.

How Super Data Centers Manage Infrastructure

Data Centers, such as those managed by Amazon Web Services and Microsoft Azure, distribute infrastructure across regions – physical locations separated geographically. Infrastructure contained within each region is further segmented into Availability Zones, or Availability Sets. These are physical groupings of hosted services within hardware stacks – e.g., server racks. Hardware is routinely patched, maintained, and upgraded within Data Centers. This is applied in a controlled manner, such that resources contained within Availability Zone/Set X will not be taken offline at the same time as resources contained within Availability Zone/Set Z.

Durability and the Cloud

To achieve high availability in hosted applications, the applications should be distributed across Availability Zones/Sets, at least. To further enhance the degree of availability, applications can be distributed across separate regions. Consider the following design:

Highly available, durable, cloud-based cache

When Things Fall Over

Notice that the design provides 8 cache servers, distributed evenly across both regions and their Availability Zones. Thus, should any given Availability Zone fail, 3 Availability Zones will remain online. In the unlikely event that an entire Data Center fails, taking all of its Availability Zones with it, the second region will remain online – our application can be said to be highly available.

Note that the design includes AWS Simple Queue Service (SQS) to achieve Cross Data Center Replication (XDR). The actual implementation, which I will address in an upcoming post, is slightly more complex, and is simplified here for clarity. Enterprise solutions such as Aerospike and Couchbase offer XDR as a built-in function.

Traffic is load balanced evenly (or in a more suitable manner) across Availability Zones. A Global DNS service, such as AWS Route 53, directs traffic to each region. In situations where all regions and Availability Zones are available, we might consider distributing traffic based on geographic location. Users based in Ireland can be routed to AWS-Dublin, while German users might be routed to AWS-Frankfurt, for example. Route 53 can be configured to distribute all traffic to live regions, should any given region fail entirely.

Taking Things a Step Further by Minimising PCI DSS Exposure

Applications that handle financial data, such as merchant systems, must comply with the requirements outlined by the PCI Data Security Standard. The requirements that apply depend on your application's configuration. For example, storing payment card details on disk requires a higher level of adherence to PCI DSS than offloading the storage effort to a 3rd party.

Requirements for Handling Financial Data

The PCI DSS defines data as two logical entities: data-in-transit and data-at-rest. Data-at-rest is essentially data that has been persisted to a data-store. Data-in-transit applies to data stored in RAM, although the requirements do not specify that this data must be transient – that it must have a point of origin and a destination. Therefore, storing data in RAM would, at least from a legal perspective, result in a reduced level of PCI DSS exposure, in that requirements pertaining to storing data on disk, such as encryption, do not apply.

Of course, this raises the question: should sensitive data always be persisted to hard-storage? Or is storing data in a highly available and durable cache sufficient? I suspect at this point that you might feel compelled to post a strongly-worded comment outlining that this idea is ludicrous – but is it really? Can an in-memory cache, once distributed and durable enough to withstand multiple degrees of failure, operate with the same degree of reliability as a hard data-store? I'd certainly like to prove the concept.

Summary

Caching data allows for increased throughput and optimised application performance. Enhancing this concept further, by distributing your cache across physical machine-boundaries, and further still across multiple geographical locations, results in a highly available, durable in-memory storage mechanism.

Hosting cache servers within close proximity to your customers allows for reduced latency and an enhanced user-experience, as well as providing for several degrees of failure; from component, to software, to Availability Zone/Set, to entire region failure.
