Monthly Archives: January 2015

Object Oriented, Test Driven Design in C# and Java: A Practical Example Part #1

Download the code in C#
Download the code in Java

Check out my interview on .NET Rocks! – TDD on .NET and Java with Paul Mooney

For a brief overview, please refer to this post.

At this point, many tutorials start by launching into a “Hello, World” style tutorial, with very little practical basis.


This isn’t the most exciting concept, so let’s try a more practical example. Instead of churning out boring pleasantries, our application is going to do something a bit more interesting…build robots.


Specifically, our application is going to build awesome robots with big guns.

Ok, let’s get started with a narrative description of what we’re going to do.

“Mechs with Big Guns” is a factory that produces large, robotic vehicles designed to shoot other large, robotic vehicles. Robots are composed of several robotic parts, delivered by suppliers. Parts are loaded into a delivery bay, and are transported by worker drones to various rooms; functional parts such as arms, legs, etc., are dispatched to an assembly room. Guns and explosives are dispatched to an armoury.
The factory hosts many worker drones to assemble the robots. These drones will specialise in the construction of 1 specific robot, and will require all parts that make up that robot in order to build it. Once the drone has acquired all parts, it will enter the assembly room and build the robot. Newly built robots are transported to the armoury where waiting drones outfit them with guns. From time to time, two robots will randomly be selected from the armoury, and will engage one another in the arena, an advanced testing-ground in the factory. The winning robot will be repaired in the arena by repair drones. Its design will be promoted on a leader board, tracking each design and their associated victories.

The first thing we’ll do is look at the narrative again, this time highlighting each noun.

“Mechs with Big Guns” is a factory that produces large, robotic vehicles designed to shoot other large, robotic vehicles. Robots are composed of several robotic parts, delivered by suppliers. Parts are loaded into a delivery bay, and are transported by worker drones to various rooms; functional parts such as arms, legs, etc., are dispatched to an assembly room. Guns and explosives are dispatched to the armoury.
The factory hosts many worker drones to assemble the robots. These drones will specialise in the construction of 1 specific robot, and will require all parts that make up that robot in order to build it. Once the drone has acquired all parts, it will enter the assembly room and build the robot. Newly built robots are transported to the armoury where waiting drones outfit them with weapon assemblies. From time to time, two robots will randomly be selected from the armoury, and will engage one another in the arena, an advanced testing-ground in the factory. The winning robot will be repaired in the arena by repair drones. Its design will be promoted on a leader board, tracking each design and their associated victories.

Each highlight represents an actual object that will exist in our simulation.

“But where do I start with all of this?”

I recommend starting with a low-level component; that is, a component with little or no dependency on other objects. Supplier looks like a good place to start. It doesn’t do much, other than deliver RobotParts to a DeliveryBay, so let’s start with that.

I’m assuming at this point, if you’re working with .NET, that you have a new Visual Studio solution and have created a Class Library project with NUnit added as a package. NUnit is the testing framework that we will use to validate our core components.

Or, if you’re working with Java, that you’ve set up a new Java 8 project in your IDE of choice and have configured JUnit, the testing framework that we will use to validate our core components.

Add a class called RobotPartSupplierTests and add the following method:

C#

        [Test]
        public void RobotPartSupplierDeliversRobotParts() {
            var mechSupplier = new RobotPartSupplier {
                RobotParts = new List<RobotPart> {
                    new MockedRobotPart(),
                    new MockedRobotPart()
                }
            };

            var deliveryBay = new MockedDeliveryBay();
            mechSupplier.DeliverRobotParts(deliveryBay);

            Assert.AreEqual(2, deliveryBay.RobotParts.Count);
            Assert.AreEqual(0, mechSupplier.RobotParts.Count);
        }

Java

    @Test
    public void supplierDeliversRobotParts() {
        RobotPartSupplier robotPartSupplier = new RobotPartSupplier();
        List<RobotPart> robotParts = new ArrayList<RobotPart>();

        robotParts.add(new MockedRobotPart());
        robotParts.add(new MockedRobotPart());

        robotPartSupplier.setRobotParts(robotParts);

        DeliveryBay mockedDeliveryBay = new MockedDeliveryBay();
        robotPartSupplier.deliverRobotParts(mockedDeliveryBay);

        assertEquals(2, mockedDeliveryBay.getRobotParts().size());
        assertEquals(0, robotPartSupplier.getRobotParts().size());
    }

Here, we declare an instance of RobotPartSupplier, an implementation of the Supplier abstract class:

C#

public abstract class Supplier {
    public List<RobotPart> RobotParts { get; set; }
}

Java

public abstract class Supplier {
    private List<RobotPart> _robotParts;

    public List<RobotPart> getRobotParts() {
        return _robotParts;
    }

    public void setRobotParts(List<RobotPart> value) {
        _robotParts = value;
    }
}

“Why create an abstraction? Can’t we just deal directly with the concrete implementation?”

Yes, we can. We’ve abstracted to the Supplier class because at some point, we’re going to have to test a component that has a dependency on a RobotPartSupplier. The point of each test in a Test Driven project is that it tests exactly 1 unit of work, and no more. In our case, we’re testing to ensure that a RobotPartSupplier can deliver RobotParts to a DeliveryBay. But if you follow the logic through, we’re also testing concrete implementations of DeliveryBay and RobotPart. This can have negative consequences later on.

What would happen, for example, if we were to change the underlying behaviour of DeliveryBay? For starters, it would break associated DeliveryBay tests.

“OK, so what?”

That’s fine – that’s what the tests are there for. But it will also break our RobotPartSupplier tests. The point is to isolate each problem domain into a set of tests, and to apply boundaries to those tests such that changes in other problem domains do not impact them.
This might sound a bit pedantic. If so, bear with me. It actually provides a level of neatness to our code, which is particularly helpful as codebases grow larger.

Essentially, the rule of thumb is to abstract everything.

Notice that our RobotPartSupplier contains a List of RobotPart. Based on the above concept, the actual RobotPart instances loaded in this test are mocked instances. These are dummy implementations of the core abstraction, RobotPart, that will never make it into our actual core components.

“Why bother with them then?”

They provide simple implementations of dependent classes in order to protect our RobotPartSupplier tests from changes in concrete implementations of those dependent classes.

As you can see, we’ve mocked DeliveryBay and RobotPart abstractions.
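
For reference, the mocked types used in the test can be as simple as the sketch below. The exact shape of the RobotPart and DeliveryBay abstractions is an assumption here, inferred from the test’s assertions and from the DeliverRobotParts implementation that follows; the definitions in the downloadable code may differ.

// Assumed abstractions, inferred from the test above; the real definitions
// in the downloadable code may differ.
public abstract class RobotPart { }

public abstract class DeliveryBay {
    public List<RobotPart> RobotParts { get; set; }
}

// The mocks add no behaviour of their own. They exist purely so that the
// RobotPartSupplier tests do not depend on concrete RobotPart or DeliveryBay
// implementations.
public class MockedRobotPart : RobotPart { }

public class MockedDeliveryBay : DeliveryBay { }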
Let’s carry on. The first thing that we want to prove here is that our RobotPartSupplier can deliver RobotParts to a DeliveryBay, so our test requires us to implement the DeliverRobotParts method:

C#

    public class RobotPartSupplier : Supplier {
        public void DeliverRobotParts(DeliveryBay deliveryBay) {
            deliveryBay.RobotParts = new List<RobotPart>(RobotParts);
            RobotParts.Clear();
        }
    }

Java

    public class RobotPartSupplier extends Supplier {
        public void deliverRobotParts(DeliveryBay deliveryBay) {
            deliveryBay.setRobotParts(new ArrayList<RobotPart>(this.getRobotParts()));
            getRobotParts().clear();
        }
    }

Notice that two things happen here:

  • The RobotPartSupplier’s RobotParts are copied to the DeliveryBay
  • The RobotPartSupplier’s RobotParts are emptied

As per the brief, this concludes our delivery-related functionality. Our test’s assertions determine that the RobotParts have indeed been copied from our RobotPartSupplier to our mocked DeliveryBay.

In next week’s tutorial, we’ll continue our implementation, focusing on WorkerDrones, and their functionality.


Object Oriented, Test Driven Design in C# and Java

Check out my interview on .NET Rocks! – TDD on .NET and Java with Paul Mooney

Overview

Providing performance-optimised frameworks is both a practical and theoretical compulsion. Thus far, my posts have covered my own bespoke frameworks designed to optimise performance or enhance security. I’ve outlined those frameworks’ design, and provided tutorials describing several implementation examples.

It occurred to me that providing such frameworks is not just about the practical – designing and distributing code libraries – but also about the theoretical – how to go about designing solutions from the ground up, with performance optimisation in mind.

With that in mind, this post will mark the first in a series of posts aimed at offering step-by-step tutorials outlining the fundamentals of Object Oriented and Test Driven design in C# and Java.

“But what do Object Oriented and Test Driven Design have in common with performance optimisation? Surely components implemented in a functional, or other capacity will yield similar results in terms of performance?”

Well, that’s a subjective opinion, which, in any case, is beside the point. Let’s start with TDD. In essence, when designing software, always subscribe to the principle that less is more, and strive to deliver solutions of minimal size. Applying this methodology yields the following benefits:

  • Your code is more streamlined, and easier to navigate
  • Less code means fewer components, fewer working parts, less friction, and potentially fewer bugs
  • Fewer working parts mean fewer interactions, and potentially faster throughput of data

Friction in software systems occurs when components interact with one another. The more components you have, the more friction occurs, and the greater the likelihood that friction will result in bugs, or performance-related issues.
This is where Test Driven Design comes in. Essentially, you start with a test.

“OK, but what exactly is a test?!?”

Let me first offer a disclaimer: I won’t quote scripture on this blog, nor offer a copy-and-paste explanation of technical terms. Instead, I’ll attempt to offer explanations and opinions in as practical a manner as possible. In other words, plain English:

A TEST IS A SOFTWARE FUNCTION THAT PROVES THE COMPONENT YOU’RE BUILDING DOES WHAT IT’S SUPPOSED TO DO.

That’s it. I can expand on this to a great degree, but in essence, that’s all you need to know.
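
As a minimal, purely illustrative NUnit example (the Calculator class below is hypothetical, and is defined here only so the example is self-contained):

using NUnit.Framework;

// A hypothetical component, defined only so that this example is self-contained.
public class Calculator {
    public int Add(int a, int b) {
        return a + b;
    }
}

[TestFixture]
public class CalculatorTests {
    [Test]
    public void AddReturnsTheSumOfItsOperands() {
        var calculator = new Calculator();

        // The test proves that the component does exactly what it's supposed to do.
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}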

“Great. But how do tests help?”

Tests focus on one thing only – ensuring that the tested component achieves its purpose, and nothing more. In other words, when our component is finished, it should consist of exactly the amount of code necessary to fulfil its purpose, and no more.

“That makes sense. What about Object Oriented Design? I don’t see how that helps. Will systems designed in an object-oriented manner run more efficiently than others?”

No, not necessarily. However, object-oriented systems can potentially offer a great degree of flexibility and reusability. Let’s assume that we have a working system. Step back and consider that system in terms of its core components.

In an object-oriented design, the system will consist of a series of objects, interacting with one another in a loosely-coupled fashion, so that each object is not (or at least should not be) dependent on the others. Theoretically, we achieve two things from this:

  • We can identify and extract application logic, replacing it with new objects, should requirements change
  • Objects can be reused across the application, where logic overlaps

These are generally harder to achieve in unstructured systems. Using a combination of Object Oriented and Test Driven Design, we can achieve a design that:

  • is flexible
  • lends itself well to change
  • is protected by working tests
  • does not contain superfluous code
  • adheres to design patterns

Let’s explore some of these concepts that haven’t been covered so far:

Think of your tests like a contract. They define how your components behave. Significant changes to a component should cause associated tests to fail, thus protecting your application from breaking changes.

There are numerous articles online that argue the merits, or lack thereof, of design patterns. Some argue that all code should be structured based on design patterns, others that they add unnecessary complexity.

My own opinion is that over time, as software evolved, the same design problems occurred across systems as they were developed. Solutions to those problems eventually formed, and the most effective of those solutions matured into established design patterns.

Every software problem you will ever face has been solved before. A certain pattern, or combination of patterns exists that offer a solution to your problem.

Let’s explore these concepts further by applying them to a practical example in next week’s follow-up post.


JSON# – Tutorial #5: Deserialising Complex Objects

Fork on Github
Download the Nuget package

The previous tutorial focused on deserialising simple JSON objects. This tutorial describes the process of deserialising a more complex object using JSON#.

Let’s use the ComplexObject class that we’ve leveraged in earlier tutorials:

class ComplexObject : IHaveSerialisableProperties {
    public string Name { get; set; }
    public string Description { get; set; }
    public List<ComplexArrayObject> ComplexArrayObjects { get; set; }
    public List<double> Doubles { get; set; }

    public SerialisableProperties GetSerializableProperties() {
        return new SerialisableProperties("complexObject", new List<JsonProperty> {
            new StringJsonProperty {
                Key = "name",
                Value = Name
            },
            new StringJsonProperty {
                Key = "description",
                Value = Description
            }
        }, new List<JsonSerialisor> {
            new ComplexJsonArraySerialisor("complexArrayObjects",
                ComplexArrayObjects.Select(c => c.GetSerializableProperties())),
            new JsonArraySerialisor("doubles",
                Doubles.Select(d => d.ToString(CultureInfo.InvariantCulture)), JsonPropertyType.Numeric)
        });
    }
}

Let’s instantiate this with some values, and serialise to JSON. I won’t bloat this post with details on how to serialise, as that was covered in previous posts. Here is our serialised ComplexObject instance:

{"complexObject":{"name":"Complex Object","description":"A complex object","complexArrayObjects":[{"name":"Array Object #1","description":"The 1st array object"},{"name":"Array Object #2","description":"The 2nd array object"}],"doubles":[1,2.5,10.8]}}

Notice that we have two collections: a simple collection of Doubles, and a more complex collection of ComplexArrayObjects. Let’s start with those.

First, create a new class, ComplexObjectDeserialiser, and implement the required constructor and Deserialise method.

Remember this method from the previous tutorial?

    var properties = jsonNameValueCollection.Parse(mergeArrayValues);

This effectively parses the JSON and loads each element into a NameValueCollection. This is fine for simple properties; however, collection-based properties would cause the deserialiser to load each collection element as a separate item in the returned NameValueCollection, which may be somewhat cumbersome to manage:

Flattened JSON collection

This is where the Boolean parameter mergeArrayValues comes in. It will concatenate all collection-based values into a comma-delimited string, and load this value into the returned NameValueCollection. This is much more intuitive, and allows consuming code to simply split the comma-delimited values and iterate as required.

Compressed JSON Collection
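
To illustrate what the flattened form looks like, here is a small standalone sketch. It is not part of JSON#; it simply uses JSON.NET’s JsonTextReader directly, as StandardJsonNameValueCollection does, and prints the indexed path that each collection element would otherwise occupy in the returned NameValueCollection:

using System;
using System.IO;
using Newtonsoft.Json;

class FlattenedPathsDemo {
    static void Main() {
        const string json = "{\"complexObject\":{\"doubles\":[1,2.5,10.8]}}";

        using (var reader = new JsonTextReader(new StringReader(json))) {
            while (reader.Read()) {
                if (reader.Value != null && reader.TokenType != JsonToken.PropertyName) {
                    // Each array element appears under its own indexed path,
                    // e.g. complexObject.doubles[0] -> 1
                    Console.WriteLine("{0} -> {1}", reader.Path, reader.Value);
                }
            }
        }
    }
}

With mergeArrayValues set to true, those three entries collapse into a single comma-delimited value under the complexObject.doubles key, which is the form the deserialiser below consumes.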

Here is the complete code-listing:

class ComplexObjectDeserialiser : Deserialiser<ComplexObject> {
        public ComplexObjectDeserialiser(JsonNameValueCollection jsonNameValueCollection) : base(jsonNameValueCollection) {}

        public override ComplexObject Deserialise(bool mergeArrayValues = false) {
            var properties = jsonNameValueCollection.Parse(mergeArrayValues);

            var complexObject = new ComplexObject {
                Name = properties["complexObject.name"],
                Description = properties["complexObject.description"],
                ComplexArrayObjects = new List<ComplexArrayObject>(),
                Doubles = new List<double>()
            };

            var complexArrayObjectNames = properties["complexObject.complexArrayObjects.name"].Split(',');
            var complexArrayObjectDescriptions = properties["complexObject.complexArrayObjects.description"].Split(',');

            for (var i = 0; i < complexArrayObjectNames.Length; i++) {
                var complexArrayObjectName = complexArrayObjectNames[i];

                complexObject.ComplexArrayObjects.Add(new ComplexArrayObject {
                    Name = complexArrayObjectName,
                    Description = complexArrayObjectDescriptions[i]
                });
            }

            var complexArrayObjectDoubles = properties["complexObject.doubles"].Split(',');

            foreach (var @double in complexArrayObjectDoubles)
                complexObject.Doubles.Add(Convert.ToDouble(@double));

            return complexObject;
        }
    }

As before, we deserialise as follows:

    Json.Deserialise(new ComplexObjectDeserialiser(new StandardJsonNameValueCollection("<JSON string...>")), true);

JSON# does most of the work, loading each serialised element into a NameValueCollection. We simply read from that collection, picking and choosing each element to map to an associated POCO property.
For collection-based properties, we simply retrieve the value associated with the collection’s key, split that value into an array, and loop through the array, loading a new object for each item in the collection, building our ComplexObject step-by-step.

This is the final JSON# post covering tutorial-based material. I’m working on a more thorough suite of performance benchmarks, and will publish the results, as well as offering a more in-depth technical analysis of JSON# in future posts. Please contact me if you would like to cover a specific topic.

The next posts will feature a tutorial series outlining Object Oriented, Test Driven Development in C# and Java. Please follow this blog to receive updates.


JSON# – Tutorial #4: Deserialising Simple Objects

Fork on Github
Download the Nuget package

The previous tutorial focused on serialising complex JSON objects. This tutorial describes the process of deserialisation using JSON#.

The purpose of JSON# is to allow memory-efficient JSON processing. At the root of this is the ability to dissect large JSON files and extract smaller structures from within, as discussed in Tutorial #1.

One thing, among many, that JSON.NET achieves is memory-efficient JSON parsing using the JsonTextReader component. The last thing that I want to do is reinvent the wheel; to that end, JSON# also provides a simple means of deserialisation, which wraps JsonTextReader. Using JsonTextReader alone requires quite a lot of custom code, so I’ve provided a wrapper to make things simpler and allow reusability.

At the root of the deserialisation process lies the StandardJsonNameValueCollection class. This is an implementation of the JsonNameValueCollection, from which custom implementations can be derived, if necessary, in a classic bridge design, leveraged from the Deserialiser component. Very simply, it reads JSON node values and stores them as key-value pairs; the key is the node’s path from the root, and the value is the node’s value. This allows us to cache the JSON object and store it in a manner that provides easy access to its values, without traversing the tree:

class StandardJsonNameValueCollection : JsonNameValueCollection {
    public StandardJsonNameValueCollection(string json) : base(json) {}

    public override NameValueCollection Parse() {
        var parsed = new NameValueCollection();

        using (var reader = new JsonTextReader(new StringReader(json))) {
            while (reader.Read()) {
                if (reader.Value != null && !reader.TokenType.Equals(JsonToken.PropertyName))
                    parsed.Add(reader.Path, reader.Value.ToString());
            }
            return parsed;
        }
    }
}

Let’s work through an example using our SimpleObject class from previous tutorials:

class SimpleObject : IHaveSerialisableProperties {
    public string Name { get; set; }
    public int Count { get; set; }

    public virtual SerialisableProperties GetSerializableProperties() {
        return new SerialisableProperties("simpleObject", new List<JsonProperty> {
            new StringJsonProperty {
                Key = "name",
                Value = Name
            },
            new NumericJsonProperty {
                Key = "count",
                Value = Count
            }
        });
    }
}

Consider this class represented as a JSON object:

{
    "simpleObject": {
        "name": "SimpleObject",
        "count": 1
    }
}

The JSON# JsonNameValueCollection implementation will read each node in this JSON object and return the values in a NameValueCollection. Once stored, we need only provide a mechanism to instantiate a new SimpleObject POCO with the key-value pairs. JSON# provides the Deserialiser class as an abstraction to provide this functionality. Let’s create a class that accepts the JsonNameValueCollection and uses it to populate an associated POCO:

class SimpleObjectDeserialiser : Deserialiser<SimpleObject> {
    public SimpleObjectDeserialiser(JsonNameValueCollection parser) : base(parser) {}

    public override SimpleObject Deserialise() {
        var properties = jsonNameValueCollection.Parse();
        return new SimpleObject {
            Name = properties.Get("simpleObject.name"),
            Count = Convert.ToInt32(properties.Get("simpleObject.count"))
        };
    }
}

This class contains a single method designed to map properties. As you can see from the code snippet above, we read each key as a representation of the corresponding node’s path, and then bind the associated value to a POCO property.
JSON# leverages the SimpleObjectDeserialiser as a bridge to deserialise the JSON string:

const string json = "{\"simpleObject\":{\"name\":\"SimpleObject\",\"count\":1}}";
var simpleObjectDeserialiser = new SimpleObjectDeserialiser(new StandardJsonNameValueCollection(json));
var simpleObject = Json.Deserialise(simpleObjectDeserialiser);

So, why do this when I can just bind my objects dynamically using JSON.NET:

JsonConvert.DeserializeObject<SimpleObject>(json);

There is nothing wrong with deserialising in the above manner. But let’s look at what happens. First, it will use Reflection to look up CLR metadata pertaining to your object, in order to construct an instance. Again, this is ok, but bear in mind the performance overhead involved, and consider the consequences in a web application under heavy load.

Second, it requires that you decorate your object with serialisation attributes, which lack flexibility in my opinion.

Third, if your object is quite large, specifically over 85K in size, you may run into memory issues if your object is allocated on the Large Object Heap.

Deserialisation using JSON#, on the other hand, reads the JSON in stream format, byte by byte, guaranteeing a small memory footprint, and it does not require Reflection. Instead, your custom implementation of Deserialiser allows a greater degree of flexibility when populating your POCO than simply decorating properties with attributes.

I’ll cover deserialising more complex objects in the next tutorial, specifically, objects that contain complex properties such as arrays.


JSON# – Tutorial #3: Serialising Complex Objects

Fork on Github
Download the Nuget package

The last tutorial focused on serialising simple JSON objects. This tutorial contains a more complex example.

Real-world objects are generally more complex than typical “Hello, World” examples. Let’s build such an object: an object that contains complex properties, such as other objects and collections. We’ll start by defining a sub-object:

class SimpleSubObject : IHaveSerialisableProperties {
    public string Name { get; set; }
    public string Description { get; set; }

    public SerialisableProperties GetSerializableProperties() {
        return new SerialisableProperties("simpleSubObject", new List<JsonProperty> {
            new StringJsonProperty {
                Key = "name",
                Value = Name
            },
            new StringJsonProperty {
                Key = "description",
                Value = Description
            }
        });
    }
}

This object contains two simple properties: Name and Description. As before, we implement the IHaveSerialisableProperties interface to allow JSON# to serialise the object. Now let’s define an object with a property that is a collection of SimpleSubObjects:

class ComplexObject : IHaveSerialisableProperties {
    public string Name { get; set; }
    public string Description { get; set; }

    public List<SimpleSubObject> SimpleSubObjects { get; set; }
    public List<double> Doubles { get; set; }

    public SerialisableProperties GetSerializableProperties() {
        return new SerialisableProperties("complexObject", new List<JsonProperty> {
            new StringJsonProperty {
                Key = "name",
                Value = Name
            },
            new StringJsonProperty {
                Key = "description",
                Value = Description
            }
        },
        new List<JsonSerialisor> {
            new ComplexJsonArraySerialisor("simpleSubObjects",
                SimpleSubObjects.Select(c => c.GetSerializableProperties())),
            new JsonArraySerialisor("doubles",
                Doubles.Select(d => d.ToString(CultureInfo.InvariantCulture)), JsonPropertyType.Numeric)
        });
    }
}

This object contains some simple properties, as well as two collections: the first, a collection of Double; the second, a collection of SimpleSubObjects.

Note the GetSerializableProperties method in ComplexObject. It accepts a collection parameter of type JsonSerialisor, which represents the highest level of abstraction in terms of the core serialisation components in JSON#. In order to serialise our collection of SimpleSubObjects, we leverage an implementation of JsonSerialisor called ComplexJsonArraySerialisor, designed specifically to serialise collections of objects, as opposed to primitive types. Given that each SimpleSubObject in our collection contains an implementation of GetSerializableProperties, we simply pass the result of each method to the ComplexJsonArraySerialisor constructor. It will handle the rest.

We follow a similar process to serialise the collection of Double, in this case leveraging JsonArraySerialisor, another implementation of JsonSerialisor, specifically designed to manage collections of primitive types. We simply provide the collection of Double in their raw format to the serialisor.

Let’s instantiate a new instance of ComplexObject:

var complexObject = new ComplexObject {
    Name = "Complex Object",
    Description = "A complex object",

    SimpleSubObjects = new List<SimpleSubObject> {
        new SimpleSubObject {
            Name = "Sub Object #1",
            Description = "The 1st sub object"
        },
        new SimpleSubObject {
            Name = "Sub Object #2",
            Description = "The 2nd sub object"
        }
    },
    Doubles = new List<double> {
        1d, 2.5d, 10.8d
    }
};

As per the previous tutorial, we serialise as follows:

var writer = new BinaryWriter(new MemoryStream(), new UTF8Encoding(false));
var serialisableProperties = complexObject.GetSerializableProperties();

using (var serialisor = new StandardJsonSerialisationStrategy(writer))
    Json.Serialise(serialisor, new JsonPropertiesSerialisor(serialisableProperties));

Note the use of StandardJsonSerialisationStrategy here. This is the only implementation of JsonSerialisationStrategy, one of the core serialisation components in JSON#. The abstraction exists to provide extensibility, so that different strategies might be applied at runtime, should specific serialisation rules vary across requirements.
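
As an aside, if you want to inspect the serialised output, one option is to hold on to the underlying MemoryStream and decode its contents once serialisation completes. This is a sketch rather than part of the JSON# API, and it assumes the strategy flushes everything it writes to the BinaryWriter’s underlying stream; note that MemoryStream.ToArray() remains usable even after the writer has been disposed:

// Hold a reference to the stream so the output can be read back afterwards.
var stream = new MemoryStream();
var writer = new BinaryWriter(stream, new UTF8Encoding(false));
var serialisableProperties = complexObject.GetSerializableProperties();

using (var serialisor = new StandardJsonSerialisationStrategy(writer))
    Json.Serialise(serialisor, new JsonPropertiesSerialisor(serialisableProperties));

// Read the serialised JSON back from the stream for inspection.
var json = Encoding.UTF8.GetString(stream.ToArray());
Console.WriteLine(json);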

In the next tutorial I’ll discuss deserialising objects using JSON#.
