This tutorial expands on the previous tutorial, focusing on the Queue Pool concept. By way of a quick refresher, a Queue Pool is a feature of the Daishi.AMQP library that allows AMQP Queues to be shared safely among concurrent clients, such that each Queue has at most one consumer at any given time. The concept is not unlike database connection-pooling.
We’ve built a small application that leverages a simple downstream Microservice, implements the AMQP protocol over RabbitMQ, and operates a QueuePool mechanism. We have seen how the QueuePool can retrieve the next available Queue:
var queue = QueuePool.Instance.Get();
And how Queues can be returned to the QueuePool:
QueuePool.Instance.Put(queue);
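Putting both calls together, a typical usage pattern might look like the following sketch. The try/finally structure is my own suggestion, not prescribed by the library; it simply ensures that the Queue is returned to the pool even if the operation fails:

var queue = QueuePool.Instance.Get();

try
{
    // Publish to, or consume from, the Queue here.
}
finally
{
    // Always return the Queue to the pool so that it can be reused.
    QueuePool.Instance.Put(queue);
}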
We have also considered the QueuePool Constructor, and how it leverages the RabbitMQ Management API to return a list of relevant Queues:
private QueuePool(Func<AMQPQueue> amqpQueueGenerator)
{
    _amqpQueueGenerator = amqpQueueGenerator;
    _amqpQueues = new ConcurrentBag<AMQPQueue>();

    var manager = new RabbitMQQueueMetricsManager(false, "localhost", 15672, "paul", "password");
    var queueMetrics = manager.GetAMQPQueueMetrics();

    foreach (var queueMetric in queueMetrics.Values)
    {
        Guid queueName;
        var isGuid = Guid.TryParse(queueMetric.QueueName, out queueName);

        if (isGuid)
        {
            _amqpQueues.Add(new RabbitMQQueue {IsNew = false, Name = queueName.ToString()});
        }
    }
}
Notice the higher-order aspect of the above constructor: it accepts a Queue-generating function as a parameter. In the QueuePool static Constructor we define this function as follows:
private static readonly QueuePool _instance = new QueuePool(
    () => new RabbitMQQueue {
        Name = Guid.NewGuid().ToString(),
        IsNew = true
    });
This function is invoked when the QueuePool is exhausted and has no Queues available to dispense. It is a simple factory function that creates a new RabbitMQQueue object; the Daishi.AMQP library will ensure that the underlying Queue is created (if it does not already exist) when referenced.
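To make the exhaustion path concrete, here is a minimal sketch of how Get might behave. This is an illustrative reconstruction based on the behaviour described above, not the library’s actual implementation:

public AMQPQueue Get()
{
    AMQPQueue queue;

    // Dispense a pooled Queue if one is available...
    if (_amqpQueues.TryTake(out queue)) return queue;

    // ...otherwise the pool is exhausted; fall back to the generator,
    // which yields a brand-new Queue.
    return _amqpQueueGenerator();
}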
Exhaustion is Expensive
QueuePool exhaustion is something that we need to avoid. If our application frequently consumes all available Queues then the QueuePool will become ineffective. Let’s look at how we go about avoiding this scenario.
First, we need some targets. We need to know how much traffic our application will absorb in order to adequately size our resources. For argument’s sake, let’s assume that our MathController will be subjected to 100,000 inbound HTTP requests, delivered in batches of 10. In other words, at any given time, MathController will service 10 simultaneous requests, and will continue doing so until 100,000 requests have been served.
Stress Testing Using Apache Bench
Apache Bench is a very simple, lightweight tool designed to load-test web-based applications, and is bundled with the Apache HTTP Server. Click here for simple download instructions. Assuming that our application runs on port 46653, here is the appropriate Apache Bench command to issue 100 MathController HTTP requests in batches of 10:
ab -n 100 -c 10 http://localhost:46653/api/math/150
Notice the “n” and “c” parameters; “n” refers to “number”, as in the total number of requests, and “c” refers to “concurrency”, the number of requests to run simultaneously. Running this command will yield something along the lines of the following:
Benchmarking localhost (be patient).....done
Server Software: Microsoft-IIS/10.0
Server Hostname: localhost
Server Port: 46653
Document Path: /api/math/150
Document Length: 5 bytes
Concurrency Level: 10
Time taken for tests: 7.537 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 39500 bytes
HTML transferred: 500 bytes
Requests per second: 13.27 [#/sec] (mean)
Time per request: 753.675 [ms] (mean)
Time per request: 75.368 [ms] (mean, across all concurrent requests)
Transfer rate: 5.12 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 1
Processing: 41 751 992.5 67 3063
Waiting: 41 751 992.5 67 3063
Total: 42 752 992.4 67 3063
Percentage of the requests served within a certain time (ms)
50% 67
66% 1024
75% 1091
80% 1992
90% 2140
95% 3058
98% 3061
99% 3063
100% 3063 (longest request)
Adjusting QueuePool for Optimal Results
Those results don’t look great. Incidentally, if you would like more information on how to interpret Apache Bench results, click here. Let’s focus on the final section, “Percentage of the requests served within a certain time (ms)”. Here we see that 75% of all requests completed within 1091 ms, just over 1 second. 10% took more than 2 seconds, and 5% took more than 3 seconds to complete. That’s quite a long time for such a simple operation running on a local server. Let’s run the same command again:
Benchmarking localhost (be patient).....done
Server Software: Microsoft-IIS/10.0
Server Hostname: localhost
Server Port: 46653
Document Path: /api/math/100
Document Length: 5 bytes
Concurrency Level: 10
Time taken for tests: 0.562 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 39500 bytes
HTML transferred: 500 bytes
Requests per second: 177.94 [#/sec] (mean)
Time per request: 56.200 [ms] (mean)
Time per request: 5.620 [ms] (mean, across all concurrent requests)
Transfer rate: 68.64 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.4 0 1
Processing: 29 54 11.9 49 101
Waiting: 29 53 11.9 49 101
Total: 29 54 11.9 49 101
Percentage of the requests served within a certain time (ms)
50% 49
66% 54
75% 57
80% 60
90% 73
95% 80
98% 94
99% 101
100% 101 (longest request)
OK. Those results look a lot better. Even the longest request took 101 ms, and 80% of all requests completed in <= 60 ms.
But where does this discrepancy come from? Remember that on start-up there are no QueuePool Queues; the QueuePool is empty and has no resources to distribute. Each inbound request therefore forces the QueuePool to create a new Queue in order to facilitate that request, and the Queue is reclaimed only when the request has completed.
Does this mean that when I deploy my application, the first batch of requests are going to run much more slowly than subsequent requests?
No, that’s where sizing comes in. As with all performance testing, the objective is to set a benchmark in terms of the expected volume that an application will absorb, and to determine the maximum traffic that it can withstand. In order to sufficiently bootstrap QueuePool, so that it contains an adequate number of dispensable Queues, we can simply include ASP.NET controllers that leverage QueuePool in our performance run.
Suppose that we expect to handle 100 concurrent users over extended periods of time. Let’s run an Apache Bench command again, setting the level of concurrency to 100, with a suitably high number of requests in order to sustain that volume over a reasonably long period of time:
ab -n 1000 -c 100 http://localhost:46653/api/math/100
Percentage of the requests served within a certain time (ms)
50% 861
66% 938
75% 9560
80% 20802
90% 32949
95% 34748
98% 39756
99% 41071
100% 42163 (longest request)
Again, these are very poor, but expected, results. More interesting is the number of Queues now active in RabbitMQ.
In my own environment, QueuePool created 100 Queues in order to facilitate all inbound requests. Let’s run the test again, and consider the results:
Percentage of the requests served within a certain time (ms)
50% 497
66% 540
75% 575
80% 591
90% 663
95% 689
98% 767
99% 816
100% 894 (longest request)
These results are much more respectable. Again, the discrepancy between performance runs is due to the fact that QueuePool was not adequately initialised during the first run. By the second run, however, QueuePool had been initialised with 100 Queues, a volume sufficient to facilitate the traffic that the application is expected to serve. This is as simple an example as possible.
Real-world performance testing entails a lot more than simply executing isolated commands against single endpoints; however, the principle remains the same. We have effectively determined the optimal size necessary for QueuePool to operate efficiently, and can now size it accordingly on application start-up (see the sketch below), ensuring that all inbound requests are served quickly and without bias.
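By way of illustration, a start-up warm-up routine might look something like the following. This is a hypothetical sketch of my own, not part of the Daishi.AMQP library; it assumes that Get() creates a new Queue on demand, as described earlier, and that poolSize is the figure arrived at through the performance run:

// Hypothetical warm-up: take and then return poolSize Queues, so that the
// pool is fully populated before the first real request arrives.
const int poolSize = 100; // determined by the performance run above
var queues = new List<AMQPQueue>();

for (var i = 0; i < poolSize; i++)
{
    queues.Add(QueuePool.Instance.Get());
}

foreach (var queue in queues)
{
    QueuePool.Instance.Put(queue);
}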
Those already versed in the area of Microservices might object at this point. There is only a single instance of our Microservice, SimpleMathMicroservice, running. One of the fundamental concepts behind Microservice design is scalability. In my next article, I’ll cover scaling, and we’ll drive those performance response times into the floor.