You have written your test plan and are ready to run your cloud-based load test with RedLine13, leveraging AWS resources to easily scale your load profile. However, before you can launch your test, you’ll need to decide on how many EC2 load testing servers you will need. In this brief guide, we will discuss the concepts and simple calculations to help you determine that number.
EC2 class size
A prerequisite to scaling your test across multiple load testing servers is choosing an instance class and size. Another blog post provides a good starting point for making that determination; it focuses on vCPU count, which is an often-encountered constraint. For the purposes of this example, we will use the default server class and size of m3.medium.
Scaling up your test
While determining your ideal EC2 class and size, you likely arrived at a load profile for your test, usually expressed as a certain number of requests over a given timeframe. For our hypothetical example, let us say we want to generate 750 requests per second at peak throughput. Consider the following profile from a JMeter test plan:
According to this load profile, our test can be expected to generate at most 150 simultaneous requests. If each request transaction takes about one second, that equates to approximately 150 requests per second per server at peak. Since load testing servers run in parallel, the maximum throughput of the entire RedLine13 test run is this per-server throughput multiplied by the number of servers configured. This of course assumes we have selected instance sizes appropriately so as not to “max out” their capabilities.
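The per-server estimate can be sketched in a few lines. The thread count and transaction time below are the hypothetical values from this example, not measurements:

```python
# Per-server throughput estimate: with each request taking about one
# second, 150 concurrent threads sustain roughly 150 requests/second.
concurrent_threads = 150        # peak simultaneous threads in the JMeter profile
avg_transaction_time_s = 1.0    # assumed average time per request transaction

per_server_rps = concurrent_threads / avg_transaction_time_s
print(per_server_rps)  # 150.0 requests/second for one server
```

If your transactions take longer than a second on average, the same arithmetic yields a proportionally lower per-server rate.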
For a test involving five m3.medium servers, our maximum throughput can be calculated as:
150 requests/second/server x 5 servers = 750 requests/second
There are sources of unpredictability, network response time variability in particular, but we can expect actual throughput to be reasonably close to this estimate.
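The scaling arithmetic above, and its inverse (how many servers a target rate requires), can be sketched as follows. The function names are illustrative, not RedLine13 APIs:

```python
import math

def max_throughput(per_server_rps: float, servers: int) -> float:
    """Peak requests/second across all load generators running in parallel."""
    return per_server_rps * servers

def servers_needed(target_rps: float, per_server_rps: float) -> int:
    """Smallest whole number of servers that meets the target rate."""
    return math.ceil(target_rps / per_server_rps)

print(max_throughput(150, 5))    # 750 requests/second
print(servers_needed(750, 150))  # 5 servers
```

Rounding up with `math.ceil` matters when the target is not an exact multiple of the per-server rate; for example, a 800 requests/second target at 150 per server requires six servers, not five.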
Another scaling example
In a slightly different scenario, let us consider this load profile:
It is similar to the previous one, except that at peak load we are simulating about 500 simultaneous users. We can also assume the behavior of this test is such that each thread (i.e., virtual user) completes up to three requests per second. This means our throughput will approach 500 × 3 = 1,500 requests per second. We also know these requests happen to be more memory- and CPU-intensive, so we will select the larger m5.4xlarge instance type. In this case we want a maximum of only 1,500 requests per second to hit our test endpoints, so we will run the test on a single instance.
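The same arithmetic applies to this second example; the values below are the hypothetical figures from the scenario above:

```python
import math

# 500 virtual users, each completing up to three requests per second.
users = 500
requests_per_user_per_sec = 3

peak_rps = users * requests_per_user_per_sec
# We want to cap total load at 1,500 requests/second, so a single
# (suitably large) instance is enough:
servers = math.ceil(1500 / peak_rps)
print(peak_rps, servers)  # 1500 1
```

Note that the instance choice (m5.4xlarge here) is driven by the per-request memory and CPU cost, while the instance count is driven by the throughput cap you want to impose on your endpoints.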
Conclusion
One of the advantages of load testing in the cloud is nearly unlimited scalability “on tap”. We can design a single, modular test plan and then achieve massive scale in RedLine13 with just a few mouse clicks. The examples above are intentionally simple to make a point, but the same concepts apply to more complex scenarios. There may be some trial and error in ascertaining the request volume per virtual user, but once that is known, your tests can be scaled just as effortlessly to any desired level.
RedLine13 offers a full-featured free trial account, which will allow you to experience all our platform has to offer. Sign up today and see how RedLine13 can work to save time and reduce your load testing costs.