Before you start load testing, you probably have goals for what you want to accomplish with your load tests. Those goals are crucial, but on their own they are rarely enough. That's where the Performance Test Plan for Load Testing comes in.
If your load testing is fairly straightforward, the test plan can be too. But if your load testing is complicated or high volume, you will probably save yourself a lot of time and anguish by spending some time up front on a test plan.
We will walk you through the components of a test plan for load testing, and we have provided a link to download a template containing an example test plan.
Downloading the Template
To help you get started, we have provided a ready-made template with all the common headings, which you can fill in with your own information. You can access the template as a Google document here and modify it as needed for your specific load testing project.
Purpose and Definitions
Here we want to briefly describe the reasons for executing our load test. We don’t necessarily need a long description, but rather a single sentence or simple phrase which states the general purpose. An example might be: “To determine the ultimate ‘failure point’ of our back-end widget inventory API system.”
"Definitions" is an optional section where we can spell out acronyms or define terms that have special meanings for this particular test.
Performance Criteria
This section describes the foundational concepts of our load test. Under "Goals" we can define the specific outcomes we want from the test (for example, the business case we are trying to make or defend).
We also want to specify the "Test Type" in this section in terms of well-established standard categories. Here are a few examples of standard test types we might select:
- Stress Test
- Baselining Test
- Saturation Point Test
- Spike Test
- Soak Test
In the description area, you can also note the testing framework used (JMeter, Gatling, log replay, a custom tool, etc.).
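To make the test type concrete, here is a minimal sketch of what a stress-test-style ramp could look like in Gatling's Scala DSL. The host, endpoint, class name, and numbers are placeholders for illustration, not recommendations.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Minimal stress-test sketch: steadily increase the arrival rate until the
// system under test begins to degrade. All values here are hypothetical.
class WidgetInventoryStressTest extends Simulation {

  val httpProtocol = http.baseUrl("https://api.example.com") // placeholder host

  val scn = scenario("Widget inventory lookup")
    .exec(
      http("list widgets")
        .get("/widgets")            // placeholder endpoint
        .check(status.is(200))
    )

  setUp(
    // Open-model ramp: from 1 to 200 new users per second over 15 minutes.
    scn.inject(rampUsersPerSec(1).to(200).during(15.minutes))
  ).protocols(httpProtocol)
}
```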
"Failure Criteria" is where we declare which conditions will cause the test to fail; conversely, if none of these conditions occur, the test is considered successful. For certain kinds of tests, we may instead use "Acceptance Criteria", where specific conditions must be met for the test to be considered a success. Here are some examples of failure criteria (a sketch of how some of them could be checked automatically follows the list):
- During any 5-minute interval:
  - Peak measured ASG average CPU usage ≥ 70%
  - Peak average endpoint response time ≥ 200 ms
- Over the course of the entire test run:
  - Error rate exceeds 0.01% of all requests
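Where the load tool supports it, some of these criteria can also be enforced automatically at the end of a run. As a rough sketch (not a complete mapping), the response-time and error-rate thresholds above could be approximated with Gatling assertions, reusing the `scn` and `httpProtocol` from the sketch earlier; the CPU criterion would still come from infrastructure metrics such as CloudWatch, and true per-5-minute-interval checks would require post-processing of the results.

```scala
// Rough whole-run approximations of the failure criteria above, attached to
// the simulation's setUp block. Interval-based (5-minute window) checks are
// not expressed here; only run-wide thresholds are asserted.
setUp(scn.inject(rampUsersPerSec(1).to(200).during(15.minutes)))
  .protocols(httpProtocol)
  .assertions(
    global.responseTime.mean.lt(200),         // mean response time under 200 ms
    global.failedRequests.percent.lte(0.01)   // error rate at most 0.01% of requests
  )
```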
Technical Requirements
Subheadings under this section will vary depending on your particular test scenario and needs. One common heading is "Environment", where the general architecture of the application under test is described. For example, this might include a description of any API being tested, relevant hosting environment considerations, or front-end details if the target application is a website. Some or all of this information may be opaque from the perspective of the load test, in which case the environment description may be intentionally vague. Another item to cover here is the server instance sizes, types, and counts used to generate load.
If your test requires credentials or other types of secrets, these should be identified under the "Credentials" heading. A description is fine; we of course do not want to include the actual values of any credentials in the document.
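One way to honor that rule in the test scripts as well as the document is to reference secrets at runtime instead of embedding them. Here is a minimal sketch inside a simulation like the one above, assuming a hypothetical WIDGET_API_KEY environment variable supplies the key:

```scala
// Hypothetical: the API key is injected through an environment variable at
// runtime, so neither the test plan nor the simulation source contains the
// secret value itself.
val apiKey: String = sys.env.getOrElse(
  "WIDGET_API_KEY",
  throw new IllegalStateException("WIDGET_API_KEY is not set")
)

val httpProtocol = http
  .baseUrl("https://api.example.com")          // placeholder host
  .header("Authorization", s"Bearer $apiKey")  // attach the secret per request
```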
Load Profile
Especially in cases where we are using credentials, there may be unique requirements for what defines a “User Lifecycle”. Are users being newly registered for this test? Alternatively, are we using pre-registered users from a defined user/credential data source? These are questions we want to answer here in our documentation.
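For example, if we choose pre-registered users, the plan can point at the credential data source the test consumes. Below is a hedged sketch using a hypothetical users.csv feeder (with username and password columns) in Gatling's Scala DSL; the login endpoint is a placeholder.

```scala
// Hypothetical pre-registered user pool: users.csv with a "username,password"
// header row. ".circular" reuses records if the test outlasts the file.
val users = csv("users.csv").circular

val scn = scenario("Pre-registered user session")
  .feed(users)
  .exec(
    http("login")
      .post("/login")                        // placeholder endpoint
      .formParam("username", "#{username}")  // Gatling EL (3.7+ syntax)
      .formParam("password", "#{password}")
      .check(status.is(200))
  )
```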
We may also want to describe our "Throughput Goals", which can be a brief description in a few words or simply a graph of target load over time.
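Where the tool allows it, the same goal can be written directly into the injection profile. As a minimal sketch with hypothetical numbers, reusing the `scn` and `httpProtocol` from the earlier sketches, ramping to the target rate and then holding it might look like this in Gatling:

```scala
// Hypothetical throughput goal: ramp up to 200 new users per second over
// 10 minutes, then hold that arrival rate for 30 minutes. If each user issues
// a single request, this approximates 200 requests per second at steady state.
setUp(
  scn.inject(
    rampUsersPerSec(1).to(200).during(10.minutes),
    constantUsersPerSec(200).during(30.minutes)
  )
).protocols(httpProtocol)
```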
Post-Test Analysis
In this section we describe all of our data collection and reporting methods. For a hypothetical test, for example, we would list the Azure Metrics dashboard, AWS CloudWatch metrics, and log files as data sources. For RedLine13 tests in particular, we make available a cross-server JMeter report dashboard that provides detailed metrics and useful graphs.
Summary
This final section of the template is optional, but it provides a space for a few closing lines summarizing everything covered in the test plan. It can also be an opportunity to tie in business arguments and/or explain the justification for the test. Our sample template (found at the bottom of the fillable form) contains a good example of this. You can make an editable copy by selecting "Make a Copy" from the "File" menu in Google Docs.
Are you ready to try RedLine13 for yourself? Sign up with our full-featured free trial plan and start testing today!