When running JMeter tests, and especially distributed tests, there are cases where we may wish to automatically terminate a test in progress based on in-test conditions. In a previous post we explored how to create a custom script that terminates a load test when a predetermined error condition is met. The JMeter AutoStop plugin offers a different approach based on aggregate data. Nothing prevents us from using both approaches in the same test; the AutoStop plugin addresses a complementary use case. In this article we will show you how to configure the AutoStop plugin within your JMeter load test.
Adding the AutoStop Plugin to Your Test
As with other publicly available plugins managed under jmeter-plugins.org, the AutoStop plugin can be installed from the JMeter Plugins Manager.
Search for “AutoStop” under the “Available Plugins” tab to find the AutoStop plugin. You will have to check the box on the left to select it for installation, and then click “Apply Changes and Restart JMeter”.
Configuring the AutoStop Plugin
You can add the AutoStop plugin to any existing test as a listener element. The component exposes several configurable parameters, and the test is terminated when any configured condition is met. These parameters include:
- Average Response Time
- Average Latency
- Error Rate (as a percentage)
As an example, you may wish to stop a test if the error rate exceeds a certain threshold, such as 1%. With the AutoStop listener this is easily achieved by entering that value as a parameter and specifying the sampling interval (e.g., 60 seconds).
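For comparison with the scripted approach from our earlier post, the same rule can be expressed in a JSR223 Listener. The Groovy sketch below is not the plugin's implementation; the property keys, the 1% threshold, and the 60-second window are illustrative assumptions.

```groovy
// A scripted analogue of the AutoStop error-rate rule, written as a
// JSR223 Listener. The property keys, 1% threshold, and 60-second
// window are illustrative assumptions, not part of the plugin.
import java.util.concurrent.atomic.AtomicLong

// Test-wide counters shared across all threads via JMeter properties.
def total       = props.computeIfAbsent('autostop.total',  { k -> new AtomicLong(0) })
def errors      = props.computeIfAbsent('autostop.errors', { k -> new AtomicLong(0) })
def windowStart = props.computeIfAbsent('autostop.window',
        { k -> new AtomicLong(System.currentTimeMillis()) })

// 'prev' is the SampleResult this listener was just notified about.
total.incrementAndGet()
if (!prev.isSuccessful()) {
    errors.incrementAndGet()
}

// Evaluate once per 60-second window; compareAndSet ensures a single
// thread performs the check-and-reset for each window.
long now = System.currentTimeMillis()
long ws  = windowStart.get()
if (now - ws >= 60000L && windowStart.compareAndSet(ws, now)) {
    long t = total.getAndSet(0)
    long e = errors.getAndSet(0)
    double errorRate = (t > 0) ? 100.0d * e / t : 0.0d
    if (errorRate > 1.0d) {
        log.warn("Error rate ${errorRate}% exceeded 1% in the last interval; stopping test")
        prev.setStopTest(true)   // ask JMeter to shut this test down gracefully
    }
}
```

Because each JVM evaluates its own counters, a script like this stops only the engine it runs in, which mirrors the per-agent behavior discussed next.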
Use Cases for the AutoStop Plugin
If you use this component in a distributed, cloud-based test on a provider such as RedLine13, each load agent evaluates the stop conditions independently. This means only the instances that meet the shutdown conditions (e.g., problematic nodes) are terminated, while the rest continue. You may still elect to discard the test results entirely in such a case, but a few examples where this per-agent behavior is desirable include:
- Probing load agent capability using different load agent sizes – complex load tests need to be matched to an appropriate underlying EC2 instance class for their load agents. This is often done as a series of short tests, which per-agent AutoStop can consolidate into a single run: undersized agents simply stop themselves.
- Load agents parameterized from different data sources – if your load agents each use unique data (either separate CSV files or a single split CSV), a dataset that generates more than the expected number of errors can be excluded from the final result without necessarily needing to re-run your test (see the logging sketch below).
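When an agent does stop itself, it is useful to record which one tripped the condition so its results can be identified afterwards. The small Groovy fragment below is purely illustrative and is not required by the plugin; it simply logs the local hostname from the same JSR223 context:

```groovy
// Optional JSR223 fragment (Groovy): log which load agent stopped itself.
// Purely illustrative; the AutoStop plugin does not require this.
def agent = java.net.InetAddress.getLocalHost().getHostName()
log.warn("AutoStop condition met on agent ${agent}; this agent is stopping, others continue")
```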
Did you know that RedLine13 offers a full-featured, time-limited free trial? Sign up now, and start testing today!