We are always looking for ways to make load testing even cheaper. Many of our frequent users are running larger and longer load tests as they integrate performance testing into their software development lifecycle. With that growth, each load agent can generate logs or .jtl files that run to multiple gigabytes.
As a post-test process, we merge the JMeter JTL files into the generated Apache JMeter Dashboard Report (we do something similar for the Gatling generated report). This produces extremely large reports and merged files. While CPU was never an issue for these large tests, they were taking up so much storage on the EC2 instances that jobs fought over disk resources and performance suffered.
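To make the merge step concrete, here is a minimal sketch of combining CSV-format JTL files from multiple agents into one file. This is an illustration, not our actual pipeline: it assumes the JTLs are in CSV format (JMeter can also write XML), and the paths and the merge_jtl_files helper are hypothetical.

```python
import csv
import glob

def merge_jtl_files(pattern: str, output_path: str) -> None:
    """Concatenate CSV-format JTL files, keeping a single header row."""
    with open(output_path, "w", newline="") as out:
        writer = csv.writer(out)
        header_written = False
        for path in sorted(glob.glob(pattern)):
            with open(path, newline="") as src:
                reader = csv.reader(src)
                header = next(reader, None)  # each JTL starts with a header row
                if header is None:
                    continue  # skip empty files
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                writer.writerows(reader)  # append this agent's sample rows

# Hypothetical layout: one results.jtl per agent under a shared test directory.
merge_jtl_files("/mnt/efs/test-1234/agent-*/results.jtl",
                "/mnt/efs/test-1234/merged.jtl")
```

With dozens of multi-gigabyte inputs, the merged output and the dashboard built from it are what drive the storage problem described above.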
Our initial reaction was to dramatically increase the provisioned storage on the server that does the number crunching. However, this data comes and goes, while provisioned volumes are billed whether they are full or empty, so our costs would have climbed linearly month after month.
Solution: Amazon Elastic File System
Amazon Elastic File System (EFS) is a distributed file system whose capacity grows and shrinks with consumption, so you pay for what you store rather than what you provision. Amazon EFS can be mounted on an EC2 instance, offering a standard file system interface and file system access semantics. Multiple EC2 instances can access the same Amazon EFS file system at the same time, so data from many load agents can be aggregated in one place and CPU for job processing can be scaled up easily.
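EFS is exposed over NFSv4.1, so mounting it looks like any other NFS mount. Below is a small sketch of automating that mount from Python; the file system ID, region, and mount point are placeholders, and the mount options are the ones recommended in the AWS EFS documentation. (The same mount can also be done in /etc/fstab or with the amazon-efs-utils helper.)

```python
import os
import subprocess

# Hypothetical values for illustration; substitute your own.
EFS_ID = "fs-12345678"
AWS_REGION = "us-east-1"
MOUNT_POINT = "/mnt/efs"

def mount_efs() -> None:
    """Mount an EFS file system over NFSv4.1, the protocol EFS exposes."""
    os.makedirs(MOUNT_POINT, exist_ok=True)
    source = f"{EFS_ID}.efs.{AWS_REGION}.amazonaws.com:/"
    subprocess.run(
        [
            "mount", "-t", "nfs4",
            # Options from the AWS EFS mounting documentation.
            "-o", "nfsvers=4.1,rsize=1048576,wsize=1048576,"
                  "hard,timeo=600,retrans=2",
            source, MOUNT_POINT,
        ],
        check=True,  # raise if the mount fails
    )

if __name__ == "__main__":
    # Requires root and an EFS mount target in the instance's subnet.
    mount_efs()
```

Run the same mount on every processing instance and they all see one shared directory tree, which is what lets agent output be aggregated without copying files between machines.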
As we complete the analysis and report generation, we clean up the working files and store the results back in S3. This makes EFS a great place to do the work and then discard the artifacts. Instead of attaching ever-larger volumes to EC2 and watching costs rise linearly, EFS gave us petabyte scale and a cost structure that matches customer workload.
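A minimal sketch of that archive-then-clean step, using boto3: the bucket name, directory layout, and artifact names are hypothetical stand-ins, but the pattern (upload the final outputs, then delete the working set so EFS stops billing for it) is the one described above.

```python
import shutil

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and paths for illustration.
RESULTS_BUCKET = "example-load-test-results"

def archive_and_clean(test_id: str) -> None:
    """Push final artifacts to S3, then reclaim the EFS space they used."""
    work_dir = f"/mnt/efs/{test_id}"
    for artifact in ("merged.jtl", "report.zip"):
        s3.upload_file(f"{work_dir}/{artifact}",
                       RESULTS_BUCKET,
                       f"{test_id}/{artifact}")
    # EFS bills by bytes stored, so deleting the working set ends the charge.
    shutil.rmtree(work_dir)

archive_and_clean("test-1234")
```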
So what does this mean for RedLine13? You can keep enjoying cheap load testing at even larger scales. Instead of investing in more EC2 hardware or adding fixed storage that would drive our costs up, we were able to implement a scalable solution.