“Don’t Be Afraid to Start Load Testing”: AMA with Load Testing Expert, Tim Koopmans

Tips for getting started with load testing by one of the founders of Flood

How do load testing and tuning improve software speed and quality?

Speed is a common goal and is typically the easiest thing to quantify and measure. For example, testers often focus on a single metric like response time to describe performance and infer quality. It's a much-touted metric and is often quoted (along with concurrency) as the primary objective.

However, metrics such as 'speed' or 'concurrency' are simply data. Looked at in isolation, they lack context or meaning. Combined as observations, they can start to explain behavior. Understanding system behavior lets us make more accurate predictions, which can be further tested. It's only by asking those questions and testing them that we get closer to more qualitative objectives such as quality.

Load testing is simply a means to achieve an objective. It can be described as putting demand on a system and measuring its performance. Quality is not a single objective; it might include things like reliability, availability, scalability, and so on.

Great load testing delves into questions about performance, gathering evidence to build confidence or reduce risk across a range of tasks and objectives. Examples might be describing performance in a failover scenario, or addressing the scalability and cost-effectiveness of a given architecture.

Maybe it will help forecast capacity and predict demand. There are many questions about performance which can be addressed through testing.

What are some real-world problems that load testing addresses?

The most obvious problems are related to general site availability and reliability under load. Many companies start load testing in response to an event that has already happened to them in production - for example, application servers crashing under load.

The more proactive companies test in advance of an event, such as a sale or a high-volume period. Quite often the real-world problems are the same; the difference is whether you find out about them in production or in test.

What are the biggest pitfalls surrounding load testing today?

Whilst platforms such as Flood enable customers to get started with load testing early and continue it through a DevOps-style process, my biggest concern still lies with an over-dependence on the performance test 'expert'.

Nobody can reasonably expect a single person to be responsible for performance across the wide variety of platforms and technologies that are common in production systems these days.

Performance testing and tuning is a shared responsibility.

What are some of the most common issues you see people face when load testing?

Lack of preparation is a common issue. Thankfully, with a cloud-based approach the impact of this is reduced, as customers can start and stop their test efforts on demand. During test execution, I see many testers fail to anticipate or plan for the size of the test being conducted. This might mean that things like intrusion detection or DDoS prevention mechanisms are triggered, hampering the test. Or it may simply mean that they exhaust available capacity in a quick succession of tests. These aren't necessarily bad outcomes, as they help explain system behavior, but I do see testers caught out by them.

Understanding how the workload being generated relates to the observable metrics in the system can also be an issue for less experienced teams. An over-reliance on single metrics, or a narrow view of the system under test, can compound this type of issue. The best success is enjoyed when one understands end-to-end system performance. More often than not, the black box left out of scope, for example a load balancer, turns out to be the primary culprit behind unexplained poor performance.

Another common issue is a single person or entity being nominated as the performance expert. Performance has such a wide impact these days that having a multi-skilled team, or the ability to engage with a wider team, means you will generally achieve better outcomes from your testing and tuning.

What skills do developers need to ensure their code and applications perform well under performance testing?

I would encourage developers not only to write code, but to understand the nature of the platform they are ultimately deploying that code to. Even if a developer does not have production access, virtual machines or containerized environments bring development environments one step closer to production in terms of configuration, if not size.

The flip side of that is enabling development teams to replicate production performance defects or issues in relative safety. Nothing beats understanding complex system performance by being able to observe it directly in production, or in production-like environments.

Testing as a whole is changing. What have you observed as some of the biggest changes to load testing in the past year?

Cloud-based infrastructure gives us an unprecedented economy of scale, where we can load test production-sized systems with production-sized load and beyond. The throwaway nature of cloud-based resources means that we can quickly scale to simulate demand in response to the questions we ask through the course of load testing.

I would say that cloud-based load testing means the testing itself has become more exploratory. I am seeing more scenarios that are dictated by the results observed during testing itself.

This is a departure from the more statically defined performance test strategies of the past.

In terms of tuning, the Application Performance Management space has really blossomed. There is such a wide variety of tools and platforms available to testers that we are no longer locked into one toolset or approach. This also extends to the ways in which we generate load.

Open-source tools like JMeter and Gatling are increasingly popular, and there are plenty of commercial tools and platforms available too. A competitive market gives customers options, and I would say performance testing is much more accessible than it was a decade ago.
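
To give a sense of how approachable these tools have become, here is a minimal sketch of a load test written in Gatling's Scala DSL. The base URL, request names, and load profile are illustrative assumptions, not details from the interview.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicBrowseSimulation extends Simulation {

  // Hypothetical system under test; swap in your own base URL.
  val httpProtocol = http
    .baseUrl("https://example.com")
    .acceptHeader("text/html")

  // A simple user journey: load the home page, pause, then search.
  val browse = scenario("Basic browse")
    .exec(http("home page").get("/"))
    .pause(1)
    .exec(http("search").get("/search?q=shoes"))

  // Ramp 50 virtual users over 60 seconds.
  setUp(
    browse.inject(rampUsers(50).during(60.seconds))
  ).protocols(httpProtocol)
}
```

A handful of lines like these is often enough to start asking the first questions about how a system behaves under load.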

Where do you think load testing is headed in the next one to five years?

Performance testing is everyone's responsibility, more than it ever has been in the past. Platforms that enable testers with different technical skills to get started with load testing provide significant opportunities.

Giving customers more ways to create and express tests in a load testing context is a great progression in the load testing space.

Things like unit tests are almost synonymous with quality code, appearing alongside that code in application repositories. I would expect load tests to sit alongside them in the future. Load testing in isolation is a thing of the past; we see more and more load tests happening during continuous integration or deployment pipelines, and I expect that to grow.
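
As one way to picture a load test living in a CI or deployment pipeline, the sketch below adds assertions to a small Gatling simulation so the run produces a clear pass/fail result the pipeline can act on. The endpoint, load profile, and thresholds are illustrative assumptions.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class SmokeLoadSimulation extends Simulation {

  // Hypothetical endpoint; in practice, point this at a test environment.
  val httpProtocol = http.baseUrl("https://example.com")

  // A light, steady load suitable for running on every build.
  val smoke = scenario("CI smoke load")
    .exec(http("health check").get("/health"))

  setUp(
    smoke.inject(constantUsersPerSec(5).during(2.minutes))
  ).protocols(httpProtocol)
    .assertions(
      global.responseTime.max.lt(1000),        // fail the run if any request exceeds 1s
      global.successfulRequests.percent.gt(99) // or if fewer than 99% of requests succeed
    )
}
```

Because the assertions determine the simulation's exit status, the same test can gate a deployment just as a failing unit test would.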

What aspects of load testing and tuning are you involved with?

Tricentis Flood is a distributed load testing platform. Aside from the day-to-day operations, tuning, and capacity management of our own systems, we also assist thousands of testers with their performance test efforts. We see companies executing a full range of performance tests, including seasonal or peak-demand load testing, project-based testing, post-failure or remediation-based load testing, stress-to-break testing, profiling and tuning, benchmarks, and even curiosity-based ('what if?') performance testing.

What 3 pieces of advice would you give new load testers?

First, it's easy to get lost in load testing and to feel overwhelmed by the sheer number of metrics and things to observe. For me, performance testing is less of a pass/fail activity and more of a risk management activity. I like to qualify test candidates by their likelihood and impact of risk to production. This can help shape what it is you need to test. For example, I might choose to test some batch-type interfaces because, although infrequently called, they contribute significant volume to back-end systems. Alternatively, I might choose something public-facing over internal endpoints, as it has a more readily described customer impact.

Second, when testing, I like to think more in terms of the scientific method. That is, it's a process of experimentation used to explore observations and answer questions. It can be quite liberating to test 'off plan' or follow a line of thought through to its conclusion. Best case, you head off performance defects before they occur in production; worst case, you learn a little more about the way a system component behaves.

Third, don't be afraid to start load testing. It's a challenging but satisfying aspect of systems performance, and it will give you great insight into the way components interact. Over time you will develop better intuition and gain experience in where to look first. You'll also be stumped by defects you perhaps haven't experienced before. It's a great way to learn.

Start load testing now

It only takes 30 seconds to create an account and get access to our free tier to begin load testing without any risk.
