This post is a transcript of the video at floodio.tv
I wanted to take a few minutes to tell you more about the exciting news around us joining forces with Tricentis and their continuous testing platform.
Flood IO is a distributed load testing platform that lets you run load tests at scale and view and share results in real time. Our goal is to make load testing available to everyone. We believe everyone shares responsibility for performance, and we want to make it easy for you to learn.
Flood supports open source tools such as JMeter, Gatling, and Selenium. Unlike other load testing platforms, we don’t charge based on the number of users you simulate or the number of tests you run. You simply pay for the infrastructure you use. Our grid infrastructure lets you scale out your load tests with hundreds of load generators around the world, on demand, within minutes. The Flood API also means you can seamlessly integrate load testing into your continuous integration and deployment pipelines.
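As a rough sketch of what a CI integration step might look like, the snippet below builds a request to kick off a flood. The endpoint, auth scheme, and placeholder token are assumptions for illustration; check the Flood API documentation for the exact contract.

```python
# Sketch: kicking off a load test from a CI pipeline via the Flood API.
# The endpoint and auth scheme below are assumptions for illustration --
# consult the Flood API docs for the real contract.
import base64
import urllib.request

FLOOD_API_TOKEN = "your-api-token"            # hypothetical placeholder
FLOOD_API_URL = "https://api.flood.io/floods"  # assumed endpoint

# Assumed here: the API token is used as the username in HTTP Basic auth.
auth = base64.b64encode(f"{FLOOD_API_TOKEN}:".encode()).decode()

req = urllib.request.Request(
    FLOOD_API_URL,
    method="POST",
    headers={"Authorization": f"Basic {auth}"},
)

# In a real pipeline step you would attach your JMeter or Gatling script
# to the request body and send it:
#   urllib.request.urlopen(req)
# then poll the flood's status and fail the build if response times regress.
```

A step like this is typically gated behind a deployment stage, so every release candidate gets the same load profile applied automatically.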
When we were first approached to join forces, Tricentis Tosca’s approach to test automation really struck a chord with the way we think about load testing. Test automation is commonplace these days and it seems everyone is having a go at it. Where it gets tricky is how you build, execute and maintain test automation suites.
Where I think Tricentis Tosca really shines is its approach to the whole life cycle of test automation: starting with a risk-based approach to requirements, then moving on to abstraction and decoupling of business logic from automation, solid test data management, and the ability to control and execute scenarios, not to mention features that make tests less fragile and artifacts reusable. All of this really appealed to us in terms of the direction we want to take load testing.
Those who have done load testing before will know that the traditional way of approaching load test scripting is at the protocol level. Flood already supports what we call Protocol Level Users with tools like JMeter and Gatling. However, although simulating load at the protocol level (say, HTTP) has the advantage of being able to generate a lot of concurrent load from a single resource, that power comes at a cost. The learning curve is steep, and the complexity is easily underestimated when planning a load test effort.
Why is it so complex? When you’re capturing and recording the tons of protocol-level requests made by modern web applications, you always need to clean up the request and response data, make sense of it, extract relevant information, and so on in order to realistically simulate user interactions at a business level. Many tools have tried to simplify this, but few have made significant headway. You still need to bridge the gap between the technical and business levels, which requires both time and technical specialization. The bottom line is that simulating Protocol Level Users may seem fine at the start, but it can quickly spiral out of control. Fortunately, there’s another option.
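To make that correlation work concrete, here is a minimal sketch of the kind of plumbing a protocol-level script needs for a single login step. The HTML, field names, and credentials are stand-ins invented for illustration, not taken from any real application.

```python
# A taste of protocol-level correlation: before replaying a recorded
# request, you must extract dynamic values (here, a CSRF token) from the
# previous response and stitch them into the next request. The HTML below
# is a stand-in for a real server response.
import re
from urllib.parse import urlencode

login_page = """
<form action="/login" method="post">
  <input type="hidden" name="csrf_token" value="a1b2c3d4">
  <input name="username"><input name="password">
</form>
"""

# Step 1: extract the dynamic token -- replaying a recorded value would
# fail, because the server issues a fresh token on every page load.
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page)
csrf_token = match.group(1)

# Step 2: rebuild the follow-up request body around the fresh token.
body = urlencode({
    "csrf_token": csrf_token,
    "username": "test-user",
    "password": "secret",
})

print(body)  # csrf_token=a1b2c3d4&username=test-user&password=secret
```

Now multiply this by every dynamic value in every one of the hundreds of requests a modern web application makes, and the maintenance burden becomes clear.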
If we think about load testing in terms of simulating Browser Level Users, something we have been doing for the past couple of years with Selenium WebDriver, we can approach load testing without many of the shackles associated with simulating Protocol Level Users. If you had asked me 10 years ago to spin up 1,000 load generators, I would’ve rolled my eyes. What I see now (and certainly what customers are asking for) is the ability to spin up more and more nodes to support Browser Level Users. Two things make this practical.
First, there’s the economy of scale that the cloud gives us. At Flood, we regularly have customers launching hundreds of grid nodes to execute load tests, with start-up times measured in minutes and an effective cost measured in cents per hour.
Second, customers enjoy the relative simplicity of simulating user behavior at the Browser Level compared to the Protocol Level. One business action translates to, say, two automation commands in a browser, compared with tens if not hundreds of requests at the protocol level. Browser-level functions such as caching, cookies, and authentication/session management work without intervention.
Conceptually, there’s less to understand.
Objectively, there’s less code to keep track of.
Subjectively, the tests become less fragile, depending on how you build, execute, and maintain your test automation.
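The "one business action, a couple of automation commands" point can be sketched with Selenium WebDriver’s Python bindings. The URL and element IDs below are hypothetical; a real script would target your own application under test, and running it requires a browser and driver on the node.

```python
# Sketch of a Browser Level User: one business action ("log in") in a
# handful of WebDriver commands. The base URL and element IDs are
# hypothetical placeholders for your application under test.
def login_business_action(base_url: str = "https://example.com"):
    # Imported inside the function so the sketch can be read and loaded
    # without the selenium package or a browser installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(f"{base_url}/login")
        # Cookies, caching, and session handling all come along for free;
        # there is no correlation or token extraction to maintain.
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
    finally:
        driver.quit()
```

Contrast this with the protocol-level equivalent, where the same login would mean capturing, correlating, and maintaining every underlying request by hand.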
So that brings me back full circle to why I’m really excited about joining forces with Tricentis and our forthcoming Tosca Flood integration. On the Tricentis Tosca side, we’re seeing how much of the automation stack we can reuse in a load testing context. On the Flood side, we’re putting Browser Level Users on a diet and getting some great concurrency per node as a result. We’ll also continue supporting Protocol Level Users where it makes sense; for example, simulating load against APIs.
Put it all together and I’m really excited for the direction we’re headed as a team.