Benchmarking JMeter and Gatling

Gatling vs JMeter: which one is more resource-intensive?

JMeter and Gatling are two of the most popular open source performance testing tools available, and flood.io proudly supports both of them on its distributed load testing platform.

Which raises the question: is one tool better than the other? Who would win in a Gatling vs. JMeter showdown?

Competitive Benchmarks

Ordinarily, we're not in the business of playing one tool off against the other, as we think different tools meet different requirements for our testers. We subscribe to the Guerrilla Manifesto's maxim that "all competitive benchmarking is institutionalized cheating."

Since we offer both tools on our platform, it is in our interest to provide an objective comparison, both for our testers and for the hard-working developers who give up their own time to make fantastic software like Gatling and JMeter. To that end, we'll continue to publish these benchmarks at the following URLs on a regular basis:

JMeter

Current Release https://flood.io/benchmarks/jmeter

Latest Release https://flood.io/benchmarks/jmeter?tag=benchmark-latest

Gatling

Current Release https://flood.io/benchmarks/gatling

Latest Release https://flood.io/benchmarks/gatling?tag=benchmark-latest

The Target Site

We needed a target site that could comfortably handle the types of concurrency and volume we'd be throwing at it. We chose nginx, an extremely fast HTTP server with low resource overhead, for this task.

We also needed the target site to behave like an application server; that is, to respond to normal HTTP GETs as well as HTTP POSTs whilst serving up both static and dynamic content. The site also had to generate artificial latency in its response times, much like a normal web tier would. We were able to mock this mix of transactions with our custom nginx configuration.
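
We won't reproduce the full configuration here, but a minimal sketch of the idea looks something like the following. It assumes the third-party echo module for the artificial sleeps, and the paths and timings are illustrative rather than the actual benchmark endpoints:

# Illustrative sketch only, not the actual benchmark configuration.
# Assumes the third-party echo module for artificial latency.
server {
    listen 80;

    # Slow dynamic resource: ~3.5s of artificial latency
    location /slow {
        echo_sleep 3.5;
        echo 'slow response body';
    }

    # Cacheable static content: nginx answers conditional GETs
    # with 304 Not Modified based on Last-Modified / ETag
    location /static/ {
        root /var/www;
        expires 10m;
    }

    # Slow POST target: ~4s of artificial latency
    location /submit {
        echo_read_request_body;
        echo_sleep 4.0;
        echo 'accepted';
    }
}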

We tuned the OS kernel / TCP network settings and allocated 4 virtual CPUs and 15 GB RAM to make sure there were no bottlenecks on the target site.
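
The exact values aren't reproduced here, but a sketch of the kind of sysctl settings involved (illustrative values, not the ones we actually applied) would be:

# /etc/sysctl.conf (illustrative values only)
# Raise the system-wide open file limit for many concurrent sockets
fs.file-max = 999999
# Deepen the listen backlog to absorb connection bursts
net.core.somaxconn = 4096
# Widen the ephemeral port range for outbound connections
net.ipv4.ip_local_port_range = 1024 65535
# Allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# Release sockets in FIN_WAIT_2 more quickly
net.ipv4.tcp_fin_timeout = 15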

The Load Generator

Flood.io is a distributed load testing platform that lets you scale out on your own dedicated Grid of flood nodes within minutes. Whilst customers normally launch multiple nodes per Grid in regions across the globe, for the sake of benchmarking we chose to test with just one node, our lowest common denominator. A flood node is equivalent to an m1.xlarge, which sports a 64-bit processor, 4 virtual CPUs and 15 GB RAM.

We run the Java HotSpot JVM, JRE version 1.7.0_13, on Ubuntu 12.04 LTS. Each node allocates a 4 GB maximum JVM heap to the tool under test, be it JMeter or Gatling, with the following JVM options:

-Xms4096m -Xmx4096m -XX:NewSize=1024m -XX:MaxNewSize=1024m
-XX:MaxTenuringThreshold=2 -XX:MaxPermSize=128m -XX:PermSize=64m
-Xmn100M -Xss2M
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42
-XX:+AggressiveOpts -XX:+OptimizeStringConcat -XX:+UseFastAccessorMethods
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled
-XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
-Dsun.rmi.dgc.client.gcInterval=600000
-Dsun.rmi.dgc.server.gcInterval=600000
-XX:+HeapDumpOnOutOfMemoryError
-verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
-XX:+PrintGCDetails -Xloggc:/var/log/flood/verbosegc.log
-XX:-UseGCLogFileRotation

The remaining resources are utilized by our test runner and distributed elasticsearch engine. We also tune the OS kernel / TCP network settings in a similar fashion to the target site.

The Target Scenario

Our load scenario consists of the following user transactions:

  • 20% of transactions fetching a slow resource in approx. 3.5s
  • 40% of transactions making conditional requests to a cacheable resource in < 10ms
  • 30% of transactions fetching a non-cacheable resource in approx. 2s
  • 10% of transactions posting to a slow resource in approx. 4s

The Test Plans

Our test plans are available for both Gatling and JMeter, with the latter auto-generated by our popular ruby-jmeter DSL.
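
To give a flavor of the latter, here is a minimal illustrative sketch of how a scenario like this could be expressed with ruby-jmeter. It is not the verbatim benchmark plan: the target URLs are hypothetical, and we're assuming the DSL's throughput_controller to express the 20/40/30/10 transaction mix described above.

require 'ruby-jmeter'

test do
  # 10,000 users ramped over 10 minutes, running for 20 minutes in total
  threads count: 10_000, rampup: 600, duration: 1200 do
    # 20% of transactions: slow resource, approx. 3.5s
    throughput_controller percent: 20 do
      visit name: 'slow resource', url: 'http://target/slow'
    end
    # 40% of transactions: conditional GET of a cacheable resource, < 10ms
    throughput_controller percent: 40 do
      visit name: 'cacheable resource', url: 'http://target/static/cached.html'
    end
    # 30% of transactions: non-cacheable resource, approx. 2s
    throughput_controller percent: 30 do
      visit name: 'dynamic resource', url: 'http://target/dynamic'
    end
    # 10% of transactions: POST to a slow resource, approx. 4s
    throughput_controller percent: 10 do
      submit name: 'slow post', url: 'http://target/submit',
             fill_in: { comment: 'hello' }
    end
  end
end.jmx

Calling .jmx writes the generated plan to disk as a standard JMeter test plan, so it can be inspected or run in JMeter directly.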

The Target Benchmarks

Ordinarily we recommend a planning figure of 1,000 users per flood node as a "finger in the air" guesstimate. It's hard to recommend a planning figure without first knowing your test plan's complexity, target volumetrics and the target site's behavior under load. To establish a target for these benchmarks we went the traditional exploratory route, and came up with the following figures, which work well for this particular scenario:

  • Concurrency: 10,000 users
  • Volume: 30,000 requests per minute
  • Duration: 20 minutes, with a 10 minute rampup

The Results

Pleasingly, we found that at these volumes, there was not much variance in results between the tools. But compare if you must!

Tool           Benchmark     Date                 Mean RT +/- SDev
Gatling-1.5.3  10,000 Users  2013-09-30 09:52:32  1788 +/- 362 ms
JMeter-2.9     10,000 Users  2013-09-30 10:13:15  1625 +/- 322 ms
JMeter-2.10    10,000 Users  2013-09-30 10:33:59  1698 +/- 31 ms

Key Observations

  • Gatling does not record response size in bytes, hence flood.io uses an estimate based on Content-Length headers where they exist. This estimate is optimistic and does not accurately reflect true network throughput. If you're using Gatling, use request rate per minute as your measure of throughput instead, or alternatively use external network monitors during your test. The following graph demonstrates network utilization parity between the tools.
  • JMeter is more resource-heavy on the JVM than Gatling. At Flood we use Concurrent Mark Sweep (CMS) garbage collection in an effort to lower the latency of GC pauses.
  • JMeter is more resource-heavy on system CPU and memory, as the following graphs demonstrate in terms of CPU and JVM heap utilization. This may affect you more as the complexity of your test plans increases, or as perceived concurrency on the JVM increases against a slower-performing target site.
  • Both JMeter and Gatling demonstrated the desired characteristic of relatively flat response times for measured transactions during rampup and under load, with little variance. Beyond that observation, mean response time shouldn't be used to rank the tools against each other.
  • Both JMeter and Gatling were able to sustain an average throughput in the region of 30,000 requests per minute with no appreciable deviation.
  • Both JMeter and Gatling were able to ramp up to 10,000 concurrent users within 10 minutes, which is ordinarily considered an aggressive target for a single load generator.
  • Both JMeter and Gatling demonstrated correct caching behavior, particularly when making conditional requests for static resources that respond with an HTTP 304. The Gatling team promptly provided us with a patch to ensure this.
  • Both JMeter and Gatling test plans included extraction of content from the response body via regular expressions, as well as assertions on contained text and HTTP response codes, without detriment to performance; a sketch of what this looks like follows this list.
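
As a hedged illustration of that last point, in the ruby-jmeter DSL such checks attach to a request roughly as follows. The regex, variable name and expected text are hypothetical, not those of the benchmark plan:

visit name: 'dynamic resource', url: 'http://target/dynamic' do
  # pull a value out of the response body for use in later requests
  extract regex: 'token=(\w+)', name: 'token'
  # assert on text contained in the main response body
  assert contains: 'expected text', scope: 'main'
end

Gatling expresses the same ideas in Scala with its check API, e.g. regex(...).saveAs(...) for extraction and status checks for response codes.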

TL;DR

In terms of the concurrency and throughput achievable from a single load generator, there is little to differentiate between Gatling and JMeter. Gatling has some limitations in its ability to accurately record response payloads in bytes, which can be compensated for with external monitors. JMeter generally demonstrates higher resource usage in terms of CPU, memory and JVM performance, but can otherwise manage the load when run with an appropriate memory allocation.

We don't anticipate that users will ordinarily run JVMs at their peak as we did in this benchmark, and Flood IO automatically warns you if any of the Grid nodes is exhausting its available resources.

For the sake of these benchmarks, we chose a simplistic scenario to reduce the number of variables that can affect a side-by-side comparison. As such, results should be analyzed in the context of the test boundaries described above. It is possible that performance will differ in more realistic scenarios, and the best way to explore that is to try it for yourself: we host a free node on Flood IO which lets you run JMeter or Gatling tests, and registration is free.

At the end of the day, the choice between JMeter and Gatling is largely subjective, and is better made on the basis of the other features that each tool independently provides.

We hope this brings some clarity to the relative performance of these great tools.

Special Thanks!

A special thank you to Philippe Mouawad and Stéphane Landelle, core contributors to the Apache JMeter and Gatling-Tool projects respectively. They both helped improve the quality of these benchmarks and provided advice, code and patches where appropriate. Thanks!

Start load testing now

It only takes 30 seconds to create an account and get access to our free tier, so you can begin load testing without any risk.
