Do’s and Don’ts in Gatling

Recently, I spent a fair amount of time searching for a suitable load testing tool for a work project. We had previously used JMeter, but writing custom logic with it (preparing data in tricky ways and so on, which load tests sometimes require) is quite painful. So I went looking online, shortlisted several options, and finally settled on a tool called Gatling. Here are its main features:

  • Fires load at a target
  • Builds detailed reports
  • Draws beautiful graphs
  • Integrates easily into the build
  • A sufficiently rich and useful DSL (albeit Scala-based)
  • It’s still code (a plus for anyone tired of programming in XML)

I will not dwell on the features; you can study them yourself on the maintainer’s website. Instead, let’s go through the issues we faced while working with it.

PS: this post is accurate as of publication; if you are reading it a year later, keep in mind that things may have changed.

No dependencies between scenarios

In the world of Gatling, all requests and actions required for testing are combined into so-called scenarios. The catch is that there are no dependencies between these scenarios: you cannot start one, wait until it finishes, and only then execute another. I know it sounds a little strange, but sometimes you want to run a big fat task, say, to process a large amount of data in some way, and then check how the rest of the system behaves. Not the most common requirement, but it does come up.

What to do about it?

Do not allow such scenarios, or else combine them into one. On the forum I also saw a suggestion to handle this with various synchronization primitives (semaphores and the like), but that is merely a workaround. A sketch of the combined-scenario option follows below.
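For the combined option, here is a minimal sketch (the endpoints and names are hypothetical): the heavy job and the follow-up check are chained inside one scenario, so each virtual user runs them strictly in order.

val heavyThenCheck = scenario("HeavyJobThenChecks")
  .exec(
    // the big fat task first
    http("run heavy job").post("/jobs/heavy"))
  .exec(
    // then check how the rest of the system behaves
    http("check system").get("/health").check(status.is(200)))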

before and after do not work with the DSL

Gatling DSL has before and after blocks. As you have probably guessed, they serve to perform actions before and after the test, for example to create test data and then delete it after the run. The catch is that if you write exec(…) in these blocks, nothing will be executed and no requests will be sent. This seems counter-intuitive at first, because why not prepare the data using the same tools as for the live fire? Nonetheless, it is all quite logical: the DSL is tailored specifically for describing load tests, not for executing single requests. A language that covered both would require many workarounds and would be much more complicated than one made for one specific task.

What to do about it?

Pull a separate library into the project and perform the setup and teardown requests with it, outside the DSL.
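For example, a hedged sketch using the JDK’s built-in HTTP client inside the before hook (the endpoint and payload are hypothetical):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

before {
  // a plain HTTP call outside the Gatling DSL, so it actually executes
  val client = HttpClient.newHttpClient()
  val request = HttpRequest.newBuilder(URI.create("http://localhost:8080/test-data"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString("""{"records": 1000}"""))
    .build()
  client.send(request, HttpResponse.BodyHandlers.ofString())
}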

There is no way to get quantitative results of execution

When you write assertions, you sometimes need to pull out the number of successfully completed requests, for example to check some additional system parameters at the end of the test. But there is no way to obtain the raw numbers; you can only compare them against a value supplied from outside:

.assertions(details(paymentGroup).successfulRequests.count.is(totalCount))

The count itself cannot be extracted. Fortunately, this too is a rather rare situation.

What to do about it?

Try to calculate what you need up front or during execution, as in the sketch below.
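A minimal sketch of the up-front approach, assuming a fixed injection profile (scn is a hypothetical scenario, paymentGroup is the group from the assertion above): the expected count is derived before the run and fed into the assertion.

// with a fixed injection profile, the expected total is known before the run
val users = 100
val requestsPerUser = 5
val totalCount = users * requestsPerUser

setUp(scn.inject(atOnceUsers(users)))
  .assertions(details(paymentGroup).successfulRequests.count.is(totalCount))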

The DSL is not code

For some reason I, and several of my colleagues, stumbled over this, so I want to draw your attention to it. You need to understand that with the DSL you are merely describing the launch configuration, not executing requests. Can you see the difference? Here is what it leads to.

Let’s say you want to pass a unique, randomly generated request id in a header:

.headers(Map(
   "RequestId" -> RandomRequsetId.generate()
))

Now every request will carry the same RequestId. This happens because you are building the configuration rather than running this code per request: RandomRequestId.generate() is called exactly once, and its result is reused for all requests. That’s just how it works.

What to do about it?

  1. You can use placeholders (works for strings – a new string will be generated each time):
.headers(Map(
   "RequestId" -> s"${RandomRequestId.generate()}"
))

  2. You can work with sessions. It will look something like this:

.feed(requestIdFeeder)

The feeder generates a value (or reads it from a file; there are many options) and puts it into the session, and we can then use this value in a variety of places.
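What such a feeder looks like is up to you; here is a minimal sketch that produces a fresh random UUID on every read (the attribute name RequestId matches the header below):

import java.util.UUID

// an infinite feeder: each read puts a new RequestId into the virtual user's session
val requestIdFeeder = Iterator.continually(
  Map("RequestId" -> UUID.randomUUID().toString)
)

The header then references the session attribute by name: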

.headers(Map(
   "RequestId" -> “${RequsetId}”
))

Quite simple and convenient.

No way to group results

Let’s say we are dealing with asynchronous processing of a request: we send an HTTP request, the system replies that the request has been accepted, and we then wait until our operation completes with the OK status, checking it periodically in a tryMax block.
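A minimal sketch of such a polling chain (the /operations endpoint, the response fields, and the 30-attempt limit are hypothetical); pauses between polls are omitted for brevity:

val waitForCompletion = exec(
    http("register operation")
      .post("/operations")
      .check(jsonPath("$.id").saveAs("operationId")))
  .tryMax(30) {
    // retried until the check passes or the attempts run out
    exec(
      http("check operation status")
        .get("/operations/${operationId}")
        .check(jsonPath("$.status").is("OK")))
  }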

But we cannot write an assertion on the duration of the entire chain of requests, from registering the operation to its actual completion; we can only check the duration of a single request. The grouped figure can at least be seen in the report tables by enabling the useGroupDurationMetric setting.
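In gatling.conf, that would look something like this (a sketch of the relevant HOCON fragment):

gatling {
  charting {
    useGroupDurationMetric = true
  }
}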

What to do about it?

As of today, there is no real solution to this. One option is to assert on the overall execution, taking the number of requests into account: in our scenario specifically, what matters is the total processing speed of all operations rather than each one separately, as sketched below. In any case, you can always just look at the graphs yourself.
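A hedged sketch of such an aggregate assertion (scn, the user count, and the 50 req/s threshold are assumed example values):

setUp(scn.inject(atOnceUsers(100)))
  .assertions(
    // overall throughput across all operations instead of per-chain duration
    global.requestsPerSec.gte(50)
  )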

In conclusion

Gatling is a simple and powerful tool, but to use it correctly you need a basic understanding of its quirks. I hope this post saves you some time while working with it.
