This post is the final part of a series on API performance testing. Other posts in this series are as follows:
In Part 2 of this series, we discussed how load testing with LoadUI Pro does not have to add a significant amount of effort to your API testing workload.
When starting a new project with LoadUI Pro, you can select a template that will allow you to easily configure a load test so you can quickly get started on your performance testing journey. The templates are based on popular load testing scenarios and provide basic settings that fit these scenarios.
The templates highlight that performance testing is not a single type of test, but a collective set of test types, each providing information about how your API performs under different conditions. Which raises the question: what type of test should I run? Let’s go over some of the most common types so you can successfully start your API performance testing journey.
If you’re an API provider and someone wants to interact with your API, they might ask: “What service-level agreement (SLA) are you willing to commit to?” Just coming up with a number in your head obviously isn’t the right approach here. It might cause a breach of contract and ruin a great relationship. So, you can’t just say, “Under three seconds, I think.” With the built-in baseline scenario in LoadUI Pro, you’ll be able to calculate this precisely.
Using the baseline load test template, your test will help you determine what response time is normal for the server. You’ll be able to see real numbers and check how your server performs, giving you a solid basis for an SLA.
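To make the idea concrete, here is a minimal, standalone Python sketch of the reasoning behind deriving an SLA from baseline measurements. This is not how LoadUI Pro works internally; the function name, the percentile choice, and the sample latencies are all illustrative assumptions. The point is that an SLA should come from a percentile of real measured response times, not a guess.

```python
import math

def suggest_sla(latencies_ms, percentile=95):
    """Suggest an SLA threshold from measured baseline latencies.

    Uses a nearest-rank percentile (p95 by default) of the observed
    response times rather than a number picked out of thin air.
    """
    ordered = sorted(latencies_ms)
    # Nearest-rank index: the value at or below which `percentile`
    # percent of observations fall.
    k = math.ceil(percentile / 100 * len(ordered)) - 1
    return ordered[k]

# Hypothetical latencies (ms) collected during a steady baseline run.
samples = [120, 135, 128, 140, 190, 125, 131, 138, 450, 127]
print(suggest_sla(samples))        # p95 of this sample: 450
print(suggest_sla(samples, 50))    # median of this sample: 131
```

Committing to the p95 (or p99) rather than the average means the SLA still holds for the occasional slow response, which is usually what a contract partner actually cares about.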
Simply put, peak testing is used to check how your server works during the busiest periods. But truth be told, this scenario often causes some confusion among testers. They say, “I’m going to have a situation like the rush on Black Friday and I’m going to have 10,000 users, so let’s run a baseline scenario with 10,000 users.” But that’s not what happens, even on Black Friday. People do not get on the shopping sites all at once! In the morning, they gradually wake up and then go out shopping. Yes, there’s a heavier load, but it involves a buildup.
In LoadUI Pro, the peak scenario literally starts at zero; the system then simulates more and more usage until it reaches peak throughput (10,000 users in this example). When you have thousands of people in the system, you grow to that number gradually. This is fundamentally different from the baseline scenario, where in the first second there’s no one, and then suddenly a stampede of 10,000 requests hammers down on your system.
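The ramp-then-hold shape of a peak test can be sketched as a simple schedule of virtual users over time. This is an illustrative stand-in, assuming a linear ramp; it is not LoadUI Pro’s actual configuration format, and the function name and numbers are hypothetical.

```python
def ramp_profile(peak_users, ramp_seconds, hold_seconds, step=1):
    """Return (elapsed_seconds, virtual_users) pairs for a peak-style test:
    a linear ramp from 0 up to peak_users, then a steady hold at peak."""
    schedule = []
    # Ramp phase: load grows linearly from zero to the peak.
    for t in range(0, ramp_seconds + 1, step):
        schedule.append((t, round(peak_users * t / ramp_seconds)))
    # Hold phase: stay at peak throughput for the rest of the test.
    for t in range(ramp_seconds + step, ramp_seconds + hold_seconds + 1, step):
        schedule.append((t, peak_users))
    return schedule

# Ramp to 10,000 users over 10 minutes, then hold for 20 minutes.
profile = ramp_profile(peak_users=10_000, ramp_seconds=600,
                       hold_seconds=1_200, step=60)
print(profile[0], profile[5], profile[-1])
# (0, 0) (300, 5000) (1800, 10000)
```

Contrast this with a baseline run, which would put all 10,000 users on the system at second zero; the ramp is what makes the peak scenario resemble real traffic.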
The peak scenario is much more lifelike, and it can give you peace of mind that your APIs can scale to service that many transactions. If you ran the same load as a baseline test, you might see the system crash immediately even though it actually has the capacity to withstand 10,000 users.
One question a lot of people ask when we’re talking about peak load tests: “Can I configure a load test that would add load to the point where the API breaks?” This type of scenario is called a stress test.
Stress testing means simulating an ever-heavier load on the server to find the maximum number of users the server can handle. This number is also called the crash point. The crash point does not necessarily mean that the server crashes or hangs; it can mean that errors start happening, or that server performance or response time falls below the level your SLA defines.
Say you have an SLA of 300 milliseconds. In a stress test, you ramp the profile up from zero users toward an effectively unlimited number. Once response times exceed 300 milliseconds, that’s where you stop the test. From there, you can find out how high your concurrent user count was before the SLA was broken.
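The stop condition described above can be sketched as a loop: keep increasing the user count until the measured latency exceeds the SLA, then report the last level that still passed. This is a generic illustration, not LoadUI Pro’s implementation; `measure_latency_ms` is a hypothetical callback, and the simulated latency numbers are made up.

```python
def find_crash_point(measure_latency_ms, sla_ms, step=100, max_users=100_000):
    """Ramp virtual users upward until measured latency exceeds the SLA.
    Returns the highest user count that still met the SLA, or max_users
    if the SLA was never breached within the test's limit.

    measure_latency_ms(users) is a hypothetical callback that runs the
    system at the given concurrency and returns an observed latency.
    """
    last_good = 0
    users = step
    while users <= max_users:
        if measure_latency_ms(users) > sla_ms:
            return last_good  # previous level was the last to meet the SLA
        last_good = users
        users += step
    return last_good

# Simulated system whose latency grows with load (illustrative numbers):
# 50 ms base cost plus 0.01 ms per concurrent user.
fake_latency = lambda users: 50 + users * 0.01
print(find_crash_point(fake_latency, sla_ms=300, step=1_000))  # 25000
```

In a real stress run the "measurement" is live traffic against your API, of course, but the logic of the stop condition is the same.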
Everything might work and everything might be configured well for a burst of heavy traffic, but is your system set up right for the long haul? Testing your system under such conditions is important, as there might be a memory issue that only gets uncovered when the system is hit by traffic for a long duration. A soak load test involves a low number of users but runs for a long period of time – say 12 to 24 hours. After running the load test for a few hours to a day, you can go in and see if there has been any increase in memory consumption. Ideally, at the end of a test run, the server performance should be the same as it was at the beginning. A decrease in performance can indicate that the server code has some issues.
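The check you perform after a soak run amounts to a trend comparison: did memory (or response time) grow meaningfully between the start and the end of the run? Here is a minimal sketch of that comparison, assuming hourly memory samples; the function name, the 10% threshold, and all the numbers are illustrative, not part of LoadUI Pro.

```python
def memory_leak_suspected(samples_mb, threshold_pct=10):
    """Compare average memory use in the first and last quarters of a
    soak run; flag a suspected leak if the later average grew by more
    than threshold_pct percent."""
    quarter = max(1, len(samples_mb) // 4)
    early = sum(samples_mb[:quarter]) / quarter
    late = sum(samples_mb[-quarter:]) / quarter
    growth_pct = (late - early) / early * 100
    return growth_pct > threshold_pct

# Hypothetical hourly memory samples (MB) from two 12-hour soak runs.
steady  = [512, 515, 510, 514, 513, 516, 512, 515, 514, 513, 515, 514]
leaking = [512, 530, 560, 590, 625, 660, 700, 745, 790, 840, 895, 950]
print(memory_leak_suspected(steady))   # False: flat memory profile
print(memory_leak_suspected(leaking))  # True: steady upward creep
```

A healthy server looks like the first series: memory plateaus and stays there. The second series, climbing hour after hour under constant load, is the classic signature a soak test exists to catch.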
Overall, these are the most common load tests: baseline for establishing an SLA, peak for managing heavy volumes, stress for finding the maximum load that can be serviced within your defined SLA, and soak for identifying memory leaks. Alternatively, if your use case requires a very particular strategy, you can always draw a custom graph with different peaks and valleys, and different measures, at different times.
Start your API Load Testing Journey Now!
If you want to learn more about API performance testing, I encourage you to check out our associated webinar, API Performance Testing: The Why, The How, and the Measures of Success.
It provides you with a guide to API performance testing that can help your organization succeed, and help bring API testing to the forefront of your SDLC.
If you’re ready to start your stress or soak tests, then download your free trial of LoadUI Pro here.
Till next time – Happy ‘Load’ Testing