Introduction to Continuous Integration Testing for APIs

I keep in touch with the local tester community through a Slack channel and the odd meetup. There is a high demand for technical staff and not enough people to fill that demand, so the skilled folks tend to move around every couple of years.

Recently, I was talking with a colleague who had moved into a new gig. He inherited a product that was a few systems deep -- iOS app, Android app in the works, web front end, and an API for the developer community. There were very few unit tests, and the team had only recently started using a Continuous Integration (CI) system.

One of his big problems at the moment was figuring out how to shrink the time it takes to perform regression testing before sending out a new release.

I suggested starting with the API, and I want to tell you why.

Quick Success Stories

Approaching test automation on products that were not designed to be testable can be anything from "a challenge" to impossible. When I find myself in that situation, I'll usually try to plant the idea with developers: "hey, it might be cool to write a unit test or two for that." Sometimes it works, and sometimes they don't have time. Getting started is a political matter; I have to convince a developer that testing is important, talk to a manager about why that developer isn't spending as much time on production code (to protect them), and then work with operations to get things set up in CI.

Testing the API, on the other hand, is something I have the power to do. I don't need permission from managers because testing is the job, and I don't need to convince developers to take time away from the features they are already pressed to deliver.

I like to start with small wins, like creating a few tests that cover authentication. Just a few to show that this can work, nothing big. Authentication, after all, is one of the most fundamental parts of software. If you can't authenticate, you won't be able to do anything else. Once I have authentication API checks running against my test environment, I'll work with operations to get that set of tests running on every build. Since we are only talking about a framework and one file at this point, it shouldn't look like a scary amount of work.
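Here is a minimal sketch of what that one file might look like, using Python with pytest and requests. The base URL, the credentials, and the /auth/token endpoint are all placeholders; substitute whatever your API actually exposes.

    # test_auth.py -- a first, tiny set of authentication checks.
    # The URL, endpoint, and field names below are assumptions;
    # adjust them to match your own API.
    import requests

    BASE_URL = "https://api.example.com"  # placeholder test environment

    def test_valid_credentials_return_token():
        resp = requests.post(
            f"{BASE_URL}/auth/token",
            json={"username": "test_user", "password": "test_password"},
            timeout=10,
        )
        assert resp.status_code == 200
        # A successful login should hand back some kind of token.
        assert "access_token" in resp.json()

    def test_invalid_credentials_are_rejected():
        resp = requests.post(
            f"{BASE_URL}/auth/token",
            json={"username": "test_user", "password": "wrong_password"},
            timeout=10,
        )
        # Bad credentials should be refused, not silently accepted.
        assert resp.status_code == 401

Two checks like these are enough to prove the plumbing works end to end.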

The end result is a valuable set of checks that runs with every build, and when they fail, everyone on the development team will know -- all done in a couple of days of work.

Diagnosing Problems

One of the interesting aspects of automation is that the real testing happens before the script runs and immediately after it finishes. While the script is running, your code is just stepping through a series of actions and answering 'yes' or 'no' to whatever questions you thought were important enough to ask.

Most of the time, I find the really interesting bugs while I'm writing the scripts. Once, for example, I was testing an API that allowed me to create some marketing data. I ran a test script that POSTed some data to a new marketing campaign, and when the script was done there were a couple of failures. For some reason, dollar values were being returned as a String when I was expecting a JSON Number. I was able to work with the programmer who had built the feature and get the problem diagnosed right away.
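In test code, that kind of check comes down to a simple type assertion. A sketch of the idea, with a hypothetical /campaigns endpoint and budget field standing in for the real ones:

    # Sketch of the kind of check that catches a string-vs-number bug.
    # The endpoint and "budget" field are stand-ins, not real names.
    import requests

    BASE_URL = "https://api.example.com"  # placeholder test environment

    def test_campaign_dollar_values_are_numbers():
        resp = requests.post(
            f"{BASE_URL}/campaigns",
            json={"name": "spring_sale", "budget": 1500.00},
            timeout=10,
        )
        assert resp.status_code == 201
        body = resp.json()
        # resp.json() parses a JSON Number into int or float, so this
        # assertion fails if the API echoes the budget back as a
        # string like "1500.00".
        assert isinstance(body["budget"], (int, float))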

My preference is to diagnose problems quickly, at least far enough to figure out how important the problem will be for our customers.

Once the checks are in CI, you need to take note of test failures and act on them; otherwise the errors become noise to be ignored. One way I've had some luck forcing this idea is to consider a build 'failed' if the test suite doesn't pass. That means no one installs the new build until we investigate the failure, figure out if it matters, and get either the test or the software fixed.
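Most CI systems already support this convention: a build step that exits nonzero marks the build as failed, and test runners like pytest exit nonzero when any test fails. A minimal sketch of a gate script, assuming the checks live in a tests/api directory:

    # ci_gate.py -- run the API checks and fail the build on any failure.
    # The tests/api path is an assumption; point it at your own checks.
    import subprocess
    import sys

    result = subprocess.run([sys.executable, "-m", "pytest", "tests/api"])
    # Propagate pytest's exit code: 0 means every check passed; anything
    # else fails this build step, and therefore the build.
    sys.exit(result.returncode)

In practice you can often skip the wrapper and call pytest directly as a build step; the point is that a red test turns the whole build red.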

Tests are information. We should pay attention when they try to talk with us.

Stability and Growth

False positives are great destroyers of confidence in testing efforts. Time gets sucked up in investigating test failures that don't lead anywhere, and worse than that, people stop caring when tests fail, creating the "noise effect."

One of my favorite aspects of API testing is stability. With a good API testing tool, recreating scenarios is pretty simple; most of the effort is in formatting and sending data. So when a test fails, I don't have to wonder whether the problem was a library in the test tool stack, the test code, or the product. Usually a failing API test points to a real failure in the system.
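That simplicity is most of the battle: each scenario reduces to building a payload, sending it, and asserting on the result, which a small helper can make explicit. A sketch, with the base URL and the Bearer auth scheme as assumptions:

    # A helper that isolates the only fiddly part of an API check:
    # formatting and sending the request. The Bearer auth scheme is
    # an assumption; match whatever your API uses.
    import requests

    BASE_URL = "https://api.example.com"  # placeholder test environment

    def post_json(path, payload, token=None, timeout=10):
        # Keeps each test down to: build data, send it, assert on
        # the result.
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        return requests.post(f"{BASE_URL}{path}", json=payload,
                             headers=headers, timeout=timeout)

With so few moving parts between the test and the product, a red test almost always means a product problem, and that keeps trust in the suite high.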

An unstable test suite is a suite that will never grow.

After I created the authentication tests and had them running with every build, there wasn't much question about the value. When a test failed there was a problem somewhere, and when they came back green we had a basic understanding of how things were functioning at those endpoints. This stability encouraged developers to ask me to build out some coverage for the changes they were working on.

API testing is a lot of fun, and it can be a powerful tool for your development group. Getting those checks running early and often through a Continuous Integration system will make them more useful and help the team understand their value.