The subject of API testing often comes up in passing, but the exact types of API tests are wide-ranging and varied. From functional testing to penetration testing, error detection, fuzz testing, and beyond, there are many ways to validate API performance and security. For this reason, it’s a topic that bears clarification and further discussion.

Today, we’re going to discuss nine types of API tests and why they are important to API providers. We’ll look at what these testing types intend to test, and how they are generally implemented. We’ll then discuss the appropriateness of each test for given use cases, and identify appropriate testing methodologies.

The Importance of API Testing

Before we discuss types of testing, we should first establish why testing is so important. Generally speaking, testing should be employed for three basic purposes – to validate a solution, to maintain a solution, and to eradicate an error. The most fantastic API is utterly worthless if it can’t be depended upon. Thus, testing to validate the implementation is key.

When testing to maintain a solution, what’s really being tested is the implementation of said solution and all of the results of said implementation. Testing to see whether the API is using resources correctly, whether there are better avenues for data handling, and other such focuses is important. While validation should hammer out most of these concerns from the get-go, some can only be found during implementation, and security holes can sometimes only become apparent once the solution is implemented fully into the target system.

While the two previous testing purposes cover the general development and later implementation of a codebase, our final purpose, eradicating an error, is very specific to a given vulnerability. Memory leaks, insecurity, and other such concerns can be directly targeted with this type of testing, and in many respects, targeted vulnerability tests can sometimes be more accurate, more effective, and more complete than general holistic tests.

Ultimately, what type of test is performed is directly determined by the need for said test, and as such, the types of tests that will be discussed shortly each have their place and purpose in a much larger approach to holistic API testing.

9 Types of Tests For Holistic API Testing

With all of this being said, what specific types of tests can an API provider expect to run on their codebase? While there are certainly specialty tests, and no list can claim to be comprehensive in this realm, most tests fit broadly into one of nine categories.

1. Validation Testing

Validation testing is one of the last steps in the development process, but it is one of the more important tests that can be run. It typically takes place at the very end of basic development, specifically after verification of the API’s constituent parts and functions is complete. Whereas many of the tests we’ll discuss throughout this piece deal with specific facets of the codebase or specific functions, validation testing is a much more high-level consideration.

Validation testing is essentially a set of simple questions applied to the entirety of the project. These questions include:

  • Product: Did we build the correct product? Is the API itself the correct product for the issue that was provided, and did the API experience any significant code bloat or feature creep that took an otherwise lean and focused implementation into an untenable direction?
  • Behavior: Is the API accessing the correct data in the correctly defined manner? Is the API accessing too much data, and is it storing this data correctly given the confidentiality and integrity requirements of the dataset?
  • Efficiency: Is the API the most accurate, optimized, and efficient method of doing what is required? Can any code be removed or altered to remove impairments to the general service?

All of these questions essentially serve to validate the API as a holistic solution. They are asked after the API is developed, against established and agreed-upon criteria, to ensure correct environment integration, adherence to standards, and delivery of specific end goals and results. Ultimately, this test can simply be described as an assurance of correct development against the stated user needs and requirements.

2. Functional Testing

Functional testing is still a very broad testing methodology, though narrower than validation testing. Functional testing is simply a test of specific functions within the codebase. These functions in turn represent specific scenarios, ensuring that the API functions within expected parameters, and that errors are handled well when results fall outside of those parameters.

Functional testing is much easier to explain with a scenario. Let’s assume our API processes music for ordering via an online portal. When a user searches for a song, they search by Track Name and Artist Name. Functional testing in this case takes a layered approach, and handles a few specific scenarios.

First, the function of the API is tested with proper inputs – for example, Song 2 by Blur. The API validates the request and serves the expected results. Additional testing is needed, however – our testing thus also includes errata, such as searching for Song2, song2, or the song’s lyrics.

Due to the nature of the test, we should expect a few stated responses. We should expect either an error (and thus, the appropriate error codes and handling instructions) or a corrected response that bears the material we’ve requested.

Functional testing should deliver on all of these points – not only should the regular test case be included, but scenarios of both errata and edge cases should be implemented in the testing regimen.
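The scenario above can be sketched in code. This is a minimal illustration, not a real test suite: the catalog, the `search` function, and its normalization rules are all hypothetical stand-ins for a live API, which a real functional test would call over HTTP instead.

```python
import re

# Hypothetical in-memory catalog standing in for the music-ordering API.
CATALOG = [{"track": "Song 2", "artist": "Blur"}]

def normalize(query: str) -> str:
    """Lowercase the query and split 'Song2'-style errata into 'song 2'."""
    q = query.strip().lower()
    return re.sub(r"([a-z])(\d)", r"\1 \2", q)

def search(track: str, artist: str) -> dict:
    """Return a result payload, or a structured error – never an exception."""
    for item in CATALOG:
        if (normalize(item["track"]) == normalize(track)
                and normalize(item["artist"]) == normalize(artist)):
            return {"status": 200, "result": item}
    return {"status": 404, "error": "TRACK_NOT_FOUND"}

# Proper input: the expected result is served.
assert search("Song 2", "Blur")["status"] == 200
# Errata: 'Song2' and casing differences are corrected and still resolve.
assert search("Song2", "blur")["status"] == 200
# Out-of-parameters input: a known error code, not a crash.
assert search("woo hoo", "Blur") == {"status": 404, "error": "TRACK_NOT_FOUND"}
```

Note that the errata cases are asserted just as strictly as the happy path – the test fails if an unexpected input produces anything other than a corrected result or a defined error.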

3. UI Testing

While both validation and functional testing are somewhat generalized in their approaches, UI testing is more specific. UI testing is exactly what it says on the tin – a test of the user interface for your API and its constituent parts. This test is specifically concerned with the function of the UI, whether that interface is graphical in nature or depends upon command-line endpoint calls.

This is in many ways less of a test of the API itself, and more a test of the interface that ties into the API and the developer experience of using that interface. Though not a direct test of the API in terms of codebase, this gives a very generalized view of the health, usability, and efficiency of both the front-end and the back-end.

In fact, this is why UI testing is often used as a substitute for functional testing – in many ways, this test serves the same function, albeit in a less complete and more general sense. That being said, this is a poor approach in modern testing, and UI testing should be strictly limited to ensuring that the UI itself functions as intended.

It should be mentioned that web UI testing is a subset of this type of test, and is concerned more with the end-to-end integrations between web instances and the APIs they represent. Though web UI testing is indeed distinct from other UI testing, it bears mentioning and inclusion in this category.

4. Load Testing

Load testing is a test obsessed with reality – it purposely eschews the theoretical (does this code work in theory?) and embraces the practical (will this code work with 1k requests, 10k requests, and 100k requests?). Load testing is thus typically done after the completion of a specific unit or the codebase as a whole, testing whether the theoretical solution works as a practical solution under a given load.

Load testing takes on a few different scenarios in order to ensure peak performance. The first of these scenarios is called the “baseline,” and tests the API against the theoretical regular traffic the API expects in normal, day-to-day usage. This includes regular-sized requests peppered with some extremely large requests in an effort to measure any impact between the two request types in practice.

A second load test is typically done with the theoretical maximum traffic. This is done to ensure that, even during times of full load, methods are in place to safely throttle requests. While the API may never actually reach this theoretical maximum, it is at least good to ensure it can be safely reached with the API reacting in an adequate fashion.

Finally, an overload test is typically done, testing to the theoretical maximum and adding 10–20% additional traffic on top. While this type of testing all but anticipates some sort of failure, it is as much a test of API function as it is a test of the error code generation and handling built into the API. As such, it almost becomes a hybrid test, concerned with what occurs during high-load operation and how any failures are handled during said operation.
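The baseline and overload scenarios can be sketched as follows. This is a toy illustration under stated assumptions: the capacity figure and the in-process handler are hypothetical, and production load testing would use dedicated tooling (locust, k6, JMeter, and the like) against the live API over the network.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

CAPACITY = 100          # assumed theoretical maximum request budget
_count = 0
_lock = threading.Lock()

def handle_request() -> int:
    """Hypothetical handler: 200 while under capacity, 429 once exhausted."""
    global _count
    with _lock:
        _count += 1
        return 200 if _count <= CAPACITY else 429

def run_load(n_requests: int) -> dict:
    """Fire n_requests concurrently and tally the status codes returned."""
    global _count
    _count = 0
    with ThreadPoolExecutor(max_workers=16) as pool:
        codes = list(pool.map(lambda _: handle_request(), range(n_requests)))
    return {"ok": codes.count(200), "throttled": codes.count(429)}

# Baseline: everyday traffic stays well under capacity.
assert run_load(50) == {"ok": 50, "throttled": 0}
# Overload: capacity plus 20% extra; the surplus is throttled, not crashed.
assert run_load(120) == {"ok": 100, "throttled": 20}
```

The key assertion is the second one: under overload, the surplus requests must fail safely with a throttling response rather than taking the service down.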

5. Runtime/Error Detection

This type of test is entirely concerned with the actual running of the API. Whereas most of our other tests are chiefly concerned with the result of implementing the API in an environment or scenario, this test is chiefly concerned with the universal results of utilizing the API codebase. These types of tests generally follow one of a few focuses:

  • Monitoring: The runtime of the compiled code is tested for various implementation errors, handler failures, and other intrinsic issues with the implementation to ensure there is no insecurity in the codebase through malfunction.
  • Execution Errors: The code should respond to valid requests in a predictable, known way, and should fail invalid requests just the same; predictably, and with a known pattern.
  • Resource Leaks: Invalid requests, purposefully overflowing commands, and other “illegal but common” types of requests are submitted to the API to test for memory, resource, data, or operational leaks and insecurities.
  • Error Detection: The code is put through known failure scenarios to ensure that errors are properly detected, handled, and routed.

Note that many of these could arguably be considered part of previous categories; this is because runtime/error detection is a near final review of the known errors and issues generated by previous tests, and is designed to holistically ensure resolutions have been applied successfully.
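The “execution errors” and “error detection” focuses above can be captured in a short sketch. The handler, its error codes, and the failure scenarios here are all hypothetical; the point being illustrated is that every input, valid or not, must map to a known, predictable response rather than an unhandled exception.

```python
# Assumed catalog of documented error codes for this hypothetical API.
KNOWN_ERRORS = {"BAD_INPUT", "NOT_FOUND"}

def handle(payload) -> dict:
    """Route a request, converting every failure into a known error code."""
    try:
        if not isinstance(payload, dict):
            return {"status": 400, "error": "BAD_INPUT"}
        if payload.get("id") != 1:
            return {"status": 404, "error": "NOT_FOUND"}
        return {"status": 200, "data": "ok"}
    except Exception:  # last-resort guard: never leak a raw traceback
        return {"status": 500, "error": "BAD_INPUT"}

# Known failure scenarios: each must map to a documented error code.
for bad in [None, "garbage", [], {"id": 99}]:
    resp = handle(bad)
    assert resp["status"] != 200 and resp["error"] in KNOWN_ERRORS

# Valid request: a predictable success path.
assert handle({"id": 1}) == {"status": 200, "data": "ok"}
```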

6. Security Testing

Security testing, penetration testing, and fuzz testing are often launched as three separate components of a greater security auditing process, and for this reason, they’ll be discussed jointly. These types of tests are designed to ensure that the implementation of the API is secure from external threats.

Security testing, as previously mentioned, encompasses penetration and fuzz testing, but entails additional steps, including validation of encryption methodologies and validating the design of the access control solution for the API. This includes user rights management and validating authorization checks for resource access.
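The access-control validation mentioned above can be sketched as a set of positive and negative authorization checks. The role-to-permission table and the `authorize` function are hypothetical stand-ins; real tests would exercise the API’s actual auth middleware with real tokens.

```python
# Hypothetical role-based access-control table.
PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "viewer": {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted to the role."""
    return action in PERMISSIONS.get(role, set())

# Positive checks: granted rights must work.
assert authorize("admin", "delete")
assert authorize("viewer", "read")

# Negative checks: a viewer must not escalate, and unknown roles get nothing.
assert not authorize("viewer", "delete")
assert not authorize("intruder", "read")
```

The negative checks are the security-relevant half – user rights management is only validated when the test proves that unauthorized access is refused, not merely that authorized access succeeds.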

7. Penetration Testing

Penetration testing takes this a step further, and is generally the second step in the greater auditing process. In this type of test, the API is attacked by someone who has limited working knowledge of the API itself in order to assess the threat vector from an outside perspective. These attacks can be limited to certain functions, resources, or processes, or can target the entirety of the API and its constituent parts.

8. Fuzz Testing

Finally, fuzz testing is typically a later step in the overall security audit, and is certainly less refined than penetration testing or the previous tests mentioned. In fuzz testing, massive amounts of purely random data, sometimes referred to as “noise” or “fuzz,” are forcibly input into the system in an attempt to force a crash, overflow, or other negative behavior. This is done to test the API at its absolute limits, and serves somewhat as a “worst case scenario.”
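A naive fuzzer can be sketched in a few lines. The `parse_request` function is a hypothetical input parser, and the random sampler below is deliberately crude – dedicated fuzzers (AFL, libFuzzer, atheris, and similar) generate far better-targeted noise – but the shape of the test is the same: feed garbage, and assert that nothing worse than a structured rejection ever happens.

```python
import random
import string

def parse_request(raw: str) -> dict:
    """Hypothetical parser for 'key=value&key=value' query strings.
    Malformed input is rejected gracefully, never raised."""
    try:
        pairs = dict(part.split("=", 1) for part in raw.split("&") if part)
        return {"status": 200, "params": pairs}
    except (ValueError, TypeError):
        return {"status": 400, "error": "MALFORMED"}

random.seed(0)  # reproducible noise for this sketch
alphabet = string.printable
for _ in range(1000):
    fuzz = "".join(random.choice(alphabet)
                   for _ in range(random.randint(0, 64)))
    resp = parse_request(fuzz)
    # The only acceptable outcomes are a successful parse or a structured
    # rejection – never an unhandled exception, hang, or crash.
    assert resp["status"] in (200, 400)
```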

9. Interoperability and WS Compliance Testing

While this is not necessarily a common series of tests, nor one that RESTful API providers will likely come up against, it should be discussed given the still-wide use of SOAP in the enterprise environment. Interoperability and WS Compliance testing really only applies to SOAP APIs, and specifically checks two general areas of function.

First, interoperability between SOAP APIs is checked by ensuring conformance to the Web Services Interoperability (WS-I) profiles. By conforming to these guidelines and utilizing these tests, interoperability between SOAP APIs can be confirmed and supported. This also has the added benefit of assuring that your APIs are compatible with some relatively large members of the consortium that authored these standards, including IBM, Microsoft, BEA Systems, Oracle, Intel, and more.

Secondly, WS-* compliance is tested to ensure standards such as WS-Addressing, WS-Discovery, WS-Federation, WS-Policy, WS-Security, and WS-Trust are properly implemented and utilized. This is a general step in assuring that your specific SOAP implementation matches the current industry standards, and is secure in its operations.

Final Thoughts

It’s very common for IT professionals to use varying terms to describe the many testing procedures. We’ve tried to define what exactly they all mean to get a clearer image of what specific tests could be applied to your API environments.

While not all of these tests are going to be valid given your codebase, implementation, and specific use case, it’s more than likely that at least some of them will be standard for the average web API development lifecycle.

That being said, the number of tests, especially those which are proprietary, is ever growing and morphing to meet the needs of the modern industry. Accordingly, this list should be considered a general guide to supplement the specific tests required by your contractual agreements and by those systems a client or partner will require for integration.

Are we missing any types of tests? Let us know in the comments below!


About Kristopher Sandoval

Kristopher Sandoval is a web developer, author, and artist based out of Northern California with over six years of experience in Information Technology and Network Administration. He has been writing articles for Nordic APIs since 2015.