9 Common Errors Made During API Testing

API testing is an important facet of the API development process. Such testing can help disclose important security flaws, data processing errors, and even breaks in basic functionality. With all that said, many API testing processes are fundamentally flawed, and because of this, issues often persist well beyond their reasonable lifespan.

Today, we’re going to discuss 9 of the most common errors made during API testing. We’ll highlight why these errors are a big deal, and provide some simple solutions to improve testing methodologies, results, and overall API health.

This post was inspired by a white-paper from APIFortress. Download it here, and read our review of their API testing offering.

1: Errant Entries

One of the largest failures an API can endure is a failure in basic call functionality. These issues are especially frustrating when the issue at hand is something simple that works properly alone, but fails when called with other resources.

This is the nature of errant entries. Errant entries are essentially points in the API's data or code where a field, reference, category, or function is improperly defined as part of a set but functions as an individual. When this situation arises, the API will seem to run perfectly fine, throwing no errors. When the individual endpoint is tested, however, issues come flooding through.

For example, let's say you have a commerce application that sorts items by category, and that a large portion of your entries carry a 'NULL' category. In this case, a broad listing call would seem to work perfectly fine, but a call against that 'NULL' category would fail in execution. For our commerce app, the effect is to drastically cut users off from those resources: with an errant category like 'NULL', you're taking potentially thousands of items off the store in an unsearchable way.


Test, test, test. Simple issues like a ‘NULL’ entry or other errant data entry point can easily be identified during early testing, especially when testing single endpoints. Test both upstream and downstream, identifying potential areas where poor, malformed entries might be affecting the overall health of the API.
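A simple pre-flight scan can catch errant entries before endpoint testing even begins. The sketch below is a minimal Python illustration, with a hypothetical `catalog` structure and field names, that flags items whose category is missing or set to a 'NULL' sentinel:

```python
# Hedged sketch: scanning a catalog for "errant entries" such as a NULL
# category. The catalog data and field names here are hypothetical.
catalog = [
    {"id": 1, "name": "Lamp",  "category": "lighting"},
    {"id": 2, "name": "Chair", "category": None},       # errant entry
    {"id": 3, "name": "Desk",  "category": "furniture"},
]

def find_errant_entries(items, field="category"):
    """Return items whose category is missing, None, or a 'NULL' sentinel."""
    bad_values = (None, "", "NULL", "null")
    return [item for item in items if item.get(field) in bad_values]

errant = find_errant_entries(catalog)
print([item["id"] for item in errant])  # -> [2]
```

Running a scan like this upstream, before single-endpoint tests, surfaces the malformed data that would otherwise only appear as a confusing downstream failure.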

2: Outdated Caching

Caching makes the internet world go round. By saving frequently requested data so it can be served again on demand, users can access the same resources time and time again without adding additional load to the server and stressing it beyond its capabilities.

This is generally a best practice, but like anything simple and considered a best practice, improper implementation can be just as bad as not having it at all.

Let's again reference that hypothetical eCommerce API. The system is set up to cache every 30 minutes in order to update stock and reduce server load during busy cycles. Today, the store is running a major sale, and because of that, far more items than usual are being added to the listing endpoint.

The problem? The listing endpoint is being cached, but the data is being presented in live form via a dynamic web front. Because of this, customers can see the new data, but the poorly implemented caching results in a clickable item, a picture, and even a description, all of which, when clicked, lead to a 404 page — the resolution for that endpoint has not been cached yet.


Consider how the user is going to interface with your API. Are they going to be checking an item every 30 minutes during a sale, or will traffic be higher than ever before, resulting in more interaction? If that’s the case, the old caching setup that was used for a medium traffic flow over a long period of time is obviously not going to handle a high traffic flow over a very short time.

Test the API as if you are a consumer. Hit every endpoint you can think of in as many ways as you can. Add entries, delete them, and attempt to manipulate them. If at any point your experience is hampered, fix it so that the future, actual customer will not face the same issue.
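To make this failure mode concrete, here is a minimal Python simulation of the mismatch: a listing cache refreshed on a schedule alongside a detail cache that has never resolved the new items. All names and structures are illustrative, not a real caching framework:

```python
# Listing cache vs. detail cache, refreshed independently (illustrative).
LISTING = []   # item ids shown on the storefront
DETAILS = {}   # item_id -> rendered detail page, populated separately

def refresh_listing(live_items):
    """Simulate the scheduled listing refresh: new items become visible..."""
    global LISTING
    LISTING = [item["id"] for item in live_items]

def get_detail(item_id):
    """...but a click misses the stale detail cache, surfacing a 404."""
    return DETAILS.get(item_id, "404 Not Found")

live = [{"id": "sale-item-1"}, {"id": "sale-item-2"}]
refresh_listing(live)

print("sale-item-1" in LISTING)   # True: the item is listed and clickable
print(get_detail("sale-item-1"))  # 404 Not Found: its page was never cached
```

Testing "as a consumer" means exercising exactly this path: fetch the listing, then follow every item it advertises and confirm each one resolves.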

3: Invalid Fields

Another huge issue in the API space is data being returned in a manner that is incorrect or unexpected. Applications always have their unique quirks and caveats, and because of this, small issues can become big disasters once other integrations are involved.

For example, when returning a URL object, developers typically favor returning either HTTP or NULL as a response. If this response is incorrectly formatted, however, returning HTTP:NULL, then many third-party apps, devices, or browsers will read this as a valid URL and attempt to navigate to the resource.

Obviously this is not ideal for a number of reasons, and it creates inconsistency in testing. One testing application might find this acceptable, parsing HTTP:NULL as a valid address, while another testing application will determine, correctly, that it is an invalid field.


Test field validity with permutation upon permutation. The harder you are on your API now, the better it will handle failures in the long run. Machines must be trained; they do not have intrinsic knowledge, and must be guided toward correct behavior when faced with certain stimuli.

As part of this, proper documentation is key. An API is only as good as its documentation (or as good as its sandboxing and virtualization). Users must know what to expect so that they can code around it. If developers and users know to expect either HTTP or NULL, then when HTTP:NULL is returned due to quirks in a specific system, it can be auto-resolved correctly.
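One defensive pattern, sketched below in Python, is to normalize the field on receipt so that malformed sentinels like HTTP:NULL are treated as absent rather than as navigable URLs. The accepted prefixes and sentinel values here are assumptions for illustration:

```python
# Hedged sketch: normalizing a URL field so malformed sentinels such as
# "HTTP:NULL" are treated as absent rather than as valid URLs.

def normalize_url(value):
    """Return a usable URL string, or None for empty or sentinel values."""
    if value is None:
        return None
    value = value.strip()
    if value.upper() in ("", "NULL", "HTTP:NULL"):
        return None
    if value.lower().startswith(("http://", "https://")):
        return value
    return None  # anything else is treated as invalid

print(normalize_url("https://example.com/item/1"))  # https://example.com/item/1
print(normalize_url("HTTP:NULL"))                   # None
```

A consumer that runs every URL field through a normalizer like this behaves consistently no matter which quirk the upstream system exhibits.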

For help constructing docs, see our Ultimate Guide to 30+ API Documentation Solutions

4: False Negatives

Part of the fundamental nature of testing is the expectation of either a positive or a negative. Regardless of the nature and mode of testing, the entire purpose is to get consistent, reliable, repeatable responses. Unfortunately, development sometimes gets in the way of this.

When using APIs, a 200 response indicates that everything is fine — it’s the universal “all clear.” When developing an API, however, many providers will put the default state at 200, meaning that a NULL error or other failure in response will still return a 200.

To the API developer, nothing looks wrong. To the testing systems, the expected responses are coming back. To the user, however, everything says it's fine when it clearly is not.

This kind of false negative, where a real failure never surfaces in testing, is incredibly damaging because it doesn't allow the developer to see errors when they occur. It's like trying to complete a jigsaw puzzle in the dark: basically impossible, and it defeats the entire purpose of the puzzle. The same is true with APIs.


In this industry, we often gauge API reliability with uptime and status. That's not entirely right, though. Rather, we should place more emphasis on functional uptime: the time when an API is functioning properly. To do so, check your error responses often, and ensure they actually behave the way they should.

More importantly, avoid shortcuts in your codebase. A 200 response should be reserved for genuine successes, not used as a default. Responses, for that matter, should be clear, concise, and related to the issue at hand.
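The shortcut and its fix can be contrasted in a few lines. In this hypothetical Python handler, the anti-pattern hard-codes a 200 status, while the corrected version maps an empty result to a real error code:

```python
# Hedged sketch contrasting a "default 200" handler with one that maps
# failures to real status codes. The handler shape is illustrative.

def handler_default_200(result):
    # Anti-pattern: status is hard-coded, so errors still report success.
    return {"status": 200, "body": result}

def handler_explicit(result):
    # Fix: reserve 200 for genuine successes.
    if result is None:
        return {"status": 500, "body": "internal error: empty result"}
    return {"status": 200, "body": result}

print(handler_default_200(None)["status"])  # 200: a false all-clear
print(handler_explicit(None)["status"])     # 500: the error is visible
```

Tests built against the second handler can assert on real failure codes, which is exactly what makes functional uptime measurable.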

As part of the continual theme throughout this piece, test, and test often. Only through rigorous and constant testing can these issues be discovered.

5: Non-standard Standardization

When developing APIs, standardization is incredibly helpful for a nascent dev team. It can help inform how things should run, how calls should look, and what behavior to expect from the language. Unfortunately, many developers build outside of the standard, and they often do not share these deviations. What ends up happening is that intended functionality, simply because it is non-standard, ends up being treated as a bug or issue.

An undocumented return value whose numbers change within a pattern might come across as random or derived from the code base, when in actuality it's simply a timestamp. Or an endpoint fails during querying, returning "field is void", when the entry itself is titled "field". The developer is left wondering which field is empty, when in actuality they simply don't understand what is broken.

This lack of standardization might be fine for an internal API where everything is known to everyone, but in a publicly consumed API, this leads to segmentation, confusion, and failure to communicate.


Keep to the well-known solutions whenever possible. While this is a rather simple answer, it does come with a caveat — moving outside of these standard solutions is perfectly acceptable, especially when a new functionality is required that standard languages and practices do not support.

The key in this case, however, is proper documentation. Make note of what is expected, allowed, or inferred, and try to make this public. Communication and documentation will set you free.

To assist with API standardization, see the API Style Book's Collections of Resources for API Designers.

6: Failure in Team Communication

This brings up a good point — failure to communicate between disparate teams is incredibly damaging. Teams are often split amongst user experience, development, and support lines, and because of this, communication between each is vital.

Failure in this realm can lead to cascading issues. Development failing to notify a type change can result in support giving out false information and user experience being harmed. An interface change from the UX team not broadcast to the development team could result in broken site functionality and half the API being inaccessible.

Let's consider that earlier eCommerce API. As the API grows, new types are added, specifically to support different versions of the same movie in various formats: DVD, Blu-ray, Director's Cut, etc. Unfortunately, the development team never notified the UX team or the support team of the new data types. Because of this, the web front end fails to show any content in any of these three categories, which forces a failure in the other categories as well, since the system was not designed to handle the three new variants.


Again, a very simple solution: communication. API blueprinting and mocking platforms can mitigate many of the issues around lone feature-set additions, and given this, there's really no reason for features to be added to production resources without proper communication.

When a plan is set up via blueprinting, it should be adhered to, and every revision should be based around that concept and development path. Testing your API against this blueprint once in production, and then testing new features in the pre-formed mock, can help alleviate most of these issues.
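A lightweight way to test against a blueprint is a contract check: compare each response's fields and types to the agreed shape. The blueprint below is a hypothetical stand-in for a real specification:

```python
# Hedged sketch of a contract check against a blueprint. The expected
# field names and types here stand in for a real API blueprint.

BLUEPRINT = {"id": int, "title": str, "format": str}

def matches_blueprint(response, blueprint=BLUEPRINT):
    """True only if every blueprinted field is present with the right type."""
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in blueprint.items()
    )

print(matches_blueprint({"id": 7, "title": "Heat", "format": "Blu-ray"}))  # True
print(matches_blueprint({"id": 7, "title": "Heat"}))  # False: missing field
```

When the development team adds a new type, a failing contract check like this surfaces the change to every team before it reaches production.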

7: Compatibility

When adding new features, communication is not the only thing to be concerned about. Because of disparate and sometimes segmented development paths, not ensuring compatibility can result in broken functionality and huge issues of interoperability.

For instance, back to our eCommerce example, let’s assume the development team pushes a change to the code, and notifies the front end team. The front end is aware, and states they can support the update.

Unfortunately, once the change goes live, nothing related to the new code is searchable or interactive.

Upon investigation, the team finds that, for security purposes, the front end is designed to read the API strictly and against a cached version. Because of this, it rejected the new feature set, making the data inaccessible and, for all intents and purposes, non-existent.


Production is not testing. When integrating a new process or function, it should not go to production unless it has been rigorously tested and staged to confirm it does what you want it to do. Testing a system in a singular process, such as an API running in a mock environment that does not mirror real life, is essentially worthless.

Test the same as you would a live production asset, and test every permutation and variation you can think of. Attempt to break the service. Ensure that the very worst can be handled, specifically the very worst that has been actively seen on the network before.

8: Ensure Readability

The world is international: millions of interactions each day, across many languages and character sets, run through a wide, diverse array of systems.

This means one thing for the API provider: character sets. A character set is the collection of characters that supports a given language. Umlauts, Hiragana/Katakana, and Kanji all require different character sets, and a call can fail if the right one is not present.

That’s not to say every language in the world must be supported, but the most common languages used would be a great start. When your API processes English, Spanish, and Chinese characters on a day to day basis, ensuring this data readability is paramount.


Advanced payload monitors can do wonders to verify the readability of each item as it's entered. Sanitization can help as well, turning entries into user IDs or other unique identifiers and stripping special characters. And of course, character sets can be utilized to make these entries valid. The solution chosen should match the most common character sets your API will encounter.
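A basic readability check is to verify that multilingual payloads survive an encode/decode round trip in your chosen encoding. A minimal Python sketch, with illustrative sample strings:

```python
# Hedged sketch: checking that multilingual text survives an encode/decode
# round trip. The sample strings are illustrative.

samples = ["Über", "ひらがな", "漢字", "niño"]

def survives_roundtrip(text, encoding="utf-8"):
    """True if the text encodes and decodes without loss in this encoding."""
    try:
        return text.encode(encoding).decode(encoding) == text
    except UnicodeEncodeError:
        return False

print(all(survives_roundtrip(s) for s in samples))   # True under UTF-8
print(survives_roundtrip("Über", encoding="ascii"))  # False: ü has no ASCII form
```

Running a check like this over representative payloads in each supported language quickly reveals where an encoding choice will fail a call.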

9: Test Intelligently

With everything that's been said regarding the importance of testing, perhaps the most important element is ensuring your tests are actually good. With a poorly crafted test, issues that arise between item relations, or even across entire data sets, can slip through unnoticed and grow into much larger problems.

Flawed testing methods can also produce false positives. As dangerous as false negatives are, false positives are just as damaging, because they create an environment where serious issues are left alone or unconsciously ignored.


Test smartly. Use tests that have proven themselves across multiple applications, and ensure your tooling is commonly used and well vetted.

One of the best things a developer can do is to create a mock API that is intentionally broken. By breaking the API in a known way and testing against it, the testing methodology itself can be tested. If you know an endpoint is going to return a failure, but it instead returns a 200, you know for a fact your methodology is wrong.
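The idea of an intentionally broken mock can be sketched in a few lines of Python. The mock below misbehaves in a known way (a 200 with an empty body), and the check confirms the harness actually flags it; all names are hypothetical:

```python
# Hedged sketch of "testing the tests": a mock endpoint that is broken on
# purpose, used to confirm the test harness actually flags failures.

def broken_mock_endpoint():
    """A mock that is known to misbehave: it reports 200 on an error."""
    return {"status": 200, "body": None}  # body should never be None on 200

def harness_detects_failure(response):
    """A sound harness must flag a 200 response whose body is missing."""
    return response["status"] == 200 and response["body"] is None

# If this check comes back False, the testing methodology itself is broken.
print(harness_detects_failure(broken_mock_endpoint()))  # True
```

Because the defect is planted deliberately, any harness that reports the mock as healthy has failed its own audit.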

Remember — test results are only as good as the testing mechanism itself.

Conclusion: Intelligent Testing is Vital to the API Dev Toolkit

Half of any battle is knowing your battlefield terrain. The same is true of API development: knowing the most common errors in API testing, and how to negate them, is a vital item in any developer's toolkit. While a wide range of additional issues can arise during API testing, these are far and away the most common, and in many cases, the most damaging.

Thankfully, by keeping these issues in mind, many related problems can be negated through cascading fixes. By properly anticipating and resolving these issues, API development can be a smooth and positive process.

About Kristopher Sandoval

Kristopher Sandoval is a web developer, author, and artist based out of Northern California with over six years of experience in Information Technology and Network Administration. He writes for Nordic APIs and blogs on his site, A New Sincerity.