5 Ways to Detect Breaking Changes In Your APIs

The API space ultimately serves the end user: every API needs to deliver the data its users expect from the requests they know how to make. When a code change or deployment error causes those requests to stop producing the expected result, we call it a breaking change. Breaking changes are a net negative for both the developer and the user, so understanding what a breaking change is, and having methods to detect one, is of prime importance for any API developer.

What is a Breaking Change?

APIs are subject to change throughout their lifecycle in response to market shifts, changes to business approach, or new feature requests. Sometimes, an API may undergo a radical transformation to respond to these changes, which can fundamentally alter the API from the end user’s perspective. These changes can occur over multiple revisions with incremental adjustments but can also happen quickly and radically.

When a change is radical enough to cause an application to malfunction from the user’s perspective, it is known as a breaking change. Breaking changes can surface as endpoints that no longer work, responses with unforeseen formatting, obsolete features replaced by new ones, or even the outright removal of certain features.

How Can We Detect Breaking Changes?

Detecting possible breaking changes both before and after release is of paramount importance, as these changes can dramatically alter the experience of the end user. With this in mind, let’s look at ways to detect breaking changes.

Schema Testing

Schemas are a wonderful approach to enforcing structure and creating a source of truth for your API. This rigid enforcement, combined with the schema’s role as a source of truth, is what makes schema testing such a powerful solution.

Schemas are essentially API definitions that outline the expected form and function of an API. As such, they represent the end state that should be expected based upon the rationale and approach of the codebase. Accordingly, if the API’s behavior diverges from the schema (that is, if the API does not adhere to its own internal rules and definitions), it will naturally create a breaking experience for the end user.

There are a variety of tools that can be deployed for schema testing. Solutions like the OpenAPI Comparator can compare your specifications to ensure adequate compliance and expected form, enforcing the internal rules that define the function of your API. Other solutions such as Spectral employ linting, a process wherein a tool analyzes code for common errors, bugs, styling issues, standardization concerns, and so forth. In many cases, API definitions can contain problems depending on how they were built, so linting is a great secondary approach that complements a schema-centric approach to testing.
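To make the idea concrete, here is a minimal, pure-Python sketch of the kind of spec comparison a tool like the OpenAPI Comparator automates. The spec dicts and checks below are illustrative assumptions, covering only a few common breaking patterns (removed paths, removed operations, newly required parameters):

```python
# Minimal sketch: diff two OpenAPI-style specs for common breaking changes.
# The spec dicts below are illustrative, not drawn from any real API.

def find_breaking_changes(old_spec: dict, new_spec: dict) -> list[str]:
    """Flag removed paths, removed operations, and newly required parameters."""
    problems = []
    old_paths = old_spec.get("paths", {})
    new_paths = new_spec.get("paths", {})

    for path, old_ops in old_paths.items():
        if path not in new_paths:
            problems.append(f"removed path: {path}")
            continue
        for method, old_op in old_ops.items():
            if method not in new_paths[path]:
                problems.append(f"removed operation: {method.upper()} {path}")
                continue
            # A parameter that becomes required breaks existing callers.
            old_required = {p["name"] for p in old_op.get("parameters", [])
                            if p.get("required")}
            new_required = {p["name"] for p in new_paths[path][method].get("parameters", [])
                            if p.get("required")}
            for param in new_required - old_required:
                problems.append(f"new required parameter: {param} on {method.upper()} {path}")
    return problems


old = {"paths": {"/users": {"get": {"parameters": []}}}}
new = {"paths": {"/users": {"get": {"parameters": [
    {"name": "tenant_id", "required": True}]}}}}

print(find_breaking_changes(old, new))
# -> ['new required parameter: tenant_id on GET /users']
```

A real comparator also checks response schemas, type changes, and enum narrowing; this sketch only illustrates the shape of the approach.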

Contract Testing

APIs are fundamentally relational and based upon expectations — services that communicate expect a set of understood interactions and a consistent method of exchange defined by a specific set of rules. This collection of rules and expectations is broadly known as a contract — a service agrees to a contract, so interacting with that service consistently results in the same response.

Accordingly, testing this contract is one way to detect a breaking change. These agreed functions are the end result for most users, and as such, testing the contract is essentially testing the conformity of the application as it currently is with what it should be from a practical point of view.

Contract testing solutions such as Pact offer open-source approaches to test contracts in an automated fashion, but contract testing can also take a manual form. Whatever the implementation, contract testing should itself be consistently reviewed so that developers are sure the contracts, as stated, match with the business logic and assumptions that the developer assumes as active and accurate.
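As a rough illustration of what a contract check does under the hood, here is a plain-Python sketch. The contract and the stubbed provider response are invented stand-ins for what a tool like Pact would manage and replay for you:

```python
# Minimal sketch of a consumer-driven contract check, in plain Python.
# CONTRACT and the sample responses are illustrative assumptions.

CONTRACT = {
    "status": 200,
    "required_fields": {"id": int, "email": str},
}

def satisfies_contract(status: int, body: dict, contract: dict) -> list[str]:
    """Return a list of contract violations (an empty list means the contract holds)."""
    violations = []
    if status != contract["status"]:
        violations.append(f"expected status {contract['status']}, got {status}")
    for field, expected_type in contract["required_fields"].items():
        if field not in body:
            violations.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            violations.append(f"field {field!r} should be {expected_type.__name__}")
    return violations

# Simulated provider response; a real suite would issue an HTTP request here.
response_body = {"id": 42, "email": "user@example.com"}
print(satisfies_contract(200, response_body, CONTRACT))  # -> []
print(satisfies_contract(200, {"id": "42"}, CONTRACT))
# -> ["field 'id' should be int", 'missing field: email']
```

Run against every release candidate, a check like this catches the moment a response silently drops or retypes a field the consumer depends on.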

Manual Testing

Another way of testing for breaking changes is to manually review the code and test integrations and the UI experience. Manual testing means examining the code for common errors and for issues specific to the current implementation. Its main drawback is that it relies on the expertise and experience of those doing the testing. Accordingly, it’s often helpful to rotate who writes the code and who tests it, so that assumptions and biases do not creep into the testing cycle and skew the results of a manual review.

That being said, many testing solutions make this process more streamlined. Something like Postman allows developers to create testing systems through various scripts and debugging steps. Systems that enable custom testing will allow developers to test their API against common pitfalls, as well as specific issues that arise from the implementation and the environment to which it is deployed.
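Postman test scripts are written in JavaScript, but the same idea can be sketched in plain Python with the standard unittest module. Everything here is hypothetical: fetch_user() is a stub standing in for a real HTTP call to your deployed API:

```python
import unittest

def fetch_user(user_id: int) -> dict:
    # Stub; a real suite would issue an HTTP request to the deployed endpoint.
    return {"status": 200, "body": {"id": user_id, "name": "Ada"}}

class UserEndpointChecks(unittest.TestCase):
    """Codifies a manual review checklist as repeatable assertions."""

    def test_status_code(self):
        self.assertEqual(fetch_user(1)["status"], 200)

    def test_response_shape(self):
        body = fetch_user(1)["body"]
        self.assertIn("id", body)
        self.assertIn("name", body)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(UserEndpointChecks)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Running a checklist like this on every build turns a one-off manual review into a repeatable gate.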

This kind of detection can also help flag breaking changes during iteration cycles. It’s not good enough to test once and assume everything works — testing must be continuous and based on the release’s current state rather than assumptions.

Continuous Monitoring

The idea of continuous monitoring is to proactively watch the state of the service and its requests to identify outages, errors, denied requests, inefficiencies, and other areas for improvement and repair. In essence, this is monitoring the service with a highly critical eye, which allows developers to surface much more than if they were to simply look at the code, contracts, and underlying schema. Put another way, this kind of detection is where the ‘rubber hits the road.’

There are a huge variety of API monitoring tools on the market, each with its own benefits and drawbacks. Some are standalone offerings that provide only continuous monitoring, while many others are part of a larger collection of tools and systems, giving developers a more comprehensive deployment and lifecycle management platform. Which approach is appropriate depends largely on the implementation of the codebase, but as long as the solution offers comprehensive continuous monitoring, it is worth considering.
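Most monitoring products reduce to alerting rules evaluated over a window of health-check samples. The sketch below shows that core logic in plain Python; the thresholds, sample data, and field names are illustrative assumptions, not any particular vendor’s API:

```python
# Sketch of the alerting logic behind a continuous monitor. A real deployment
# would feed this from scheduled health-check requests.
from dataclasses import dataclass

@dataclass
class Sample:
    status: int
    latency_ms: float

def evaluate(samples: list[Sample], max_error_rate: float = 0.05,
             max_p95_latency_ms: float = 500.0) -> list[str]:
    """Flag elevated error rates and slow responses across a window of checks."""
    alerts = []
    errors = sum(1 for s in samples if s.status >= 500)
    if samples and errors / len(samples) > max_error_rate:
        alerts.append(f"error rate {errors}/{len(samples)} exceeds threshold")
    latencies = sorted(s.latency_ms for s in samples)
    if latencies:
        # Nearest-rank approximation of the 95th percentile.
        p95 = latencies[int(0.95 * (len(latencies) - 1))]
        if p95 > max_p95_latency_ms:
            alerts.append(f"p95 latency {p95:.0f}ms exceeds {max_p95_latency_ms:.0f}ms")
    return alerts

window = [Sample(200, 120), Sample(200, 140), Sample(500, 900), Sample(200, 130)]
print(evaluate(window))  # -> ['error rate 1/4 exceeds threshold']
```

A sudden change in either signal right after a deploy is a strong hint that the release broke something for real traffic.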

User Feedback

The gulf between API developers and end users will always create a difference in experience. No matter how close the developers try to get to the end user, the interaction paradigm is still very different. As a result, end users will often interact with and exercise different parts of the API than the developer does.

The good news is that this perspective can unlock major potential for detecting breaking changes. Ultimately, the end user will have the most to say about breaking changes because the impact falls squarely on their shoulders. Leveraging the end users’ understanding and ongoing user experience can help detect, resolve, and mitigate breaking changes that may not be readily apparent.

User feedback can be incorporated into a support cycle in quite a few ways. First and foremost, providing an open support system where users can actually report errors is paramount. This can take a variety of forms, from email to chat support, but ultimately, the mode of support is less important than ensuring it is adequate and obvious to the end user.

The simplest approach is to provide a support email that is consistently monitored and used to log issues. A more involved setup might offer live streams for audience roundtables and live chat support. Or, a support system might integrate something like Zendesk, which provides an end-to-end customer service solution.
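However feedback arrives, logging it in a scannable form pays off. As a small illustration (with made-up reports and route names), counting endpoint mentions across free-text feedback can surface a spike of complaints about a single route, which is often the first external signal of a breaking change:

```python
import re
from collections import Counter

def endpoints_by_complaints(reports: list[str]) -> Counter:
    """Count endpoint-like paths mentioned across free-text feedback."""
    pattern = re.compile(r"/[a-z0-9_/{}-]+")
    counts = Counter()
    for report in reports:
        counts.update(pattern.findall(report.lower()))
    return counts

reports = [
    "GET /users now returns 404",
    "/users endpoint stopped including email",
    "checkout at /orders fails since yesterday",
]
print(endpoints_by_complaints(reports).most_common(1))  # -> [('/users', 2)]
```

Even a crude count like this turns a pile of support emails into a ranked list of routes worth investigating first.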

Conclusion

Breaking changes are a common problem for API consumers. They can lead to poor adoption and even turn away potential users. API developers should treat breaking changes as an existential threat, and the solutions offered herein as ways to address these concerns for the health of both the product and the end user.

Ultimately, the proper solution will be highly dependent on the specific implementation. The good news is, however, that all the solutions offered in this piece are complementary — when they work in tandem, they can deliver effective detection for breaking changes.

Are there any solutions we missed? Let us know in the comments below!