Using Test-Driven Development for Microservices

The microservices paradigm has become a prominent software design methodology as API-first industries embrace more decoupled operational layers. In the past we’ve discussed asynchronous choreography, BFF design, as well as identity control for microservices. However, one question remains: what is the most effective means of testing microservices?

Test-driven development, or TDD, is a development philosophy that emphasizes very short development cycles. With TDD, development teams chart out their business requirements and outline specific behavioral use cases to verify. TDD for APIs has become common practice; however, how do we perform efficient TDD with limited resources across an entire microservice ecosystem?

Within this article, we’ll walk through a case study in establishing a reusable testing methodology for a microservice platform. Guided by Platform Summit speaker Michael Kuehne-Schlinkert of Cybus, we’ll identify test types (like platform testing, integration tests, functional testing, and unit testing). Then we’ll zero in on the Cybus testing strategy, tracking their cycle from holistic black box testing to precise white box unit testing.

This is a companion post to Michael Kuehne-Schlinkert’s presentation at Nordic APIs:

View presentation slides here

Test-Driven Development for APIs and Microservices

Using TDD for web development, specifically with microservices and APIs, means iterative testing of specific behaviors. This could involve iterative testing of API calls in a separate project for API tests, fixing failures, and coding new functionalities into a system. Software is either developed or improved to pass these use cases, and this process is repeated to introduce successive behaviors.

Steve Klabnik’s TDD process for APIs seeks to validate that the API is working as intended. In 2012, he outlined TDD for APIs with the following steps:

1. Write a test for some behavior you’d like to introduce into your system.
2. Run your test suite, and make sure that test fails.
3. Write the simplest code that implements the behavior.
4. Run your test suite, and make sure that test passes.
5. Refactor, because the simplest code often has undesirable properties.
6. Commit, and GOTO 1.
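
As a minimal sketch of step one, the following test is written before any implementation exists, so the suite fails first. It assumes a Node.js project using the built-in node:test runner; the deviceService module and getDevice function are hypothetical placeholders for whatever behavior you want to introduce:

    // Step 1: write a failing test for the behavior we want to introduce.
    // deviceService does not exist yet, so this test (and even this require) fails.
    const test = require('node:test');
    const assert = require('node:assert');
    const { getDevice } = require('./deviceService');

    test('returns device data for an authorized role', async () => {
      const device = await getDevice('device-42', { role: 'admin' });
      assert.strictEqual(device.id, 'device-42');
      assert.ok(Array.isArray(device.readings));
    });

Once the simplest getDevice implementation makes this pass, the refactor-and-commit steps close the loop.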

The concept of test-driven development has been popular since the early 2000s, and much of Klabnik’s process still applies. However, with the rise of microservices, some developers are augmenting traditional testing methods to answer new questions: How do we validate that a microservice system is behaving as intended? And more importantly, how do we create efficient and repeatable tests?

Michael believes the contemporary goal is to figure out how to efficiently validate that our microservice ecosystem is working as intended. Rather than zeroing in on a single API, we must now envision the entire ecosystem working in synchronicity.

Cybus: Case Study

Michael’s startup, Cybus, works with industrial protocols to act as a secure layer between offline devices and online services, and as you’ve probably guessed, their platform is powered by a microservices architecture.

Naturally, data-as-a-service providers like Cybus must perform rigorous testing to meet their service level agreements. However, according to Michael, the real driver for creating a repeatable testing strategy was having an agile response to the needs of their customers. It doesn’t hurt to mention that Cybus’ TDD strategy must be efficient and lean, as they are a startup with a three-person development team.

Step One: Create A Story for the Ecosystem

So, for a microservice ecosystem, where do you start testing? Michael believes a helpful place to start is by creating a story. He lays out a simple rubric for describing one:

As a <role>
I want <feature>
So that <reason>

For example, say you want to control read access to devices so that users can only read the data they are allowed to read, a common technique of mapping permissions to provisioning tiers or data access levels. The template could be filled out to describe a system administrator (role) who wants to control read access to my devices (feature) so that users can only see the data they are supposed to see (reason).
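
Filled out with those pieces, the story reads:

As a system administrator
I want to control read access to my devices
So that users can only see the data they are supposed to see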

Crafting a story puts you in the user’s seat to understand their motivations, and creating these stories is critical for generating technical scenarios to then test.

Step Two: Create the Scenario

Next, we can use the story to generate the scenario, which is more of a technical workflow behind the user motivation. There are various types of scenarios for different use cases, but a good place to start is one scenario per service. The Cybus team structures their scenarios in an IFTTT-like phrasing similar to Gherkin, the language for behavior-driven development:

GIVEN <precondition>
WHEN <event>
THEN <outcome>

For a situation in which the admin wants to control access to devices, this scenario could involve three services on non-isolated servers that must communicate with each other: a UI service providing information to end users, a device service that is machine readable, and an OAuth service for authorization.

The scenario could be described as follows:

GIVEN External Device
GIVEN Device Service
GIVEN OAuth Service
WHEN External Device provides new data
WHEN Read Access to Device is granted
THEN Device Service reads data from External Device.

Like a recipe for a single story, we map out the given assumptions and logically explain the event and intended outcome. For cases where an external device or server communicates with the ecosystem, it is difficult to validate its behavior, as you don’t have control over it. So, the Cybus team stubs external components they are unable to control.
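
A stub can be as simple as a small in-memory object that mimics the behavior the team cannot control. The sketch below is illustrative, not Cybus code; the class and method names are assumptions:

    // A minimal in-memory stand-in for an external device the team cannot control.
    // Class and method names are illustrative only.
    class ExternalDeviceStub {
      constructor() {
        this.listeners = [];
      }
      onData(listener) {
        this.listeners.push(listener);
      }
      // Test code calls this to simulate "WHEN External Device provides new data".
      emitData(payload) {
        this.listeners.forEach((listener) => listener(payload));
      }
    }

    // In a scenario test, the stub stands in for the real device:
    const device = new ExternalDeviceStub();
    device.onData((data) => console.log('device service received:', data));
    device.emitData({ temperature: 21.5 });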

Turn Scenarios into Acceptance Tests

Using these scenarios, the Cybus team conducts acceptance testing, in which they perform black box testing: they look at the entire arrangement before diving into the nitty-gritty details.

Platform Test

First, they perform black box testing of the entire platform to verify that functionality works throughout the whole ecosystem: they test all microservices involved in these large-scope scenarios, stubbing external devices in the process.

In addition to testing the API, the team also tests the Graphical User Interface (GUI), simulating user behaviors, such as clicks, to create mock calls. Simulating the end user experience is critical for the team to ensure the API drives quality customer experiences.
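
On the API side, a platform test exercises the system only through its public endpoints. A minimal sketch, assuming Node 18+ (for the global fetch) and hypothetical routes and credentials:

    // Black box platform test: touch the system only through its public API.
    // The base URL, routes, and credentials below are assumptions.
    const test = require('node:test');
    const assert = require('node:assert');

    const BASE = process.env.PLATFORM_URL || 'http://localhost:8080';

    test('an authorized user can read device data end to end', async () => {
      // Authenticate against the (hypothetical) OAuth service.
      const login = await fetch(`${BASE}/oauth/token`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ user: 'admin', password: 'secret' }),
      });
      const { token } = await login.json();

      // Read data exactly as a client would.
      const res = await fetch(`${BASE}/devices/device-42/data`, {
        headers: { Authorization: `Bearer ${token}` },
      });
      assert.strictEqual(res.status, 200);
    });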

Contract Tests

Contract tests are also part of the acceptance testing process. Here, the team tests the interfaces of microservices to validate their JSON schemas, checking whether the interface of a service deviates from its contract. This helps validate test-driven development and helps avoid code-first habits.
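
A contract test can be a schema validation against a live response. The sketch below uses the Ajv JSON schema validator; the schema and endpoint are illustrative assumptions, not the actual Cybus contract:

    // Contract test: validate a service response against its published JSON schema.
    const test = require('node:test');
    const assert = require('node:assert');
    const Ajv = require('ajv'); // npm install ajv

    // Illustrative contract for a device resource.
    const deviceSchema = {
      type: 'object',
      required: ['id', 'readings'],
      properties: {
        id: { type: 'string' },
        readings: { type: 'array', items: { type: 'number' } },
      },
    };

    test('device service response matches its contract', async () => {
      const res = await fetch('http://localhost:8080/devices/device-42');
      const body = await res.json();
      const ajv = new Ajv();
      assert.ok(ajv.validate(deviceSchema, body), ajv.errorsText());
    });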

Next Comes Integration Tests

So far, our tests have focused on platform-wide synchronicity, but as you might expect, a purely large scope is not sufficient. What we really need is a more fine-grained approach with integration tests, also known as component tests.

Integration tests take a specific scenario and turn it into a test for a specific service. Harking back to our device access control scenario, integration tests would involve a single test case per service within each story, such as an isolated test of the OAuth server.

Though the microservice testing is performed as a black box, they stub and mock external dependencies. Since the Cybus style of integration testing is service-specific, but not completely end-to-end, they call this style gray box testing.
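
In practice, that can mean instantiating one service with stubbed collaborators. A minimal sketch, assuming a hypothetical createDeviceService factory that accepts its dependencies:

    // Gray box integration test: run one service, stub everything around it.
    const test = require('node:test');
    const assert = require('node:assert');
    // Hypothetical factory that takes its collaborators as arguments.
    const { createDeviceService } = require('./deviceService');

    test('device service denies reads without granted access', async () => {
      // Stub the OAuth client instead of running the real OAuth service.
      const oauthStub = { hasReadAccess: async () => false };
      const service = createDeviceService({ oauth: oauthStub });

      await assert.rejects(
        () => service.readData('device-42', 'user-1'),
        /access denied/i
      );
    });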

Functional Tests

Next comes an even deeper approach. For the Cybus team, functional testing within Node.js means testing communications between specific modules. They need to ensure that two individual components behave and interact with each other in the correct way. Functional testing involves white box testing for specific classes, and mocking all other communication around the services.

Michael offers an example: testing the communication between two components, such as the Controller and Device Handler within a microservice. This is a great start, but comprehensive testing requires an even deeper approach.
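
A sketch of that kind of functional test follows. The Controller and Device Handler come from Michael’s example, but their interfaces here are invented for illustration, and the transport beneath them is mocked away:

    // Functional (white box) test: verify two modules interact correctly,
    // mocking everything around them. Interfaces are illustrative.
    const test = require('node:test');
    const assert = require('node:assert');

    class DeviceHandler {
      constructor(transport) { this.transport = transport; }
      read(id) { return this.transport.request(`read:${id}`); }
    }

    class Controller {
      constructor(handler) { this.handler = handler; }
      getDeviceData(id) { return this.handler.read(id); }
    }

    test('controller forwards read requests through the device handler', async () => {
      const transportMock = { request: async (msg) => ({ msg }) }; // mock the transport
      const controller = new Controller(new DeviceHandler(transportMock));
      const result = await controller.getDeviceData('device-42');
      assert.strictEqual(result.msg, 'read:device-42');
    });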

Unit Tests

Unit testing is the ultimate white box. All surrounding mechanisms are set aside, and tests focus on a single unit, such as a class or module within a specific domain. Unit testing is essential; APIs and microservices simply can’t live without it.
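
For completeness, a unit test in this style isolates one class with nothing stubbed or mocked at all; PermissionSet here is a hypothetical domain class:

    // Unit test: a single class in isolation, nothing mocked or stubbed.
    const test = require('node:test');
    const assert = require('node:assert');

    // Hypothetical domain class.
    class PermissionSet {
      constructor(granted = []) { this.granted = new Set(granted); }
      canRead(deviceId) { return this.granted.has(deviceId); }
    }

    test('canRead is true only for granted devices', () => {
      const perms = new PermissionSet(['device-42']);
      assert.ok(perms.canRead('device-42'));
      assert.ok(!perms.canRead('device-99'));
    });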

The TDD Cycle

Now that we’ve defined the individual test types, how does a team actually traverse the test-driven cycle? For Cybus, it goes something like this:

Repurposed from slide 13, “Our TDD Cycle”

Testing begins with black box acceptance testing, moves to gray box integration testing, and then to white box unit testing. After that, they code, perform more unit tests, iterate some more, and perform integration tests. Interestingly, the Cybus team writes their functional tests on the way back through the cycle.

Michael notes that attempting to test all use cases and scenarios means a staggering number of test cases, but the goal is to have near 100% unit test coverage. By mastering this loop, the team has created reusable testing code, as well as many testing libraries they can reuse. Michael still encourages developers to know their environment and what works for them.

Final Thoughts: Business Requirements Direct TDD

What’s interesting is that adopting a TDD mindset means business needs direct development. Privileging the customer story means features cater to what the user base wants, and thus business goals are sustained. For Cybus, this use-case-first approach to testing their microservices platform also results in high test coverage, improved test reusability, and an expansive library of testing code.

Some lasting thoughts on behavior-driven development:

  • Know your environment and what works for you: Since the terminology and processes established within this post are derived from the Cybus team, they may not apply to every situation.
  • Reusability: Developing reusable code for testing will make testing efficient, especially for small teams.
  • Labor-intensive work pays off: Creating testing libraries is a hard start but is worth it in the end.
  • Dependencies: Create individual test libraries for each service, and make it a benchmark for introducing new dependencies.

Considering that we are attempting to build APIs that last for decades, Michael also notes that:

“One solid key to make sure we build APIs that last is to use test-driven development.”

Helpful Resources on TDD