Why DevOps Tooling Relies on Great API Integrations

APIs drive DevOps in every sense of the word. That said, we often view APIs in only one light – as a product, rather than what they truly are: a core enabling system for modern code infrastructure. APIs are involved in nearly every action in the DevOps space.

Why is that? What makes APIs so important to DevOps specifically? Today, we’re going to look at just that – we’ll consider use cases for excellent API integrations, and point out exactly why APIs are the unsung, ill-defined heroes of the DevOps age.

At the Platform Summit 2019 in Stockholm, we’ll host our first-ever DevOps speaker track! The CFP is still open.

APIs in the Context of DevOps

Before we dive into APIs as a DevOps tooling enabler, it would help to actually define what we mean when we say “API”. The word API often brings to mind a finished product. In other words, when a company says to use “their API,” the immediate thought is of a complete solution that performs a broad range of actions within a given class.

Even a microservice, which is really just an API split into several parts, is often seen through the lens of a product – if it enables anything, it enables what the developer wants to monetize, such as piping server status into a mobile push system.

The reality is that APIs are in fact at every single layer of computing, and as such, when we discuss “APIs,” we sell them short by considering them only a product. An API can be present in a visible way, as a complete, discrete offering that enables a core group of functions, but it can also be a relatively invisible element, making communication between layers smoother and more refined. In other words, an API can both offer an independent set of functions and facilitate other functions’ interconnection.

With this in mind, the relationship between DevOps and APIs becomes clearer. An API can serve one or more foundational roles in the world of DevOps – let’s take a look at a few now.

APIs as an Enabler

In DevOps, APIs serve chiefly as enablers between each stage of interaction. As each DevOps action is essentially a discrete function – a discrete unit of compute – interactions between them need to be supported by a commonly agreed-upon methodology or body of knowledge.

For example, when an offering allows a developer to port information from one system to another, it’s not as simple as copying and pasting information – that would require manual effort and consume too much time and compute power. Instead, APIs facilitate direct solution-to-solution communication, letting platforms exchange data without manual intervention.

For instance, let’s say we have a case in which a developer wants their team to develop additions to a codebase in response to trouble tickets, and then track that implementation on mobile devices. At each level of this interaction, we require an enabling API. The ticketing system has its own API which feeds into the work queue API, which then is interacted with by the coder who then outputs this data to a tracking API, which then outputs to a mobile device using – you guessed it – an API. At every single step, APIs serve to enable the DevOps solution, and without these APIs, there is no such thing as a great solution.
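As an illustrative sketch of that chain (every service, field, and payload here is hypothetical – real ticketing and tracking APIs would define their own formats), each hop amounts to one system translating an event into the shape the next system’s API expects:

```python
# Hypothetical sketch: a trouble ticket flows through the work queue
# and out to a mobile tracker, with an API translation at each hop.

def ticket_to_work_item(ticket: dict) -> dict:
    """Translate a ticketing-system payload into the work queue's format."""
    return {
        "title": f"[{ticket['priority'].upper()}] {ticket['subject']}",
        "source_ticket": ticket["id"],
        "assignee": ticket.get("owner", "unassigned"),
    }

def work_item_to_tracker_event(item: dict) -> dict:
    """Translate a work item into the mobile tracker's event format."""
    return {
        "event": "work_item_created",
        "summary": item["title"],
        "ref": item["source_ticket"],
    }

# Each hop in the chain is just another API payload translation.
ticket = {"id": 4521, "priority": "high",
          "subject": "Login fails on retry", "owner": "dana"}
event = work_item_to_tracker_event(ticket_to_work_item(ticket))
```

In a real pipeline, each of these functions would sit behind an HTTP endpoint or webhook, but the essential work – reshaping data so the next system can consume it – is the same.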

In this realm, we typically see integrations between offerings. Linking Slack to Zapier, utilizing Google Translate in Ticketing Systems, and even enabling synchronous customer service support chats are all great examples of this type of API integration in the DevOps space.

APIs as a Translator

Not every system is going to be the same – as such, APIs function as a sort of translator. In the early days of IT, when everything was developed in-house (or at least within a system defined as compatible with itself), nothing had to be translated. Something built for IBM-compatible systems would, largely speaking, be compatible across the board, and as such, there was no real need to port information from form to form.

In the modern era, where each service and core function is available piecemeal and instead leverages a great number of integrations from different providers, this is not always the case. What is doable in one solution may not be in another, and when moving data from one service to another, translation has to occur.

In such cases, APIs are not just enablers, they’re the entire backbone of the system itself – there’s no way to port information from Scala into Ruby on different hardware stacks with varying transit package forms and types of encryption without the ample use of APIs.
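As a toy illustration of that translator role (the services, field names, and formats below are invented for the example), consider two systems that disagree on how a deployment record should look – one emits epoch seconds and snake_case keys, the other expects ISO 8601 timestamps and camelCase:

```python
from datetime import datetime, timezone

# Hypothetical: service A emits epoch seconds and snake_case keys;
# service B expects ISO 8601 timestamps and camelCase keys.
def translate_deploy_record(record: dict) -> dict:
    """Translate service A's deployment record into service B's shape."""
    ts = datetime.fromtimestamp(record["deployed_at"], tz=timezone.utc)
    return {
        "serviceName": record["service_name"],
        "deployedAt": ts.isoformat(),
        "gitSha": record["commit_sha"][:7],  # B only wants the short SHA
    }

a_record = {"service_name": "billing", "deployed_at": 1700000000,
            "commit_sha": "a1b2c3d4e5f6"}
b_record = translate_deploy_record(a_record)
```

A translating API layer is essentially this function scaled up: a contract that absorbs each side’s quirks so neither service has to know about the other’s internals.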

APIs as an Automation Tool

DevOps provides a lot of benefits in the streamlining of systems. One such benefit is the possibility of automation. When a system is automated, APIs are required at each step in order to efficiently process the requested information and hand it off to the relevant process. Every single core function needs a way to understand the context and expectations of the data system automating the request – the inverse is also true, in that the system needs some sort of layer of understanding for each core function.

APIs enable this understanding. Amply documented, commented, and defined APIs can enable systems to plug into an automated network, leveraging existing code structure to great effect. Automation is different from a regular programming solution because the relationship is one of implicit trust and a sort of data escrow – there needs to be an intermediary that takes the client data being automated, holds onto it, and then sends it onward in the form expected. This is exactly what an API does.

For instance, if a developer wants to implement a monitoring solution, APIs are needed to understand not only what the expected function is, but to test the output of those functions against that expected result. The APIs at each level of the interaction need to be able to look at the function and compare it to the expected value, and if the value deviates significantly, they also need to be able to report this deviation and cycle the code back in for error checking.
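A minimal sketch of that compare-and-report step might look like the following (the function, threshold, and report format are all assumptions for illustration; a real monitoring API would POST the report to an alerting endpoint):

```python
def check_deviation(observed: float, expected: float,
                    tolerance: float = 0.05) -> dict:
    """Compare an observed metric against its expected value.

    Returns a report dict; if the relative deviation exceeds the
    tolerance, the 'alert' flag signals that the code should be
    cycled back in for error checking.
    """
    deviation = abs(observed - expected) / expected if expected else float("inf")
    return {
        "observed": observed,
        "expected": expected,
        "deviation": deviation,
        "alert": deviation > tolerance,
    }

ok = check_deviation(102.0, 100.0)    # 2% off: within tolerance
bad = check_deviation(130.0, 100.0)   # 30% off: flag for review
```

The monitoring APIs at each level of the stack perform some variant of this check, differing mainly in what “expected” means and where the deviation report gets routed.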

APIs as a Factorization

Finally, it should be mentioned that APIs in the DevOps space functionally serve as a method of compounding efforts. Each implementation and integration of an API adds to a web of integrations, making a single core function more effective, more comprehensive, and ultimately more than the sum of its parts.

For instance, if one were developing a Quality Assurance API to aid in code development, automatic error checking would be expected. What might not be expected, however, is value in integrating an API to check for recent error reports on GitHub repositories. Checking in this way can help identify whether the problem the codebase is experiencing is native to the code, or the result of a library or function performing incorrectly.
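A rough sketch of that GitHub check follows; the owner and repository names are placeholders, while the issues endpoint shown is GitHub’s public REST API:

```python
import json
import urllib.request

GITHUB_API = "https://api.github.com"

def recent_bug_issues_url(owner: str, repo: str) -> str:
    """Build the GitHub REST API URL for open issues labeled 'bug'."""
    return f"{GITHUB_API}/repos/{owner}/{repo}/issues?state=open&labels=bug"

def fetch_recent_bug_issues(owner: str, repo: str) -> list:
    """Fetch open bug reports for a dependency's repository."""
    req = urllib.request.Request(
        recent_bug_issues_url(owner, repo),
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g., to check whether a failure stems from an upstream library:
# issues = fetch_recent_bug_issues("some-org", "some-library")
```

If the upstream repository shows recent open bug reports matching the observed failure, the QA tooling can surface that before the team burns hours debugging code that was never at fault.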

Additional APIs could even expand this kind of interaction out, allowing for intelligent, hypermedia rich integrations that make each performed function in the development lifecycle easier to contextualize – for instance, an API could provide links to specific schema documentation areas when an invoked function fails, allowing the developer to quickly isolate and eliminate the problem code piece.

A single product can only do so much. When that product is married to a web of integrations that allows for translation, automation, understanding, contextualization, and enabling, it is made much greater than the sum of its parts.

Code Examples

Now that we generally understand the value behind API integrations for DevOps tooling, let’s look at some code examples. These code snippets are from official documentation and represent effective actions and operations that can help integrate each solution.


One aspect of DevOps testing involves testing specific user experiences. Unfortunately, these can sometimes be hidden behind the account actually doing the testing – in other words, what fails for a regular user may work fine for an admin.

GitLab provides an easy solution for generating an “impersonation token.” These tokens function exactly like personal access tokens in the GitLab codebase, but allow administrators to perform a call as if they were the user in question.

To start, the following call can be issued to create an impersonation token:

POST /users/:user_id/impersonation_tokens

From here, this token (or really any tokens attached to an account) can be used to impersonate the user, and thereby aid in the automated testing of the user workflow as well as the admin workflow.
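As a rough sketch of issuing that call with Python’s standard library (the host, token name, and admin token below are placeholders; `name` and `scopes` are parameters documented in GitLab’s REST API):

```python
import json
import urllib.request

def build_impersonation_request(base_url: str, admin_token: str,
                                user_id: int, scopes: list) -> urllib.request.Request:
    """Build the POST /users/:user_id/impersonation_tokens request."""
    payload = json.dumps({
        "name": "qa-impersonation",  # hypothetical token name
        "scopes": scopes,
    }).encode()
    return urllib.request.Request(
        f"{base_url}/api/v4/users/{user_id}/impersonation_tokens",
        data=payload,
        headers={
            "PRIVATE-TOKEN": admin_token,  # must belong to an admin
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_impersonation_request("https://gitlab.example.com",
#                                   "<admin token>", 42, ["api"])
# with urllib.request.urlopen(req) as resp:
#     token = json.load(resp)["token"]   # use this to act as user 42
```

The returned token can then be supplied on subsequent calls exactly as the user’s own token would be, driving the automated user-workflow tests described above.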


During automated staging, DevOps users might find themselves wanting to prevent full automation in order to catch any emergent issues during certain stages. This “milestone limitation” of sorts is often referred to as a sanity check and is used to ensure that there is still a human element to the overall release pattern.

Jenkins does this pretty simply by generating a stage that requires human input. Consider the following code snippet.

pipeline {
    agent any
    stages {
        /* "Build" and "Test" stages omitted */

        stage('Deploy - Staging') {
            steps {
                sh './deploy staging'
                sh './run-smoke-tests'
            }
        }
        stage('Sanity check') {
            steps {
                input "Does the staging environment look ok?"
            }
        }
        stage('Deploy - Production') {
            steps {
                sh './deploy production'
            }
        }
    }
}

In this approach, the Deploy – Staging stage runs automated scripts to ensure proper processing. Once that stage completes, however, an input request is made during the “Sanity check” stage to force human intervention. This could also support additional tools and external inputs, allowing secondary processing or staging to occur outside of the automated system.


Finally, a ReplicaSet in Kubernetes can provide an effective solution for maintaining availability and concurrency of services while rolling out updates to core services. The following code, drawn from the official documentation, creates a replica set of pods – in this case, three identical pods of the “guestbook” app on the frontend of the service.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

What this does, in DevOps terms, is ensure continuity of service by allowing endpoints to resolve to replica pods while the core services are being upgraded or otherwise changed. In this flow, the user will have no idea that the original pods are being altered, and if they do notice anything, it will only be due to new syntax or calls, or to gradual rollout methodologies adopted by the developer.
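As a small sketch of how external tooling might observe that continuity (the namespace and ReplicaSet name are taken from the example above; the cluster address is a placeholder), the Kubernetes API exposes ReplicaSets at a well-known path under `apps/v1`:

```python
from typing import Optional

def replicaset_path(namespace: str, name: Optional[str] = None) -> str:
    """Build the Kubernetes API path for ReplicaSets (apps/v1)."""
    base = f"/apis/apps/v1/namespaces/{namespace}/replicasets"
    return f"{base}/{name}" if name else base

# Against a real cluster, tooling would GET, e.g.:
#   https://<api-server>/apis/apps/v1/namespaces/default/replicasets/frontend
# and watch the status fields (such as readyReplicas) to confirm the
# three pods stay available while a rollout proceeds.
path = replicaset_path("default", "frontend")
```

This is the same enabler pattern as before: the orchestrator’s API lets deployment tooling verify availability without ever touching the pods directly.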


APIs are the unsung heroes of DevOps – without great API integrations, DevOps fails to deliver on its promise of a better, more sensible, more automated future. With the power of APIs and API integrations, however, this promise can be delivered – and in spades.

What do you think about APIs in the DevOps space? Do you think the wide range of API options is more helpful or harmful? Let us know in the comments below.