Docker Containers and APIs: A Brief Overview

One of the major issues universally faced in API development is the management, packaging, and distribution of dependencies. The dependency set required by an API might make it extensible, wonderful to use, and extremely powerful. However, if hard to manage, dependencies could spell adoption limbo.

A solution to this age-old problem has exploded onto the scene in recent years, however. Docker is a system by which a complete ecosystem can be contained, packaged, and shipped, integrating “code, runtime, system tools, system libraries – anything you can install on a server”.

In this piece, we’re going to take a look at Docker and its container system. We’ll discuss some cases where using Docker is a good idea, some cases where it may not be the best solution, and the strengths and weaknesses of the system as a whole.

What is Docker?

Before we can critique it, we need to fully understand what Docker is and how it functions. Simply put, Docker is an open platform by which an entire development ecosystem can be provided to API users and consumers in a single package. It is a methodology for handling dependencies and simplifying functionality, and it can be used with a wide variety of the languages commonly chosen for microservices.

The “Classic” API Approach

Consider the “classic” method of dependency handling and ecosystem management. An API is developed which issues remote calls to a server or service. These calls are handled by a framework that is referenced externally to the API. This framework then requests resources external to the API server in the form of dependencies, which allow the code to function the way it was designed to. Finally, data is served to the client in the constrained format determined by the API.

Heavy and unwieldy, this system is antiquated in many ways. It depends on the developers of dependencies to update their systems, maintain effective versioning controls, and handle external security vulnerabilities.

Additionally, many of these dependencies are proprietary, or at the very least in the hands of a single developer. This means that code is maintained external to the API, and any change in functionality, failure in security, modification of additional dependencies used by the dependency author, and so forth can cause catastrophic failure.

Barring dependency issues, the “classic” approach is simply resource heavy and slow. It requires that developers host the entire system in a web of interlacing APIs, attempting to hack together a system that functions. It’s a delicate, functional ecosystem that is impressive in its complexity — but with this complexity comes the potential for the classic approach to become a veritable “house of cards”.

The Docker Approach

Docker has created a completely different approach. Instead of depending on multiple external sources for functionality, Docker allows for the remote use of operating system images and infrastructure in a way that distributes all the dependencies, system functionalities, and core services within the API itself.

Docker calls these “containers”. Think of containers like virtual machines — but better.

A virtual machine (VM) packages an application with the binaries, libraries, dependencies, and an operating system — and all the bloat that comes with it. This is fine for remote desktop and enterprise workstation usage, but it leads to a ton of bandwidth waste, and isn’t really a great approach for APIs.

Docker containers, on the other hand, are more self-sufficient. They contain the application and all of its dependencies, but share a common kernel with the other applications running in userspace on the host operating system. This frees the container to run on any system, and removes the operating system bloat of virtual machines entirely by restricting contents to only what the API or application needs.
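To make that concrete, here is a minimal sketch of a Dockerfile for a hypothetical Go-based API (the base image, paths, and port are illustrative assumptions, and the build context is assumed to contain the API's Go source and module files). Everything the API needs is declared in the image; the host contributes only the kernel:

# Small shared base image; the host kernel is reused, not duplicated
FROM golang:alpine
WORKDIR /app
# Copy the API source in and bake the compiled binary and its dependencies into the image
COPY . .
RUN go build -o /usr/local/bin/api .
# Document the port the API listens on and start it when the container runs
EXPOSE 8080
CMD ["api"]

Building this file yields an image that carries the API and its dependencies, and nothing else.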


Why Docker?

With its similarities to virtual machines, a lot of developers are likely wondering what the buzz around Docker is. What, specifically, makes it so great? There are a lot of reasons to love Docker:

  • Open Source – Docker is designed to take advantage of the wide range of open standards present in both the Linux and Windows ecosystems. This allows it to support pretty much any infrastructure configuration you throw at it, while allowing for transparency in the code base.

Unlike closed systems, open systems are routinely checked for security vulnerabilities by those who use them, and are thus considered by many to be “more secure”. Additionally, because these standards are meant to promote interoperability between disparate systems, compatibility issues between systems in the code base or in library functionality are largely non-existent.

  • Security Through Sandboxing – Docker may not call it “sandboxing”, but that’s essentially what it is — every application is isolated from other applications due to the nature of Docker containers, meaning they each run in their own separate, but connected, ecosystem.

This results in a huge layer of security that cannot be ignored. In the classic approach, APIs are so interdependent with one another that breaching one often results in the entire system becoming vulnerable unless complex security systems are implemented. With applications sandboxed, this is no longer an issue.

  • Faster Development and Easier Iteration – Because a wide range of environments can be created, replicated, and augmented, APIs can be developed to work with a variety of systems that are otherwise not available to many developers.

As part of this benefit, APIs can be tested in enterprise environments, variable stacks, and even live environments before full deployment without significant cost. This process integrates wonderfully with Two-Speed IT development strategies, allowing for iteration and stability across a service. This has the additional benefit of supporting a truly effective microservices architecture style, due largely to the overall reduction of size and the lean-strategy focused methodology.

  • Lightweight Reduction of Redundancy – By the very nature of Docker containers, APIs share a base kernel. This means fewer system resources dedicated to redundant dependencies and fewer instances eating up server RAM and processing cycles.

Common filesystems and imaging only make this a more attractive proposition, easing the burden of space created by multiple-dependency APIs by orders of magnitude. This keeps the API container simple to use and understand, and the API itself truly functional and useful.
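As a rough sketch of where those savings come from (the ./api-one and ./api-two build contexts are hypothetical), two images whose Dockerfiles start FROM the same base share that base's layers on disk rather than duplicating them:

docker build -t api-one ./api-one
docker build -t api-two ./api-two
# Both images reference the same base layers; Docker stores those layers once and reuses them.

The more APIs you containerize on a host, the more these shared layers pay off.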


Simple Docker Commands

Part of the appeal of Docker is how simple the commands and variables therein are. For example, to create a container through the Docker Remote API, you simply issue the following call:

POST /containers/create
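As a minimal sketch of what that request might look like (assuming the Docker daemon is listening on its default Unix socket; required fields and versioned paths vary between releases), the same call can be made with curl:

curl --unix-socket /var/run/docker.sock \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"Image": "ubuntu", "Cmd": ["echo", "hello"]}' \
  http://localhost/containers/create

The daemon responds with the ID of the newly created (but not yet started) container.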

Using a list of variables, you can completely change how this container functions, what its name is, and what the container has within. You can change everything from the name using:

 --name=""

To the MAC Address of the container itself:

  --mac-address=""

You can even change the way the container functions with the server itself, assigning the container to run on specific CPU cores by ID:

 --cpuset-cpus=""

When the “docker create” command is run, a writable container layer is created over the specified image, which allows the application’s files to be modified without touching the underlying image.
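Putting these flags together, a single create call might look like the following (the container name, MAC address, and CPU IDs are illustrative values):

docker create --name my-api --mac-address="02:42:ac:11:00:02" --cpuset-cpus="0,1" ubuntu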

These variables are also available for the “run” command. Most importantly, the “run” command can also attach a static address to the API container utilizing the “-p” and “--expose” flags:

docker run -p 192.168.0.1:8910:8910
docker run --expose 8910

These two calls will first publish the container’s port 8910 on the host IP 192.168.0.1, and then expose port 8910 to other containers; it is the published port that is actually opened to outward-facing API traffic.

In order to make these containers functional, of course, a container needs to be created from an image. These images are built from a Dockerfile utilizing the “build” call:

docker build -t ubuntu .

This builds a simple image from the Dockerfile in the current directory, which is then assigned an “IMAGE ID” that can be listed using the “docker images” call:

docker images -a --no-trunc

This call lists the entirety of the Docker image library without truncation; any of those images can then be called and utilized with the run variables described above.
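For instance (assuming the image in question provides a shell), an image from that listing can be started by tag or ID and inspected interactively:

docker run -it ubuntu /bin/bash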

Docker avoids a lot of the dependency loading inherent in the API process, simplifying code and making for leaner network and system utilization. For instance, consider a theoretical set of custom imports in Golang:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"

    "customlib"
    "golang-local/issr"
    "functionreader"
    "payment_processor"
    "maths"
)

In Docker, an equivalent request would simply be:

docker run --name tracecrt

Caveat Emptor

Docker containers are a good solution for a very common problem, but they’re not for everybody. While Docker significantly simplifies the API system at runtime, that simplicity comes with the caveat of increased complexity in setting up the containers.

Additionally, because containers share kernels, there’s a good deal of redundancy that is lost by design. While this is good in terms of project scope management, it also means that when there is an issue with the kernel, with an image, or with the host itself, an entire ecosystem is threatened.

One of the biggest caveats here is actually not one of Docker itself, but of understanding concerning the system. Many developers are keen to treat Docker as a platform for development rather than for what it is — functionally speaking, a great optimization and streamlining tool. These developers would be better off adopting Platform-as-a-Service (PaaS) systems rather than managing the minutia of self-hosted and managed virtual or logical servers.

Conclusion

Docker containers are incredibly powerful, just like the language that backs them. With this considered, containers are certainly not for everyone — simple APIs, such as strictly structured URI call APIs, will not utilize containers effectively, and the added complexity can make it hard for novice developers to deal with larger APIs and systems.

That being said, containers are a great solution for a huge problem, and if a developer is comfortable with managing images and the physical servers or systems they run on, containers are a godsend, and can lead to explosive growth and creativity.