Top Kubernetes Service Meshes Compared

The cloud revolution has led companies to deploy a growing variety of services, inherently connected across hybrid, multi-cloud environments. Microservice providers often grapple with mounting complexity as their service suites expand. How do companies conquer these challenges?

How typical microservices are organized and run. Image courtesy of Kubernetes.io.

Modern applications now run in containers, which share a common host operating system yet each have their own memory, filesystem, CPU allocation, and process space. This arrangement is relatively lightweight: it boosts environmental consistency, and DevOps teams find it easier to manage each container in isolation.

However, every system encounters obstacles. We’ll quickly cover Kubernetes, then hop into a few of our favorite service meshes.

The Kubernetes Edge

Distributed applications face some common issues: downtime, poor scalability, and failover. These can be nightmarish to fix manually, and putting out fires is costly and time-consuming. Surely there must be a better way?

Kubernetes arose as an open-source, Google-backed framework for automating the deployment, scaling, and management of containerized applications. DevOps teams can offload cumbersome maintenance duties to the framework, freeing up time and resources for other tasks. Overall, Kubernetes offers the following benefits (a brief manifest sketch follows the list):

  • Load balancing and traffic distribution to ensure service stability. Kubernetes can expose a container using either DNS or an IP address.
  • Storage agnosticism—allowing teams to choose between local storage and cloud providers
  • Automated creation and removal of containers, including dynamic resource allocation between them
  • Automatic bin packing, which places containers onto nodes based on each container’s declared CPU and memory requirements
  • Automatic container management based on health checks, including restarts and process killing
  • Secret and configuration management, letting you store sensitive data such as passwords and tokens and update it without rebuilding your container images
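
Several of these features surface directly in a workload’s manifest. Here’s a minimal, hypothetical Deployment sketch (the web name, image, and web-secrets Secret are placeholders) showing resource requests that drive bin packing, a liveness probe that triggers self-healing restarts, and a Secret reference that can be updated without rebuilding the image:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                          # hypothetical service name
    spec:
      replicas: 3                        # Kubernetes keeps three pods running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example.com/web:1.0   # placeholder image
            resources:                   # declared needs guide bin packing onto nodes
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 250m
                memory: 256Mi
            livenessProbe:               # failed health checks trigger automatic restarts
              httpGet:
                path: /healthz
                port: 8080
            envFrom:
            - secretRef:
                name: web-secrets        # secret values change without image rebuilds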

Kubernetes isn’t a traditional platform-as-a-service (PaaS). The documentation states that “Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.” Many of those benefits we discussed are elective—teams can utilize features as required. Kubernetes is geared towards microservices without being monolithic itself.

What is a Service Mesh, Anyway?

A service mesh acts as an infrastructure layer within your existing environment. It facilitates communication between your API-powered microservices while bolstering security. While this sounds remarkably similar to Kubernetes, service meshes provide added levels of data control and configuration. They also sit atop your container layer.

Service meshes also include the following features:

  • Proxy networks called sidecars
  • Policy management
  • Telemetry and metric collection
  • Tracing

Service meshes are designed to be lightweight. Since these layers are performance-focused, they incur little overhead on the existing network. They also help manage application traffic to boot. A proper mesh implementation takes the burden away from individual services—allowing developers to code their applications individually while DevOps oversees the mesh.
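
As one concrete example of how the sidecar pattern is wired up, Istio (covered below) can automatically inject its Envoy proxy into every pod of a namespace that carries its injection label; the namespace name here is hypothetical:

    # Label a namespace so new pods receive an Envoy sidecar automatically
    kubectl label namespace demo-apps istio-injection=enabled

    # Recreate existing workloads so the injected sidecar is added to their pods
    kubectl rollout restart deployment -n demo-apps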

Without further ado, let’s see what our four service meshes bring to the table:

Istio

Istio launched in 2017 and has developed into an all-encompassing service mesh solution for DevOps teams. The platform has enjoyed plenty of exposure, thanks to backing from Google, IBM, and Lyft. It’s one of the most popular service meshes for Kubernetes deployments today. The latest production build is version 1.5.1.

Features and Benefits

The Istio mesh focuses on four chief areas: connections, security, control, and observation. Istio offers a rich suite of traffic management controls, perfect for distributing API calls and related activity. DevOps teams can harness staged and canary rollouts, A/B testing, and percentage-based traffic allocation.
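
Percentage-based allocation, for instance, is expressed through an Istio VirtualService that splits traffic between subsets of a service; this sketch assumes a hypothetical reviews service whose v1 and v2 subsets are defined in a matching DestinationRule:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews                # the in-mesh service being routed
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90           # 90% of requests stay on the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10           # 10% canaried to the new version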

All traffic is routed through Envoy proxies, offering great performance and easy setup. Virtual services provide excellent integration with Kubernetes: you can configure a single virtual service to handle “all services in a specific namespace.” Istio states that you can split a monolithic application into multiple small services using this method, simplifying management without adversely impacting your users. Teams running a Kubernetes cluster can take advantage of service discovery, which automatically identifies your system’s critical endpoints. Istio also extends Kubernetes with its own custom resource definitions, and its deployment configuration relies on Kubernetes labels and metadata for compatibility. Istio implementations harness gateways and service entries, which support traffic entering and leaving the mesh as well as multiple Kubernetes clusters. Other traffic safeguards ensure seamless functionality when issues arise.
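
As a brief sketch of a service entry, the resource below (with a placeholder hostname) adds an external API to Istio’s registry so the mesh can apply its routing and resilience rules to traffic leaving the cluster:

    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: external-api
    spec:
      hosts:
      - api.example.com        # placeholder external host
      location: MESH_EXTERNAL  # the workload lives outside the mesh
      ports:
      - number: 443
        name: https
        protocol: TLS
      resolution: DNS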

Istio handles authentication policy through its own Kubernetes custom resources, and getting set up is pretty simple. Workload identity is tied to Kubernetes service accounts.
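
As a rough illustration using Istio 1.5’s security API, a single PeerAuthentication resource in the istio-system namespace can require mutual TLS across the whole mesh:

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system  # placing it here makes the policy mesh-wide
    spec:
      mtls:
        mode: STRICT           # only mutual-TLS traffic is accepted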

Finally, Istio provides a Kubernetes-specific template for quick attribute generation. Overall, the solution offers many perks to Kubernetes users, on top of any framework-agnostic features your team will automatically enjoy.

Linkerd

Buoyant, Inc. originally launched Linkerd, which later evolved into Linkerd2 in late 2018. The service mesh was built primarily for the Kubernetes framework, and it’s open source. Linkerd has a sizable Fortune 500 presence, powering microservices for Walmart, Comcast, eBay, and others, and the project is quick to highlight its following: over 3,000 Slack members and 10,000 GitHub stars. The latest production build is version 2.7.

Features and Benefits

So, what’s all the buzz about? Linkerd was built to be both lightweight and hassle-free. One of its key advantages is compatibility with existing platforms; very few (if any) coding changes are needed to get up and running. That means you can devote precious time to configuration and galvanizing your service network.

Indeed, Linkerd claims you can get started in mere seconds. Linkerd relies on its command-line interface (CLI) to mesh with Kubernetes, and installing the CLI takes a single command. From there, simply validate your Kubernetes cluster and install Linkerd atop it. The entire process takes only a handful of commands from start to finish.
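
A typical getting-started flow, sketched with commands from Linkerd’s documented CLI and assuming your kubectl context already points at the target cluster:

    curl -sL https://run.linkerd.io/install | sh   # install the linkerd CLI
    linkerd check --pre                            # validate that the cluster is ready
    linkerd install | kubectl apply -f -           # render and apply the control plane
    linkerd check                                  # confirm the installation is healthy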

Traffic Management

How does Linkerd manage traffic? The mesh can route all traffic through proxies—including TCP, TLS, WebSockets, and HTTP tunneling. This is crucial for preventing traffic bottlenecks and maintaining service stability. You can even divert traffic to different destination services.

The framework also introduces “retry budgets” into the mix, capping retries at a preset ratio of regular requests. This prevents retry storms and facilitates better load management. The platform also uses Kubernetes admission webhooks to inject proxies into pods within your Kubernetes clusters.
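
Retry budgets are declared on a service profile. A rough sketch for a hypothetical web service in the default namespace might look like this:

    apiVersion: linkerd.io/v1alpha2
    kind: ServiceProfile
    metadata:
      name: web.default.svc.cluster.local   # hypothetical service
      namespace: default
    spec:
      routes:
      - name: GET /items
        condition:
          method: GET
          pathRegex: /items
        isRetryable: true          # mark this route as safe to retry
      retryBudget:
        retryRatio: 0.2            # retries may add at most 20% extra load
        minRetriesPerSecond: 10    # floor so low-traffic services can still retry
        ttl: 10s                   # window over which the ratio is measured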

Linkerd ties mTLS identity within the cluster to Kubernetes ServiceAccounts. Certificates and keys are stored in a Kubernetes Secret.

Monitoring and metric availability are solid, especially through the Linkerd Dashboard. Teams can view success rates, requests per second, and latency. This is meant to supplement your existing dashboard as opposed to replacing it.
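
In Linkerd 2.7 those golden metrics are a CLI call away, alongside the bundled web dashboard; for example:

    linkerd stat deployments -n default   # success rate, requests per second, latency
    linkerd dashboard &                   # open the web dashboard in a browser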

Consul

The premise of Consul is simple: connect and secure services across any platform, via any public or private cloud. The company offers its open-source platform, and even brings an enterprise-grade solution to the table. You can also extend Consul thanks to its compatibility with major service providers. Consul is the elder platform of the bunch—with development efforts reaching back as far as 2014. The latest production build is version 1.7.2.

Features and Benefits

Consul provides a suite of control features, focusing on configuration and segmentation. These can be harnessed together or individually. You can run Consul’s service mesh with its own built-in sidecar proxy, which comes baked in automatically, or use Envoy as the proxy option instead.

The framework’s primary features are as follows (a short usage sketch follows the list):

  • Service discovery via DNS or HTTP
  • Health checks across different services, nodes, and clusters—including traffic diversions away from problematic hosts
  • Hierarchical key-and-value stores via an HTTP API
  • Secure, intention-based service communication using TLS certificates
  • Baked-in multi-datacenter support, bucking the need for extra layers of abstraction
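
To make the first and third items above more concrete, here’s roughly how the DNS interface and key/value store are exercised against a local Consul agent (the service and key names are made up):

    dig @127.0.0.1 -p 8600 web.service.consul   # discover instances of a "web" service over DNS
    consul kv put config/db/host db.internal    # write a hierarchical key
    consul kv get config/db/host                # read it back (also reachable via the HTTP API)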

You can deploy Consul atop Kubernetes using a Helm chart. Consul runs what it calls a local client on every node, deployed as pods, which exposes the Consul API to the workloads on that node. Like Istio, the mesh also uses sidecars to achieve mutual TLS connections. That paves the way for authentication, encryption, and stronger communication.
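
A minimal sketch of that Helm-based installation, assuming HashiCorp’s public chart repository and default values aside from sidecar injection (value names may vary across chart versions):

    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm install consul hashicorp/consul \
      --set global.name=consul \
      --set connectInject.enabled=true   # inject Connect sidecar proxies into annotated pods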

Overall, Consul was built to coexist with Kubernetes. Consul’s service registry can sync with Kubernetes service discovery, making it easy for in-cluster workloads to connect with external services.

The folks behind Consul have published a number of resources, aimed at unifying Kubernetes and Consul. These provide insights into development best practices. The Consul platform can also interface with an Azure-Kubernetes deployment.

Kuma

Kuma’s mission is simple: promote solid service connectivity via a modern, user-friendly GUI. It also focuses heavily on optimizations that squeeze every ounce of performance from your ecosystem. According to CTO Marco Palladino, Kuma is “the only Envoy-based service mesh with open governance…” Kuma’s creators are also in the process of donating their platform to the Cloud Native Computing Foundation (CNCF). This will ensure cloud application developers have yet another powerful management tool at their disposal. The latest production version is 0.5.0.

Features and Benefits

Kuma is built atop Envoy, granting it immense flexibility during implementation. It brings an uncluttered interface and can be fully operated via Kubernetes CRDs or RESTful APIs. Microservice linking is simple: users can designate policies (L4 and L7) for security, routing, observability, and more using only one command, where other solutions require multiple steps to accomplish the same goal. Kuma can also run wherever you need it to (a short sketch follows the list). These setups include:

  • Kubernetes platforms
  • Virtual machines
  • Cloud environments
  • On-premises environments
  • Multiple Kubernetes (K8s) instances and clusters
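
On Kubernetes, a rough flow is to render the control plane with kumactl and then express policies as CRDs:

    # Install Kuma's control plane onto the cluster
    kumactl install control-plane | kubectl apply -f -

Once the control plane is running, a single-step policy like the TrafficPermission below can be applied with kubectl. The service names are placeholders, and the kuma.io/service tag follows current Kuma documentation (older releases may use a different tag name):

    apiVersion: kuma.io/v1alpha1
    kind: TrafficPermission
    mesh: default
    metadata:
      name: allow-web-to-backend
    spec:
      sources:
      - match:
          kuma.io/service: web          # placeholder source service
      destinations:
      - match:
          kuma.io/service: backend      # placeholder destination service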

Platform agnosticism was a chief goal from the start. Furthermore, the simplicity of Kuma’s dashboard extends to the streamlined setup process. Kuma claims to best top competitors in startup time, cluster management, and logging, the latter working with both databases and applications. For example, where other platforms might require one cluster for every mesh, a single Kuma cluster can support upwards of 100 meshes.

Administrators enjoy hassle-free access to Kuma’s compiled metrics. You can also perform fault injection and tracing, making it easy to identify ecosystem weaknesses. Accordingly, you can establish customized health checks.

Low-level Envoy resources are configurable via proxy templates, which offer supplemental control beyond Kuma’s stock abilities. These definitions don’t work against Kuma’s own configuration; they’re integrated in a complementary manner, which keeps things cohesive. If API gateways are your thing, Kuma meshes harmoniously with Kong’s gateway. There’s a lot to like here: Kuma delivers rich functionality without painstaking configuration, flattening the learning curve.

The company also maintains numerous online resources: namely their blog, hosted webinars, briefs, and eBooks. Documentation is clear and organized.

When to Use What

Service meshes have grown increasingly capable in recent years. They’ve evolved in lockstep with web communication standards, adapted to changing security protocols, and introduced richer management tools into the mix. Microservice providers both small and large no longer have to grapple with security or high traffic volumes on their own.

Kubernetes adopters will be hard-pressed to find better options. As a quick assessment:

  • If you require ultra-granular traffic management, and favor Envoy over other providers, Istio is the way to go.
  • If you enjoy a bustling support community and want to make as few coding changes as possible, shoot for Linkerd.
  • If you need ultimate extensibility or an enterprise-level solution, consider Consul.
  • Lastly, if you want a platform-agnostic, Envoy-based mesh with a friendly GUI and minimal setup, give Kuma a look.

Which service mesh do you prefer? Feel free to let us know below!