How Service Mesh Prevents a Microservices Mess

Art Anthony
March 23, 2021

One of the biggest challenges facing microservices architecture is growth. That might sound counterintuitive, but it's often the case, particularly in legacy modernization. "When we talk about microservices, we're often talking about sharding up an existing monolith into a bunch of smaller pieces," said Geir Sjurseth from Google at our 2019 Platform Summit in Stockholm.

The more a microservices architecture expands (which often means more of the existing monolith has been broken down), the denser the web of microservices becomes. But, in Sjurseth's opinion, the alternative isn't too appealing either: "Instead of breaking down the existing monolith, we might just wrap it with an API," he said. In reality, he argued, that's just "putting makeup on a pig."

The solution? Service mesh. The aim of using a service mesh is, in Google's words from that talk, to increase product velocity, manage the complexity of code, admit heterogeneity, and empower developers. If you're working in microservices, that's bound to sound pretty appealing.

What Is A Service Mesh?

In a talk on their YouTube channel, Google Cloud Platform describes a service mesh as "a way to build intelligence into the network." That's true, but it's a little vague. Elsewhere on our site, we've put it like this: "Service mesh is a design pattern that provides a common networking framework, helping solve one pain of introducing a microservice architecture."

The aim here is to address a particular set of problems, defined by Sjurseth as follows: "I want my customer to have a good experience, so I don't want them to have to use SOAP for this service, REST for this service, and some funky custom XML thing for this other service." It's fair to say that service mesh aims to improve consistency, but, as we'll see from Sjurseth's points, there's more to it than just that.

Why Use Service Mesh?
In his talk, Sjurseth outlines the advantages of a service mesh as follows:

- Connect: control the flow of traffic, enable flexible testing, and support blue/green deployments
- Secure: automatically secure your services with authentication, authorization, and encrypted communication
- Control: apply policies and ensure they are enforced across services; manage traffic so that resources are used fairly across services
- Observe: rich operational logging, monitoring, and tracing

In addition, a service mesh allows us to:

- Manage the rollout of mTLS
- Provision certificates, distribute policies, and collect logs
- Delegate all of this complexity to a proxy (or sidecar)

With this taken care of, developers can focus their attention where it belongs: on the APIs they're building, rather than on securing services and managing traffic.

Now, consider this alongside some of the advantages of robust API management:

- Discover: a developer portal makes your APIs discoverable to developers
- Modernize: an outside-in approach helps you design APIs that people actually want to consume
- Report: access metrics such as developer adoption and app usage
- Monetize: drive revenue from your data and services

Sjurseth suggests that, by combining these two concepts, we can create a more optimal environment for the consumption of APIs. Internal clients can connect directly, while external clients connect through the API gateway. However, there's a potential problem that remains with this approach.

Containerization and Service Mesh

As we add more and more services, there's a risk we end up with "a gateway that's trying to facilitate all these different pieces in a way that it really can't handle because they're deployed independently." The result is a mess of dependencies that are hard to keep track of. One of the biggest pitfalls of microservices architecture is that the more "micro" you make things, the more services you'll probably end up managing.
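To make the "Connect" capability above a little more concrete, here is a minimal sketch of what a blue/green-style traffic split might look like in Istio, one popular service mesh. The `reviews` service and its `v1`/`v2` subsets are hypothetical names used purely for illustration:

```yaml
# Hypothetical Istio config: split traffic between two versions of a service.
# A DestinationRule names the two subsets (blue = v1, green = v2)...
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
# ...and a VirtualService routes 90% of requests to v1 and 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Shifting traffic from blue to green then just means editing the weights; the mesh's sidecar proxies pick up the change without redeploying the services themselves.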
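The "Secure" and mTLS points above can be sketched in Istio's terms too: PeerAuthentication policies declare mTLS requirements at mesh, namespace, or individual-workload scope, and the sidecar proxies enforce them. The `payments` namespace and `legacy-billing` label below are hypothetical:

```yaml
# Hypothetical mesh-wide default: applying a policy in the root
# namespace (istio-system) requires mTLS for every service in the mesh.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
---
# Workload-level exception: one legacy service in the payments
# namespace still accepts plaintext while it is being migrated.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: legacy-billing-exception
  namespace: payments
spec:
  selector:
    matchLabels:
      app: legacy-billing
  mtls:
    mode: PERMISSIVE
```

Because enforcement lives in the sidecar, none of the application code has to manage certificates or TLS handshakes — which is exactly the "delegate the complexity to a proxy" point above.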
A possible solution offered by Sjurseth is to containerize these services in, say, a Kubernetes cluster and add something like Istio on top, with an Apigee adapter. This approach has two distinct advantages: security policies can be implemented at different levels of granularity (service, namespace, or mesh), and everything can be managed from a central point while control over individual elements is retained. Plus, Istio gives us a pluggable control plane.

Sjurseth's argument boils down to a single point: services and APIs both need management. Why? Because without a service mesh, it's very easy for a microservices architecture to become messy, with developers working in their own little silos, without any consistency between their output. That may sound aspirational to lone developers working on a single microservice, but that's the thing about microservices projects: they rarely stay micro for long.

Final Thoughts

In a future world, service mesh and microservices may become synonymous. We're not quite there yet, perhaps because container orchestrators like Kubernetes eased many of the headaches associated with microservices architecture. Nevertheless, 2020 saw service meshes gain steam. In addition to Istio, Linkerd, Kuma, and OpenShift Service Mesh, Amazon launched AWS App Mesh in 2019. That's notable because, as we've seen before, where Amazon leads (even when they're not first to market), others tend to follow.

With so many open-source service mesh tools out there (Istio was open-sourced in 2017, and Linkerd has been community-driven since its early days), there's no high licensing cost associated with adopting a service mesh, which is always helpful for adoption. To a certain extent, the future popularity of service mesh will depend on the continued growth of microservices, and you probably don't need us to tell you that's a space showing no signs of slowing down any time soon.