Istio API: Programming Your Service Mesh

Gilad David Maayan | January 26, 2023

In traditional environments, applications are monolithic, meaning most of their components are colocated on the same server, so service calls do not involve networking. In a microservices architecture, however, communication between services occurs over a network, and services must manage complex communication patterns.

A service mesh abstracts away the handling of network communication, so you don't have to implement it in every application. The service mesh also reduces the operational complexity of the network by providing secure communication channels, load balancing, traffic management, and out-of-the-box monitoring capabilities.

Istio is a popular open source, platform-agnostic service mesh that provides traffic management, policy enforcement, and telemetry collection. Below, we'll review Istio's capabilities and explain how to work with it programmatically via the Istio API.

Key Istio Capabilities

First off, what are some of Istio's core features? Here is a brief rundown of the most important areas.

- Traffic management for microservices applications: Istio includes routing for HTTP, gRPC, WebSocket, and TCP traffic. It enables service account-based authentication and authorization built on mutual TLS (mTLS) to secure API communication, and it provides a customizable policy layer.
- Service resilience: Istio provides features like retries, circuit breaking, and fault injection. It can also speed up testing and deployment tasks such as A/B testing, canary releases, and rate limiting.
- Observability: Istio automatically collects metrics and traces for all traffic in a Kubernetes cluster, providing improved visibility. You can also visualize the service mesh: tools like Kiali integrate with Istio to show the services in your mesh and how they are connected.
- Easy integration: Istio dynamically routes traffic to legacy or target environments. Traffic management is transparent to the services in the mesh, so no application configuration changes are required to support a migration, and microservices applications can easily communicate with legacy environments.

Getting Started: Installing Istio Using Istioctl

This installation tutorial uses the istioctl command-line interface, which allows extensive customization of the Istio control plane and of the data plane sidecars. It also validates user input to minimize installation failures and lets you override any aspect of the configuration. Following these steps, you can choose one of Istio's predefined configuration profiles and then modify that profile to meet your requirements.

The istioctl command exposes the full IstioOperator API, either through command-line parameters for individual settings or by passing a YAML file containing an IstioOperator custom resource (CR). Readers are encouraged to download the latest Istio release before beginning.

Installing the istioctl binary using curl: Use this command to download the latest release:

$ curl -L https://istio.io/downloadIstio | sh -

Add the istioctl client to your system's path on macOS or Linux:

$ export PATH=$HOME/.istioctl/bin:$PATH

Installing the default Istio profile: To install the default Istio configuration profile, execute the command below:

$ istioctl install

This installs the default profile on the Kubernetes cluster defined by your Kubernetes configuration.
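As mentioned above, instead of individual command-line parameters you can pass istioctl a YAML file containing an IstioOperator custom resource. The following is a minimal sketch; the file name my-operator.yaml and the particular settings are illustrative assumptions rather than values from this tutorial:

    # my-operator.yaml: an illustrative IstioOperator custom resource
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      profile: default              # start from the default configuration profile
      meshConfig:
        accessLogFile: /dev/stdout  # example override: write Envoy access logs to stdout

$ istioctl install -f my-operator.yaml

Keeping installation settings in a file like this makes the configuration easy to version-control and reapply, whereas --set flags are convenient for quick, one-off overrides.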
The default profile is a good starting point for establishing a production environment, in contrast to the larger demo profile, which is intended for evaluating a broad set of Istio capabilities. Many configuration options are available for customized installations. For example, to enable access logs:

$ istioctl install --set meshConfig.accessLogFile=/dev/stdout

Check the installed components: To see which components Istio deployed, list the deployments in the istio-system namespace:

$ kubectl -n istio-system get deploy

NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
istio-egressgateway    1/1     1            1           24s
istio-ingressgateway   1/1     1            1           25s
istiod                 1/1     1            1           20s

The istioctl command also saves the IstioOperator custom resource (CR) used to install Istio in a copy of the CR named installed-state. Instead of inspecting the Istio-installed deployments, pods, services, and other resources, you can inspect the installed-state CR to examine the cluster's installed components and all custom settings. Use the command below to dump its contents to a YAML file:

$ kubectl -n istio-system get IstioOperator installed-state -o yaml > installed-state.yaml

The installed-state CR is also used to perform checks in certain istioctl commands and should therefore not be removed.

List the available profiles: This command displays the Istio configuration profiles that are accessible via istioctl:

$ istioctl profile list

Istio configuration profiles:
    default
    demo
    empty
    minimal
    openshift
    preview
    remote

Understand Your Mesh With Istioctl

Istio includes the istioctl experimental describe command, a CLI tool that provides the information needed to understand a pod's configuration. The command's basic usage is as follows:

$ istioctl experimental describe pod <pod-name>[.<namespace>]

Appending a namespace to the pod name has the same effect as specifying a non-default namespace with istioctl's -n option. This section assumes that the Bookinfo sample has been deployed to your mesh.
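If Bookinfo is not yet running, the following is a rough sketch of how you might deploy it, assuming you are working from the root directory of the downloaded Istio release and want automatic sidecar injection in the default namespace:

# Enable automatic sidecar injection for the default namespace
$ kubectl label namespace default istio-injection=enabled

# Deploy the Bookinfo sample application shipped with the Istio release
$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml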
Ensure a pod is in the mesh: The istioctl describe command reports whether the Envoy proxy is missing from a pod or has not yet started. It also warns if any of Istio's pod requirements are not met. For instance, the command below generates a warning that a kube-dns pod is not part of the service mesh because it has no sidecar:

$ export KUBE_POD=$(kubectl -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.name}')
$ istioctl x describe pod -n kube-system $KUBE_POD

Pod: coredns-f9fd979d6-2zsxk
Pod Ports: 53/UDP (coredns), 53 (coredns), 9153 (coredns)
WARNING: coredns-f9fd979d6-2zsxk is not part of mesh; no Istio sidecar
--------------------
2021-01-22T16:10:14.080091Z error klog an error occurred forwarding 42785 -> 15000: error forwarding port 15000 to pod 692362a4fe313005439a873a1019a62f52ecd02c3de9a0957cd0af8f947866e5, uid : failed to execute portforward in network namespace "/var/run/netns/cni-3c000d0a-fb1c-d9df-8af8-1403e6803c22": failed to dial 15000: dial tcp4 127.0.0.1:15000: connect: connection refused[]
Error: failed to execute command on sidecar: failure running port forward process: Get "http://localhost:42785/config_dump": EOF

For a pod that is part of the mesh, such as the Bookinfo ratings service, the command does not warn but instead displays the pod's Istio configuration:

$ export RATINGS_POD=$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')
$ istioctl experimental describe pod $RATINGS_POD

Pod: ratings-v1-7dc98c7588-8jsbw
Pod Ports: 9080 (ratings), 15090 (istio-proxy)
--------------------
Service: ratings
Port: http 9080/HTTP targets pod port 9080

The output shows the following details:

- The ports of the service container in the pod, in this case 9080 for the ratings container.
- The ports used by the istio-proxy container in the pod, in this case 15090.
- The protocol used by the service in the pod, in this case HTTP on port 9080.

Check the destination rule configurations: Engineers can use the istioctl describe command to determine which destination rules apply to requests sent to a pod. For example, apply Bookinfo's mutual TLS destination rules:

$ kubectl apply -f samples/bookinfo/networking/destination-rule-all-mtls.yaml

Now describe the ratings pod again:

$ istioctl x describe pod $RATINGS_POD

Pod: ratings-v1-f745cf57b-qrxl2
Pod Ports: 9080 (ratings), 15090 (istio-proxy)
--------------------
Service: ratings
Port: http 9080/HTTP
DestinationRule: ratings for "ratings"
   Matching subsets: v1
      (Non-matching subsets v2,v2-mysql,v2-mysql-vm)
   Traffic Policy TLS Mode: ISTIO_MUTUAL

The output above shows the following:

- The ratings destination rule applies to requests to the ratings service.
- The subset of the ratings destination rule that matches the pod, in this case v1.
- The other subsets defined by the destination rule.
- The pod accepts either HTTP or mutual TLS requests, but clients in the mesh use mutual TLS.

Verifying traffic routes: The istioctl describe command also displays split traffic weights. For example, execute the command below to route 90% of the reviews service's traffic to the v1 subset and 10% to the v2 subset:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-90-10.yaml

Now describe a reviews v1 pod (here, $REVIEWS_V1_POD holds the name of a reviews-v1 pod, captured the same way as $RATINGS_POD above):

$ istioctl x describe pod $REVIEWS_V1_POD
...
VirtualService: reviews
   Weight 90%

The output indicates that the reviews virtual service assigns a 90% weight to the v1 subset.
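For reference, the 90/10 split applied above is defined by a VirtualService resource in the sample file. The following is a rough sketch of such a weighted VirtualService; it illustrates the shape of the configuration and may differ in detail from the actual sample file:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: reviews
    spec:
      hosts:
      - reviews
      http:
      - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90   # 90% of traffic goes to the v1 subset
        - destination:
            host: reviews
            subset: v2
          weight: 10   # 10% of traffic goes to the v2 subset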
This capability is also useful for other routing types. For example, the user might apply header-based routing:

$ kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-jason-v2-v3.yaml

Then describe the pod again:

$ istioctl x describe pod $REVIEWS_V1_POD

VirtualService: reviews
   WARNING: No destinations match pod subsets (checked 2 HTTP routes)
      Route to non-matching subset v2 for (when headers are end-user=jason)
      Route to non-matching subset v3 for (everything)

The result includes a warning because the pod being described belongs to the v1 subset, while the virtual service configuration now directs traffic to the v2 subset when the end-user header equals "jason" and to the v3 subset otherwise.

Istio Telemetry API

Istio's telemetry API allows flexible configuration of metrics, tracing, and access logs. Telemetry API resources inherit configuration through Istio's configuration hierarchy:

- the root configuration namespace (for example, istio-system)
- the local namespace (a namespace-scoped resource with no workload selector)
- the workload (a namespace-scoped resource with a workload selector)

The telemetry API uses the notion of a provider to specify the integration protocol or type. Providers are configured in MeshConfig. A sample provider configuration in MeshConfig would include the following:

    data:
      mesh: |-
        extensionProviders:
        # The following content defines two example tracing providers.
        - name: "localtrace"
          zipkin:
            service: "zipkin.istio-system.svc.cluster.local"
            port: 9411
            maxTagLength: 56
        - name: "cloudtrace"
          stackdriver:
            maxTagLength: 256

Configuring mesh-wide behavior: Telemetry API resources inherit from the mesh's root configuration namespace, commonly istio-system. Therefore, to configure mesh-wide behavior, add a new telemetry resource to the root configuration namespace or edit the existing one. Here is a configuration example that uses the provider configuration from the previous section:

    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system
    spec:
      tracing:
      - providers:
        - name: localtrace
        customTags:
          foo:
            literal:
              value: bar
        randomSamplingPercentage: 100

This configuration takes precedence over MeshConfig's default provider and makes localtrace the default for the mesh. It also specifies that every trace span is tagged with the key "foo" and the value "bar", and sets the mesh-wide sampling percentage to 100.

Configuring tracing behavior within a namespace: To customize the behavior of a specific namespace, add a telemetry resource to that namespace. Any fields set in the namespace resource override the field configuration inherited from the configuration hierarchy. For example:

    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: namespace-override
      namespace: myapp
    spec:
      tracing:
      - customTags:
          userId:
            header:
              name: userId
              defaultValue: unknown

Tracing in the myapp namespace will now report spans to the localtrace provider, sample requests for tracing at a rate of 100%, and add a user-specific tag to each span based on the userId header of the incoming request. Importantly, the parent configuration's foo: bar tag will not be applied in the myapp namespace, because the custom tags behavior entirely overrides the behavior configured in mesh-default.istio-system.
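Although the examples above focus on tracing, the same Telemetry resource can also configure access logging and metrics. As a minimal sketch, and assuming Istio's built-in envoy file access log extension provider is available in MeshConfig, the mesh-wide mesh-default resource shown earlier could be extended with an accessLogging section:

    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: mesh-default
      namespace: istio-system
    spec:
      tracing:
      - providers:
        - name: localtrace
        customTags:
          foo:
            literal:
              value: bar
        randomSamplingPercentage: 100
      accessLogging:
      - providers:
        - name: envoy   # assumed: Istio's built-in Envoy file access log provider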
Configuring workload-specific behavior: To modify the behavior of a specific workload, add a telemetry resource to the desired namespace and use a selector. When a workload-specific resource sets a field, that setting takes precedence over any inherited field configuration. For example:

    apiVersion: telemetry.istio.io/v1alpha1
    kind: Telemetry
    metadata:
      name: workload-override
      namespace: myapp
    spec:
      selector:
        matchLabels:
          service.istio.io/canonical-name: frontend
      tracing:
      - disableSpanReporting: true

This disables span reporting for the frontend workload in the myapp namespace. Istio will still propagate tracing headers, but no spans will be reported to the selected tracing provider.

Conclusion

In this article, I explained the basics of Istio and showed how to use key capabilities to help you manage networking and security for microservices applications. To review, here are the areas we covered:

- Installing Istio: You can deploy Istio on your Kubernetes cluster, with sidecars injected into your pods, using the istioctl command line.
- Describing the service mesh: A key capability of Istio is letting you visualize and understand network topologies in complex environments. You can achieve this at a basic level with the istioctl x describe command.
- Istio Telemetry API: This API provides flexible configuration of metrics, traces, and access logs for inter-service communication, helping you achieve observability for microservices applications.

I hope this will be useful as you level up your ability to manage and monitor microservices environments.