
Istio Multi-Cluster: Tutorial & Examples


Multi-cloud and hybrid cloud Kubernetes implementations often require specialized tooling to address load balancing across clusters hosted on different platforms. For example, additional tooling can help manage ingress routing, such as using NGINX or Traefik to route requests to services based on HTTP header metadata.

Organizations looking to scale across multiple Kubernetes clusters without compromising performance often implement a service mesh.

This tutorial explains core service mesh concepts and then steps through joining multiple clusters on a single network into one logical service mesh using Istio.

Summary of key Istio multi-cluster concepts

The key Istio multi-cluster concepts this article builds on are summarized below.

What is a service mesh?
A service mesh is an interconnected virtual infrastructure layer of proxies that implement and extend various Kubernetes capabilities.

What is Istio?
Istio is an extensible open-source implementation of a Kubernetes service mesh that uses the Envoy proxy as its data plane.

Differences between implementing Istio for one cluster vs. multiple clusters
A single-mesh configuration is common for Istio implementations in a single cluster because it maintains simplicity while still providing all of the added capabilities Istio offers. More clusters bring additional complexity but also new capabilities:
  • Fault tolerance through cross-cluster failover
  • Locality-aware connection routing
  • Workload isolation, such as by business unit or project
  • Various multi-cluster control plane configurations, such as single remote, primary or secondary, and location-specific

Understanding Kubernetes service mesh

A Kubernetes service mesh is a way of connecting services together as a dedicated virtual infrastructure layer. This infrastructure layer is implemented using a sidecar pattern where a container is injected into each newly created Pod and serves as a proxy for the service mesh, allowing the other containers of the Pod to communicate transparently across the mesh.

Connectivity between pods alone is not compelling, considering that this is a basic capability of Kubernetes. However, a service mesh also includes the ability to transparently add and extend capabilities such as load balancing, load shedding, observability, security, and more. Some examples are listed below.

  • Security: native mutual Transport Layer Security (mTLS) between services
  • Load balancing: automatic and native integration for on-mesh services
  • Load shedding: options for service resilience during unpredictable load patterns
  • Connection routing: integrated and deterministic routing support, including circuit breaking, failover, and fault injection
  • Observability: native logging and tracing support
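As a brief, hedged illustration of the security capability (this is not one of the tutorial steps below), strict mutual TLS can be required mesh-wide by creating a PeerAuthentication resource in Istio's root namespace, istio-system, once Istio is installed:

$ kubectl apply -n istio-system -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
spec:
  mtls:
    mode: STRICT
EOF

With this policy in place, sidecars reject plain-text traffic from workloads that are not part of the mesh.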

The Istio project delivers one such service mesh; for more information on Istio deployment models, refer to the Istio documentation.

Tutorial: How to create an Istio multi-cluster service mesh

This tutorial will explain how to implement an Istio multi-cluster configuration with locality-aware connection routing.

The prerequisites for this tutorial are:

  • Two Kubernetes clusters at or above version 1.22
  • The clusters can reach one another directly (transparent east-west network connectivity)
  • kubectl utility installed
  • istioctl utility installed

Getting started

Begin by assigning logical names for each cluster participating in the mesh. The specific names are arbitrary, but setting these environment variables keeps the commands throughout this tutorial consistent:

$ export CTX_CLUSTER1=first_cluster-context
$ export CTX_CLUSTER2=second_cluster-context
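
Before continuing, it is worth a quick sanity check that both contexts exist in your kubeconfig and that each cluster is reachable (the context names above are placeholders and will differ in your environment):

$ kubectl config get-contexts "${CTX_CLUSTER1}" "${CTX_CLUSTER2}"
$ kubectl --context="${CTX_CLUSTER1}" get nodes
$ kubectl --context="${CTX_CLUSTER2}" get nodes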

Clone the Istio source repository:

$ git clone https://github.com/istio/istio.git
$ cd istio/

Cluster one - primary

Generate TLS certificates to pre-establish trust between the clusters: a self-signed root CA, which both clusters will share, and an intermediate CA certificate for cluster1:

$ mkdir -p certs && \
      pushd certs
$ make -f ../tools/certs/Makefile.selfsigned.mk root-ca
$ make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts

Create a secret from the generated certificate data:

$ kubectl --context="${CTX_CLUSTER1}" create namespace istio-system
$ kubectl --context="${CTX_CLUSTER1}" create secret generic cacerts -n istio-system \
      --from-file=cluster1/ca-cert.pem \
      --from-file=cluster1/ca-key.pem \
      --from-file=cluster1/root-cert.pem \
      --from-file=cluster1/cert-chain.pem
$ popd
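
Optionally, confirm that the cacerts secret contains all four files before installing Istio; the Data section of the output should list ca-cert.pem, ca-key.pem, root-cert.pem, and cert-chain.pem:

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system describe secret cacerts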

Install the Istio Service Mesh

Create a configuration file for the IstioOperator custom resource. Then execute istioctl to perform the install, specifying that cluster1 will make its pilot externally available to permit the cross-cluster mesh:

$ cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF

$ istioctl install \
      --set values.pilot.env.EXTERNAL_ISTIOD=true \
      --context="${CTX_CLUSTER1}" \
      --filename cluster1.yaml

This will install the Istio 1.16.0 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster. Proceed? (y/N) y
βœ” Istio core installed
βœ” Istiod installed
βœ” Ingress gateways installed
βœ” Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.16.
Please take a few minutes to tell us about your install/upgrade experience!

Add an Istio gateway to route east-west traffic between the clusters:

$ samples/multicluster/gen-eastwest-gateway.sh \
      --mesh mesh1 \
      --cluster cluster1 \
      --network network1 | \
      istioctl --context="${CTX_CLUSTER1}" install -y -f -

βœ” Ingress gateways installed
βœ” Installation complete
Thank you for installing Istio 1.16.  Please take a few minutes to tell us about your install/upgrade experience!

Once the gateway has been assigned an external address (check with kubectl --context="${CTX_CLUSTER1}" get svc istio-eastwestgateway -n istio-system), expose the control plane so that services on cluster2 will be able to perform service discovery:

$ kubectl apply \
      --context="${CTX_CLUSTER1}" \
      -n istio-system \
      -f samples/multicluster/expose-istiod.yaml
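
The expose-istiod.yaml sample creates a Gateway and a VirtualService that route istiod's discovery traffic through the east-west gateway. A quick way to confirm that they exist (the resource names come from the sample manifest and may change between Istio releases):

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system \
      get gateways.networking.istio.io,virtualservices.networking.istio.io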


Cluster two - secondary

This process adds a second cluster to the mesh. Note that the second cluster keeps its own Kubernetes context and API server, but its Istio control plane is provided by cluster1 (the primary), so it is only the service mesh that spans the clusters.

Begin by creating the istio-system namespace, as on cluster1, and annotate it so that the control plane on cluster1 manages this cluster:

$ kubectl --context="${CTX_CLUSTER2}" create namespace istio-system
$ kubectl --context="${CTX_CLUSTER2}" annotate namespace istio-system topology.istio.io/controlPlaneClusters=cluster1

As before, create a secret from certificates that chain to the shared root. First generate an intermediate CA certificate for cluster2 from the root created earlier, then create the secret:

$ pushd certs
$ make -f ../tools/certs/Makefile.selfsigned.mk cluster2-cacerts
$ kubectl --context="${CTX_CLUSTER2}" create secret generic cacerts -n istio-system \
      --from-file=cluster2/ca-cert.pem \
      --from-file=cluster2/ca-key.pem \
      --from-file=cluster2/root-cert.pem \
      --from-file=cluster2/cert-chain.pem
$ popd

Create a configuration file for the IstioOperator custom resource, this time using the remote profile and pointing at the address of the east-west gateway created on cluster1:

$ export DISCOVERY_ADDRESS=$(kubectl \
  	--context="${CTX_CLUSTER1}" \
  	-n istio-system get svc istio-eastwestgateway \
  	-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
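
Note that some load balancers publish a hostname rather than an IP address (AWS ELBs, for example); in that case, adjust the jsonpath to read the hostname field instead:

$ export DISCOVERY_ADDRESS=$(kubectl \
      --context="${CTX_CLUSTER1}" \
      -n istio-system get svc istio-eastwestgateway \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')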

$ cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    istiodRemote:
      injectionPath: /inject/cluster/cluster2/net/network1
    global:
      remotePilotAddress: ${DISCOVERY_ADDRESS}
EOF

Use the configuration file to install Istio service mesh:

$ istioctl install --context="${CTX_CLUSTER2}" -f cluster2.yaml

This will install the Istio 1.16.0 remote profile with ["Istiod remote"] components into the cluster. Proceed? (y/N) y
βœ” Istiod remote installed
βœ” Installation complete
Making this installation the default for injection and validation.

Thank you for installing Istio 1.16.  Please take a few minutes to tell us about your install/upgrade experience!

To complete the cross-cluster attachment, create a remote secret on cluster1 containing credentials for cluster2's API server so that the control plane on cluster1 can discover services and endpoints on cluster2:

$ istioctl x create-remote-secret \
 	--context="${CTX_CLUSTER2}" \
 	--name=cluster2 | \
 	kubectl apply -f - --context="${CTX_CLUSTER1}"

secret/istio-remote-secret-cluster2 created
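
The secret is stored in istio-system on the primary, where istiod uses the embedded credentials to watch cluster2's API server. You can confirm it is present:

$ kubectl --context="${CTX_CLUSTER1}" -n istio-system get secret istio-remote-secret-cluster2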

Verify the multi-cluster mesh

To verify that routing works correctly between the clusters, deploy a service on each cluster that reports its location, in this case via a version string. The Istio repository conveniently includes such a sample service.

On each cluster, create a Namespace for testing and ensure that the Namespace is configured for injecting the Istio sidecar:

$ kubectl create --context="${CTX_CLUSTER1}" namespace sample && \
 	kubectl label --context="${CTX_CLUSTER1}" namespace sample \
 	istio-injection=enabled

namespace/sample created
namespace/sample labeled

$ kubectl create --context="${CTX_CLUSTER2}" namespace sample && \
 	kubectl label --context="${CTX_CLUSTER2}" namespace sample \
 	istio-injection=enabled

namespace/sample created
namespace/sample labeled

On each cluster, deploy the helloworld Service from the Istio repository:

$ kubectl apply --context="${CTX_CLUSTER1}" \
	-f samples/helloworld/helloworld.yaml \
	-l service=helloworld -n sample

service/helloworld created

$ kubectl apply --context="${CTX_CLUSTER2}" \
	-f samples/helloworld/helloworld.yaml \
	-l service=helloworld -n sample

service/helloworld created

Next, deploy a different version of the helloworld Deployment on each cluster: v1 on cluster1 and v2 on cluster2:

$ kubectl apply --context="${CTX_CLUSTER1}" \
 	-f samples/helloworld/helloworld.yaml \
 	-l version=v1 -n sample

deployment.apps/helloworld-v1 created

$ kubectl apply --context="${CTX_CLUSTER2}" \
	-f samples/helloworld/helloworld.yaml \
	-l version=v2 -n sample

deployment.apps/helloworld-v2 created

On each cluster, deploy the sleep deployment, service, and service account. Sleep provides a convenient curl environment:

$ kubectl apply --context="${CTX_CLUSTER1}" \
 	-f samples/sleep/sleep.yaml -n sample
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created

$ kubectl apply --context="${CTX_CLUSTER2}" \
 	-f samples/sleep/sleep.yaml -n sample
serviceaccount/sleep created
service/sleep created
deployment.apps/sleep created

Verify that traffic is balanced, roughly 50/50, between the two versions and therefore between the two clusters:

$ for i in $(seq 10); do
 	kubectl exec \
 	 	--context="${CTX_CLUSTER2}" \
 	 	-n sample \
 	 	-c sleep \
 	 	"$(kubectl get pod --context="${CTX_CLUSTER2}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- curl -sS helloworld.sample:5000/hello
done

Hello version: v2, instance: helloworld-v2-5b46bc9f84-pczw4
Hello version: v1, instance: helloworld-v1-fdb8c8c58-ldzww
Hello version: v1, instance: helloworld-v1-fdb8c8c58-ldzww
Hello version: v2, instance: helloworld-v2-5b46bc9f84-pczw4
Hello version: v1, instance: helloworld-v1-fdb8c8c58-ldzww
Hello version: v1, instance: helloworld-v1-fdb8c8c58-ldzww
Hello version: v1, instance: helloworld-v1-fdb8c8c58-ldzww
Hello version: v2, instance: helloworld-v2-5b46bc9f84-pczw4
Hello version: v1, instance: helloworld-v1-fdb8c8c58-ldzww
Hello version: v2, instance: helloworld-v2-5b46bc9f84-pczw4

Adding Istio locality load balancing

It is useful that traffic can be routed between and across clusters, and there are many scenarios where this alone is the goal, including:

  • Capacity management
  • Multi-cloud
  • Versioning
  • Canary deployments

But Istio has more to offer. Below, we will add location-aware load balancing to the multi-cluster Istio mesh so that priority can be given by source or destination location.

Understanding locality

This configuration is predicated on the two clusters being in different geographic locations, which is not a requirement for the multi-cluster configuration above. Istio's locality-aware routing relies on the well-known topology labels of each Node to make routing decisions. In this example, cluster1's nodes carry:

topology.kubernetes.io/region=us-west1
topology.kubernetes.io/zone=us-west1-a

while cluster2's nodes carry:

topology.kubernetes.io/region=us-central1
topology.kubernetes.io/zone=us-central1-c
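
These labels are typically applied automatically by managed Kubernetes services. To confirm what your nodes actually report, list them with the region and zone labels as columns (the values will of course reflect your own clusters):

$ for CLUSTER in "$CTX_CLUSTER1" "$CTX_CLUSTER2"; do
    kubectl --context="$CLUSTER" get nodes \
      -L topology.kubernetes.io/region \
      -L topology.kubernetes.io/zone
  done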

Implementing locality load balancing

Begin by creating a Namespace for testing, with sidecar injection enabled:

$ cat <<EOF > sample.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: sample
  labels:
    istio-injection: enabled
EOF

$ for CLUSTER in "$CTX_CLUSTER1" "$CTX_CLUSTER2"; do
    kubectl --context="$CLUSTER" apply -f sample.yaml; \
  done

Generate a location-specific configuration for the helloworld application used in the verification process above:

$ for LOC in "us-west1-a" "us-west1-b"; do
 	./samples/helloworld/gen-helloworld.sh \
 	 	--version "$LOC" > "helloworld-${LOC}.yaml"
done

Take a look at one of the generated manifest files and note the value of the version label:

---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  labels:
    app: helloworld
    service: helloworld
spec:
  ports:
  - port: 5000
    name: http
  selector:
    app: helloworld
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-us-west1-a
  labels:
    app: helloworld
    version: us-west1-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
      version: us-west1-a
  template:
    metadata:
      labels:
        app: helloworld
        version: us-west1-a
    spec:
      containers:
      - name: helloworld
        env:
        - name: SERVICE_VERSION
          value: us-west1-a
        image: docker.io/istio/examples-helloworld-v1
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000

This manifest should look similar to the version-1/version-2 setup used during the multi-cluster mesh validation. Mechanically, locality load balancing uses the same internal structures to make decisions as the version-based routing did, with one significant addition: awareness of the location of the nodes where the pods are running. Istio derives each endpoint's locality from the topology labels of the node its pod runs on; the version label in this manifest simply encodes that location in the response text so that it is easy to observe which locality served each request.
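
If you would like to see the locality that Istio attaches to each endpoint, one option is to dump the sleep pod's Envoy endpoint configuration once the workloads below are deployed; in the JSON output, each helloworld endpoint carries a locality block with its region and zone (output formatting varies by Istio version):

$ SLEEP_POD=$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample \
      -l app=sleep -o jsonpath='{.items[0].metadata.name}')
$ istioctl --context="${CTX_CLUSTER1}" proxy-config endpoints "${SLEEP_POD}.sample" \
      --cluster "outbound|5000||helloworld.sample.svc.cluster.local" -o json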


Deploy the manifests to their respective clusters:

$ kubectl apply \
    --context="${CTX_CLUSTER1}" \
    -n sample \
    -f helloworld-us-west1-a.yaml

service/helloworld created
deployment.apps/helloworld-us-west1-a created

$ kubectl apply \
    --context="${CTX_CLUSTER2}" \
    -n sample \
    -f helloworld-us-central1-a.yaml

service/helloworld created
deployment.apps/helloworld-us-central1-a created

Once the Pods are deployed, continue by configuring locality load balancing via a DestinationRule, which defines the policies that Istio applies after routing is complete to determine load balancing, connection pool sizing, and outlier detection:

$ kubectl apply \
    --context="${CTX_CLUSTER1}" \
    -n sample \
    -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: us-west1/*
          to:
            "us-west1/*": 90
            "us-central1/*": 10
    outlierDetection:
      consecutive5xxErrors: 100
      interval: 1s
      baseEjectionTime: 1m
EOF

destinationrule.networking.istio.io/helloworld created

Now test the balancing. Because the requests originate from cluster1 (in us-west1), expect the large majority of connections to go to the us-west1 Pod:

$ for i in {1..20}; do
    kubectl exec \
        --context="${CTX_CLUSTER1}" \
        -n sample \
        -c sleep \
        "$(kubectl get pod -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- \
        curl -sSL helloworld.sample:5000/hello
done
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-central1-a, instance: helloworld-us-central1-a-79b4fb5c67-pkrsr
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb
Hello version: us-west1-a, instance: helloworld-us-west1-a-86446f9ff9-znzkb

When to consider locality load balancing

Using this same mechanism, one can configure any combination of location-sensitive routing to suit almost any need. Note also that the configuration above is source-sensitive: the distribute rules key on where a connection originates (the from field), not only on where it may be sent.

Source sensitivity means making routing decisions based on where a connection originated. On the public Internet, this underpins strategies such as edge caching; within an Istio mesh, it means keeping connections on the same cluster or within the same geographical footprint to reduce latency and simplify security considerations. This approach can also be a component of a global traffic management strategy that targets low-latency endpoints and benefits from low-latency inter-service connectivity.
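
As a hedged sketch of how source sensitivity could be expressed, the same DestinationRule pattern used above accepts multiple distribute entries, one per source locality, so that connections originating in each region prefer their local endpoints. The regions and weights here are illustrative, and applying the rule updates the helloworld DestinationRule created earlier:

$ kubectl apply --context="${CTX_CLUSTER1}" -n sample -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: us-west1/*
          to:
            "us-west1/*": 90
            "us-central1/*": 10
        - from: us-central1/*
          to:
            "us-central1/*": 90
            "us-west1/*": 10
    outlierDetection:
      consecutive5xxErrors: 100
      interval: 1s
      baseEjectionTime: 1m
EOF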

While not demonstrated here, locality failover, which is mutually exclusive with the weighted locality distribution shown above, may interest organizations looking to reduce points of failure in a mesh.
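
As a brief sketch of that alternative, the same localityLbSetting accepts a failover stanza in place of distribute: traffic stays in the local region and only moves to the specified fallback region when local endpoints are judged unhealthy by outlier detection (the regions here are illustrative):

$ kubectl apply --context="${CTX_CLUSTER1}" -n sample -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:
        - from: us-west1
          to: us-central1
    outlierDetection:
      consecutive5xxErrors: 100
      interval: 1s
      baseEjectionTime: 1m
EOF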

Conclusion

As the leading open-source project for implementing a service mesh on Kubernetes, Istio, powered by Envoy, offers even more capabilities when implemented across multiple clusters.

The core functionality within a single cluster includes secure communication between microservices using mTLS, load balancing and load shedding that keep high-priority services stable by intelligently rejecting less important connections that would otherwise overwhelm them, and observability such as logging and tracing across microservices.

The additional value derived from extending an Istio service mesh across multiple clusters includes improved fault tolerance through failover across physical clusters hosted in different data centers, connection routing that supports canary deployments of new software releases, and load balancing across multiple cloud providers by routing connections using the metadata of the cluster nodes.
