
EKS Anywhere: A Get Started Guide

EKS Anywhere is a project developed by AWS that allows administrators to deploy Kubernetes clusters to on-premise environments. The project includes components for deploying all binaries required for a fully standards-compliant Kubernetes cluster. The purpose of EKS Anywhere is to provide administrators with a Kubernetes distribution that mirrors the cloud-managed EKS service.

Administrators can expect their on-premise EKS Anywhere clusters to be configured with identical components (such as control plane binaries) as the managed EKS service. The consistency allows use cases such as preparing for cloud migrations and configuring hybrid cloud environments, where administrators can easily set up matching EKS clusters on-premise and on the AWS cloud.

This article will review EKS Anywhere in detail, including architecture, use cases, how to deploy your first cluster, and EKS Anywhere alternatives.

Summary of key EKS Anywhere concepts

The table below summarizes the key EKS Anywhere concepts this article will explore in more detail.

Concept Description
EKS Anywhere architecture AWS packages many different components together to build an EKS Anywhere cluster, including ClusterAPI, Flux, and EKS Distro. Administrators should understand how these tools work together to deploy and manage an EKS Anywhere cluster.
What are EKS Anywhere’s use cases? There are several use cases for EKS Anywhere, including setting up hybrid cloud deployments, leveraging a vendor-supported Kubernetes distribution, and validating readiness for a cloud migration.
Getting started with EKS Anywhere EKS Anywhere can be deployed locally using Docker, allowing administrators to test the platform easily on a local machine.
Pricing and AWS support The EKS Anywhere software is free, but administrators can pay an optional subscription cost to access AWS Support.
EKS Anywhere security considerations While AWS is responsible for patching vulnerabilities in EKS Anywhere’s components, most security measures are the responsibility of the administrator.
EKS Anywhere alternatives Alternative projects can help with on-premise use cases, such as Gardener, OpenShift, and Rancher. Evaluating the features of each project will help select an appropriate solution.

EKS Anywhere architecture

The EKS Anywhere project includes many components for deploying and managing clusters. AWS bundles these components together to orchestrate an end-to-end solution for on-premise EKS deployments.

An overview of EKS Anywhere components.

The sections below detail each of the key EKS Anywhere components.

EKS Distro

EKS Distro is a Kubernetes distribution developed by AWS to drive the managed EKS service. AWS customizes the upstream Kubernetes binaries (like the API Server, Kube Controller Manager, etcd) for use with the EKS service. The distribution has been open-sourced, and all the container images required to deploy an EKS-equivalent cluster are publicly available. The public availability of the components running the managed EKS service allows administrators to deploy EKS Anywhere clusters outside of the AWS cloud.

The container images provided by AWS as part of EKS Distro include:

  • Kube API Server
  • Kube Controller Manager
  • CoreDNS
  • etcd
  • Kube Proxy
  • Kube Scheduler

Deploying an EKS Anywhere cluster with the container images provided by EKS Distro enables users to create on-premise Kubernetes clusters with the same configuration as the managed EKS service. Every configuration parameter of the above binaries will match the managed EKS service's implementation, assuring administrators that their on-premise EKS Anywhere clusters have a configuration consistent with the cloud-managed variant.

Since EKS Anywhere aims to provide a mirrored EKS experience for on-premise users, configuration consistency is essential. EKS Distro is the most significant element of fulfilling EKS Anywhere's objective.

eksctl

The eksctl command-line tool is the interface for users to interact with their EKS Anywhere clusters. The open-source tool is developed in collaboration between AWS and Weaveworks and is responsible for provisioning, upgrading, and managing EKS Anywhere clusters. eksctl can also operate cloud-managed EKS clusters.

The eksctl tool can perform administrative tasks for EKS Anywhere, like those summarized in the table below.

Use case eksctl command
Creating clusters eksctl anywhere create cluster
Installing packages (like Prometheus and CertManager) eksctl anywhere install package
Upgrading cluster versions eksctl anywhere upgrade cluster
Generating cluster configuration manifests eksctl anywhere generate clusterconfig

eksctl is used to manage the entire lifecycle of an EKS Anywhere cluster and is an essential tool for administrators to familiarize themselves with.

ClusterAPI

ClusterAPI (CAPI) is an open-source Kubernetes project for managing cluster lifecycles through a standardized API. CAPI supports cluster bootstrapping actions like setting up control plane binaries, initializing etcd nodes, and enabling cluster networking. EKS Anywhere uses CAPI to manage clusters created via eksctl. Commands executed through eksctl (such as upgrading clusters) are passed on to CAPI, which handles the underlying operations, like updating container image versions for control plane components.

While administrators interact with CAPI via the clusterctl tool in traditional setups, EKS Anywhere only requires eksctl to interface with CAPI's operations. Even though CAPI is somewhat abstracted and hidden from the administrator for simplicity, there are benefits to familiarizing yourself with this project to assist in troubleshooting cluster operational issues, such as failed upgrades.
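When troubleshooting, it helps to know roughly what the CAPI objects look like. The sketch below shows a simplified ClusterAPI Cluster object of the kind EKS Anywhere manages on the administrator's behalf for a Docker-based cluster; the namespace and referenced object names are illustrative assumptions, not guaranteed to match a given deployment:

```yaml
# Simplified sketch of a ClusterAPI Cluster object (standard CAPI kinds);
# the namespace and names below are illustrative assumptions.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: eks-anywhere-1
  namespace: eksa-system        # assumed namespace used by EKS Anywhere
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: eks-anywhere-1
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster         # provider-specific kind for a Docker cluster
    name: eks-anywhere-1
```

Inspecting objects like these (for example, with kubectl) can reveal where a stalled upgrade or node rollout is stuck.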

Tinkerbell

Tinkerbell is an open-source project that aims to simplify operating system provisioning for on-premise bare metal platforms. It handles provisioning concerns like DHCP, network interface configuration, and cloud-init, and ensures the appropriate kernel is available in the installed system.

EKS Anywhere leverages Tinkerbell's capabilities when an administrator installs this project on a bare metal system.

Flux

Flux is a GitOps tool designed to allow administrators to deploy and reconcile application and infrastructure manifests declaratively. Administrators can commit manifests to a Git repository and rely on Flux to automatically synchronize and deploy changes by watching for repository changes. EKS Anywhere allows administrators to connect Flux and a target cluster to enable GitOps-managed clusters.

Administrators can then commit cluster infrastructure configuration to a Git repository, and rely on Flux to update their EKS Anywhere clusters accordingly. Storing and deploying configuration information for EKS Anywhere clusters this way enables administrators to benefit from version control, tracking change history, auditing change owners, backing up cluster configurations, and enabling easier rollback.
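To give a sense of how this is wired up, the fragment below sketches the shape of a GitOps configuration for EKS Anywhere. The repository owner and names are hypothetical, and the exact field layout may vary across EKS Anywhere versions, so treat this as a sketch rather than a copy-paste configuration:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: GitOpsConfig
metadata:
  name: my-gitops                 # hypothetical name
spec:
  flux:
    github:
      personal: true
      owner: my-github-user       # hypothetical GitHub owner
      repository: eks-a-clusters  # hypothetical repository for cluster manifests
```

The Cluster specification then points at this object (via a gitOpsRef field), after which Flux reconciles the cluster against the manifests committed to the repository.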


EKS Connector

A worthwhile optional component supported by EKS Anywhere is the EKS Connector. AWS developed the Connector project to allow users of any Kubernetes cluster to visualize their cluster's details in the EKS web console. The Connector runs as a workload in any Kubernetes cluster and pushes data related to cluster status, pod, and node information to the AWS cloud. The Connector project can be helpful for administrators who have a mixture of Kubernetes clusters running across the managed EKS service and other Kubernetes platforms and want a central location to view every cluster's status.

In the context of EKS Anywhere, administrators may have a combination of clusters running on the managed EKS service and the self-hosted EKS Anywhere platform. Setting up the EKS Connector allows administrators to view each type of EKS cluster via the central EKS web console.

Curated packages

EKS Anywhere supports additional software like Cluster Autoscaler, Prometheus, and CertManager. Administrators can install additional software through traditional methods like Helm; however, AWS also provides vendor-supported "curated packages" for particular components. Curated packages allow administrators to access cluster components that AWS has tested and validated for compatibility with EKS Anywhere. The Curated packages are backed by the AWS Support Engineering team, allowing administrators to get the vendor's help for guidance like troubleshooting.

Curated packages are only available for a limited list of components, so administrators will likely need additional approaches, like Helm and Kustomize, for installing other Kubernetes components. The list of supported curated packages is as follows:

  • ADOT OpenTelemetry Collector
  • CertManager
  • Cluster Autoscaler
  • Emissary Ingress
  • Harbor
  • MetalLB
  • Metrics Server
  • Prometheus

What are EKS Anywhere's use cases?

There are several use cases for EKS Anywhere that can significantly benefit users:

Hybrid cloud deployments

Users can leverage EKS Anywhere to run workloads on both a managed cloud provider and on-premises. Hybrid deployments are becoming increasingly common for users interested in experimenting and exploring the capabilities of the cloud while continuing to run workloads on-premises. Being able to benefit from cloud providers' elasticity and managed offerings while simultaneously leveraging the features of on-premise systems like data privacy is a powerful advantage for hybrid users. AWS enables users to deploy Kubernetes workloads in a hybrid approach by using the managed EKS offering while, in parallel, deploying on-premise Kubernetes workloads to EKS Anywhere.

Vendor-supported Kubernetes distribution

Obtaining support is a challenge for users deploying Kubernetes in on-premise environments. Support is typically a requirement for mission-critical systems, and most locally hosted Kubernetes distributions do not provide a native support option. EKS Anywhere has an optional key benefit for users requiring vendor support; if a user is willing to purchase an EKS Anywhere Support subscription, they'll have out-of-the-box access to vendor support via the AWS Support organization. The support feature works exactly the same as accessing support for other AWS cloud services, utilizing the same web portal and providing a consistent experience for users requiring assistance. However, the subscription cost is significant and will not be practical for small-scale use cases like experimentation. Pricing is discussed further below.

The scope of support encompasses three areas:

  • Curated packages: The packages provided by AWS to extend cluster functionality are fully supported by Amazon. Support engineers will assist with troubleshooting issues related to package installation and management.
  • Operating systems: AWS will provide support for issues related to EKS Anywhere’s integration with the node’s operating system. The three operating systems tested by Amazon for integration with EKS Anywhere are Bottlerocket, Ubuntu, and Red Hat Enterprise Linux. AWS support will provide troubleshooting guidance to ensure these operating systems are functional with EKS Anywhere. It’s important to note that Amazon’s support will be limited to EKS Anywhere’s interactions with the operating system (such as Linux bootstrapping and launching Kubernetes components like the Kubelet); general operating system issues like the configuration of third-party packages will be outside the scope of support. For broader operating system support, obtaining enterprise support subscriptions directly from Ubuntu and Red Hat can be a worthwhile option.
  • EKS Anywhere binaries: The components bundled with EKS Anywhere are within the scope of support. AWS support engineers will assist with troubleshooting issues related to binaries deployed by EKS Anywhere, such as the control plane Kubernetes components (e.g., API Server) and dependencies like Cluster API.

The scope of support will not include assistance with other elements of the on-premise environment, such as the hardware, virtualization software, physical networking, etc.

Validating cloud readiness

Users transitioning from on-premise to cloud-based environments will need the ability to experiment and validate their workload's functionality in the new platform. Moving away from on-premise systems is a complicated transition with potentially many moving parts, and users will need to be careful that their selected cloud service's capabilities can fulfill all of their requirements.

For users migrating on-premise Kubernetes workloads to the managed EKS service, EKS Anywhere provides a way to validate an on-premise workload's compatibility with the cloud. EKS Anywhere implements the same control plane configuration as the managed EKS service, using the same EKS Distro components on both platforms. The mirrored configuration means users will have a consistent experience running workloads on either EKS Anywhere or managed EKS. This consistency allows users to deploy and validate their on-premise workloads using EKS Anywhere, with confidence that if the workloads operate successfully, they will likely be compatible with the managed EKS service. Leveraging tools that validate cloud readiness is critical for ensuring a smooth and timely migration.

While many aspects of an EKS Anywhere cluster will mirror the configuration of a managed EKS cluster, there are some areas that will differ between an on-premise and cloud environment, including:

  • The network routing infrastructure for an on-premise environment will differ from an AWS VPC, which affects EKS Anywhere cluster implementation details such as the Container Network Interface (CNI) tool, which controls how a pod’s network configuration (like IP addresses) is managed.
  • Network ingress details such as load balancing and enabling public traffic to workloads in the cluster will differ between on-premise and cloud environments. The managed EKS approach of using tools like the AWS Load Balancer Controller to enable load balancing capabilities with public DNS endpoints will not be applicable to EKS Anywhere clusters. Tools more suitable for an on-premise cluster may include MetalLB and Emissary Ingress, which are packages provided by AWS for EKS Anywhere.
  • Data volumes will also be implemented differently. AWS implements storage for managed EKS clusters with services like EBS (block storage) and EFS (file storage), whereas EKS Anywhere will require on-premise storage managed by other tools like vSphere storage.

While EKS Anywhere provides a significant degree of consistency between on-premise and managed cloud environments, some key differences will impact how cloud readiness is tested for migrations. Converting from tooling specific to on-premise clusters to cloud-native tools will require additional analysis and testing.
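To illustrate the on-premise load-balancing tooling mentioned above, the fragment below sketches a typical MetalLB layer-2 setup, pairing an address pool with an advertisement. The IP range is a hypothetical on-premise pool, and the API versions shown may vary by MetalLB release:

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: on-prem-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.0.50.100-10.0.50.120     # hypothetical on-premise address range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: on-prem-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - on-prem-pool                # announce addresses from the pool above via ARP
```

With a configuration like this in place, Services of type LoadBalancer receive addresses from the pool, filling the role the AWS Load Balancer Controller plays in the cloud.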

Getting started with EKS Anywhere

EKS Anywhere can be tested and validated on an administrator's local machine. Testing the software will help administrators validate that the EKS Anywhere platform meets their requirements. It will also help build confidence regarding how the components would be deployed in a real environment. While a production-ready EKS Anywhere deployment will typically run on bare-metal servers or virtual machine platforms like vSphere, testing EKS Anywhere locally via the Docker runtime will be sufficient to explore the overall deployment process.

This tutorial will guide administrators through an EKS Anywhere deployment on a local Docker-based cluster. Windows is not supported by EKS Anywhere, so the commands below are compatible with a Linux-based operating system (like Ubuntu).


The setup below will utilize Docker and KIND to deploy a local EKS Anywhere cluster. KIND is a project that enables administrators to deploy entire Kubernetes clusters within a Docker container, which is useful for quickly testing Kubernetes configurations locally. EKS Anywhere can use KIND to deploy the EKS Anywhere cluster components (worker nodes, control plane nodes, and system pods) to the administrator's local machine, providing a simple way to test the project before deploying to a real on-premise environment.

The end result of the tutorial below is a set of Docker containers running locally that contain the EKS Anywhere nodes (the output has been truncated for simplicity):

$ docker ps

NAMES
eks-anywhere-1-eks-a-cluster-control-plane-34dfd
eks-anywhere-1-etcd-5r662
eks-anywhere-1-md-0

The Docker containers above will host the EKS Anywhere components. For example, the etcd container will host the database backing the API Server. The control plane node will contain processes like the API Server and Kube Controller Manager. The “md-0” in the last container’s name stands for Machine Deployment, a ClusterAPI concept representing a group of worker nodes. It will contain pod processes like CoreDNS and any other pods we deploy to the cluster.

EKS Anywhere will require 4 CPU cores, 16GB of memory, and 30GB of disk space to deploy locally. It may run on fewer resources depending on what workloads are deployed to the cluster. More pods deployed will result in higher cluster utilization.

Step 1: Install eksctl

First, we'll need to install the eksctl command-line tool and the eksctl-anywhere plugin. This tool will be the primary method by which administrators interface with their EKS Anywhere clusters, so correctly installing it is critical. Follow the installation instructions in the EKS Anywhere documentation, then run the command below to validate that the installation was successful:

eksctl anywhere version

Step 2: Name the cluster

Select a name for your cluster, and set it as an environment variable. We'll reuse this value several times throughout the tutorial:

export CLUSTER_NAME=eks-anywhere-1

Step 3: Create a basic cluster configuration

Use the eksctl tool to generate a basic cluster configuration file. The tool allows administrators to automatically generate a template cluster configuration to avoid manually writing the file contents. Notice we use the --provider flag to specify what environment the configuration should be created for. In this case, we want the configuration to use Docker.

eksctl anywhere generate clusterconfig $CLUSTER_NAME --provider docker > $CLUSTER_NAME.yaml

Investigate the contents of the generated configuration file by running the command below:

cat $CLUSTER_NAME.yaml

The eksctl tool has generated a configuration file with basic default settings as a starting point. Administrators should carefully review the settings to ensure they are appropriate for their use case. The most critical settings include the Kubernetes version and the number of control plane nodes (which impacts high availability). The configuration below deploys 3 control plane nodes, 3 etcd nodes, and 3 worker nodes using Kubernetes 1.27:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: eks-anywhere-1
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 3
  datacenterRef:
    kind: DockerDatacenterConfig
    name: eks-anywhere-1
  externalEtcdConfiguration:
    count: 3
  kubernetesVersion: "1.27"
  managementCluster:
    name: eks-anywhere-1
  workerNodeGroupConfigurations:
  - count: 3
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: eks-anywhere-1
spec: {}
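The node counts above matter for high availability: etcd only accepts writes while a strict majority (quorum) of its nodes is healthy, so odd node counts maximize fault tolerance. A quick sketch of the arithmetic:

```python
def etcd_fault_tolerance(node_count: int) -> int:
    """Return how many etcd node failures a cluster of this size survives.

    etcd needs a strict majority (quorum) of nodes to accept writes;
    the nodes beyond quorum are the failures the cluster can tolerate.
    """
    quorum = node_count // 2 + 1
    return node_count - quorum

# A 3-node etcd cluster (as in the configuration above) tolerates 1 failure.
# Note that a 4th node adds no fault tolerance, which is why odd counts
# are conventional.
for n in (1, 2, 3, 4, 5):
    print(f"{n} nodes -> quorum {n // 2 + 1}, tolerates {etcd_fault_tolerance(n)} failure(s)")
```

The same reasoning applies to the control plane count: 3 nodes allow one node to fail (or be upgraded) without losing the cluster.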

Step 4: Deploy the EKS Anywhere cluster

We can now use the above configuration template to deploy an EKS Anywhere cluster. The eksctl tool will build the cluster according to the configuration values set in the file:

eksctl anywhere create cluster -f $CLUSTER_NAME.yaml

The command output will show what steps are being taken to set up the cluster. We can see log messages like ClusterAPI components being configured, nodes being created, network bootstrapping, and other action items required for the cluster to reach a ready state:

Performing setup and validations
✅ Docker Provider setup is valid
✅ Validate certificate for registry mirror
✅ Validate authentication for git provider
✅ Create preflight validations pass
Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup
Creating new workload cluster
Installing networking on workload cluster
Creating EKS-A namespace
Installing cluster-api providers on workload cluster
Installing EKS-A secrets on workload cluster
Installing resources on management cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Installing EKS-D components on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Writing cluster config file
Deleting bootstrap cluster
🎉 Cluster created!

The above eksctl command will also generate a Kubeconfig file inside a directory with the cluster’s name, which the administrator will use when running Kubectl commands to connect to the cluster. You can use environment variables to ensure Kubectl automatically uses the newly generated Kubeconfig file:

export KUBECONFIG=${CLUSTER_NAME}/${CLUSTER_NAME}.kubeconfig

Step 5: Verify connectivity

You can now verify that the Kubectl CLI can connect to the cluster. You should browse the various resources like pods and nodes that have been deployed to the cluster to gain an understanding of what components have been configured. Basic knowledge of the cluster's configuration will help you investigate and troubleshoot potential issues if any arise.

kubectl get pods --all-namespaces
kubectl get nodes

The cluster is now ready to use, and the administrator can deploy their desired Kubernetes applications. For example, the following will create a new Deployment resource with a running pod based on the Nginx container image:

$ kubectl create deployment nginx --image nginx
$ kubectl get pods
nginx-8f458dc5b-qvfct

Step 6: Delete the cluster

When you have finished experimenting with the new cluster, you can clean up the cluster’s resources by running:

eksctl anywhere delete cluster $CLUSTER_NAME -f $CLUSTER_NAME.yaml

EKS Anywhere pricing and AWS support

EKS Anywhere comprises open-source components, allowing administrators to deploy the project at no cost. For administrators who require vendor support for EKS Anywhere, obtaining AWS Enterprise Support and an EKS Anywhere Support subscription may be beneficial.

As of this writing, the EKS Anywhere Support subscription is available on a per-cluster basis for $24,000 per year. Committing to 3 years reduces the per-cluster annual cost to $18,000. Considering the significant costs, administrators should carefully evaluate their use case to determine if the subscription is worthwhile.
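Using the prices listed above, the arithmetic of the commitment discount works out as follows (prices are per cluster and taken from the figures in this article, which may change over time):

```python
# Per-cluster EKS Anywhere Support pricing as quoted in this article.
annual_on_demand = 24_000    # USD per cluster per year, no commitment
annual_3yr_commit = 18_000   # USD per cluster per year with a 3-year commitment

three_year_on_demand = 3 * annual_on_demand
three_year_committed = 3 * annual_3yr_commit
savings = three_year_on_demand - three_year_committed

print(f"3-year cost without commitment: ${three_year_on_demand:,}")
print(f"3-year cost with commitment:    ${three_year_committed:,}")
print(f"Savings per cluster:            ${savings:,} ({savings / three_year_on_demand:.0%})")
```

In other words, the 3-year commitment saves $18,000 per cluster, a 25% discount, but only pays off if the cluster genuinely needs vendor support for that long.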

The EKS Anywhere Support subscription is only available for customers with an AWS Enterprise Support contract. The support contract will have its own costs depending on the utilization of AWS cloud resources.

EKS Anywhere security considerations

Administrators share responsibility with AWS for the security posture of EKS Anywhere clusters. While AWS owns specific responsibilities, most of the security measures need to be implemented by the administrator.

AWS Responsibilities

AWS takes ownership of delivering EKS Anywhere components that have been vetted and regularly patched. Consistently maintaining the relevant software repositories (like EKS Distro) to preserve the functionality and security of EKS Anywhere is under the purview of AWS. Administrators are not expected to patch or develop component software to support the expected functionality.

Administrator responsibilities

Since clusters created on the EKS Anywhere platform are on-premise, administrators will be responsible for the entire deployment and management lifecycle.

Administrators must securely configure their complete hardware and software stack to maintain a strong security posture. Key administrator responsibilities include:

  • Securing server hardware: Basic responsibilities include restricting unauthorized physical access to hardware, blocking unwanted network access via firewalls, segmenting networks and data storage based on privacy and regulatory requirements, and keeping host machine software up-to-date.
  • Keeping EKS Anywhere components up-to-date: While AWS is responsible for developing and patching their software, administrators are still accountable for regularly deploying upgrades to their clusters. Applying cluster upgrades frequently is a critical aspect of maintaining cluster security, especially since Kubernetes has many moving parts and a broad attack surface. Vulnerabilities in control plane components can lead to compromised clusters, and exploits in data plane components could lead to compromised applications and sensitive data leaks. Kubernetes has a frequent update cycle of about three times a year, and AWS maintains a similar cadence for both managed EKS and EKS Anywhere. Therefore, administrators should be prepared to deploy cluster upgrades at least every few months to avoid running outdated and potentially vulnerable software. An added benefit of frequent upgrades includes bug fixes and new features provided by the upstream Kubernetes project and by AWS.
  • Following Kubernetes security best practices: Important best practices include restricting access to the API Server endpoint, implementing security benchmarking tools (like Kube-Bench), utilizing role-based access control (RBAC) and other Kubernetes security enforcement tools like Gatekeeper, and auditing workloads deployed to the cluster. Any best practices applicable to Kubernetes will be relevant for EKS Anywhere.
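As one concrete example of the RBAC practice above, a namespace-scoped Role and RoleBinding can limit a group to read-only pod access. The namespace and group names below are hypothetical; adapt them to your cluster's identity provider:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a              # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]                # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: Group
  name: team-a-developers        # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Granting only the verbs each team needs, namespace by namespace, keeps the blast radius of a compromised credential small.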

Security is a broad topic that requires careful evaluation of the entire stack of hardware and software implemented for the EKS Anywhere platform. Following security best practices for the hardware, operating systems, and Kubernetes cluster components will help administrators keep their platforms secure and production-ready.

EKS Anywhere alternatives

There are many projects available to administrators for deploying Kubernetes clusters to on-premise systems. Administrators should carefully evaluate the tradeoffs for the available projects before committing to a long-term investment in any platform.

Some popular examples of alternative projects are covered in the sections below.

OpenShift

OpenShift is a commercial Red Hat Platform-as-a-Service (PaaS) project that aims to provide a Kubernetes platform with many additional features to enhance the developer experience beyond a vanilla Kubernetes cluster. For example, Red Hat packages some Custom Resource Definitions (CRDs) with OpenShift to enable more granular functionality for some resource types like Deployments. They also provide a built-in web interface to browse and install Operators for extending the cluster’s functionality. The library of Operators is much more extensive than the packages provided by EKS Anywhere.

A key benefit of OpenShift is the availability of vendor support from Red Hat, which can be essential for enterprise users. A drawback to OpenShift is that subscription costs can drive the total price significantly higher.

EKS Anywhere, by comparison, can give you a greater degree of flexibility to customize your environment.

Rancher

Rancher is an open-source project that provides out-of-the-box functionality for deploying any number of clusters, offers a library of Helm charts to extend cluster capabilities, and supports both on-premise and cloud-based cluster deployments. Rancher also comes with a web UI to easily browse and modify Kubernetes resources, providing a better developer experience than relying only on the Kubectl command-line tool. The open-source project is under active development on GitHub, with thousands of stars and forks from a large developer community. It is a more mature project than EKS Anywhere and will suit many users.

Gardener

This open-source project aims to leverage the official Kubernetes ClusterAPI specification to allow administrators to deploy and manage hundreds or even thousands of clusters easily. A control plane cluster will run Gardener components like Operators and Custom Resources, and workload clusters are created from a control plane dashboard interface.

Administrators can easily generate workload clusters on various platforms like cloud providers or on-premise systems and benefit from configuring large-scale multi-cluster systems through a central management tool. Gardener is best suited to advanced users due to the complexity and operational overhead of designing and administering the setup. For most users, other platforms will be simpler to operate, especially at a small scale (like a few clusters). Gardener provides superior scalability compared to EKS Anywhere at the cost of complexity.

Administrators will benefit from testing EKS Anywhere along with other tools to help determine which tools meet their use case. There are many projects available that may have overlapping use cases, so carefully testing and validating their capabilities will help select an appropriate choice for the long term.

Conclusion

EKS Anywhere is a helpful AWS project enabling administrators to deploy on-premise clusters compatible with the managed EKS service. This capability lets users quickly deploy hybrid cloud setups with a mirrored Kubernetes configuration while benefiting from the optional EKS Anywhere Support subscription. The open-source components bundled with EKS Anywhere allow flexible use cases such as GitOps integration and Prometheus monitoring.

Administrators should assess some key considerations to determine if EKS Anywhere is an appropriate solution for their use case, such as whether the user intends to utilize the managed EKS solution in the future and whether vendor support is required. Several projects are available for deploying Kubernetes clusters on-premises, and assessing each project’s strengths and weaknesses will help determine an appropriate solution.

Many on-premise Kubernetes projects will require administrators to be accountable for security measures, which also applies to EKS Anywhere. Administrators should assess security measures for securing hardware servers, operating system software, and Kubernetes cluster best practices to ensure their platform stack is secured.

Overall, EKS Anywhere can be a valuable tool for administrators, particularly those implementing a hybrid model or intending to migrate to AWS. Considering the software is free (without paid vendor support), exploring and experimenting with EKS Anywhere will help administrators determine if it’s an appropriate tool for their use case.
