Building EKS (Amazon hosted Kubernetes) clusters using eksctl



Eksctl acts as a wrapper around CloudFormation templates. Creating a cluster adds one stack for the control plane (the EKS masters) and one stack for each configured node group (a node group is a set of workers sharing the same networking, instance sizing and IAM permissions).

However, certain actions, such as upgrading the Kubernetes control plane or worker version, or scaling out the number of workers in a node group, do not always update the CloudFormation stacks associated with them.
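To see which stacks eksctl manages for a given cluster (and to spot drift after such out-of-band changes), the stacks can be listed directly. A sketch, assuming a cluster named myeks in ap-southeast-2 (matching the config used later in this article); verify the exact flags against your eksctl version:

```shell
# List the CloudFormation stacks eksctl created for the cluster
eksctl utils describe-stacks --region=ap-southeast-2 --cluster=myeks

# Or query CloudFormation directly; eksctl tags its stacks with the cluster name
aws cloudformation describe-stacks \
  --query "Stacks[?Tags[?Key=='alpha.eksctl.io/cluster-name' && Value=='myeks']].StackName"
```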


Download and install the latest version of eksctl.

Follow the Weaveworks installation guide:

Download eksctl (Linux)

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

Install eksctl (Linux)

sudo mv /tmp/eksctl /usr/local/bin

Provide AWS credentials

Ensure the AWS credentials are set for your current session. The easiest way is to run aws configure with the required region and access keys; however, as things get more complicated (multiple accounts, assumed roles, MFA), additional scripts or applications may be required.
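As a minimal sketch, credentials can also be provided via environment variables, which both eksctl and the aws CLI pick up automatically. The key values below are placeholders, not real credentials:

```shell
# Placeholder credentials - substitute your own access key pair
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="examplesecretkey"
export AWS_DEFAULT_REGION="ap-southeast-2"

# Confirm the session is configured for the intended region
echo "Using region $AWS_DEFAULT_REGION"
```

This approach is convenient for CI jobs or short-lived assumed-role sessions where writing a shared credentials file is undesirable.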


Our first EKS test cluster should be simple. Just running eksctl create cluster will create a new VPC, Internet Gateway, subnets and all other resources required for initial testing.

However, this cluster will be available externally (with EC2 instances / workers and the API server endpoint exposed to the internet). Ideally we would like to utilise existing networks and VPCs as well as adhering to existing security policies and regulations.

Assume we decided on a setup with three internal subnets to be used for the workers and (initially) an external EKS control plane / API server endpoint. We create a resource definition to be used with eksctl:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: myeks
  version: '1.13'
  region: ap-southeast-2

iam:
  # Role used by the EKS control plane itself when managing resources
  serviceRoleARN: "arn:aws:iam::123412341234:role/basic/eks-cluster-service-role"

# Where to deploy the control plane endpoints, and the worker nodes
vpc:
  id: vpc-12341234123412345
  subnets:
    private:
      ap-southeast-2a: { id: subnet-12341234123412345 }
      ap-southeast-2b: { id: subnet-12341234123412346 }
      ap-southeast-2c: { id: subnet-12341234123412347 }

nodeGroups:
  - name: mynodes
    instanceType: r4.large
    desiredCapacity: 3
    privateNetworking: true
    securityGroups:
      withShared: true
      withLocal: true
    iam:
      instanceProfileARN: "arn:aws:iam::123412341234:instance-profile/basic/eks-cluster-iam-NodeInstanceProfile-1PSA1WKT5RP16"
      instanceRoleARN: "arn:aws:iam::123412341234:role/eks-cluster-node-instance-role"
    ssh:
      allow: true
      publicKeyPath: test


Create the cluster, writing the kubeconfig to a separate file:

eksctl create cluster --config-file config.yaml --kubeconfig $HOME/.kube/config.eks
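Since the kubeconfig was written to a non-default location, point kubectl at it for the current session before running the commands below (the path matches the --kubeconfig flag above):

```shell
# Use the kubeconfig written by eksctl for this session
export KUBECONFIG=$HOME/.kube/config.eks
echo "kubectl will now use $KUBECONFIG"
```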

This takes around 12 minutes after which the worker nodes are running and showing as Ready in K8s:

$ kubectl get nodes -o wide
NAME                                               STATUS   ROLES    AGE   VERSION              INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-12-18.ap-southeast-2.compute.internal   Ready    <none>   4m    v1.13.7-eks-c57ff8   192.168.12.18   <none>        Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1
ip-192-168-40-73.ap-southeast-2.compute.internal   Ready    <none>   4m    v1.13.7-eks-c57ff8   192.168.40.73   <none>        Amazon Linux 2   4.14.123-111.109.amzn2.x86_64   docker://18.6.1


Scale the node group

We can easily scale the cluster out and back in again:

eksctl scale nodegroup --cluster=myeks --name=mynodes --nodes=6
eksctl scale nodegroup --cluster=myeks --name=mynodes --nodes=2
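After scaling out, the new instances take a few minutes to register with the control plane. A sketch of two ways to watch them join, assuming the node group name mynodes from the config above:

```shell
# Count registered nodes until the desired capacity is reached
kubectl get nodes --no-headers | wc -l

# The node group's Auto Scaling group reflects the new desired capacity too
aws autoscaling describe-auto-scaling-groups \
  --query "AutoScalingGroups[?contains(AutoScalingGroupName, 'mynodes')].DesiredCapacity"
```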


Upgrade the cluster

Upgrading only moves to the next available higher version, so no version info is necessary.

eksctl update cluster --name=myeks --approve
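After the control plane upgrade, the core add-ons (kube-proxy, aws-node, CoreDNS) typically need to be brought up to matching versions as well. eksctl provides utils subcommands for this; shown here as a sketch, verify the flags against your eksctl version:

```shell
# Update the core add-ons to versions matching the new control plane
eksctl utils update-kube-proxy --cluster=myeks --approve
eksctl utils update-aws-node --cluster=myeks --approve
eksctl utils update-coredns --cluster=myeks --approve
```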

To upgrade the workers / node groups, simply create a new node group and remove the existing one afterwards.

eksctl create nodegroup --config-file config.yaml # create new node group
eksctl delete nodegroup --config-file config.yaml --only-missing # remove old node group
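eksctl drains nodes as part of node group deletion, but the eviction can also be triggered explicitly beforehand to control its timing. A sketch, assuming the old node group is still named mynodes; the drain subcommand exists in recent eksctl versions:

```shell
# Cordon and evict pods from the old node group before deleting it
eksctl drain nodegroup --cluster=myeks --name=mynodes
```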


Delete the cluster

eksctl delete cluster --name=myeks