This repository provides a sample Go application deployment using EKS.
- Configuring AWS Credentials for eksctl
- Setting Up EKS Cluster
- Deploying App to the Cluster
- Create IAM OIDC Identity Provider for the Cluster
- Others
Useful tools to manage EKS:
- eksctl
- kubectl
- helm
- AWS CLI
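To confirm they are installed, each tool can print its version:
aws --version
eksctl version
kubectl version --client
helm version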
The usual setup for the AWS CLI also works for eksctl, utilising the ~/.aws/credentials file or environment variables, as explained in https://eksctl.io/installation/#prerequisite. You can also use the --profile flag to specify which profile eksctl should use.
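For example, assuming a profile named my-profile exists in ~/.aws/credentials (the profile name here is only an illustration):
eksctl get clusters --profile my-profile
or export it for the whole shell session so that both the AWS CLI and eksctl pick it up:
export AWS_PROFILE=my-profile
eksctl get clusters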
You can set up a new EKS cluster with any of the following methods:
- using eksctl
- using AWS Management Console
- using AWS CLI
For this example, we will focus on using eksctl to set up the cluster, with the following command:
eksctl create cluster --name <cluster_name>
This command will create an EKS cluster named <cluster_name> with default settings, such as:
- 2 x m5.large worker nodes
- using the official AWS EKS AMI
- us-west-2 region (or the profile's default region)
- a dedicated VPC
For more examples of creating an EKS cluster with various configurations, see https://eksctl.io/getting-started/ or https://eksctl.io/usage/creating-and-managing-clusters/.
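If you would rather set these options explicitly instead of relying on the defaults, the same settings can be passed as flags (the values below simply mirror the defaults listed above):
eksctl create cluster --name <cluster_name> --region us-west-2 --node-type m5.large --nodes 2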
To create an IAM OIDC identity provider for the cluster (needed for IAM roles for service accounts), run:
eksctl utils associate-iam-oidc-provider --cluster <cluster_name> --approve
More details: https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
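If you want to check the cluster's OIDC issuer URL (for example to cross-check it against the providers already registered in IAM), it can be printed with:
aws eks describe-cluster --name <cluster_name> --query "cluster.identity.oidc.issuer" --output text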
Next, install the AWS Load Balancer Controller. The complete guide is available at https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
First, create an IAM policy for the controller, using the iam_policy.json file included in this project:
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
The policy only needs to be created once per AWS account; skip this step if it already exists.
Then create an IAM role and Kubernetes service account for the controller:
eksctl create iamserviceaccount \
--cluster=<cluster_name> \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<aws_account_id>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
Install the controller using Helm:
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=<cluster_name> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
Verify that the controller is installed:
kubectl get deployment -n kube-system aws-load-balancer-controller
Create an ECR repository for the app's container image:
aws ecr create-repository --repository-name <app_repository_name>
or,
Create the repository using the AWS Management Console: https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html
After creating the ECR repository, you should be able to view the push commands from the AWS Management Console or by following this guide: https://docs.aws.amazon.com/AmazonECR/latest/userguide/getting-started-cli.html
First, you'll need to authenticate the Docker client with the registry:
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
Do not forget to replace <aws_account_id> and <region> with the actual values.
docker build -t <app_repository_name> .
Note: you may need to use docker buildx build --platform=linux/amd64 to make sure the image is built for the correct platform.
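For example, the same build targeting linux/amd64 explicitly:
docker buildx build --platform=linux/amd64 -t <app_repository_name> .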
docker tag <app_repository_name>:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<app_repository_name>:latest
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<app_repository_name>:latest
kubectl create namespace <namespace_name>
Namespace used in this example: simple-app
kubectl apply -f manifests_app/deployments.yml -f manifests_app/service_nlb.yml
The following command will show the load balancer's external IP / DNS name:
kubectl get svc -n simple-app
In Kubernetes, there are two main mechanisms available to scale capacity automatically to maintain steady and predictable performance: scaling the compute resources (nodes) and scaling the workloads (pods).
Some possible options for scaling compute resources are:
- Cluster Autoscaler (CA)
- Karpenter
Tl;dr: the underlying technology for CA (in AWS) is based on Auto Scaling Groups, while Karpenter is able to launch right-sized compute resources depending on the workload requirements.
To scale the EKS workloads, these are some of the options:
- Horizontal Pod Autoscaler (HPA), where the number of replicas can be adjusted based on average CPU utilisation, average memory utilisation or any other custom metric (see the example after this list).
- Cluster Proportional Autoscaler (CPA), where the replicas are scaled based on the number of nodes in a cluster. Example applications: CoreDNS and other services that need to scale according to the number of nodes in the cluster.
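As a rough sketch of HPA usage, assuming the deployment created by manifests_app/deployments.yml is named simple-app (check the actual name with kubectl get deployment -n simple-app; the thresholds below are placeholders):
kubectl autoscale deployment simple-app -n simple-app --cpu-percent=70 --min=2 --max=5
kubectl get hpa -n simple-app
Note that HPA needs the Kubernetes Metrics Server installed in the cluster to obtain CPU and memory metrics.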
To control the resources available on your nodes, you can choose the instance type and size when defining node groups: https://eksctl.io/usage/managing-nodegroups/.
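For instance, an additional node group with a larger instance type could be added along these lines (the node group name and sizes are placeholders):
eksctl create nodegroup --cluster <cluster_name> --name <nodegroup_name> --node-type m5.xlarge --nodes 3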
For containers and pods, you can also specify memory and CPU requests and limits.
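A minimal sketch using kubectl, again assuming a deployment named simple-app; in practice you would normally declare these under resources in manifests_app/deployments.yml rather than patching the live deployment:
kubectl set resources deployment simple-app -n simple-app --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi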
One of the easiest ways to do log collection and monitoring is to enable Container Insights: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-EKS.html
To be notified when a certain pattern appears in a log, you can create a CloudWatch Metric Filter: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringPolicyExamples.html
Then, based on that Metric Filter, you can create a CloudWatch Alarm, which in turn can publish to an SNS topic (e.g. to send a message through Slack).
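As a rough sketch, a metric filter that counts log lines containing "ERROR" could be created as follows (the log group name, filter name and metric namespace are all placeholders):
aws logs put-metric-filter \
  --log-group-name <log_group_name> \
  --filter-name app-error-count \
  --filter-pattern "ERROR" \
  --metric-transformations metricName=AppErrorCount,metricNamespace=SimpleApp,metricValue=1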
To share an ALB across multiple namespaces, an Ingress can be used.
To apply ALB ingress for this example:
kubectl apply -f manifests_app/ingress.yml
Create a new ECR repository for app2:
aws ecr create-repository --repository-name <app2_repository_name>
Build app2:
docker build -f Dockerfile-app2 -t <app2_repository_name> .
Tag the app2 container image:
docker tag <app2_repository_name>:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<app2_repository_name>:latest
Push the app2 container image to the repository:
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<app2_repository_name>:latest
Create a namespace for app2:
kubectl create namespace <app2_namespace_name>
Namespace used in this example: simple-app2
Apply the app2 manifests:
kubectl apply -f manifests_app2
To get the ALB DNS name, run:
kubectl get ingress -n <namespace_name>
You should be able to access the app service at <ALB_DNS_name>/app1 and the app2 service at <ALB_DNS_name>/app2.
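For example, a quick check from the command line (replace <ALB_DNS_name> with the value returned by the previous command):
curl http://<ALB_DNS_name>/app1
curl http://<ALB_DNS_name>/app2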
AWS has a good practical workshop on EKS that you can do at your own pace, available at https://www.eksworkshop.com.