- EKS Cluster Setup with Monitoring and GitOps
- Terraform resources
Set up an EKS cluster with Prometheus and Grafana monitoring, ArgoCD, the AWS Load Balancer Controller, and Crossplane.

Prerequisites:
- AWS CLI configured with appropriate credentials
- kubectl installed
- Helm installed
- Terraform (or OpenTofu) installed
Note:
Security Considerations:
- Services are publicly accessible (restricted by NACLs to specific IPs)
- Using unencrypted HTTP for endpoints
- Consider using VPN access and proper TLS certificates in production
Resource Management:
- Load balancers must be deleted before cluster destruction (see the cleanup sketch below)
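A minimal cleanup sketch (assuming `jq` is installed; these commands are not part of the repo's scripts) that removes the Kubernetes objects owning AWS load balancers before running `terraform destroy`:

```bash
# Delete ALB-backed Ingresses and any Services of type LoadBalancer so the
# AWS Load Balancer Controller removes the underlying load balancers first.
kubectl delete ingress --all --all-namespaces
kubectl get svc --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.type=="LoadBalancer") | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read -r ns name; do kubectl delete svc -n "$ns" "$name"; done
```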
Production Recommendations:
- Set up private VPC endpoints
- Configure TLS certificates using AWS Certificate Manager (see the Ingress sketch after this list)
- Use proper DNS aliases for Load Balancer URLs
- Implement proper backup and disaster recovery procedures
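As an illustration of the TLS and DNS recommendations above, here is a hedged sketch of an ALB Ingress terminating HTTPS with an ACM certificate. The hostname, certificate ARN, and the choice of exposing Grafana are placeholders, not part of this repo:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    # Placeholder ARN: issue and validate the certificate in ACM first.
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:<AWS_ACCOUNT_ID>:certificate/<cert-id>
spec:
  ingressClassName: alb
  rules:
    - host: grafana.example.com   # point a Route 53 alias at the ALB
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: prometheus-grafana
                port:
                  number: 80
```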
Source Control:
- Backend configuration (`backend.tf`) should be managed separately
- Sensitive information should be managed through secrets management

Set the following environment variables, then configure kubectl access to the cluster:
export TF_VAR_cidr_passlist=${your_ip}/32
export AWS_REGION=${region}
export AWS_PROFILE=${profile}
export CLUSTER_NAME=${cluster_name}
aws eks list-clusters
aws eks update-kubeconfig --name personal-eks-workshop
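Optionally, confirm that kubectl now points at the new cluster (these checks are not part of the original steps):

```bash
kubectl config current-context   # should show the personal-eks-workshop context
kubectl get nodes                # nodes should report STATUS Ready
```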
Install ArgoCD with Helm:
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd --namespace argocd --create-namespace
Get initial admin password:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
Then port-forward the ArgoCD server:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Log in to ArgoCD from the CLI, using the admin password retrieved above:
argocd login localhost:8080 --username admin --password <your-password> --insecure --grpc-web
GUI access (optional) at: http://localhost:8080
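Optionally, rotate the initial admin password once logged in (a standard argocd CLI command, not specific to this setup):

```bash
argocd account update-password   # prompts for the current and new password
```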
This ApplicationSet installs:
- Monitoring server
- Prometheus
- Grafana
- AWS Load Balancer Controller
Set the AWS account ID and apply the ApplicationSet:
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
envsubst < argocd/applicationset/monitoring-apps.yaml | kubectl apply -f -
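For reference, a minimal ApplicationSet of this shape might look like the sketch below. The actual argocd/applicationset/monitoring-apps.yaml in this repo will differ; chart versions, element names, and the exact use of ${AWS_ACCOUNT_ID} (typically substituted into the Load Balancer Controller's IRSA role annotation, which is why envsubst is needed) are assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monitoring-apps
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - name: kube-prometheus-stack        # installs Prometheus and Grafana
            repoURL: https://prometheus-community.github.io/helm-charts
            chart: kube-prometheus-stack
            namespace: monitoring
          - name: aws-load-balancer-controller
            repoURL: https://aws.github.io/eks-charts
            chart: aws-load-balancer-controller
            namespace: kube-system
  template:
    metadata:
      name: '{{name}}'
    spec:
      project: default
      source:
        repoURL: '{{repoURL}}'
        chart: '{{chart}}'
        targetRevision: '*'                    # pin chart versions in practice
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{namespace}}'
      syncPolicy:
        automated: {}
        syncOptions:
          - CreateNamespace=true
```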
For Prometheus:
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n monitoring 4001:9090
Access at: http://127.0.0.1:4001
For Grafana:
Get Grafana credentials:
# Username: admin
# Password:
kubectl get secret --namespace monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
kubectl port-forward service/prometheus-grafana 3000:80 --namespace monitoring
Access at: http://127.0.0.1:3000
Grafana is configured in the ApplicationSet with Prometheus as a data source. Browse the prebuilt dashboards to explore cluster and workload metrics.
The following is partly based on part 1 of the Crossplane getting-started tutorial, with some notable changes:
- OIDC authentication is substituted for IAM static credentials
- We are using ArgoCD to install Crossplane rather than Helm directly
- Discrete YAML files are substituted for HereDocs
- We concatenate YAML files where that makes sense
- We demonstrate updating an existing resource in place
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
envsubst < argocd/applications/crossplane.yaml | argocd app create -f -
Verify that Crossplane is installed:
kubectl get pods -n crossplane-system
Installing Crossplane creates new Kubernetes API endpoints. Check these with:
kubectl api-resources | grep crossplane
The Crossplane Provider installs the Kubernetes Custom Resource Definitions (CRDs) representing AWS S3 services. These CRDs allow you to create AWS resources directly inside Kubernetes.
To install the AWS S3 provider with IRSA (IAM Roles for Service Accounts) authentication, we need three components:
- A DeploymentRuntimeConfig that configures the provider to use our service account with IRSA annotations
- The Provider itself that installs the AWS S3 CRDs
- A ProviderConfig that tells Crossplane to use IRSA authentication
Apply all three components with:
kubectl apply -f argocd/applications/crossplane-aws-s3-provider-setup.yaml
The configuration file contains:
- DeploymentRuntimeConfig: Ensures the provider uses our IRSA-annotated service account instead of auto-creating its own
- Provider: Installs the AWS S3 provider from Upbound's registry
- ProviderConfig: Configures the provider to use IRSA authentication
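A hedged sketch of what such a setup file might contain is shown below; the resource names, the IAM role name (the Terraform module creates an `aws_iam_role.crossplane` role), and the provider package version are assumptions:

```yaml
apiVersion: pkg.crossplane.io/v1beta1
kind: DeploymentRuntimeConfig
metadata:
  name: aws-irsa
spec:
  serviceAccountTemplate:
    metadata:
      name: crossplane-provider-aws
      annotations:
        # IRSA: bind the provider pod to the IAM role created by Terraform
        eks.amazonaws.com/role-arn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/crossplane
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws-s3
spec:
  package: xpkg.upbound.io/upbound/provider-aws-s3:v1.1.0   # version is an assumption
  runtimeConfigRef:
    name: aws-irsa
---
apiVersion: aws.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: IRSA
```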
Verify the installation:
- Check the provider status:
kubectl get providers
- View the new CRDs:
kubectl get crds
A managed resource is anything Crossplane creates and manages outside of the Kubernetes cluster. AWS S3 bucket names must be globally unique; the example below uses `generateName` to append a random suffix. Any unique name is acceptable.
```bash
cat <<EOF | kubectl create -f -
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  generateName: crossplane-bucket-
spec:
  forProvider:
    region: eu-west-1
  providerConfigRef:
    name: default
EOF
```
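Optionally, cross-check that the bucket exists in AWS (assumes the AWS CLI is using the same account and credentials as above):

```bash
aws s3 ls | grep crossplane-bucket
```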
Verifying resource creation

The bucket is deployed when both `SYNCED` and `READY` are `True`:

```bash
kubectl get buckets
NAME                      SYNCED   READY   EXTERNAL-NAME             AGE
crossplane-bucket-r8lvj   True     True    crossplane-bucket-r8lvj   109m
```
Next, update the existing bucket in place by adding tags. Make sure the `name` below matches your bucket's actual name:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: crossplane-bucket-r8lvj
spec:
  forProvider:
    region: eu-west-1
    tags:
      project: crossplane-demo
      deployment: manual
  providerConfigRef:
    name: default
EOF
```
Before shutting down your Kubernetes cluster, delete the S3 bucket just created.
kubectl delete bucket ${bucketname}
The following is based on part 2 of the Crossplane tutorial: https://docs.crossplane.io/latest/getting-started/provider-aws-part-2/
Install the DynamoDB Provider
kubectl apply -f argocd/applications/crossplane-aws-dynamodb-provider.yaml
Apply the custom API (CompositeResourceDefinition):
kubectl apply -f argocd/applications/crossplane-nosql-api.yaml
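For reference, the XRD applied here is roughly the one from the upstream Crossplane tutorial; the exact contents of crossplane-nosql-api.yaml in this repo may differ:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: nosqls.database.example.com
spec:
  group: database.example.com
  names:
    kind: NoSQL
    plural: nosqls
  claimNames:
    kind: NoSQLClaim
    plural: nosqlclaims
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                location:
                  type: string
                  oneOf:
                    - pattern: '^EU$'
                    - pattern: '^US$'
              required:
                - location
```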
View the installed XRD with
kubectl get xrd
View the new custom API endpoints with
kubectl api-resources | grep nosql
Apply the Composition:
kubectl apply -f argocd/applications/crossplane-dynamo-with-bucket-composition.yaml
Apply this Function to install function-patch-and-transform:
kubectl apply -f argocd/applications/crossplane-function-patch-and-transform.yaml
View the Composition with
kubectl get composition
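For reference, a Composition of this shape (a DynamoDB table plus an S3 bucket, patched by the function-patch-and-transform pipeline) might look roughly like the sketch below; it follows the upstream tutorial, and the resource names and region map are assumptions about this repo's file:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: dynamo-with-bucket
spec:
  compositeTypeRef:
    apiVersion: database.example.com/v1alpha1
    kind: NoSQL
  mode: Pipeline
  pipeline:
    - step: patch-and-transform
      functionRef:
        name: function-patch-and-transform
      input:
        apiVersion: pt.fn.crossplane.io/v1beta1
        kind: Resources
        resources:
          - name: s3-bucket
            base:
              apiVersion: s3.aws.upbound.io/v1beta1
              kind: Bucket
              spec:
                forProvider:
                  region: us-east-2
            patches:
              # Map the abstract spec.location (US/EU) to a concrete AWS region
              - type: FromCompositeFieldPath
                fromFieldPath: spec.location
                toFieldPath: spec.forProvider.region
                transforms:
                  - type: map
                    map:
                      US: us-east-2
                      EU: eu-north-1
          - name: dynamodb-table
            base:
              apiVersion: dynamodb.aws.upbound.io/v1beta1
              kind: Table
              spec:
                forProvider:
                  region: us-east-2
                  writeCapacity: 1
                  readCapacity: 1
                  attribute:
                    - name: S3ID
                      type: S
                  hashKey: S3ID
            patches:
              - type: FromCompositeFieldPath
                fromFieldPath: spec.location
                toFieldPath: spec.forProvider.region
                transforms:
                  - type: map
                    map:
                      US: us-east-2
                      EU: eu-north-1
```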
Create a NoSQL object to create the cloud resources.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: database.example.com/v1alpha1
kind: NoSQL
metadata:
  name: my-nosql-database
spec:
  location: "US"
EOF
```
View the resource with
kubectl get nosql
This object is a Crossplane composite resource (also called an XR). It's a single object representing the collection of resources created from the Composition template.
View the individual resources with
kubectl get managed
Delete the resources with
kubectl delete nosql my-nosql-database
Verify Crossplane deleted the resources with kubectl get managed
Note: It may take up to 5 minutes to delete the resources.
Most organizations isolate their users into namespaces.
A Crossplane Claim is the custom API in a namespace.
Creating a Claim is just like accessing the custom API endpoint, but with the kind from the custom API’s claimNames.
Create a new namespace in which to test creating a Claim:
kubectl create namespace crossplane-test
Then create a Claim in the crossplane-test namespace.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: database.example.com/v1alpha1
kind: NoSQLClaim
metadata:
  name: my-nosql-database
  namespace: crossplane-test
spec:
  location: "US"
EOF
```
View the Claim with kubectl get claim -n crossplane-test
The Claim automatically creates a composite resource, which creates the managed resources.
View the Crossplane created composite resource with kubectl get composite
Again, view the managed resources with kubectl get managed
Deleting the Claim deletes all the Crossplane generated resources.
kubectl delete claim -n crossplane-test my-nosql-database
Note: It may take up to 5 minutes to delete the resources.
Verify Crossplane deleted the composite resource with kubectl get composite
Verify Crossplane deleted the managed resources with kubectl get managed
Requirements:

| Name | Version |
|---|---|
| terraform | >= 1.9.0 |
| aws | >= 5.86.0 |

Providers:

| Name | Version |
|---|---|
| aws | 5.86.0 |

Modules:

| Name | Source | Version |
|---|---|---|
| eks | terraform-aws-modules/eks/aws | ~> 20.33.1 |
| vpc | terraform-aws-modules/vpc/aws | ~> 5.18.1 |

Resources:

| Name | Type |
|---|---|
| aws_iam_policy.aws_lb_controller | resource |
| aws_iam_role.aws_lb_controller_role | resource |
| aws_iam_role.crossplane | resource |
| aws_iam_role_policy_attachment.aws_lb_controller_attach | resource |
| aws_iam_role_policy_attachment.crossplane_aws_admin | resource |
| aws_availability_zones.available | data source |
| aws_iam_policy_document.assume_role_policy | data source |
| aws_iam_policy_document.crossplane_assume_role | data source |
| aws_iam_policy_document.load_balancer_controller | data source |
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| cidr_passlist | CIDR block to allow all traffic from | string | `""` | no |
| cluster_name | Name of the EKS cluster | string | `"personal-eks-workshop"` | no |
| eks_managed_node_groups | n/a | `object({…})` | `{…}` | no |
| tf_state | Terraform state file configuration | `object({…})` | n/a | yes |
| vpc_cidr | Defines the CIDR block used on the Amazon VPC created for Amazon EKS | string | `"10.42.0.0/16"` | no |
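The string inputs above can be supplied as TF_VAR_ environment variables, matching the export style used earlier. This is an illustrative invocation only; the required tf_state object (whose attributes are not listed here) still needs to be provided, for example via a tfvars file:

```bash
export TF_VAR_cluster_name=personal-eks-workshop
export TF_VAR_vpc_cidr=10.42.0.0/16
export TF_VAR_cidr_passlist=${your_ip}/32
terraform init
terraform plan    # review the plan before applying
terraform apply
```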
Outputs:

| Name | Description |
|---|---|
| load_balancer_controller_role_arn | n/a |