Rbac aad practice #16

Open · wants to merge 3 commits into base: master
54 changes: 54 additions & 0 deletions .terraform.lock.hcl

Some generated files are not rendered by default.

51 changes: 51 additions & 0 deletions README.md
@@ -1,5 +1,9 @@
Just practice based on https://docs.microsoft.com/en-us/azure/developer/terraform/create-k8s-cluster-with-tf-and-aks

- TODO: create AD group via this Terraform script.
- TODO: assign user as cluster admin via this Terraform script.
- TODO: improve README

### Infrastructure preparation for Kuksa Cloud

In order to deploy Kuksa Cloud, the Terraform tool is first used to create
@@ -78,3 +82,50 @@ You can use example.tfvars to e.g. customize naming of various objects.




#### 5. Role-based access control (RBAC), Azure Active Directory (AAD) and namespaces

To enable RBAC through AAD via this Terraform plan, the following steps were taken:

1. Added a `role_based_access_control` block to the resource "azurerm_kubernetes_cluster" found in modules/k8s/main.tf.
- That block contains a list named `admin_group_object_ids`, which holds the id of an AD group whose members are admins of the cluster. Currently, creating the group and adding admins is a manual process. The group id can be queried via the Azure CLI or the Azure Portal.
2. After enabling AAD/RBAC, the K8S CSI driver installation needs sufficient K8S cluster credentials. One easy way of providing them is a kubeconfig entry created with the --admin flag:
- `az aks get-credentials -g <RESOURCE_GROUP_NAME> -n <CLUSTER_NAME> --admin`. This creates an entry in the kubeconfig (default path is ~/.kube/config).
3. Added the following lines to the provider "helm" block in modules/k8s_csi_driver_azure/main.tf:
- `config_path = "~/.kube/config"`
- `config_context = "<ADMIN_CONTEXT_NAME_HERE>"`
- Note: do not specify username and password if using kubeconfig to authenticate.
4. Then, this guide was followed: https://docs.microsoft.com/en-us/azure/aks/azure-ad-rbac with the following exceptions:
- In the step `Create demo users in Azure AD`, new users weren't created. Instead, $AKSDEV_ID and $AKSSRE_ID were replaced by existing users' ids.
- Note: if you are in the cluster admin AD group, you will see all cluster resources regardless of whether you use cluster admin or cluster user context (acquired via the az aks get-credentials command).
- Note: if you destroy the resources and then apply them again, you may need to acquire new credentials into your kubeconfig to be able to install the K8S CSI driver.

After completing those steps, your cluster user credentials should only let you see and modify resources in specific namespaces.
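The group creation in step 1 is still manual (see the TODOs at the top of this README). A minimal sketch of how it might be scripted with the azuread provider — the group name is an assumption, and the attribute names follow recent azuread provider versions:

```hcl
# Hypothetical sketch: create the cluster-admin AD group in Terraform
# instead of manually. Assumes the azuread provider is configured for
# the target tenant.
resource "azuread_group" "k8s_cluster_admins" {
  display_name     = "k8s-cluster-admins" # assumed group name
  security_enabled = true
}
```

The hard-coded id in `admin_group_object_ids` could then be replaced with a reference to `azuread_group.k8s_cluster_admins.object_id`.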
> **Comment on lines +86 to +104** (Collaborator Author): These steps document how to enable AAD-assisted RBAC in the K8S cluster.



#### Misc. links regarding roles and namespaces:
https://docs.microsoft.com/en-us/azure/aks/azure-ad-rbac

https://registry.terraform.io/providers/hashicorp/azuread/latest/docs/resources/group

https://www.danielstechblog.io/azure-kubernetes-service-azure-rbac-for-kubernetes-authorization/

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/managed-aad.md

https://docs.microsoft.com/en-us/azure/aks/manage-azure-rbac

https://registry.terraform.io/

https://www.danielstechblog.io/terraform-deploy-an-aks-cluster-using-managed-identity-and-managed-azure-ad-integration/

https://docs.microsoft.com/en-us/azure/aks/managed-aad

https://www.chriswoolum.dev/aks-with-managed-identity-and-terraform

https://docs.microsoft.com/en-us/azure/aks/kubernetes-portal#troubleshooting (Azure Portal resource view)

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role

https://kubernetes.io/docs/reference/access-authn-authz/rbac/

37 changes: 37 additions & 0 deletions main.tf
@@ -11,6 +11,7 @@

module "k8s_cluster_azure" {
source = "./modules/k8s"

k8s_agent_count = var.k8s_agent_count
k8s_resource_group_name_suffix = var.k8s_resource_group_name_suffix
project_name = var.project_name
@@ -24,6 +25,42 @@ module "container_registry_for_k8s" {
k8s_cluster_kubelet_managed_identity_id = module.k8s_cluster_azure.kubelet_object_id
}

module "keyvault_for_secrets" {
source = "./modules/keyvault"

policies = {
# full_permission = {
# tenant_id = var.azure-tenant-id
# object_id = var.kv-full-object-id
# key_permissions = var.kv-key-permissions-full
# secret_permissions = var.kv-secret-permissions-full
# certificate_permissions = var.kv-certificate-permissions-full
# storage_permissions = var.kv-storage-permissions-full
# }
read_policy_for_k8s_kubelet = {
tenant_id = module.k8s_cluster_azure.mi_tenant_id
object_id = module.k8s_cluster_azure.kubelet_object_id
key_permissions = var.kv-key-permissions-read
secret_permissions = var.kv-secret-permissions-read
certificate_permissions = var.kv-certificate-permissions-read
storage_permissions = var.kv-storage-permissions-read
}
}

}

module "k8s_csi_driver_azure"{
source = "./modules/k8s_csi_driver_azure"

k8s_host = module.k8s_cluster_azure.host
k8s_username = module.k8s_cluster_azure.cluster_username
k8s_password = module.k8s_cluster_azure.cluster_password
k8s_client_cert = module.k8s_cluster_azure.client_certificate
k8s_client_key = module.k8s_cluster_azure.client_key
k8s_cluster_ca_cert = module.k8s_cluster_azure.cluster_ca_certificate

}

terraform {

required_providers {
2 changes: 1 addition & 1 deletion modules/container_registry/variables.tf
@@ -7,7 +7,7 @@ variable "location" {
}

variable "project_name" {
default = "kuksatrng"
default = "rbackuksatrng"
}

# You can use the same resource group that was used with K8S cluster in AKS
17 changes: 15 additions & 2 deletions modules/k8s/main.tf
@@ -75,8 +75,11 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {

addon_profile {
oms_agent {
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.log_analytics_ws.id
enabled = true
log_analytics_workspace_id = azurerm_log_analytics_workspace.log_analytics_ws.id
}
kube_dashboard {
enabled = true
}
}

@@ -85,6 +88,16 @@ resource "azurerm_kubernetes_cluster" "k8s_cluster" {
network_plugin = "kubenet"
}

role_based_access_control {
enabled = true
azure_active_directory {
managed = true
admin_group_object_ids = [
"93b4062c-6cf4-4ed3-af28-9633d2785bda"
]
}
}

> **Comment on lines +91 to +100** (Collaborator Author): This is one essential part of enabling AAD integration for the K8S cluster.

tags = {
environment = var.environment
}
8 changes: 4 additions & 4 deletions modules/k8s/outputs.tf
@@ -35,17 +35,17 @@ output "k8s_cluster_node_resource_group" {
}

output "mi_principal_id" {
value = azurerm_kubernetes_cluster.k8s_cluster.identity[0].principal_id
value = azurerm_kubernetes_cluster.k8s_cluster.identity[0].principal_id
}

output "mi_tenant_id" {
value = azurerm_kubernetes_cluster.k8s_cluster.identity[0].tenant_id
value = azurerm_kubernetes_cluster.k8s_cluster.identity[0].tenant_id
}

output "kubelet_client_id" {
value = azurerm_kubernetes_cluster.k8s_cluster.kubelet_identity[0].client_id
value = azurerm_kubernetes_cluster.k8s_cluster.kubelet_identity[0].client_id
}

output "kubelet_object_id" {
value = azurerm_kubernetes_cluster.k8s_cluster.kubelet_identity[0].object_id
value = azurerm_kubernetes_cluster.k8s_cluster.kubelet_identity[0].object_id
}
2 changes: 1 addition & 1 deletion modules/k8s/variables.tf
@@ -7,7 +7,7 @@ variable "location" {
}

variable "project_name" {
default = "kuksatrng"
default = "rbackuksatrng"
}

variable "k8s_agent_count" {
9 changes: 9 additions & 0 deletions modules/k8s_csi_driver_azure/README.md
@@ -0,0 +1,9 @@
This module installs the Azure Key Vault Provider for
the Kubernetes Secrets Store CSI Driver.

Installation is done with Helm.

TODO: Document variables
TODO: Document module usage
TODO: Document outputs
TODO: Change kube_config and config_context to use variables.
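Until the module-usage TODO above is addressed, the call in the root main.tf can serve as a reference; a sketch mirroring it (the output names come from the k8s module in this repo):

```hcl
module "k8s_csi_driver_azure" {
  source = "./modules/k8s_csi_driver_azure"

  # Connection details come from the k8s module's outputs; supply the
  # raw base64 values, since this module decodes them itself.
  k8s_host            = module.k8s_cluster_azure.host
  k8s_username        = module.k8s_cluster_azure.cluster_username
  k8s_password        = module.k8s_cluster_azure.cluster_password
  k8s_client_cert     = module.k8s_cluster_azure.client_certificate
  k8s_client_key      = module.k8s_cluster_azure.client_key
  k8s_cluster_ca_cert = module.k8s_cluster_azure.cluster_ca_certificate
}
```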
24 changes: 24 additions & 0 deletions modules/k8s_csi_driver_azure/main.tf
@@ -0,0 +1,24 @@
provider "helm" {
kubernetes {
config_path = "~/.kube/config" # Replace if you have different kubeconfig path!
config_context = "rbackuksatrng-k8stest-cluster-admin" # Replace with your admin context name!
# host = var.k8s_host
# username = var.k8s_username
# password = var.k8s_password
client_certificate = base64decode(var.k8s_client_cert)
client_key = base64decode(var.k8s_client_key)
cluster_ca_certificate = base64decode(var.k8s_cluster_ca_cert)
}
}


resource "helm_release" "kv_azure_csi" {
name = "csi-secrets-provider-azure"
repository = var.repository
chart = "csi-secrets-store-provider-azure"
version = var.csi_provider_version
# The K8S namespace in which the Azure provider for the CSI driver is installed.
# TODO: Should this be the same namespace as where the actual application is deployed?
namespace = var.namespace
create_namespace = true
}
44 changes: 44 additions & 0 deletions modules/k8s_csi_driver_azure/variables.tf
@@ -0,0 +1,44 @@
variable "environment" {
default = "development"
}

variable "location" {
default = "West Europe"
}

variable "project_name" {
default = "rbackuksatrng"
}

# See secrets-store-csi-driver-provider-azure documentation, e.g. for version information.
# https://github.com/Azure/secrets-store-csi-driver-provider-azure/blob/master/charts/csi-secrets-store-provider-azure/README.md
variable "csi_provider_version" {
default = "0.0.13"
}

# Repository where Helm charts for 'csi-secrets-provider-azure' are hosted.
# Note: the 'master' branch is used since helm_release parses the 'index.yaml' located in the repo.
# Despite pointing at 'master', 'index.yaml' contains only stable version releases.
variable "repository" {
default = "https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts"
}

# Kubernetes connection details for Helm
# Note: the certs and the key will be decoded from base64 in the module's main.tf.
# Don't decode them manually.
# Supply raw values provided by e.g.
# 'azurerm_kubernetes_cluster.aks.kube_config.0.client_certificate'
# 'azurerm_kubernetes_cluster.aks.kube_config.0.client_key'
# 'azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate'
variable "k8s_host" {}
variable "k8s_username" {}
variable "k8s_password" {}
variable "k8s_client_cert" {}
variable "k8s_client_key" {}
variable "k8s_cluster_ca_cert" {}

# The K8S namespace in which the Azure provider for the CSI driver is installed.
# TODO: Should this be the same namespace as where the actual application is deployed?
variable "namespace" {
default = "kuksacloud"
}
9 changes: 9 additions & 0 deletions modules/keyvault/README.md
@@ -0,0 +1,9 @@
This module is used to create an Azure Key Vault to store
secrets used e.g. in K8S Deployment configurations.
Note that the identity of the K8S kubelet service (kubelet_object_id)
needs to be granted read access to the created Key Vault before
K8S Deployments can fetch secrets from the vault.

TODO: Document variables
TODO: Document module usage (how to define identities for read access)
TODO: Document outputs
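Until the module-usage TODO above is addressed, the call in the root main.tf can serve as a reference; a sketch mirroring it, where a `policies` map grants the kubelet identity read access:

```hcl
module "keyvault_for_secrets" {
  source = "./modules/keyvault"

  # Each entry in the map becomes a Key Vault access policy; this one
  # grants the kubelet's managed identity read access to secrets.
  policies = {
    read_policy_for_k8s_kubelet = {
      tenant_id               = module.k8s_cluster_azure.mi_tenant_id
      object_id               = module.k8s_cluster_azure.kubelet_object_id
      key_permissions         = var.kv-key-permissions-read
      secret_permissions      = var.kv-secret-permissions-read
      certificate_permissions = var.kv-certificate-permissions-read
      storage_permissions     = var.kv-storage-permissions-read
    }
  }
}
```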