kubectl inside a pod with workload identity #335
-
I'm having a nightmare using kubectl inside a pod with workload identity. The setup works fine for the az CLI generally within the pod, i.e. I can pull secrets from Key Vault or run simple commands to list what the identity has access to.

The process I've followed is to log in as below:

az login --federated-token "$(cat $AZURE_FEDERATED_TOKEN_FILE)" --service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID

Then populate my kubeconfig:

az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_CLUSTER_NAME --overwrite-existing

I'm then using kubelogin to stop the device-login prompt and allow workload identity, with:

export KUBECONFIG=/root/.kube/config

When I then run any kubectl command, it spits out an API "Forbidden" message, but rather than showing the service account linked to the Azure identity's client ID, it shows an unresolved GUID. It's almost as if kubelogin is not converting the kubeconfig properly, or something is broken in AKS. Here is an example output:

"Error from server (Forbidden): nodes is forbidden: User "73866869-f64c-441e-853c-0b81579061d0" cannot list resource "nodes" in API group "" at the cluster scope: User does not have access to the resource in Azure. Update role assignment to allow access."

I'm installing kubectl and kubelogin as below:

az aks install-cli

It's driving me nuts :D I can find many examples online of this working for others. The Azure identity is attached to AKS with the IAM role "Azure Kubernetes Service Cluster Admin Role" as a test, and it also has Contributor on the resource groups where all of this lives.

As everything works with the az CLI, I'm sure the issue relates to the "kubelogin convert-kubeconfig" step, or to something inside AKS not resolving correctly. I tried a few different kubelogin versions but it made no difference. Any ideas?
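For anyone following the same path, here is a minimal end-to-end sketch of the sequence described above. The `kubelogin convert-kubeconfig -l workloadidentity` step is an assumption about what was run (the convert command itself isn't shown in the post), and the environment variable names are the ones the AKS workload identity webhook injects:

```bash
# Sketch of the in-pod flow, assuming the AKS workload identity webhook has
# injected AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_FEDERATED_TOKEN_FILE.

# Install kubectl and kubelogin
az aks install-cli

# Log in with the federated token projected into the pod
az login --federated-token "$(cat "$AZURE_FEDERATED_TOKEN_FILE")" \
  --service-principal -u "$AZURE_CLIENT_ID" -t "$AZURE_TENANT_ID"

# Populate the kubeconfig for the target cluster
az aks get-credentials --resource-group "$RESOURCE_GROUP" \
  --name "$AKS_CLUSTER_NAME" --overwrite-existing
export KUBECONFIG=/root/.kube/config

# Rewrite the kubeconfig so kubectl authenticates via kubelogin using the
# workload identity token instead of prompting for a device code
kubelogin convert-kubeconfig -l workloadidentity

kubectl get nodes
```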
Replies: 1 comment
-
The fix was easy once I'd slept on it. Although I had assigned "Azure Kubernetes Service Cluster Admin Role" via IAM in Azure, this had not added the object ID of the managed identity to the relevant role binding within AKS. I'm not sure why; it works automatically for groups, maybe not for managed identities. I assume plain users would also work fine.

I created a role binding that used the object ID of the managed identity and it now works. The clue was searching Azure AD for the unresolved ID that Kubernetes was throwing at me: the search came back with the managed identity, and the rest fell into place with the role binding.

An alternative would be to drop the managed identity into an Azure AD group and then attach that group to the relevant RBAC role.
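As a rough sketch of what that binding might look like, assuming the cluster uses Azure AD integration with Kubernetes RBAC and that cluster-admin is acceptable for a test (the binding and group names below are placeholders, not anything from the original post):

```bash
# Sketch only: bind the managed identity's object ID directly to a ClusterRole.
# Substitute your own object ID, and a more restrictive role if appropriate.
kubectl create clusterrolebinding workload-identity-admin \
  --clusterrole=cluster-admin \
  --user=<managed-identity-object-id>

# Alternative mentioned above: add the identity to an Azure AD group and
# bind the group's object ID instead.
az ad group member add --group <aad-group-name-or-id> \
  --member-id <managed-identity-object-id>
kubectl create clusterrolebinding aks-admins-group-admin \
  --clusterrole=cluster-admin \
  --group=<aad-group-object-id>
```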