Basically unusable for multiple tenants with multiple users #243
Comments
Did you try to specify `--token-cache-dir`?
I'm sorry, I don't understand how using a token cache dir would help me here; it is not clear to me from the documentation.
Now, when I try to use kubelogin, instead of just switching seamlessly like in steps 8 and 9, I have to call …
Please look at this proposal. It would work; at least it works for me when I manually edited the kubeconfig.
After this, you can change contexts without them interfering with each other. You can confirm this proposal by editing the kubeconfig to add a unique token-cache-dir per user, but you have to use …
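A minimal sketch of what the edited kubeconfig could look like, assuming kubelogin's exec credential plugin form and its `--token-cache-dir` flag. The user names, app/tenant IDs, and cache paths below are placeholders, not values from this thread:

```yaml
# Sketch: two exec users, each pointing kubelogin at its own
# token cache directory, so tokens for one tenant never
# overwrite tokens for the other.
users:
- name: tenant-a-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
        - get-token
        - --login
        - devicecode
        - --server-id
        - <server-app-id-a>     # placeholder
        - --client-id
        - <client-app-id-a>     # placeholder
        - --tenant-id
        - <tenant-a-id>         # placeholder
        - --token-cache-dir
        - ~/.kube/cache/kubelogin/tenant-a
- name: tenant-b-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
        - get-token
        - --login
        - devicecode
        - --server-id
        - <server-app-id-b>     # placeholder
        - --client-id
        - <client-app-id-b>     # placeholder
        - --tenant-id
        - <tenant-b-id>         # placeholder
        - --token-cache-dir
        - ~/.kube/cache/kubelogin/tenant-b
```

With distinct cache directories, each context resolves its own cached token, so switching contexts no longer triggers a re-authentication for the other tenant.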
I have to side with @SebestyenBartha on this: kubelogin is almost useless for us. I can connect fine to AKS clusters, but not to our legacy k8s 1.17/1.18 local clusters, nor to the legacy clusters on Azure. For the legacy clusters the process is kubectl config use-context ... (or kubectx clustername) and k9s. Once kubelogin is used as minimally directed, nothing can connect to the legacy clusters at all, including kubectl cluster-info.
@ScottS-byte, would you mind elaborating? Does your legacy k8s use AAD or not? If you use …
We have two different tenants in two different datacenters, a different user for each, and I have to switch contexts all the time. With kubectl and the azure auth plugin, switching from one datacenter to the other meant logging in once per cluster the first time, and then switching between the two seamlessly without re-authenticating, because the tokens were saved individually for each cluster; kubectl always knew which token to use for which context.

That is impossible to set up with kubelogin. When I convert the config, everything is converted, but the two contexts end up using the same token: only the context I was using at the time of the convert works, and the other one returns an authorization error. kubelogin is unable to figure out that I had different users for the different tenants. To fix it, I need to log out, remove the tokens kubelogin cached, re-authenticate, and convert the config again. And I have to do these steps every single time I switch contexts. It's awful.
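A sketch of how the two-tenant setup above could be converted so each context keeps its own cache, assuming `kubelogin convert-kubeconfig` accepts `--token-cache-dir` and that each tenant's clusters live in their own kubeconfig file; the file paths and directory names are hypothetical:

```shell
# Convert each tenant's kubeconfig separately, giving each its own
# token cache directory so switching contexts does not clobber the
# other tenant's cached tokens.
export KUBECONFIG=~/.kube/tenant-a.yaml
kubelogin convert-kubeconfig --login devicecode \
  --token-cache-dir ~/.kube/cache/kubelogin/tenant-a

export KUBECONFIG=~/.kube/tenant-b.yaml
kubelogin convert-kubeconfig --login devicecode \
  --token-cache-dir ~/.kube/cache/kubelogin/tenant-b

# Merge both files for day-to-day use; each context still carries
# its own --token-cache-dir in its exec args.
export KUBECONFIG=~/.kube/tenant-a.yaml:~/.kube/tenant-b.yaml
```

After this, the first `kubectl` call against each tenant prompts for login once, and subsequent context switches reuse the per-tenant caches.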
The azure auth plugin had sooo much better user experience... :(
Can something be done about this?