
Fix version upgrade kubelet support #16932

Status: Open. Wants to merge 2 commits into base: master

Conversation

rsafonseca (Contributor)

Automatically preserve kubelet supported version skew on worker nodes while control plane is being updated

Fixes: #16907

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 4, 2024
@k8s-ci-robot (Contributor)

Hi @rsafonseca. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign johngmyers for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@rsafonseca (Contributor, Author)

cc @danports @hakman @justinsb as discussed in the office hours
I've only had time to do some basic testing, but haven't gone through an actual upgrade, though the outputs look as expected :)

@danports (Contributor)

danports commented Nov 4, 2024

Thanks @rsafonseca! On the one hand, I can't say I'm the biggest fan of the ergonomics of this approach, because it basically means we have to run the identical kops update cluster and kops rolling-update cluster commands twice in our CD script - and not having a command line switch like --instance-group-role makes the script less self-documenting. But on the other hand, I am very happy to have any solution that works and this also has the advantage of being safe by default in terms of automatically enforcing the version skew policy. An "uber command" as @rifelpet called it would still be great, and while I haven't looked at all at the code, I wondered whether one could implement it by making a rolling update (of CP or worker nodes) just another type of task that needs to be run in the update dependency tree.

@rsafonseca (Contributor, Author)

rsafonseca commented Nov 4, 2024

Yeah, it's not perfect for all use cases, but it's a first step and we can build upon this for the so-called "uber" command :)
Like you said, it's safe by default, but the user can override this behaviour to keep the old one if they wish by using the --ignore-version-skew command line switch.

In addition, this produces no behaviour change for any kops update cluster except specifically when the Kubernetes version is being increased (e.g. kops updates will still proceed normally and will update things like the containerRuntime, etc. without the need for two interactions).

It also allows the instance groups to continue to be managed while the control plane isn't updated, e.g. if I need to make a change to an instance group but haven't finished rolling out all control plane nodes.

As for the CD script, here I'm probably the least impacted, as we're running a kops update loop on all clusters to reconcile with CRDs from our operator, so this change will (in our scenario) make the upgrades seamless and safe without having to change anything :)

Edit: Optionally, an easy way to maintain behaviour in current CD workflows is to simply ensure api-server nodes are rolled out first in the rolling-update command (after the masters have rolled out); before moving on to the other instance groups, trigger an update cluster :)
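Under the safe-by-default behaviour described in this thread, a CD pipeline would simply run the same pair of commands twice. A sketch of that flow (the command names and --yes are real kops usage; the two-pass sequencing is an illustration of the discussion above, not prescribed kops documentation):

```shell
# First pass: the control plane is updated to the new Kubernetes version,
# while worker instance groups are held at a kubelet version within the
# supported skew of the running control plane.
kops update cluster --yes
kops rolling-update cluster --yes

# Second pass: with the control plane now running the new version,
# update cluster emits the new kubelet version for the worker nodes,
# and rolling-update rolls them.
kops update cluster --yes
kops rolling-update cluster --yes
```

To opt out of the skew enforcement and restore the old single-pass behaviour, the discussion above names an --ignore-version-skew switch on update cluster.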

```go
if err != nil {
	klog.Warningf("error checking control plane running verion: %v", err)
} else {
	klog.Warningf("successfully checked control plane running version: %v", minControlPlaneRunningVersion)
```

Reviewer (Member): Nit: any reason to make this a warning (as opposed to Infof)?

@rsafonseca (Author): Oh no... that was just for my testing. I fully intended to make it info or debug and then forgot :)

```go
if !c.IgnoreVersionSkew {
	minControlPlaneRunningVersion, err = checkControlPlaneRunningVersion(ctx, cluster.ObjectMeta.Name, minControlPlaneRunningVersion)
	if err != nil {
		klog.Warningf("error checking control plane running verion: %v", err)
```

Reviewer (Member): I think we should just return the error, unless there's some reason to expect this error to happen which we need to tolerate.

@rsafonseca (Author): Yes, it will error out, for example, if the cluster isn't up (e.g. new clusters, testing, etc.), in which case we need to tolerate it.


```go
func checkControlPlaneRunningVersion(ctx context.Context, clusterName string, version string) (string, error) {

	configLoadingRules := clientcmd.NewDefaultClientConfigLoadingRules()
```

Reviewer (Member): Nit: I think we should try to share this "get kubeconfig" code with the rolling-update logic (I think that also connects to the cluster). Not a blocker though.

```go
	LabelSelector: "node-role.kubernetes.io/control-plane",
})
if err != nil {
	return version, fmt.Errorf("cannot list nodes in cluster %q: %v", clusterName, err)
```

Reviewer (Member): Ah, this is tricky. If the control plane is offline, we presumably don't want to update the nodes (?)

@rsafonseca (Author): If the control plane is offline, this is the least of our worries lol. But it might be an expected situation (e.g. creating a new cluster), so we just log the error and continue as normal.
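The behaviour discussed in this thread (find the oldest version running on the control-plane nodes, and fall back to the desired version when the cluster is unreachable or empty) can be sketched with stdlib-only helpers. The function and variable names below are hypothetical, not the PR's actual code, and real kops would use a proper semver library rather than hand-rolled parsing:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMinor extracts the major and minor components of a version
// string like "v1.30.2" or "1.29.0".
func parseMinor(v string) (int, int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, 0, fmt.Errorf("unparseable version %q", v)
	}
	major, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, 0, err
	}
	minor, err := strconv.Atoi(parts[1])
	if err != nil {
		return 0, 0, err
	}
	return major, minor, nil
}

// minControlPlaneVersion returns the lowest version reported by the
// control-plane nodes. When the node list is empty (e.g. the cluster
// is not up yet), it falls back to the desired version, mirroring the
// "log and continue" behaviour discussed above.
func minControlPlaneVersion(nodeVersions []string, desired string) (string, error) {
	lowest := desired
	lowMaj, lowMin, err := parseMinor(desired)
	if err != nil {
		return "", err
	}
	for _, v := range nodeVersions {
		maj, min, err := parseMinor(v)
		if err != nil {
			return "", err
		}
		if maj < lowMaj || (maj == lowMaj && min < lowMin) {
			lowMaj, lowMin, lowest = maj, min, v
		}
	}
	return lowest, nil
}

func main() {
	// Two control-plane nodes mid-upgrade: one already on 1.31, one still on 1.30.
	got, _ := minControlPlaneVersion([]string{"v1.31.0", "v1.30.2"}, "1.31.0")
	fmt.Println(got) // prints "v1.30.2": the kubelet target is held at the oldest running version
}
```

Clamping the emitted kubelet version to this minimum is what keeps the worker nodes within the supported skew while the control plane rolls.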

```go
	return err
}
if len(assetBuilder.KubeletSupportedVersion) > 0 && assetBuilder.KubeletSupportedVersion != c.Cluster.Spec.KubernetesVersion {
	c.AssetsSupportedKubelet, err = c.addFileAssets(assetBuilder, assetBuilder.KubeletSupportedVersion)
```

Reviewer (Member): Ah, great point: we need different assets. Although I'm starting to worry about complexity now...

@rsafonseca (Author): Yes, the complexity here worried me a bit as well as I dug into how to achieve this. The code isn't perfect, but I tried to do it without having to refactor many things; if you have any better ideas, we can either improve this down the line or I can make any adjustments :)
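The thread above is about emitting file assets for two Kubernetes versions at once: one set for the control plane's target version, one for the held-back kubelet version. A stdlib-only sketch of that idea (the helper name is hypothetical, and while dl.k8s.io is the public Kubernetes release mirror, this is not necessarily the URL layout kops' assetBuilder actually emits):

```go
package main

import "fmt"

// kubeletAssetURL builds an upstream release URL for a kubelet binary.
// Illustrative simplification only, not kops' real asset plumbing.
func kubeletAssetURL(version, arch string) string {
	return fmt.Sprintf("https://dl.k8s.io/release/v%s/bin/linux/%s/kubelet", version, arch)
}

func main() {
	// During an upgrade, the cluster may need assets for two versions at once:
	// the control plane's new version and the workers' skew-clamped version.
	controlPlaneVersion := "1.31.0"
	kubeletSupportedVersion := "1.30.2"
	fmt.Println(kubeletAssetURL(controlPlaneVersion, "amd64"))
	fmt.Println(kubeletAssetURL(kubeletSupportedVersion, "amd64"))
}
```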

@justinsb (Member)

justinsb commented Nov 5, 2024

Thanks for this! A few random comments on the code but I'm going to mull on the overall approach.

@k8s-ci-robot (Contributor)

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 9, 2024
@justinsb (Member)

So we have added the ability to update only some of the instance groups, and in #16939 I am proposing to make it easy to drive those operations in a predefined sequence.

One thing I don't like about that approach is that we do have to predefine the sequence, and one of the things I like in this PR is that we're being more principled about the sequence: rather than hard-coding it, we effectively infer it at run-time.

It might be that the predefined approach is good enough to unblock 1.31, but I would like to pursue this direction more. I am wondering if we should make it more explicit, adding a field for the Kube version on each instance group and managing that.

@danports (Contributor)

@justinsb Could we combine the two approaches by having the anneal/reconcile cluster command call update & rolling-update repeatedly, until the update command reports that there are no tasks left that it had to skip due to version skew or other dependencies? Then the anneal/reconcile command wouldn't have to hard-code any sequences, and that structure might also be able to incorporate running the --prune logic as well.

@justinsb (Member)

@danports yes, I think we might need a combination. Something brought up in office hours was that we need to support this for terraform also, where we can't emit partial configuration (at least I don't think we can!). I've gone round in circles a lot on other ways to make this work, but I am feeling like the approach in this PR is probably going to be the only one that we can really do, short of introducing a kubernetesVersion field in the instance group which is its own management problem.

I am doing some work to try to clean up our assets logic so that this PR doesn't have to plumb through multiple asset versions - that should make this PR smaller I hope!

@justinsb (Member)

OK, I think I've got something working, and it is based on this PR. The approach here is very clever, and because terraform will delete resources if we stop emitting them, I think this is the right way forward. The downside is that we will have to run update / rolling-update / update / rolling-update, but we can address that with messages to the user and potentially a command like #16939 (though that will have to be reworked)

I do think we should merge #16954 first though, as with that PR the code in this PR becomes much easier. The whole building of fileAssets looks to be unnecessarily complicated, which is part of the reason why this PR is somewhat complicated. I have a rebase of it after #16954 so once that merges I think I can probably just update this PR branch (as a maintainer).

cc @hakman

@rsafonseca (Contributor, Author)

Cool, let me know if you need a hand with anything @justinsb :)

Labels: area/documentation, cncf-cla: yes, needs-ok-to-test, needs-rebase, size/L