
Rolling a cluster from Kubernetes 1.30 to 1.31 gets stuck in a validation loop when new nodes are added to the cluster via CAS/Karpenter after kops update cluster completes #16907

Open
danports opened this issue Oct 16, 2024 · 3 comments · May be fixed by #16932
Labels
blocks-next · kind/bug · kind/office-hours

Comments

@danports
Contributor

/kind bug

1. What kops version are you running? The command kops version will display this information.

1.31.0-alpha.1

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

Upgrading from 1.30.5 to 1.31.1.

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
Update the cluster kubernetesVersion and then run:
kops update cluster
kops rolling-update cluster
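
For reference, the full apply-and-roll sequence looks roughly like this (a minimal sketch, assuming the cluster name and state store are already configured via flags or environment variables):

  # Bump spec.kubernetesVersion from 1.30.5 to 1.31.1
  kops edit cluster

  # Apply the new configuration (this updates the userdata/launch templates for the instance groups)
  kops update cluster --yes

  # Replace instances; kops validates the cluster between replacements
  kops rolling-update cluster --yes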

5. What happened after the commands executed?
The rolling update got stuck in a validation loop and eventually timed out because pods on the new worker nodes created by Karpenter after kops update cluster failed to start, as described in kubernetes/kubernetes#127316.

6. What did you expect to happen?
The rolling update should have completed without errors.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

Only relevant part here is having Karpenter enabled and then upgrading the Kubernetes version to 1.31.1.
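
For illustration, the relevant excerpt of the cluster spec would look roughly like this (versions taken from this report, all other fields omitted):

  apiVersion: kops.k8s.io/v1alpha2
  kind: Cluster
  spec:
    karpenter:
      enabled: true
    kubernetesVersion: 1.31.1   # previously 1.30.5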

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

Rolling update validation loop outputs things like this over and over:

I1016 03:06:22.203255    2989 instancegroups.go:567] Cluster did not pass validation, will retry in "30s": node "i-05f95c0b6ad6e5201" of role "node" is not ready, system-node-critical pod "calico-node-ct25f" is pending, system-node-critical pod "ebs-csi-node-sm8v6" is pending, system-node-critical pod "efs-csi-node-bmdvv" is pending.

Upon describing one of those pods:

  Warning  Failed     25m (x12 over 27m)     kubelet            Error: services have not yet been read at least once, cannot construct envvars

9. Anything else we need to know?
It should be possible to work around this issue by pausing autoscaling before kops update cluster until after kops rolling-update cluster has replaced all of the control plane nodes, or with judicious use of kops rolling-update cluster --cloudonly.
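
A rough sketch of the first workaround (the Karpenter deployment name, namespace, and replica counts are assumptions about a typical kops-managed install; on older kops releases the control-plane role may be named "master"):

  # Pause provisioning so no new nodes come up with a 1.31 kubelet yet
  kubectl -n kube-system scale deployment karpenter --replicas=0

  kops update cluster --yes

  # Replace only the control plane first
  kops rolling-update cluster --instance-group-roles=control-plane --yes

  # Resume autoscaling, then roll the remaining instance groups
  kubectl -n kube-system scale deployment karpenter --replicas=1
  kops rolling-update cluster --yes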

@k8s-ci-robot k8s-ci-robot added the kind/bug label Oct 16, 2024
@danports danports changed the title Rolling a cluster from Kubernetes 1.30 to 1.31 gets stuck in a validation loop when new new nodes are added to the cluster via CAS/Karpenter after kops update cluster completes Rolling a cluster from Kubernetes 1.30 to 1.31 gets stuck in a validation loop when new nodes are added to the cluster via CAS/Karpenter after kops update cluster completes Oct 16, 2024
@rifelpet
Member

Some options discussed in office hours:

  • Extend the kops update cluster --phase concept to conditionally apply tasks for just the control plane vs. the nodes, perhaps with an --instance-group-role flag to match the terminology in kops rolling-update cluster (see the hypothetical sketch below)
  • Implement an "uber command" that runs both update cluster --yes and rolling-update cluster --yes together, allowing the sequence of task applies to be handled internally. This could be a new flag in kops rolling-update cluster or kops upgrade cluster.
  • Add a kubernetesVersion API field to InstanceGroupSpec to allow the control plane to be upgraded independently of the nodes, even with a traditional kops update cluster --yes

We'll likely start with the first option and see how the ergonomics of the second option feel, given that it depends on the first option.

In either case we'll add upgrade instructions to the release notes for this new behavior.
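
To make the first option concrete, usage might look something like the sketch below (an --instance-group-roles flag on kops update cluster is hypothetical at this point; kops rolling-update cluster already accepts a flag of that name):

  # Hypothetical: apply cloud changes for the control plane instance groups only
  kops update cluster --instance-group-roles=control-plane --yes
  kops rolling-update cluster --instance-group-roles=control-plane --yes

  # Once the control plane is on the new version, apply and roll the nodes
  kops update cluster --instance-group-roles=node --yes
  kops rolling-update cluster --yes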

/kind blocks-next

@k8s-ci-robot
Contributor

@rifelpet: The label(s) kind/blocks-next cannot be applied, because the repository doesn't have them.


@rsafonseca
Contributor

Another option, and possibly a more correct one, would be to enforce the kubelet version skew policy: https://kubernetes.io/releases/version-skew-policy/#kubelet

As such, the userdata for instance groups shouldn't be updated until the control plane has already been rolled out to the newer version, thus ensuring that we never have nodes coming up with a kubelet version more recent than that of any control plane node.

E.g: kops update cluster would:

  • Check running control-plane versions
  • Update the instance groups' Kubernetes version only if the lowest running control plane version is >= the target version

In this situation:

  • you don't need additional command switches in kops update cluster
  • for new clusters, or those without any currently running control plane nodes, the change would be transparent
  • you simply re-run kops update cluster after your control plane is rolled out (sketched at the end of this comment)

Optionally, similar to the suggestions above, a flag for going through the whole procedure (e.g. --sync or --wait) could:

  • update the control plane IGs' spec
  • roll out the control plane
  • update the other instance groups' spec
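
Without the optional flag, that would amount to a two-pass flow along these lines (the commands already exist; gating the node userdata on the running control plane version is the proposed change):

  # Pass 1: only the control plane userdata would be bumped to the new version
  kops update cluster --yes
  kops rolling-update cluster --instance-group-roles=control-plane --yes

  # Pass 2: with the control plane rolled, re-running the update bumps the node userdata
  kops update cluster --yes
  kops rolling-update cluster --yes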

@rsafonseca rsafonseca linked a pull request Nov 4, 2024 that will close this issue