What were you trying to accomplish?
Upgrading a cluster to 1.30 with managed node groups that set maxPodsPerNode. Presumably the same issue would occur when creating a new cluster on 1.30 with the same config.
What happened?
eksctl produces the following error:
Error: eksctl does not support configuring maxPodsPerNode EKS-managed nodes based on AmazonLinux2023
which comes from the validation here.
How to reproduce it?
Create a ClusterConfig for Kubernetes 1.30 or higher with a managed node group that sets maxPodsPerNode to some value above zero.
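For reference, a minimal ClusterConfig along these lines hits the error (the cluster name, region, and node group values below are placeholders, not taken from my real config):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: example-cluster        # placeholder
  region: us-west-2            # placeholder
  version: "1.30"

managedNodeGroups:
  - name: mng-al2023           # placeholder
    amiFamily: AmazonLinux2023
    instanceType: m5.large
    desiredCapacity: 2
    maxPodsPerNode: 58         # any value above zero triggers the validation error
```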
Versions
$ eksctl info
eksctl version: 0.194.0
kubectl version: v1.31.1
OS: linux
Further comments
Having dug through the code a little, it seems this restriction was added because AL2023 EKS-managed nodes have their user data automatically populated with a NodeConfig that sets maxPods: de74c5b#r1546826066
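For context, my understanding is that the NodeConfig EKS injects into the user data of an AL2023 managed node looks roughly like this (the field values below are illustrative placeholders, not copied from a real instance):

```yaml
# Sketch of the NodeConfig that EKS prepends to the instance user data
# for AL2023 managed nodes; all values here are placeholders.
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: example-cluster
    apiServerEndpoint: https://EXAMPLE1234567890.gr7.us-west-2.eks.amazonaws.com
    certificateAuthority: "<base64-encoded CA bundle>"
    cidr: 10.100.0.0/16
  kubelet:
    config:
      maxPods: 29              # computed by EKS from the instance type
```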
I'm slightly confused by this reasoning, however:
nodeadm supports merging multiple NodeConfigs from a set of user data (ref).
From what I can tell, EKS inserts its NodeConfig into the user data of an instance above the user data specified by the template.
Looking at the codepath for merging NodeConfigs, they get merged top-to-bottom from the MIME multi-part user data.
Therefore, I think any NodeConfig specified in the launch template's user data should get merged with, and override equivalent fields of, the auto-generated NodeConfig that EKS inserts into the instance user data, and hence that we can override maxPods.
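To make that concrete, here is a sketch (based on my reading of the merge code, not something I've verified end to end) of the NodeConfig eksctl could drop into the launch template user data; since it sits below the EKS-generated NodeConfig in the MIME multi-part document, nodeadm should merge it last, and the user-specified maxPods should win:

```yaml
# Sketch: NodeConfig placed in the launch template user data by eksctl.
# Merged top-to-bottom after the EKS-generated NodeConfig, so this maxPods
# should override the one EKS computed; cluster details are omitted because
# EKS supplies them in its own NodeConfig.
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  kubelet:
    config:
      maxPods: 58              # value from maxPodsPerNode in the ClusterConfig
```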
If the reasoning above is sound, I think this would be a reasonably straightforward fix: one could add some logic to createNodeConfig() in pkg/nodebootstrap/al2023.go to return a basic NodeConfig that sets maxPods (omitting the cluster details, as these would get set by EKS).
On the other hand, I may be missing a deeper understanding of why this restriction was added in the first place. If the maintainers believe this reasoning is sound, I'm happy to open a PR!
Hello davejbax 👋 Thank you for opening an issue in eksctl project. The team will review the issue and aim to respond within 1-5 business days. Meanwhile, please read about the Contribution and Code of Conduct guidelines here. You can find out more information about eksctl on our website