CAPI logs filled with error messages if no machine deployments/pools exist and ControlPlane does not implement v1beta2 conditions #11820
Labels
area/conditions: Issues or PRs related to conditions.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.
What steps did you take and what happened?
Clusters that do not use machine deployments or machine pools (for example, a cluster whose machines are defined manually) cause the capi-controller-manager to endlessly write error logs whenever the cluster status is updated. The capi-controller-manager shows the following logs:
The requisite code can be found here:
We are able to work around this by setting the conditions to False so that they are present.
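For illustration, here is a minimal sketch of that kind of workaround in a control plane controller. The package name, helper name, condition types, and reason string are assumptions made up for the example, not part of the Cluster API contract; the only point is to pre-populate the conditions the Cluster controller looks for.

```go
// Workaround sketch: stamp placeholder conditions (Status=False) onto the
// control plane object's status so they exist for aggregation, even though
// the provider does not implement v1beta2 conditions yet.
package controlplane

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setPlaceholderConditions adds each requested condition type to the given
// condition list with Status=False if it is not already present.
func setPlaceholderConditions(conditions *[]metav1.Condition, types ...string) {
	for _, t := range types {
		meta.SetStatusCondition(conditions, metav1.Condition{
			Type:    t,
			Status:  metav1.ConditionFalse,
			Reason:  "NotReported", // hypothetical reason string
			Message: "placeholder; provider does not implement v1beta2 conditions",
		})
	}
}
```

In our controller this runs before the control plane's status is written, passing the condition types the Cluster controller complains about.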
What did you expect to happen?
My expectation is that during the v1beta1 -> v1beta2 migration, the status is set to Unknown if the aggregate conditions are not present.
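As an illustration of that expectation, here is a hedged sketch of how a missing source condition could be surfaced as Unknown instead of producing an error log. The helper name and reason string are invented for the example and are not existing Cluster API code.

```go
// Sketch of the expected fallback: copy the condition from the source object
// if it exists, otherwise record it as Unknown on the aggregate.
package status

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultToUnknown copies conditionType from src when present; otherwise it
// sets the condition on dst with Status=Unknown rather than reporting an error.
func defaultToUnknown(dst *[]metav1.Condition, src []metav1.Condition, conditionType string) {
	if c := meta.FindStatusCondition(src, conditionType); c != nil {
		meta.SetStatusCondition(dst, *c)
		return
	}
	meta.SetStatusCondition(dst, metav1.Condition{
		Type:    conditionType,
		Status:  metav1.ConditionUnknown,
		Reason:  "ConditionNotReported", // hypothetical reason string
		Message: "source object does not report this condition (v1beta2 conditions not implemented)",
	})
}
```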
Cluster API version
v1.9.4
Kubernetes version
v1.30.2
Anything else you would like to add?
We implement a bring-your-own-host style of provisioning (not related to the byoh provider) in which users can register nodes freely, leaving lifecycle management to the user. Although this is a less common provisioning model, I imagine the issue could also affect clusters whose control plane providers have not yet been updated and which provision machines via manual definition.
Label(s) to be applied
/kind bug
/area conditions