WIP: MULTIARCH-4974: Cluster wide architecture weighted affinity #452
Conversation
@AnnaZivkovic: This pull request references MULTIARCH-4974, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
@Prashanth684 and I were also discussing providing two merging strategies through an additional field on the nodeAffinityScoring plugin.
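For illustration only, a sketch of the shape such a field could take; the mergeStrategy field name and its values here are assumptions for discussion, not the actual API:

```yaml
spec:
  plugins:
    nodeAffinityScoring:
      enabled: true
      # hypothetical field: controls how cluster-wide arch terms are
      # combined with a pod's pre-existing preferred scheduling terms
      mergeStrategy: Normalize   # e.g. Append | Normalize
      platforms:
        - architecture: arm64
          weight: 100
```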
+1, nice write-up @aleskandro. We should definitely get this recorded in an EP and define the two strategies clearly. In the future there might also be room to add more strategies, e.g. one where the user provides an input variable or even a normalization function to influence the scheduling.
Wait, I'm confused. Wasn't the whole reason we thought about the merging strategy that nodes with SSDs should be preferred over nodes without SSDs? Or is it purely based on the higher number?
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/. So the scheduler will prefer the node with the highest score. We could just append to this list, but we risk unbalancing any predefined user rules.
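A minimal sketch of that risk, with hypothetical weights: once a high arch weight is appended to preferredDuringSchedulingIgnoredDuringExecution, the user's low-weight preference barely influences scoring anymore:

```yaml
preferredDuringSchedulingIgnoredDuringExecution:
  # user's original preference
  - weight: 10
    preference:
      matchExpressions:
        - key: disktype
          operator: In
          values:
            - ssd
  # naively appended cluster-wide arch term now dominates the score
  - weight: 100
    preference:
      matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
            - arm64
```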
@AnnaZivkovic: The following tests failed; say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Implementing cluster-wide weights using PreferredDuringSchedulingIgnoredDuringExecution, which stores its items as a list. When predefined PreferredSchedulingTerms already exist, we must preserve their weights and avoid unbalancing them with the new arch weights. To do so, we can normalize the existing weights using the arch weights:
new_weight = 100 * old_weight / sum(arch_weights)
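As a quick worked instance, assuming the hypothetical arch weights used in the example below (amd64 = 100, arm64 = 50, so the sum is 150), a pre-existing term of weight 80 becomes:

```
new_weight = 100 * 80 / 150 ≈ 53
```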
For example, given the following user-defined arch weights:
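A sketch of what this could look like in the operator's ClusterPodPlacementConfig, assuming the nodeAffinityScoring plugin accepts architecture/weight pairs; the exact field names and weight values here are assumptions based on this PR's context:

```yaml
apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
spec:
  plugins:
    nodeAffinityScoring:
      enabled: true
      platforms:
        # hypothetical weights; sum(arch_weights) = 150
        - architecture: amd64
          weight: 100
        - architecture: arm64
          weight: 50
```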
The pod YAML would look like the following if there was a pre-existing rule:
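A minimal sketch with hypothetical pod and label values: the user's pre-existing disktype=ssd term is normalized from weight 80 to 100 * 80 / 150 ≈ 53, and the cluster-wide arch terms are appended after it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: quay.io/example/app:latest
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        # pre-existing user rule, weight normalized: 100 * 80 / 150 ≈ 53
        - weight: 53
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
        # appended cluster-wide architecture weights
        - weight: 100
          preference:
            matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
        - weight: 50
          preference:
            matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - arm64
```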