
MULTIARCH-4252: namespace-scoped Pod Placement Config #453

Open: wants to merge 3 commits into base `main` from `ep-per-namespace`
Conversation

@aleskandro (Member) commented Feb 5, 2025

This PR provides the design proposal for the namespace-scoped Pod Placement Configuration and minor fixes to the (to be updated) initial EP.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Feb 5, 2025

openshift-ci-robot commented Feb 5, 2025

@aleskandro: This pull request references MULTIARCH-4252 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the spike to target the "4.19.0" version, but no target version was set.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@aleskandro (Member, Author) commented:

/cc @jeffdyoung @prb112

@openshift-ci openshift-ci bot requested review from jeffdyoung and prb112 February 5, 2025 09:35

aleskandro added a commit to aleskandro/multiarch-manager-operator that referenced this pull request Feb 6, 2025
aleskandro added a commit to aleskandro/multiarch-manager-operator that referenced this pull request Feb 6, 2025
@prb112 left a comment:

/lgtm
minor comment

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. and removed lgtm Indicates that a PR is ready to be merged. labels Feb 7, 2025
@prb112 left a comment:
This is awesome... minor change since last update
/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 7, 2025

openshift-ci bot commented Feb 7, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: prb112
Once this PR has been reviewed and has the lgtm label, please ask for approval from aleskandro. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Feb 7, 2025
@aleskandro aleskandro force-pushed the ep-per-namespace branch 4 times, most recently from f99229e to c2769c2 Compare February 7, 2025 16:35
| Resource | Verbs | Notes |
|----------|-------|-------|
| security.openshift.io/securitycontextconstraints | use | used in case the pods are created by a user/SA that needs some privileges |
| events | create, patch | |
| podplacementconfigs.multiarch.openshift.io | get, list, watch | |
| configmaps, secrets | get, list, watch | (future; not needed by the current plugins) if other plugins will require them to communicate with the registries |
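As an illustrative sketch of the namespaced permissions in the table above (the role name, namespace, and API group layout are assumptions, not part of the proposal; only the resources and verbs mirror the table):

```yaml
# Hypothetical namespaced Role for the pod placement controller.
# Names are illustrative; only resources/verbs come from the table above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-placement-controller   # assumed name
  namespace: example-namespace     # assumed namespace
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["multiarch.openshift.io"]
    resources: ["podplacementconfigs"]
    verbs: ["get", "list", "watch"]
```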
Contributor commented:

Will `secrets` be covered by a cluster role, since we need the global pull secret?

@Prashanth684 (Contributor) commented:
/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Feb 8, 2025
- The pod placement controller should apply the configuration defined by the namespace-scoped `PodPlacementConfig` CRD to the pods in the selected namespaces, and the cluster-scoped `ClusterPodPlacementConfig` configuration for preferred affinities should be ignored
- The pod placement controller should apply the configuration defined by the highest-priority namespace-scoped `PodPlacementConfig` CRD to pods matching multiple `PodPlacementConfig` CRDs in the same namespace
- No two PPCs with the same priority should be allowed in the same namespace
- When no priority is set in the `PodPlacementConfig`, the operator should default to the lowest priority among the existing Pod Placement Configs in the same namespace, decreased by 1
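A hedged sketch of what such a namespace-scoped CR could look like under these rules (the `plugins`/`nodeAffinityScoring` shape appears later in this discussion; the API version, `labelSelector` field name, and metadata are assumptions for illustration only):

```yaml
# Hypothetical namespace-scoped PodPlacementConfig; field names other than
# plugins.nodeAffinityScoring (quoted elsewhere in this PR) are assumptions.
apiVersion: multiarch.openshift.io/v1beta1   # assumed version
kind: PodPlacementConfig
metadata:
  name: team-a-ppc        # illustrative name
  namespace: team-a       # illustrative namespace
spec:
  priority: 10            # highest-priority PPC wins when a pod matches several
  labelSelector:          # assumed field: selects which pods this PPC applies to
    matchLabels:
      app: frontend
  plugins:
    nodeAffinityScoring:
      enabled: true
      platforms:
        - architecture: arm64
          weight: 75
```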
Contributor commented:

What will occur when a new PPC with no priority is created but the current lowest priority among the PPCs in the namespace is already 0? Will the creation be rejected?

@aleskandro (Member, Author) commented:
That's a good point @lwan-wanglin. I updated the proposal by defaulting to 0 and increasing the value for each new PPC that gets created.

In this way:

  - Newly created PPCs (if the priority field is omitted) get a higher priority (instead of a lower one)
  - The issue arises when the highest priority is 255; in that case, we'd reject the creation (adding a test case for this too)

Contributor commented:
I'm wondering if we are being too rigid with the priority. If no priority is defined, we should just set it to 0. If there are multiple PPCs with no priority, it's a random pick. If the user really cares, they will specify the priority. This way it's less deterministic when PPCs get evaluated, but assuming that a PPC created first has a higher priority might not be the best assumption to go with.

On the other hand, if a value is specified, we should take care that no other PPC has the same value - I'm fine with that.

2. Get all the pod placement configurations in the pod's namespace, sorted by descending priority
3. For each pod placement configuration:
- Check if the pod matches the label selector
- Apply the configuration to the pod.
Contributor commented:
Does the following situation exist: the image's supported architectures don't include any of the preferred architectures configured in the plugins?

@aleskandro (Member, Author) commented:

Hi @lwan-wanglin, can you clarify your point here?

Contributor commented:
What I mean is: if a pod's container uses a single-arch image (e.g., ppc64le only), but the namespace-scoped PPC sets a plugins configuration like the one below, we would still finally set the nodeAffinity with the supported arch ppc64le, right?

plugins:
  nodeAffinityScoring:
    enabled: true
    platforms:
      - architecture: amd64
        weight: 25
      - architecture: arm64
        weight: 75

Contributor commented:
The required-during-scheduling affinity will always be added by the CPPC. In this case, the preferred-during-scheduling affinity will also be added with the weights; it just won't do anything.
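To make that concrete, a sketch of the affinity the pod would end up with in this scenario (assuming the single-arch ppc64le image and the amd64/arm64 weights quoted above; the exact injected structure is an assumption, though `kubernetes.io/arch` is the standard node label):

```yaml
# Sketch: affinity a ppc64le-only image could end up with under the
# amd64/arm64 scoring config above. Illustrative, not the operator's exact output.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:   # from image inspection (always added)
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["ppc64le"]
    preferredDuringSchedulingIgnoredDuringExecution:  # from the PPC; has no effect here
      - weight: 25
        preference:
          matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["amd64"]
      - weight: 75
        preference:
          matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["arm64"]
```

Since the required term restricts scheduling to ppc64le nodes, neither preferred term can ever match the chosen node, which is why the weights "just won't do anything" here.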

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Feb 10, 2025

openshift-ci bot commented Feb 10, 2025

New changes are detected. LGTM label has been removed.


openshift-ci bot commented Feb 10, 2025

@aleskandro: all tests passed!



Labels
jira/valid-reference Indicates that this PR references a valid Jira ticket of any type.
5 participants