
OCPBUGS-43896: add revert logic to OCL path in MCD #4825

Open
wants to merge 1 commit into master from zzlotnik/OCPBUGS-43896

Conversation

@cheesesashimi (Member) commented Jan 30, 2025

- What I did

If a problem occurred while attempting to reboot a given node with OCL enabled, the degradation message did not indicate what the problem was. Instead, it indicated a problem with rpm-ostree, because the MCD would attempt to reapply the same image, which rpm-ostree refuses to do. Additionally, the MCDRebootError metric was not being surfaced in certain cases (although I did not actually observe this).

To remedy this, I did the following:

  • Refactored the onClusterBuildUpdate() function in the MCD to include more state restorations in the event of an error. Additionally, this function now uses the applyOSChanges() method, which brings benefits such as additional eventing and metrics.
  • Refactored the finalizeOCLRevert() function to do more state restorations.
  • Modified the machineConfigDiff object to include a boolean oclEnabled field so that downstream code can tell whether OCL drove the update. This includes a new constructor just for OCL cases (see the sketch after this list).
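
As a rough illustration of the last two bullets, here is a minimal Go sketch of the idea. All names and signatures here are simplified placeholders for illustration, not the actual MCO types or constructors:

```go
package main

import "fmt"

// Hypothetical, simplified stand-in for the diff object: it carries an
// oclEnabled flag so later code paths (such as the revert path) can tell
// whether OCL drove the update.
type machineConfigDiff struct {
	osUpdate   bool
	oclEnabled bool
}

// newMachineConfigDiff stands in for the existing constructor (the real one
// takes MachineConfig objects; a bool is used here only to keep the sketch small).
func newMachineConfigDiff(osUpdate bool) *machineConfigDiff {
	return &machineConfigDiff{osUpdate: osUpdate}
}

// newMachineConfigDiffForOCL mirrors the idea of a dedicated OCL constructor:
// it computes the same diff, but marks it as OCL-driven.
func newMachineConfigDiffForOCL(osUpdate bool) *machineConfigDiff {
	d := newMachineConfigDiff(osUpdate)
	d.oclEnabled = true
	return d
}

func main() {
	d := newMachineConfigDiffForOCL(true)
	fmt.Printf("osUpdate=%v oclEnabled=%v\n", d.osUpdate, d.oclEnabled)
}
```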

- How to verify it

  1. Bring up a cluster.
  2. Enable OCL.
  3. Break reboots on one of the worker nodes by doing something like: $ oc debug node/<node-name> -- chroot /host sh -c "mount -o remount,rw /usr; mv /usr/bin/systemd-run /usr/bin/systemd-run2"
  4. Move that worker node into the layered pool.
  5. Wait for the MCDRebootError alert to appear in the Alerts / Metrics section of the web console. There should also be a message on the node object indicating the cause of the failure.
  6. Fix the reboot process by doing something like: $ oc debug node/<node-name> -- chroot /host sh -c "mount -o remount,rw /usr; mv /usr/bin/systemd-run2 /usr/bin/systemd-run".
  7. The node should eventually reboot into the correct configuration / image. When that happens, the degradation message should be cleared from the node object, and the MCDRebootError alert / metric should clear.

- Description for the changelog
Make MCDRebootErrors clearer for OCL

openshift-ci bot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) on Jan 30, 2025
openshift-ci bot (Contributor) commented Jan 30, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

cheesesashimi changed the title from "add revert logic to OCL path in MCD" to "OCPBUGS-43896: add revert logic to OCL path in MCD" on Jan 30, 2025
openshift-ci-robot added the jira/severity-moderate (Referenced Jira bug's severity is moderate for the branch this PR is targeting.) and jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type.) labels on Jan 30, 2025
@openshift-ci-robot (Contributor) commented:
@cheesesashimi: This pull request references Jira Issue OCPBUGS-43896, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

The bug has been updated to refer to the pull request using the external bug tracker.


openshift-ci-robot added the jira/valid-bug label (Indicates that a referenced Jira bug is valid for the branch this PR is targeting.) on Jan 30, 2025
cheesesashimi marked this pull request as ready for review January 30, 2025 22:34
openshift-ci bot requested a review from sergiordlr January 30, 2025 22:34
openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) and removed the do-not-merge/work-in-progress label on Jan 30, 2025
@openshift-ci-robot (Contributor) commented:
@cheesesashimi: This pull request references Jira Issue OCPBUGS-43896, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.0) matches configured target version for branch (4.19.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr


cheesesashimi force-pushed the zzlotnik/OCPBUGS-43896 branch from 9fded41 to c93fcf6 on January 30, 2025 22:47
@sergiordlr commented:
Hello! After breaking the reboot process, we get this error instead of one reporting the broken reboot:

  - lastTransitionTime: "2025-02-04T13:07:36Z"
    message: 'Node ip-10-0-15-238.us-east-2.compute.internal is reporting: "error
      rolling back changes to OS: [failed to update OS to quay.io/mcoqe/layering@sha256:b18a32516acf4b897c0bf6e97a4ad43d483445c75f9d438f71f9eb215535c96a:
      error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/mcoqe/layering@sha256:b18a32516acf4b897c0bf6e97a4ad43d483445c75f9d438f71f9eb215535c96a:
      error: Old and new refs are equal: ostree-unverified-registry:quay.io/mcoqe/layering@sha256:b18a32516acf4b897c0bf6e97a4ad43d483445c75f9d438f71f9eb215535c96a\n:
      exit status 1, reboot command failed, something is seriously wrong]"'

cheesesashimi force-pushed the zzlotnik/OCPBUGS-43896 branch from c93fcf6 to 15a2123 on February 4, 2025 18:45
@cheesesashimi (Member, Author) commented:
@sergiordlr Just pushed a fix for that.

This was needed because, if there was a problem rebooting, the
MCDRebootError would never be surfaced and the degraded message would
not include the underlying reboot failure. This change fixes that by
adding revert code and also makes the OCL update path more similar to
the non-OCL update path.
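
For readers following along, here is a minimal Go sketch of what "adding revert code" means in this context. The types, function bodies, and image names are illustrative placeholders rather than the actual MCD implementation; the point is that a reboot failure triggers a rollback and is propagated as the surfaced error, instead of being masked by a same-image rpm-ostree rebase:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-in for the daemon's OS-update machinery; not real MCD code.
type updater struct {
	bootedImage string
}

// applyImage pretends to rebase the node to the given image. Re-applying the
// image that is already booted is what produced the confusing
// "Old and new refs are equal" rpm-ostree error in the bug report, so the
// sketch simply treats that case as a no-op.
func (u *updater) applyImage(image string) error {
	if image == u.bootedImage {
		return nil // nothing to do; avoid the same-image rebase
	}
	u.bootedImage = image
	return nil
}

// reboot stands in for the real reboot step, which is broken in the
// reproduction steps above.
func (u *updater) reboot() error {
	return errors.New("reboot command failed")
}

// onClusterBuildUpdate sketches the flow: apply the new image, then reboot.
// If the reboot fails, revert to the old image and return the reboot error
// itself, so the degraded message and MCDRebootError reflect the real cause.
func (u *updater) onClusterBuildUpdate(oldImage, newImage string) error {
	if err := u.applyImage(newImage); err != nil {
		return fmt.Errorf("failed to apply %s: %w", newImage, err)
	}
	if err := u.reboot(); err != nil {
		if revertErr := u.applyImage(oldImage); revertErr != nil {
			return fmt.Errorf("reboot failed: %w; revert to %s also failed: %v", err, oldImage, revertErr)
		}
		return fmt.Errorf("reboot failed: %w", err)
	}
	return nil
}

func main() {
	u := &updater{bootedImage: "registry.example/os@sha256:old"}
	fmt.Println(u.onClusterBuildUpdate(u.bootedImage, "registry.example/os@sha256:new"))
}
```
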
cheesesashimi force-pushed the zzlotnik/OCPBUGS-43896 branch from 15a2123 to 88a85e2 on February 4, 2025 21:04
@cheesesashimi (Member, Author) commented:
/test e2e-gcp-op-ocl e2e-hypershift

@sergiordlr commented:
Verified using IPI on AWS

Test case "[sig-mco] MCO alerts Author:sregidor-NonHyperShiftHOST-NonPreRelease-Longduration-Medium-63865-[P1] MCDRebootError alert[Disruptive] [Serial]" passed with and without OCL

/label qe-approved

openshift-ci bot added the qe-approved label (Signifies that QE has signed off on this PR) on Feb 5, 2025
@cheesesashimi (Member, Author) commented:
/test e2e-hypershift e2e-azure-ovn-upgrade-out-of-change

@dkhater-redhat (Contributor) commented:
/lgtm

openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) on Feb 6, 2025
openshift-ci bot (Contributor) commented Feb 6, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cheesesashimi, dkhater-redhat

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [cheesesashimi,dkhater-redhat]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@umohnani8 (Contributor) commented:
LGTM

openshift-ci bot (Contributor) commented Feb 6, 2025

@cheesesashimi: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-azure-ovn-upgrade-out-of-change | 88a85e2 | link | false | /test e2e-azure-ovn-upgrade-out-of-change
ci/prow/e2e-hypershift | 88a85e2 | link | true | /test e2e-hypershift

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • jira/severity-moderate: Referenced Jira bug's severity is moderate for the branch this PR is targeting.
  • jira/valid-bug: Indicates that a referenced Jira bug is valid for the branch this PR is targeting.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • lgtm: Indicates that a PR is ready to be merged.
  • qe-approved: Signifies that QE has signed off on this PR