Preventing containers from being unable to be deleted #4757
Conversation
Force-pushed from 11c5aba to 60ae641.
I was unable to add integration tests for this PR without resorting to some hacky methods, but I tested whether this issue was resolved in the kubernetes-sigs/kind repository. In brief, I discovered this issue while working in the kubernetes/kubernetes repo to propagate kubelet's context to the container runtime. The issue manifested as the test job being unable to tear down after the k/k repo's e2e tests completed, because of the leaked processes. Therefore, I opened a PR in the kubernetes-sigs/kind repo to debug this issue by manually replacing the containerd/runc binaries in the CI environment. After building the code from this PR and replacing the binaries in the CI environment, the test job no longer failed to tear down due to systemd being unable to shut down, as the leaked processes were resolved. Ref: kubernetes-sigs/kind#3903 (Some job failures occurred due to the instability of the k/k repo e2e tests, but they are unrelated to this issue.)

I also conducted some manual tests targeting the scenarios where the leftover container is in the paused and stopped states.

Paused: Inject a sleep to allow us to control where the code is interrupted.

diff --git a/vendor/github.com/opencontainers/cgroups/systemd/v1.go b/vendor/github.com/opencontainers/cgroups/systemd/v1.go
index 8453e9b4..bbe3524c 100644
--- a/vendor/github.com/opencontainers/cgroups/systemd/v1.go
+++ b/vendor/github.com/opencontainers/cgroups/systemd/v1.go
@@ -6,6 +6,7 @@ import (
"path/filepath"
"strings"
"sync"
+ "time"
systemdDbus "github.com/coreos/go-systemd/v22/dbus"
"github.com/sirupsen/logrus"
@@ -361,6 +362,7 @@ func (m *LegacyManager) Set(r *cgroups.Resources) error {
}
}
setErr := setUnitProperties(m.dbus, unitName, properties...)
+ time.Sleep(time.Second * 30)
if needsThaw {
if err := m.doFreeze(cgroups.Thawed); err != nil {
logrus.Infof("thaw container after SetUnitProperties failed: %v", err)
Stopped: Inject a sleep to allow us to control where the code is interrupted.

diff --git a/libcontainer/process_linux.go b/libcontainer/process_linux.go
index 96e3ca5f..350e3660 100644
--- a/libcontainer/process_linux.go
+++ b/libcontainer/process_linux.go
@@ -613,6 +613,7 @@ func (p *initProcess) start() (retErr error) {
return fmt.Errorf("unable to apply cgroup configuration: %w", err)
}
}
+ time.Sleep(time.Second * 30)
if p.intelRdtManager != nil {
if err := p.intelRdtManager.Apply(p.pid()); err != nil {
return fmt.Errorf("unable to apply Intel RDT configuration: %w", err)
Force-pushed from 60ae641 to 29dcef9.
See also: #2575
// because the container lacks a state.json file.
// This typically occurs when higher-level
// container runtimes terminate the runc create process due to context cancellation or timeout.
_, err = p.container.updateState(nil)
Is this fixing the problem, or is there still a race if the signal is sent before we do this?
Although runc init (STAGE_PARENT/STAGE_CHILD) may exist simultaneously when runc create receives the SIGKILL signal, after runc create terminates, STAGE_PARENT/STAGE_CHILD will also terminate due to the termination of runc create:

- STAGE_PARENT: Directly relies on pipenum to communicate with runc create. When runc create terminates, pipenum is closed, causing STAGE_PARENT to fail when reading/writing to pipenum, triggering bail and termination.
- STAGE_CHILD: Relies on syncfd to synchronize with STAGE_PARENT. When STAGE_PARENT terminates, syncfd is closed, causing STAGE_CHILD to fail when reading/writing to syncfd, triggering bail and termination.

As stated here, runc init (STAGE_PARENT/STAGE_CHILD) will terminate after runc create terminates, and at this point the cgroup has not yet been created. I believe this does not lead to a race condition, nor does it cause process leaks or other resources to remain uncleaned.
@HirazawaUi thanks! So my comment was spot-on, but you didn't need to remove the assignment? For testing, I'd like to have something. It should be simple and kind of reliable. Here are some ideas, but we don't need a test if we don't find a reasonable and simple way to test this:
[…]
I believe that removing this assignment and delaying the assignment process until after the code at:
runc/libcontainer/container_linux.go, lines 893 to 895 (at a4b9868)
runc/libcontainer/container_linux.go, lines 905 to 913 (at a4b9868)
I will try testing it in the direction of Suggestion 2 (it seems the most effective). If it cannot be implemented, I will promptly provide feedback here :)
Force-pushed from 39d801e to a6ebd29.
Test case has been added. While attempting to use […]

Compared to event monitoring, this approach better aligns with the scenario we encountered and is completely asynchronous. The only downside seems to be its fragility, but I added numerous […].

@rata Do you think this test case sufficiently covers the scenarios for this PR?
ping @kolyshkin @AkihiroSuda @rata Could you take another look at this PR? Any feedback would be greatly appreciated.
(Sorry, had some pending review comments which I forgot to submit)
Also, you need a proper name/description for the second commit. Currently it just says "add integration test" which is enough in the context of this PR, but definitely not enough when looking at git history.
Force-pushed from a6ebd29 to 1606d12.
Force-pushed from 4b1b1e0 to 72e3fac.
@kolyshkin Thank you so much for your review! All the issues have been addressed. Could you take another look at this PR? I believe this PR is overall harmless: it doesn't break any of our existing behavior and simply adds handling for a new troubleshooting scenario. :)
LGTM, thanks!
# This test verifies that a container can be properly deleted
# even if the runc create process was killed with SIGKILL

requires root cgroups_v1
Why requires cgroups_v1? I think it also works for cgroup v2.
Using cgroup v2 might cause the cgroup configuration phase during runc create to complete too quickly, so the container will reach the created state directly. This could result in test failures.
While it might succeed, this approach could introduce instability in the test behavior. Therefore, I'm restricting usage to cgroups v1 exclusively for this implementation.
This could result in test failures.
Failure in which line? Maybe here?
runc/tests/integration/delete.bats, line 302 (at 0a35a54):
[[ "$output" == *"stopped"* || "$output" == *"paused"* ]]
Yes, the changes in this PR aim to ensure proper cleanup of the container if the runc create process terminates unexpectedly before the container enters the "created" state.
Therefore, our test cases should specifically verify that the container remains in either the "stopped" or "paused" state under such scenarios. Once the container transitions to the "created" state, it would fall outside the scope of what the test cases are intended to cover.
Using cgroup v2 might cause the cgroup configuration phase during runc create to complete too quickly, so the container will reach the created state directly.

So, there is still a chance to reach the stopped or paused state for cgroup v2 here?
The changes in this PR aim to ensure proper cleanup of the container if the runc create process terminates unexpectedly before the container enters the "created" state.
Therefore, our test cases should specifically verify that the container remains in either the "stopped" or "paused" state under such scenarios. Once the container transitions to the "created" state, it would fall outside the scope of what the test cases are intended to cover.
But how can we ensure your changes work in cgroup v2? If we introduce some regressions in cgroup v2 for such a scenario, there is no test to cover cgroup v2.

I think requires cgroups_v1 is used to test some functions that are only supported by cgroup v1 and not by cgroup v2. This PR influences both cgroup versions, so I think we should support both cgroup v1 and cgroup v2 here.

Maybe you can do the cgroup version check in L302?
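One possible shape for that version-dependent check, purely as an illustration of the suggestion (detecting cgroup v2 via the presence of /sys/fs/cgroup/cgroup.controllers; the set of allowed states is an assumption based on the discussion above):

# Hypothetical sketch only: relax the state assertion on cgroup v2, where
# runc create may already have reached "created" before being killed.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    # cgroup v2 (unified hierarchy)
    [[ "$output" == *"stopped"* || "$output" == *"paused"* || "$output" == *"created"* ]]
else
    # cgroup v1
    [[ "$output" == *"stopped"* || "$output" == *"paused"* ]]
fi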
So, there is still a chance to reach the stopped or paused state for cgroup v2 here?

Yes.
But how can we ensure your changes work in cgroup v2? If we introduce some regressions in cgroup v2 for such a scenario, there is no test to cover cgroup v2. I think requires cgroups_v1 is used to test some functions that are only supported by cgroup v1 and not by cgroup v2. This PR influences both cgroup versions, so I think we should support both cgroup v1 and cgroup v2 here. Maybe you can do the cgroup version check in L302?
I fully understand your concerns, but as I mentioned earlier, there's no reliable and effective way to 100% reproduce this test case in cgroupv2 environments without artificially injecting sleeps into the code.
We could modify the test at L302 to relax state validation for cgroupv2 (allowing containers in "created" state to pass), but this would render the test meaningless in the context of this PR. The core purpose of this PR is to ensure cleanup when runc create terminates before the container reaches "created" state.
Once a container enters the "created" state, its cleanup is already guaranteed by existing runc logic; that's outside the scope of what these changes aim to address.
This is merely a stopgap measure I've resorted to since I've been unable to delay the cgroupv2 set process. I'd be extremely grateful if you have any suggestions for reliably postponing cgroupv2 configuration.
but as I mentioned earlier, there's no reliable and effective way to 100% reproduce this test case in cgroupv2 environments without artificially injecting sleeps into the code.
In fact, I have run this test 1000 times for cgroup v2 and didn't hit any errors here. Another question: can you ensure a 100% correct result for this test case with cgroup v1?
BTW, please remove the last commit Merge branch main..., and use git rebase to catch up with the main branch.
In fact, I have run this test 1000 times for cgroup v2 and didn't hit any errors here. Another question: can you ensure a 100% correct result for this test case with cgroup v1?
I can't guarantee that cgroupv1 will definitely succeed, but based on my debugging experience, configuring cgroupv1 generally takes more time than cgroupv2 :)
In earlier automated test runs for this PR, there were failures. So I've modified this test case to be limited to cgroup v1.
I just tried to find the historical test execution records for this PR, but GitHub Actions doesn't seem to provide an accessible entry point for this.
BTW, please remove the last commit Merge branch main..., and use git rebase to catch up with the main branch.
Ah, this is GitHub's default behavior after clicking the 'update branch' button. I hadn't noticed the additional commit; I'll rebase it.
Signed-off-by: HirazawaUi <[email protected]>
Signed-off-by: HirazawaUi <[email protected]>
Force-pushed from 0a35a54 to 4720e5b.
The failed test job is unrelated to the current PR. Waiting for retesting...
This is a follow-up to PR #4645; I am taking over from @jianghao65536 to continue addressing this issue.

If the runc create process is terminated due to receiving a SIGKILL signal, it may lead to the runc init process leaking due to issues like cgroup freezing, and it cannot be cleaned up by runc delete/stop because the container lacks a state.json file. This typically occurs when higher-level container runtimes terminate the runc create process due to context cancellation or timeout.

In PR #4645, the Creating state was added to clean up processes in the STAGE_PARENT/STAGE_CHILD stage within the cgroup. This PR no longer adds the Creating state, for the following reasons.

Although runc init (STAGE_PARENT/STAGE_CHILD) may exist simultaneously when runc create receives the SIGKILL signal, after runc create terminates, STAGE_PARENT/STAGE_CHILD will also terminate due to the termination of runc create:

- STAGE_PARENT: Directly relies on pipenum to communicate with runc create. When runc create terminates, pipenum is closed, causing STAGE_PARENT to fail when reading/writing to pipenum, triggering bail and termination.
- STAGE_CHILD: Relies on syncfd to synchronize with STAGE_PARENT. When STAGE_PARENT terminates, syncfd is closed, causing STAGE_CHILD to fail when reading/writing to syncfd, triggering bail and termination.

If the runc create process is terminated during execution, the container may be in one of the following states:

- Paused: If runc create receives the SIGKILL signal during the process of setting the cgroup, the container will be in a paused state. At this point, the runc init process is frozen and cannot be killed. However, pausedState.destroy will thaw the cgroup and terminate the runc init process (see the sketch after this list).
- Stopped: If runc create is terminated during the STAGE_PARENT -> STAGE_CHILD phase, the container will be in a stopped state. As described above, STAGE_PARENT/STAGE_CHILD will terminate due to the termination of runc create, so no processes will be left behind. We only need to clean up the remaining cgroup files, and stoppedState.destroy will handle this cleanup.

Therefore, based on the above reasons, the existing paused and stopped states are sufficient to handle the abnormal termination of runc create due to a SIGKILL signal.
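As a rough illustration of what the paused-state cleanup amounts to on cgroup v1 (the cgroup path is illustrative; pausedState.destroy performs the equivalent steps internally):

# Illustrative only: manually cleaning up a leftover frozen container on cgroup v1.
cat /sys/fs/cgroup/freezer/leftover-test/freezer.state             # -> FROZEN
echo THAWED > /sys/fs/cgroup/freezer/leftover-test/freezer.state   # thaw the cgroup
kill -9 $(cat /sys/fs/cgroup/freezer/leftover-test/cgroup.procs)   # the leftover runc init can now be killed
rmdir /sys/fs/cgroup/freezer/leftover-test                         # remove the now-empty cgroup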