"Failed to start ContainerManager" err="failed to get rootfs info: failed to get mount point for device..." #3839
Comments
I think we need to know a little more about your environment. Can you include the output from docker info?
Not familiar with this one, but using a more common partition type, e.g. ext4, will probably fix it.
docker info:
My rootfs is on this, and kind wants to know about the rootfs, or no?
I did this, or maybe without the --retain, but it does not matter, as the message I posted initially shows up repeatedly in kubelet.log. Attaching the whole file - I will provide any of the other logs as well; I just do not want to flood it here with useless data, so please guide me.
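(For context, the usual flow for collecting these logs is something like the following - a sketch, assuming the config file path mentioned later in this thread:)

# keep the failed node around so its logs can be collected
kind create cluster --retain --config ~/.kind/cluster.yaml
# dump node logs, including kubelet.log
kind export logs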
kubelet is looking for stats, but from its POV the "rootfs" will be whatever the storage for the "node" container is on. The logs from kubelet don't make sense in this context because it expects to be running directly on a "real" host (machine, VM), not in a container (which is not technically supported upstream). So the rootfs in this case would be whatever filesystem docker's data root is on, along with your volumes and containers. This code is not in kind, and the filesystem stats need to work inside the container.
https://docs.docker.com/engine/daemon/#daemon-data-directory |
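(To check which filesystem that is, something like this should do on the host - a rough sketch; DockerRootDir is docker's data root, and df -T prints the filesystem type:)

# path of docker's data root (images, containers, volumes)
docker info --format '{{.DockerRootDir}}'
# filesystem type backing that path (e.g. ext4, btrfs, f2fs)
df -T "$(docker info --format '{{.DockerRootDir}}')"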
In theory we'd like kind to work with all of these, but in practice the container ecosystem is best tested with ext4, possibly a few others, but definitely not all filesystems (and most of the relevant code is not in kind). In the future, instead of cadvisor, the stats may be in kubelet and CRI (containerd here). See also: https://github.com/kubernetes-sigs/kind/pull/1464/files (not sure if this sort of thing is relevant for f2fs)
Thanks for the pointers, I'll hopefully look at it more closely soon. I appreciate the info; some more pressing things just came up.
I've checked the code. Not sure how the function in question works, though.
Yes, we make no attempt to support F2FS specifically (and I'm not sure what is necessary for it), but you could try manually configuring the equivalent /dev/mapper mount, on the off chance we have the same problem here: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
I'm getting the error as described in the known issues (https://kind.sigs.k8s.io/docs/user/known-issues/), but creating and using the cluster config file did not change anything:
Jan 05 23:26:32 kind-control-plane kubelet[1763]: E0105 23:26:32.106420 1763 kubelet.go:1649] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get mount point for device "/dev/nvme0n1p2": no partition info for device "/dev/nvme0n1p2""
Jan 05 23:26:32 kind-control-plane systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
My cluster yaml looks this way; the partition has the F2FS file system:
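(A config along the lines of the known-issues workaround - the /dev/mapper extra mount - would look roughly like this; whether that mount helps for F2FS is an open question:)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /dev/mapper
    containerPath: /dev/mapper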
(starting it with kind create cluster --config ~/.kind/cluster.yaml)
kind version:
kind v0.26.0 go1.23.4 linux/amd64
docker version:
Is there something else I should check or another workaround?
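(One way to check what the node actually sees - a sketch, assuming the default node name kind-control-plane and that findmnt is present in the node image:)

# root mount as seen from inside the kind node (field 5 of mountinfo is the mount point)
docker exec kind-control-plane awk '$5 == "/"' /proc/self/mountinfo
# or, more readable:
docker exec kind-control-plane findmnt -no SOURCE,FSTYPE /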