Swap stats are not shown as part of the metrics/resource endpoint #3834
Comments
FWIW you're on an outdated release, but I would not be surprised if some kernel info like this is not working properly; the "node containers" are a bit leaky, and I don't think SIG Node officially supports this environment. What's your use case? This will probably take a bit of debugging ...
I can try to test it on a current release if it would be valuable
Just a development environment; I was trying to work on swap metrics.
I suspect we'll see the same thing, but ... worth a shot.

Makes sense, sorry. There hasn't been a ton of demand for metrics overall, and they're not part of conformance. We have some known issues around e.g. CPU and memory reflecting the underlying host (which is then repeated for each cluster/node); it's messy, and ideally we'd need more cooperation from kubelet and/or cAdvisor to mitigate it. Maybe kubelet has relevant logs?

FWIW, swap support is a recent thing in Kubernetes. Historically Kubernetes has recommended disabling swap, and kubelet even had a hard requirement for that by default (it was possible to opt out and get a warning log instead).

EDIT: of course, @iholder101 is working on the swap support. "Development environment" is ambiguous for kind; the Kubernetes project itself is our first priority, though for some SIG Node work you might have better luck with ...
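For anyone following along, here's a rough sketch of turning kubelet swap support on in a kind cluster config. This is an assumption about the setup under test, not a recipe confirmed in this thread: the field names come from the upstream KubeletConfiguration, the host itself needs swap enabled, and whether swap then surfaces through kind's node containers is exactly what's in question here.

```bash
# Hedged sketch: a kind cluster whose kubelet opts into swap support.
cat > kind-swap.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: KubeletConfiguration
    featureGates:
      NodeSwap: true     # beta; not enabled by default in older releases
    failSwapOn: false    # don't refuse to start on a swap-enabled host
    memorySwap:
      swapBehavior: LimitedSwap
EOF
kind create cluster --config kind-swap.yaml
```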
Thanks for the answer @BenTheElder!
Thanks for the detailed explanation!
Yeah, I can use some logs, but when working on adding a swap metric, for example, I have to actually test that it's there and working as expected. As of now, I don't see a way of doing so without taking some nodes and creating a cluster out of them with something like kubeadm, which is a tiresome process.
As @dims has expressed here, it seems that the local-up cluster is in "maintenance mode" and is not really being developed anymore. He claims that kind became the de-facto development platform for SIG Node. In any case, the local-up cluster also seems to not support calling ...
I think you can "emulate" local-up by running kind with a single node using host network.
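The single-node part is trivial to sketch (the replies below conclude that the host-network part is not actually supported):

```bash
# The default kind cluster is already a single control-plane node, which is
# the closest rough analogue to local-up-cluster.sh.
kind create cluster --name swap-dev
kubectl get nodes -o wide
```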
For the record, here's what I said, @iholder101.
Right, thanks for bringing the source. For the record, you've also said: ... If so, is it true to assume that the local-up cluster is now in "maintenance mode" and that kind is now the de-facto development environment for SIG Node? Sorry if I didn't understand it correctly.
Thanks, I wasn't aware. I don't think SIG Node has an official stance as a SIG, but previously maintainers had indicated a preference for running kubelet directly on a host versus in kind, which is understandable (consider e.g. #1422 ...). I haven't been fully clear on whether kind is even considered supported for node versus ... For a lot of node development, node_e2e tests are commonly used against a single target host over SSH (there's a script for this with GCE, and I think dims worked out something with EC2?, but I don't know if there's e.g. a limactl approach yet), but I can't speak for the SIG. I would be happy to have this working correctly, and I'd consider any proposed bug fixes, but I don't personally have much spare time at the moment :(.
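For reference, the SSH-based node_e2e flow mentioned above looks roughly like the following when run from a kubernetes/kubernetes checkout. The host name and focus regex are illustrative, and the GCE/EC2-specific setup is omitted; see test/e2e_node in that repo for the real knobs.

```bash
# Hedged sketch: run node_e2e against an existing host over SSH.
make test-e2e-node REMOTE=true HOSTS="my-test-host" FOCUS="Swap"
```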
Host network won't work and isn't currently supported; I also think it would be more confusing than just using a single node, since other aspects would still be containerized. If you or someone else can dig more into what's happening here, we can consider options for a patch.
@BenTheElder: Guidelines
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Ah, sorry to hear that @BenTheElder! Wish you the best and a fast recovery!
I'll try to find the time, although TBH I'm pretty overloaded myself.
Thank you! I'm up and down; it's still really limiting my throughput, so I'm having to be more strategic about time use...
FWIW I don't think Kubernetes would adopt Vagrant today due to licensing concerns. Kubernetes had a Vagrant-based solution when I first worked on the project; it has since been removed ... I built kind as a replacement in part, but it admittedly has some trade-offs that may be unsuitable for some node work. I think a limactl solution for node_e2e would be moderately popular. For other parts of the project, it's hard to beat the speed of containers, and there are fewer issues with them. node_e2e doesn't have that problem, though, in that it's not end-user facing and doesn't even really involve clusters.
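To make the limactl idea concrete, a hypothetical starting point might look like this; the template and instance names are assumptions, not an established SIG Node workflow.

```bash
# Hedged sketch: a Lima VM as a node_e2e target instead of Vagrant.
limactl start --name=k8s-node template://ubuntu-lts
limactl shell k8s-node -- swapon --show   # check whether the guest has swap
```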
What happened:
I queried the kubelet's metrics/resource endpoint. As can be seen below, swap stats are not included in the output:
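For illustration (the metric values and timestamps here are made up, and the real output is much longer), the endpoint returns CPU and memory series but nothing matching swap:

```bash
kubectl get --raw "/api/v1/nodes/<NODE-NAME>/proxy/metrics/resource"
# node_cpu_usage_seconds_total 3577.89 1700000000000
# node_memory_working_set_bytes 1.6723968e+09 1700000000000
# ...no *swap* series anywhere in the output
```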
What you expected to happen:
Swap stats to be included in the metrics/resource endpoint output.
How to reproduce it (as minimally and precisely as possible):
kubectl get --raw "/api/v1/nodes/<NODE-NAME>/proxy/metrics/resource" | grep -i swap
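For comparison, on a node where the feature works, the grep is expected to match the metric names added by kubernetes/kubernetes#118865. The lines below are illustrative, with made-up values:

```bash
kubectl get --raw "/api/v1/nodes/<NODE-NAME>/proxy/metrics/resource" | grep -i swap
# node_swap_usage_bytes 0 1700000000000
# container_swap_usage_bytes{container="app",namespace="default",pod="example"} 0 1700000000000
# pod_swap_usage_bytes{namespace="default",pod="example"} 0 1700000000000
```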
Anything else we need to know?:
Swap stats were introduced in this PR: kubernetes/kubernetes#118865.
The PR description also shows the expected output.
Environment:
- kind version: (use `kind version`): kind v0.22.0 go1.21.7 linux/amd64
- Runtime info: (use `docker info`, `podman info` or `nerdctl info`): docker 27.2.1
- OS (e.g. from `/etc/os-release`): Fedora 39
- Kubernetes version: (use `kubectl version`): 1.32 (main)