Eliminate latency phase from density test #1311
Nothing interesting happened in either the test-infra or the perf-tests repo at that time...
Interesting. Looks like we had a similar drop in load test pod startup, but it got back to normal after a few runs and it doesn't coincide in time at all. Strange...
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
I checked https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1335992827380764672/, where we see 57s latency for stateless pods only. Sample slow pod:
Timeline:
So it looks like most of the time was spent waiting on kube-scheduler, but the latency on the kubelet side was also higher than expected (~5s). In the kube-scheduler logs I see a block of daemonset scheduling events right before small-deployment-63-677d88d774-xgj2d was created.
So between 18:05:03 and 18:06:04 kube-scheduler was scheduling only daemonsets. It looks like, because replicaset-controller and daemonset-controller have separate rate limiters for API calls, together they can generate pods faster (up to ~200 qps) than kube-scheduler is able to schedule them (~100 qps), and they managed to build up an O(minute) backlog of work that slowed down the binding of "normal" pods.
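To make the arithmetic concrete, here is a minimal sketch of the backlog build-up under the rates mentioned above (the ~200 qps / ~100 qps numbers and the one-minute burst length come from this thread; everything else is illustrative):

```go
package main

import "fmt"

func main() {
	// Rates discussed above: replicaset-controller and daemonset-controller
	// each have their own API rate limiter, so together they can create pods
	// at roughly 200 qps, while kube-scheduler binds at roughly 100 qps.
	const createQPS = 200.0
	const scheduleQPS = 100.0

	// If the controllers create pods at full speed for a one-minute burst...
	const burstSeconds = 60.0
	backlog := (createQPS - scheduleQPS) * burstSeconds

	// ...the scheduler needs this long just to drain the excess, which is the
	// O(minute) extra delay observed for "normal" pods queued behind the
	// daemonset pods.
	drainSeconds := backlog / scheduleQPS

	fmt.Printf("backlog after %.0fs burst: %.0f pods\n", burstSeconds, backlog)
	fmt.Printf("time to drain backlog: %.0fs\n", drainSeconds)
}
```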
This is a great finding. @mborsz - once kubernetes/kubernetes#97798 is debugged and fixed, WDYT about this?
Another important bit of information here is that the daemonset in the load test has its own priorityClass (https://github.com/kubernetes/perf-tests/blob/6aa08f8817fd347b3ccf4d18d29260ce2f57a0a1/clusterloader2/testing/load/daemonset-priorityclass.yaml). This is why the daemonset pods are starving other pods during the scheduling phase (a toy sketch of this effect follows below). I believe the main issue here is that one of the load test assumptions is that it should create pods with the given throughput (set via ...).

I discussed this with Wojtek, and I believe we both agree that it might make sense to move the daemonset operations to a separate CL2 Step. Because steps are executed serially, this will stop creating/updating daemonsets in parallel with other pods. This might make the load test minimally slower, but it will definitely help with the issue that Maciek found.

AIs:
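As a toy illustration of the starvation effect mentioned above (this is not the real kube-scheduler code; the pod names and priority values are made up), a priority-ordered queue pops every higher-priority daemonset pod before any zero-priority deployment pod created in the same window:

```go
package main

import (
	"fmt"
	"sort"
)

// queuedPod is a toy stand-in for an entry in the scheduler's active queue.
type queuedPod struct {
	name     string
	priority int32 // value taken from the pod's priorityClass
}

func main() {
	// Hypothetical queue contents: a burst of daemonset pods using a dedicated
	// priorityClass, interleaved with "normal" deployment pods created at the
	// same time by the load test.
	queue := []queuedPod{
		{"small-deployment-1-aaaaa", 0},
		{"daemonset-node-001", 1000},
		{"small-deployment-2-bbbbb", 0},
		{"daemonset-node-002", 1000},
		{"daemonset-node-003", 1000},
	}

	// The active queue is (roughly) ordered by priority, so every daemonset
	// pod is picked before any of the zero-priority deployment pods.
	sort.SliceStable(queue, func(i, j int) bool {
		return queue[i].priority > queue[j].priority
	})

	for i, p := range queue {
		fmt.Printf("scheduled #%d: %-26s (priority %d)\n", i+1, p.name, p.priority)
	}
}
```

Moving the daemonset operations into their own CL2 step avoids this interleaving entirely, since steps run one after another.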
/good-first-issue
@mm4tt: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
OK - so the above moved us a long way (in fact we got down to 5s for the 100th percentile, based on the last two runs of the 5k-node test). That said, we're still high on the main "pod startup latency". I took a quick look at the last run: I looked at the O(10) pods with the largest startup_time, and almost none of them had any other pods starting on the same node at the same time. So the problem doesn't seem to be scheduling-related anymore. Picking one of the pods, I'm seeing the following in the kubelet logs:
It took longer than expected later too, but that may partially be a consequence of some backoffs or similar. Looking into other pods, e.g. medium-deployment-2-c99c86f6b-ch699, shows pretty much the same situation. I was initially finding pods scheduled around the same time, but it's not limited to a single point in time.
So it seems that latency in the node-authorizer is the biggest bottleneck for pod startup. I will try to look into it a bit.
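For reference, a self-contained sketch of the kind of analysis described above: rank pods by startup latency (pod creation to running) and inspect the slowest ones. The pod names and timestamps here are purely illustrative, not taken from the actual run:

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// podTimes holds the two timestamps needed to compute startup latency.
type podTimes struct {
	name      string
	createdAt time.Time // pod object creation time
	runningAt time.Time // time at which the pod reported Running
}

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Illustrative data only.
	pods := []podTimes{
		{"example-pod-a", mustParse("2021-01-01T18:05:03Z"), mustParse("2021-01-01T18:06:00Z")},
		{"example-pod-b", mustParse("2021-01-01T18:05:10Z"), mustParse("2021-01-01T18:05:25Z")},
		{"example-pod-c", mustParse("2021-01-01T18:05:12Z"), mustParse("2021-01-01T18:05:17Z")},
	}

	// Sort by startup latency, descending, and look at the worst offenders.
	sort.Slice(pods, func(i, j int) bool {
		return pods[i].runningAt.Sub(pods[i].createdAt) > pods[j].runningAt.Sub(pods[j].createdAt)
	})
	for _, p := range pods {
		fmt.Printf("%-15s %v\n", p.name, p.runningAt.Sub(p.createdAt))
	}
}
```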
With the PR decreasing the indexing threshold in the node-authorizer, we no longer seem to see those delays, but it didn't really move the needle...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle rotten or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages issues according to its lifecycle rules. You can reopen this issue with /reopen or mark it as fresh with /remove-lifecycle rotten. Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
@wojtek-t: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle rotten
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
PodStartup from SaturationPhase significantly dropped recently:
This seems to be the diff between those two runs:
kubernetes/kubernetes@bded41a...1700acb
We should understand why that happened, because if this is expected (and not a bug somewhere), it would potentially allow us to achieve our long-standing goal of getting rid of the latency phase from the density test completely.
@mm4tt @mborsz - FYI