vladmalynych changed the title "Incorrect Container Memory Consumption Graph Behavior When Pod is Restarted" to "Allow Overriding or Disabling Default Grafana Dashboards in kube-prometheus-stack Helm" on Oct 17, 2024
zeritti changed the title "Allow Overriding or Disabling Default Grafana Dashboards in kube-prometheus-stack Helm" to "[kube-prometheus-stack] Allow Overriding or Disabling Default Grafana Dashboards" on Oct 17, 2024
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Is your feature request related to a problem?

Problem:
The Grafana dashboard defined in charts/kube-prometheus-stack/templates/grafana/dashboards-1.14/k8s-resources-pod.yaml includes a graph for memory consumption per pod. The memory consumption query is currently sourced from a different repository (https://github.com/prometheus-operator/kube-prometheus): https://github.com/prometheus-operator/kube-prometheus/blob/main/manifests/grafana-dashboardDefinitions.yaml#L8300
When a pod is restarted, the current query sums memory usage from both the old and the new container instance, producing temporary spikes in the displayed memory consumption. The dashboard can then show memory usage exceeding the container's memory limit, even though actual usage stays within it.
Steps to Reproduce:
1. Trigger a pod restart (e.g. an OOM kill or an eviction).
2. Compare a graph whose expression groups by just the `container` field with one whose expression groups by `container` and `id` (both queries are sketched below):
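For illustration, here is a minimal sketch of the two expressions being compared. The metric name and label selectors are assumptions modelled on the usual cAdvisor-based pod memory panel (container_memory_working_set_bytes filtered by the dashboard's $namespace/$pod variables), not copied from the linked dashboard, which remains the authoritative source:

```promql
# Sketch only -- metric and selectors are assumed, not taken from the chart.
# Current behaviour: grouping by container alone merges the old and the new
# container instance across a restart, so their samples are summed and the
# graph can briefly show usage above the memory limit.
sum(
  container_memory_working_set_bytes{namespace="$namespace", pod="$pod", container!=""}
) by (container)

# Grouping by container and id keeps each instance (distinct cgroup id) as
# its own series, so a restart no longer produces an artificial spike.
sum(
  container_memory_working_set_bytes{namespace="$namespace", pod="$pod", container!=""}
) by (container, id)
```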
Describe the solution you'd like.
Possible solution:
Would it be possible to add functionality to the Helm chart to allow the default dashboards to be overridden or patched? (A hypothetical values.yaml shape is sketched below.)
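As a purely hypothetical illustration of the request (none of these keys exist in kube-prometheus-stack today), a per-dashboard override might look something like this:

```yaml
# Hypothetical values.yaml sketch -- the defaultDashboards key and its
# subfields are invented here to illustrate the feature request; they are
# not part of the kube-prometheus-stack chart.
grafana:
  defaultDashboards:
    k8s-resources-pod:
      enabled: true
      # Replace the bundled dashboard JSON with a user-supplied version,
      # e.g. one whose memory panel groups by (container, id).
      overrideConfigMap: my-fixed-k8s-resources-pod
```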
Describe alternatives you've considered.
Alternative solutions:
Would it be possible to add functionality in Helm to disable specific dashboards?
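To my knowledge the chart currently only exposes an all-or-nothing switch for the bundled dashboards; a per-dashboard toggle (hypothetical key below) would cover this alternative:

```yaml
grafana:
  # Existing flag (to my knowledge): enables/disables all bundled dashboards at once.
  defaultDashboardsEnabled: true
  # Hypothetical addition -- a per-dashboard disable list; this key does not
  # exist in the chart and is shown only to illustrate the alternative.
  disabledDefaultDashboards:
    - k8s-resources-pod
```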
Additional context.
This issue is also raised in prometheus-operator/kube-prometheus#2522.