Per the docs, customers who want to pull large OCI images (think large LLM images with gigabytes of weights) to a multi-thousand-node K8S cluster have no intuitive way to "warm up" the cluster ahead of a workload scale-up. A K8S admin interested in this scenario can of course create 1 pod and pull the image to that node (which causes the workload to begin executing on that pod) before scaling to more pods to leverage P2P.
However, a customer may want to preemptively cache OCI content across the cluster (1) without issuing image pulls per node and (2) without the pulled image running on any pod, because they want all images to start running at the same time. This isn't easily done today and requires a lot of manual K8S scripting to achieve.
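For context, the manual workaround today is typically a hand-rolled "image pre-puller" DaemonSet: an init container whose only job is to force the kubelet to pull the image onto every node, followed by a pause container so the workload entrypoint never actually runs. A rough sketch (the DaemonSet name, labels, and image reference are placeholders, not anything Peerd ships):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: image-prewarm        # placeholder name
spec:
  selector:
    matchLabels:
      app: image-prewarm
  template:
    metadata:
      labels:
        app: image-prewarm
    spec:
      initContainers:
        # Pulling this image caches it on the node; the command
        # exits immediately so the real workload never starts.
        - name: prewarm-llm-image
          image: registry.example.com/llm-model:v1   # placeholder image
          command: ["/bin/sh", "-c", "true"]
      containers:
        # Minimal long-running container so the DaemonSet pod stays Ready.
        - name: pause
          image: registry.k8s.io/pause:3.9
```

This gets the bits onto every node, but it conflates caching with scheduling a pod per image, which is exactly the awkwardness a first-class "pre-warm" experience would remove.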
If Peerd offered an intuitive CLI command or a K8S YAML experience for "pre-warming", it would facilitate community adoption.