Given the goal of reducing costs on a per-job basis, we would like to understand the effects of limiting CPU cycles available to a build job. This process would add variance to the resource allocation algorithm.
This would take "gantry in the direction of a full genetic algorithm to optimize the resource requests of jobs to build applications in the least expensive way possible" - Alec.
This is essentially a scaling study to balance the number of cycles allocated to a build against the wall time of the job, ultimately optimizing cost. The efficiency curve is the plot of interest, where efficiency is defined as cores / build time. For example, a build that takes 10 minutes on 8 cores (80 core-minutes) but 14 minutes on 4 cores (56 core-minutes) is cheaper at 4 cores despite the longer wall time.
This would be done by choosing 10-15% of all incoming prediction requests to "fuzz," purposefully limiting the CPU resources allocated so we can understand the impact on different types of applications and on the variety of build options available in Spack.
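A minimal sketch of what that sampling could look like (the function name, fuzz rate, and scaling range are all assumptions here, not existing gantry code):

```python
import random

FUZZ_RATE = 0.12  # assumed sampling rate, within the proposed 10-15% window

def maybe_fuzz_cpu_request(predicted_cores: float) -> float:
    """Return a possibly reduced CPU request.

    A sampled fraction of prediction requests gets a deliberately lower
    CPU limit so we can observe the effect on build time.
    """
    if random.random() < FUZZ_RATE:
        # Scale the prediction down by an assumed factor of 0.5-0.9.
        return predicted_cores * random.uniform(0.5, 0.9)
    return predicted_cores
```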
This fuzzing would occur a few times for each given spec, until we can determine the optimal efficiency for the job, which would be used to define future CPU limits and the number of make jobs.
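One way to track convergence per spec, assuming we record each fuzzed build as a (cores, build_seconds) pair (names and the sample threshold are hypothetical; core-seconds is used as the cost proxy, though the cores / build-time efficiency metric above could be swapped in):

```python
from collections import defaultdict

SAMPLES_NEEDED = 5  # assumed number of fuzzed builds per spec before deciding

# spec hash -> list of (cores_allocated, build_seconds) observations
observations = defaultdict(list)

def record_build(spec_hash, cores, build_seconds):
    observations[spec_hash].append((cores, build_seconds))

def best_allocation(spec_hash):
    """Return the cheapest observed allocation for a spec, or None if we
    should keep fuzzing. Cheapest = lowest core-seconds consumed."""
    samples = observations[spec_hash]
    if len(samples) < SAMPLES_NEEDED:
        return None
    return min(samples, key=lambda s: s[0] * s[1])
```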
Once we have fuzzed for a bit, we need to figure out how to update the prediction algorithm to choose allocations based on the efficiency of resources/duration.
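That update could be as simple as preferring the cheapest fuzzed allocation once a spec has converged, falling back to the existing prediction otherwise. A self-contained sketch under those assumptions (function name, sample threshold, and the one-make-job-per-core rule are all hypothetical):

```python
def resources_for_spec(predicted_cores, fuzzed_samples, samples_needed=5):
    """Hypothetical prediction-path update: once a spec has enough fuzzed
    (cores, build_seconds) samples, reuse the cheapest observed core
    count (lowest core-seconds); otherwise keep the existing prediction."""
    if len(fuzzed_samples) >= samples_needed:
        cores, _ = min(fuzzed_samples, key=lambda s: s[0] * s[1])
    else:
        cores = predicted_cores
    # Derive make jobs from the chosen CPU limit (assumed: one job per core).
    return {"cpu_limit": cores, "make_jobs": max(1, int(cores))}

print(resources_for_spec(8, [(8, 600), (4, 840), (6, 660)], samples_needed=3))
# -> {'cpu_limit': 4, 'make_jobs': 4}  (4 cores * 840 s = 3360 core-seconds, cheapest)
```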