Hi all! I am experiencing a lot of overhead (2 min 20 s) when launching runs. The overhead is highlighted in the picture: it happens between the "Kubernetes run worker job created" event and "Started execution of run for '<job_name>'". I believe it is not an issue in my K8s cluster, since the job is created in the cluster almost instantaneously. Could it be related to the time it takes to load all the definitions in my user-deployed code? The question was originally asked in Dagster Slack.
Replies: 2 comments
I am the original author of this question.
Hi @gustavo-delfosim - I think this is indeed likely the time it takes to import the Python module containing your definitions, as that is the main work happening during this time period. Using py-spy is one tool that can help here: #14771. Splitting things into more code locations is another option, since a run only needs to load the code location that contains its ops or assets.
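As a quick first check before reaching for py-spy, you can time the import of your definitions module in isolation; if this alone takes minutes, the run-launch overhead is explained. This is a minimal sketch - the module name `"my_project.definitions"` in the comment is a placeholder for your own top-level module, not something Dagster prescribes:

```python
import importlib
import time

def time_import(module_name: str) -> float:
    """Return the wall-clock seconds taken to import the named module."""
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

# Replace "json" with your own definitions module,
# e.g. "my_project.definitions" (placeholder name).
elapsed = time_import("json")
print(f"import took {elapsed:.3f}s")
```

Note that a module already cached in `sys.modules` imports almost instantly, so run this in a fresh interpreter to get a realistic number. If the import is slow, py-spy (as linked above) can then show which top-level statements dominate.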