Is there a way to estimate TF Serving memory usage before serving a model? We would like to take action based on this information, e.g. undeploying some models to stay under a memory limit. Also, can we get this information both before and after model warmup? Model warmup will increase memory usage, since TF Serving initializes internal data structures lazily.
Could you try the memory profiling tool to see if it helps?
You could also take a look at get_memory_info, which reports the current and peak memory that TensorFlow is actually using. If you are more interested in monitoring GPU usage, there is already an ongoing feature request, #1407. Please close this issue and follow (and +1) that thread for updates.
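As a hedged sketch of the `get_memory_info` suggestion above (assuming TensorFlow 2.x; the helper name, device string, and before/after-warmup comparison are illustrative, not from this thread):

```python
def query_device_memory(device="GPU:0"):
    """Return (current_bytes, peak_bytes) reported by TensorFlow for
    `device`, or None when TensorFlow or the device is unavailable
    (e.g. on CPU-only hosts, where get_memory_info raises)."""
    try:
        import tensorflow as tf
        info = tf.config.experimental.get_memory_info(device)
        return info["current"], info["peak"]
    except Exception:
        # Broad catch is deliberate in this sketch: missing TF install,
        # unknown device name, and CPU devices all land here.
        return None

# Sample usage: snapshot memory before and after warmup requests.
before = query_device_memory()
# ... send warmup requests to the served model here ...
after = query_device_memory()
if before is not None and after is not None:
    print(f"warmup grew peak usage by {after[1] - before[1]} bytes")
```

Note that `get_memory_info` only covers memory TensorFlow itself tracks per device; it will not account for everything a TF Serving process allocates.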
Thank you!