The current blocker for multi-GPU support is that the model is never mapped to any other available device; it is always placed on cuda:0. The model-loading code likely hardcodes cuda:0 whenever CUDA is available, so that setting needs to change before multiple GPUs can be supported.
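A minimal sketch of the kind of change this implies, assuming a PyTorch-style checkpoint; `load_model` and `checkpoint_path` are illustrative names, not the actual geo-inference API:

```python
import torch

# Hypothetical sketch: accept a device argument instead of hardcoding
# "cuda:0", so the caller decides which GPU the model lives on.
def load_model(checkpoint_path: str, device: str = "cuda:0") -> torch.nn.Module:
    # map_location remaps tensors that were saved on cuda:0 onto the
    # requested device (e.g. "cuda:1"); use torch.jit.load instead if
    # the checkpoint is a TorchScript model.
    model = torch.load(checkpoint_path, map_location=device)
    return model.to(device).eval()
```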
Currently, geo-inference only supports a single GPU. I want to add support for multiple GPUs to increase inference speed, e.g. by sharding the work across devices as in the sketch below.
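One possible approach, sketched under the assumption that inference runs over a list of independent tiles: replicate the model on each visible GPU and shard the tiles round-robin across worker processes. `run_inference`, `worker`, and `tiles` are hypothetical names, not part of geo-inference:

```python
import torch
import torch.multiprocessing as mp

def worker(rank: int, world_size: int, checkpoint_path: str, tiles: list) -> None:
    device = f"cuda:{rank}"
    model = torch.load(checkpoint_path, map_location=device)
    model.to(device).eval()
    with torch.no_grad():
        # Round-robin shard: worker `rank` handles tiles rank, rank + world_size, ...
        for tile in tiles[rank::world_size]:
            _ = model(tile.to(device))

def run_inference(checkpoint_path: str, tiles: list) -> None:
    world_size = torch.cuda.device_count()
    # One process per GPU; mp.spawn passes the rank as the first argument.
    mp.spawn(worker, args=(world_size, checkpoint_path, tiles), nprocs=world_size)
```

For batch-oriented models, wrapping the model in `torch.nn.DataParallel` would be a simpler alternative, though one process per device usually scales better for inference.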