Would it be possible to build an Android client where you can enter the IP of the master node and start inference directly? Also, new Qualcomm SoCs have built-in NPUs; would it be possible to utilise this dedicated hardware?
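For illustration, here is a minimal sketch (in plain Python rather than Android/Kotlin, just to show the idea) of what such a client would do: take a user-entered master-node IP and fire off an inference request. The `/v1/inference` endpoint, port 8000, and JSON payload shape are invented for this example and are not part of this project's actual API.

```python
# Hypothetical client sketch: connect to a user-entered master-node IP and
# request inference. The endpoint path, port, and payload are assumptions
# for illustration, not this project's real API.
import json
import urllib.request

def run_inference(master_ip: str, prompt: str) -> str:
    url = f"http://{master_ip}:8000/v1/inference"  # hypothetical endpoint
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

if __name__ == "__main__":
    # The IP would come from a text field in the Android UI.
    print(run_inference("192.168.1.42", "Hello from a phone client"))
```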
I would also wonder about the Google Tensor chips, and Intel/AMD NPUs, for hardware acceleration.
For Intel, there is something called IPEX-LLM, which lets you use the CPU/iGPU/dGPU/NPU, as long as they are all Intel devices. Basically, you send the model to the "xpu" device and it gets processed by the designated hardware. I just don't know if it can process sliced networks.
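To make that concrete, here is a minimal sketch of the "xpu" flow described above, following IPEX-LLM's HuggingFace-style loading path. The model ID is just an example, and exact package names and arguments may differ between ipex_llm versions, so treat this as illustrative rather than definitive.

```python
# Sketch of the IPEX-LLM workflow: load a model with low-bit quantization,
# then send it to the "xpu" device so an Intel iGPU/dGPU handles inference.
# Requires ipex_llm and intel_extension_for_pytorch to be installed; the
# "xpu" device only exists when the Intel extension is available.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-chat-hf"  # example checkpoint

# load_in_4bit applies IPEX-LLM's low-bit quantization at load time
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Move the model to "xpu"; the runtime dispatches work to whichever
# Intel device backs that PyTorch device.
model = model.to("xpu")

inputs = tokenizer("What is distributed inference?", return_tensors="pt").to("xpu")
with torch.inference_mode():
    output = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note this sends the *whole* model to one device; whether the same mechanism works for a slice of a network split across nodes is exactly the open question above.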