Hey, I've got all of your examples running and they're working well. It seems that when I add a model to the model string it gets downloaded, but what is the process for using a local directory of MLX models that I've already downloaded? Also, I know the first run of an example takes longer because it is downloading the model; after that, are you loading the model from disk? I ask because, if I want quick access to 3 or 4 models, is there a way of 'preloading' them, like you might with LM Studio or Ollama? Or is that not necessary with your tool?
Thanks!
This project uses the same approach as the mlx-examples project: both load and manage models through Hugging Face. Generally speaking, you can find the downloaded model files in the directory /Users/user_name/.cache/huggingface/hub/.
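If you already have the weights on disk, a local directory can usually be passed in place of the repo id. Here is a minimal sketch, assuming the project loads models via mlx-lm the way mlx-examples does; the local path below is only a placeholder:

```python
# A minimal sketch, assuming models are loaded via mlx-lm (as in mlx-examples).
# The path is hypothetical; point it at a folder that already contains the
# MLX weights (config.json, *.safetensors, tokenizer files).
from mlx_lm import load

model, tokenizer = load("/path/to/local/phi-4-4bit")
```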
Additionally, you can use the huggingface-cli command-line tool to pre-download models, for example: huggingface-cli download mlx-community/phi-4-4bit.
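You can also pre-fetch several models programmatically, which does roughly the same thing as the CLI command. A sketch using huggingface_hub; the repo ids below are just example mlx-community models:

```python
# Sketch: pre-fetch a few MLX models into the Hugging Face cache
# (~/.cache/huggingface/hub/) so the first request doesn't block on a download.
from huggingface_hub import snapshot_download

# Example repo ids; substitute the models you actually plan to use.
for repo_id in [
    "mlx-community/phi-4-4bit",
    "mlx-community/Llama-3.2-3B-Instruct-4bit",
]:
    snapshot_download(repo_id=repo_id)
```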