
Great Tool. one question. #20

Open
JacobGoldenArt opened this issue Jan 27, 2025 · 1 comment
Labels
question Further information is requested

Comments

@JacobGoldenArt

Hey, I've got all of your examples running and it's working well. When I add a model to the model string, it is downloaded, but what is the process for using a local directory of MLX models that I've already downloaded? Also, I know the first run of an example takes longer because it is downloading the model. After that, are you loading the model from disk? I ask because, if I want quick access to 3 or 4 models, is there a way of "preloading" them, like you might with LM Studio or Ollama? Or is that not necessary with your tool?

Thanks!

@madroidmaq madroidmaq added the question Further information is requested label Feb 3, 2025
@madroidmaq
Owner

This project adopts the same approach as the mlx-examples project: both load and manage models through Hugging Face. Generally speaking, you can find the downloaded model files in the directory /Users/user_name/.cache/huggingface/hub/.

Additionally, you can use the huggingface-cli command-line tool to pre-download a model, for example: huggingface-cli download mlx-community/phi-4-4bit.
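To make the "preloading" workflow concrete, here is a minimal sketch. The repo id mlx-community/phi-4-4bit comes from the comment above; the cache path assumes the standard Hugging Face hub cache layout, where each repo is stored under a directory named models--<org>--<name>:

```shell
# Pre-download a model so the first inference run does not block on a download.
huggingface-cli download mlx-community/phi-4-4bit

# The shared Hugging Face cache stores each repo under models--<org>--<name>,
# so the downloaded files land here:
ls ~/.cache/huggingface/hub/models--mlx-community--phi-4-4bit

# Repeat the download for each model you want quick access to; later loads
# by any Hugging Face-based tool read from this cache instead of the network.
```

If your models live on another disk, setting the HF_HOME environment variable before downloading relocates this cache.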
