As of the time of writing, Ollama supports only macOS and Linux. Support for Windows is planned for future releases.
- Navigate to the Ollama website.
- Click on the `Download` button.
- Follow the provided installation instructions (a command-line alternative is sketched below).
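If you prefer installing Ollama from the command line, something like the following should work at the time of writing; check the Ollama website for the current instructions, since the install script and package names may change.

```bash
# Linux: official install script from the Ollama website
curl -fsSL https://ollama.com/install.sh | sh

# macOS: Homebrew package as an alternative to the app download
brew install ollama
```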
Ollama offers a variety of pre-trained models for download to use locally. These files are quite large, ranging from 4 to 16 GB, so ensure you have sufficient disk space and a reliable internet connection before proceeding. Here are the steps to download a model:
- Visit the Ollama model library.
- Choose a model to download. Currently, `mistral` is among the better models. Note: additional model options are available under the `Tags` tab on the model's page.
- Confirm that the Ollama application is open.
- Open a terminal and execute the command: `ollama pull <model-name>`.
- Allow time for the download to complete, which will vary based on your internet speed.
- Verify the model's functionality by running: `ollama run <model-name> "Tell me a joke about auto-complete."`
You should receive a response similar to:

```
> ollama run mistral "Tell me a joke about auto-complete."
Why did the text editor's auto-complete feature feel superior?
Because it had a lot of "self-confidence."
(This joke may resonate more with programmers and tech enthusiasts.)
```
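If the `run` command reports that the model is missing, or you are unsure which models are already on disk, you can check with `ollama list`. A minimal example, assuming you chose `mistral`:

```bash
# download the model (skipped if it is already present)
ollama pull mistral

# show the models available locally, with their sizes
ollama list
```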
- Open the Obsidian vault where the plugin is installed.
- Go to the plugin's settings.
- Select `Self-hosted Ollama API` in the API Provider setting to update the view with the API settings.
- Verify that the API URL is set correctly. The default should be `http://localhost:11434/api/chat`.
- Enter the downloaded model's name in the designated field.
- Click the `Test Connection` button to confirm that the plugin can connect to the Ollama API (see the request sketch after this list if the test fails).
- Close the settings window.
- With the setup complete, the plugin is ready for use. It will offer suggestions as you type, triggered at specific points such as the end of a sentence, or you can activate it manually via the command palette by running `Copilot auto-completion: Predict`.
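If `Test Connection` fails, it can help to query the endpoint outside Obsidian. The sketch below assumes the default API URL and a downloaded `mistral` model, and uses the request shape the Ollama chat endpoint accepts at the time of writing (not necessarily the exact request the plugin sends):

```bash
# send a single chat message to the local Ollama server
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "Complete this sentence: The quick brown" }
  ],
  "stream": false
}'
```

A JSON response containing a `message` object indicates that the server is reachable and the model name is correct; a connection error usually means the Ollama application is not running.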
Note: If Ollama does not automatically start the API server, you can start it manually by running `ollama serve` in a terminal.
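For example, assuming the default port 11434 and the `/api/tags` endpoint, which lists installed models at the time of writing:

```bash
# start the API server in the foreground
ollama serve

# in a second terminal, confirm the server is reachable;
# this should return a JSON list of the locally installed models
curl http://localhost:11434/api/tags
```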