
Comparing changes

base repository: ggozad/oterm, base: 0.1.17
head repository: ggozad/oterm, compare: main

Commits on Dec 30, 2023

  1. e96b01b
  2. 8409982

Commits on Jan 4, 2024

  1. 9f6ccaf
  2. Update textual (ggozad, 59d547b)
  3. Simplify ChatItem: only render Markdown, fix duplicate text. (ggozad, 462b042)
  4. b90f403

Commits on Jan 5, 2024

  1. Simplify chat/tab id. (ggozad, ac914cc)
  2. 1417a39
  3. e55d894
  4. ea96b0b
  5. Merge pull request #43 from ggozad/feature/fast_creation (ggozad, 00435cd)
  6. 2006bc9
  7. vb (ggozad, 0b3a74d)

Commits on Jan 8, 2024

  1. Support OLLAMA_URL and OLLAMA_HOST (bbatsell, a12adaf)

     Restores the original behavior of OLLAMA_URL, which requires that the full path
     be specified, including the /api prefix. Also supports the OLLAMA_HOST behavior
     requested in #37 by adding the /api prefix when constructing the OLLAMA_URL.
  2. Merge pull request #44 from bbatsell/ollama_url_fix (ggozad, 3853a20)

     Support OLLAMA_URL and OLLAMA_HOST
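The precedence between the two variables described in this commit can be sketched roughly as follows (a hypothetical illustration of the behavior, not oterm's actual code; the function name is invented):

```python
import os

def resolve_ollama_url() -> str:
    # OLLAMA_URL, when set, is used verbatim and is expected to
    # contain the full path, including the /api prefix.
    url = os.environ.get("OLLAMA_URL")
    if url:
        return url
    # Otherwise fall back to OLLAMA_HOST (host:port) and
    # append the /api prefix when constructing the URL.
    host = os.environ.get("OLLAMA_HOST", "127.0.0.1:11434")
    return f"http://{host}/api"
```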

Commits on Jan 10, 2024

  1. e9d195d
  2. abff8ba
  3. 4e3124b
  4. 65fed1c
  5. 268e01d
  6. Merge pull request #46 from ggozad/23-keep-light-theme-across-runs (ggozad, b5c1464)

     Introduce app config stored as json file.

Commits on Jan 11, 2024

  1. Update dependencies (ggozad, cf94ec3)

Commits on Jan 14, 2024

  1. cf91b2d

Commits on Jan 22, 2024

  1. 57115d4

Commits on Jan 24, 2024

  1. 4c8558d
  2. vb (ggozad, c7a3214)

Commits on Feb 1, 2024

  1. Speed up initial loading of the app. Re #39 (ggozad, 3494071)

     Speed up initial loading of the app by mounting past messages lazily,
     only when a chat pane is viewed.
  2. a96cb5e
  3. 1054c68
  4. Merge pull request #56 from ggozad/feat/cancel-inference (ggozad, 09ec05d)

     Ability to cancel inference.
  5. vb (ggozad, d45e61b)
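The lazy-mounting idea behind the first commit above can be sketched as follows (a simplified, framework-free illustration of the pattern, not the actual Textual-based oterm code; the class and method names are invented):

```python
class LazyChatPane:
    """Past messages are loaded from the store only the first time
    the pane becomes visible, keeping app startup fast."""

    def __init__(self, load_messages):
        self._load_messages = load_messages  # callable hitting the store
        self._mounted = False
        self.messages = []

    def on_show(self):
        # Called whenever the chat tab/pane is brought into view;
        # the (potentially slow) load happens only once.
        if not self._mounted:
            self.messages = self._load_messages()
            self._mounted = True
```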

Commits on Feb 11, 2024

  1. 1dd0901
  2. Update textual (ggozad, 88fab03)

Commits on Feb 13, 2024

  1. Remove MarkdownFence patch. (ggozad, 10d34c5)

     Our MarkdownFence changes have been merged into textual, so no more monkey-patching.
     This also sets the number of lines of MarkdownFence to 50 instead of 20, see #58.

Commits on Feb 14, 2024

  1. Add an edit_mode to ModelSelect, allow selecting a model. (ggozad, 232e3ad)

     Preparing to refactor ModelSelect to edit chat parameters. See #57.
  2. 1d57e78
  3. Save chat on edit. Closes #57. (ggozad, 0295056)
  4. docs (ggozad, 3992c45)
  5. Merge pull request #59 from ggozad/feat/edit-model (ggozad, b00ac87)

     Add support for "editing" a chat, allowing for changing the system prompt and template.
  6. Remove template from UI (ggozad, bd21f7a)
  7. f129083
  8. 0de7e28
  9. Improve chat edit UI (ggozad, 4c3ccdf)
  10. Merge pull request #60 from ggozad/feat/remove_template (ggozad, c98be35)

      Remove template from the chat customisation options.
  11. vb (ggozad, f10d079)

Commits on Feb 16, 2024

  1. UI for export (ggozad, 1778d6a)
  2. 4877f97
  3. Merge pull request #61 from ggozad/export (ggozad, 1ac0d25)

     Export chat as markdown document
  4. vb (ggozad, 934afdb)

Commits on Feb 27, 2024

  1. d15fad4
Showing with 7,030 additions and 2,487 deletions.
  1. +3 −0 .github/FUNDING.yml
  2. +28 −0 .github/workflows/build-docs.yml
  3. +8 −12 .github/workflows/build-publish.yml
  4. +3 −1 .gitignore
  5. +396 −1 CHANGES.txt
  6. +27 −38 README.md
  7. +38 −0 docs/app_config.md
  8. +40 −0 docs/cli_commands.md
  9. +52 −0 docs/commands.md
  10. BIN docs/img/chat.png
  11. +232 −0 docs/img/customizations.svg
  12. BIN docs/img/image_selection.png
  13. +223 −0 docs/img/mcp.svg
  14. +152 −0 docs/img/ogit.svg
  15. BIN docs/img/splash.gif
  16. +232 −0 docs/img/theme.svg
  17. +43 −0 docs/index.md
  18. +55 −0 docs/installation.md
  19. +1 −0 docs/oracle/.python-version
  20. 0 oterm/__init__.py → docs/oracle/README.md
  21. +16 −0 docs/oracle/pyproject.toml
  22. 0 {oterm/app → docs/oracle/src/oracle}/__init__.py
  23. +22 −0 docs/oracle/src/oracle/tool.py
  24. +11 −0 docs/parameters.md
  25. +80 −0 docs/tools/custom_tools.md
  26. +11 −0 docs/tools/index.md
  27. +27 −0 docs/tools/mcp.md
  28. +79 −0 mkdocs.yml
  29. +0 −240 oterm/app/chat.py
  30. +0 −171 oterm/app/model_selection.py
  31. +0 −93 oterm/app/oterm.py
  32. +0 −53 oterm/app/splash.py
  33. +0 −65 oterm/config.py
  34. +0 −131 oterm/ollama.py
  35. +0 −24 oterm/store/chat.py
  36. +0 −28 oterm/store/setup.py
  37. +0 −198 oterm/store/store.py
  38. +0 −4 oterm/store/upgrades/__init__.py
  39. +0 −1,248 poetry.lock
  40. +54 −43 pyproject.toml
  41. BIN screenshots/chat.png
  42. BIN screenshots/image_selection.png
  43. BIN screenshots/model_selection.png
  44. 0 {oterm/cli → src/oterm}/__init__.py
  45. 0 {oterm/store → src/oterm/app}/__init__.py
  46. +273 −0 src/oterm/app/chat_edit.py
  47. +59 −0 src/oterm/app/chat_export.py
  48. +6 −1 { → src}/oterm/app/chat_rename.py
  49. +5 −0 src/oterm/app/css.py
  50. +66 −0 src/oterm/app/image_browser.py
  51. +266 −0 src/oterm/app/oterm.py
  52. +89 −44 { → src}/oterm/app/oterm.tcss
  53. +34 −0 src/oterm/app/prompt_history.py
  54. +54 −0 src/oterm/app/pull_model.py
  55. +98 −0 src/oterm/app/splash.py
  56. +1 −0 src/oterm/app/widgets/__init__.py
  57. +423 −0 src/oterm/app/widgets/chat.py
  58. +1 −48 oterm/app/image_browser.py → src/oterm/app/widgets/image.py
  59. +15 −0 src/oterm/app/widgets/monkey.py
  60. +70 −4 {oterm/app → src/oterm/app/widgets}/prompt.py
  61. 0 src/oterm/cli/__init__.py
  62. +60 −0 src/oterm/cli/command.py
  63. +15 −4 { → src}/oterm/cli/oterm.py
  64. 0 src/oterm/command/__init__.py
  65. +69 −0 src/oterm/command/command_template.py.jinja
  66. +123 −0 src/oterm/command/create.py
  67. +58 −0 src/oterm/config.py
  68. +219 −0 src/oterm/ollamaclient.py
  69. 0 src/oterm/store/__init__.py
  70. +270 −0 src/oterm/store/store.py
  71. +25 −0 src/oterm/store/upgrades/__init__.py
  72. 0 { → src}/oterm/store/upgrades/v0_1_11.py
  73. 0 { → src}/oterm/store/upgrades/v0_1_6.py
  74. +21 −0 src/oterm/store/upgrades/v0_2_0.py
  75. +21 −0 src/oterm/store/upgrades/v0_2_4.py
  76. +21 −0 src/oterm/store/upgrades/v0_2_8.py
  77. +35 −0 src/oterm/store/upgrades/v0_3_0.py
  78. +21 −0 src/oterm/store/upgrades/v0_4_0.py
  79. +29 −0 src/oterm/store/upgrades/v0_5_1.py
  80. +23 −0 src/oterm/store/upgrades/v0_6_0.py
  81. +37 −0 src/oterm/store/upgrades/v0_7_0.py
  82. +23 −0 src/oterm/store/upgrades/v0_9_0.py
  83. +42 −0 src/oterm/tools/__init__.py
  84. +20 −0 src/oterm/tools/date_time.py
  85. +53 −0 src/oterm/tools/location.py
  86. +185 −0 src/oterm/tools/mcp.py
  87. +26 −0 src/oterm/tools/shell.py
  88. +48 −0 src/oterm/tools/weather.py
  89. +51 −0 src/oterm/tools/web.py
  90. +25 −0 src/oterm/types.py
  91. +119 −0 src/oterm/utils.py
  92. +6 −9 tests/conftest.py
  93. +0 −19 tests/test_api_client.py
  94. +29 −7 tests/test_llm_client.py
  95. +50 −0 tests/test_ollama_api.py
  96. +1 −1 tests/test_store.py
  97. 0 tests/tools/__init__.py
  98. +13 −0 tests/tools/mcp_servers.py
  99. +22 −0 tests/tools/test_custom_tool.py
  100. +18 −0 tests/tools/test_date_time_tool.py
  101. +21 −0 tests/tools/test_location_tool.py
  102. +45 −0 tests/tools/test_mcp_tools.py
  103. +16 −0 tests/tools/test_shell_tool.py
  104. +47 −0 tests/tools/test_weather_tool.py
  105. +15 −0 tests/tools/test_web_tool.py
  106. +1,815 −0 uv.lock
3 changes: 3 additions & 0 deletions .github/FUNDING.yml
@@ -0,0 +1,3 @@
# These are supported funding model platforms

github: ggozad
28 changes: 28 additions & 0 deletions .github/workflows/build-docs.yml
@@ -0,0 +1,28 @@
name: build-docs
on:
push:
branches:
- main
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git Credentials
run: |
git config user.name github-actions[bot]
git config user.email 41898282+github-actions[bot]@users.noreply.github.com
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v4
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force
20 changes: 8 additions & 12 deletions .github/workflows/build-publish.yml
@@ -7,16 +7,12 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.10"
- uses: abatilo/actions-poetry@v2
- run: poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}
- uses: actions/checkout@v4
- name: Set up uv
run: curl -LsSf https://astral.sh/uv/0.3.0/install.sh | sh
- name: Set up Python 3.10
run: uv python install 3.10
- name: Build package
run: uvx --from build pyproject-build --installer uv
- name: Publish package
run: poetry publish --build
# For testing locally, replace pypi with testpypi
# - run: poetry config repositories.testpypi https://test.pypi.org/legacy/
# - run: poetry config pypi-token.testpypi ${{ secrets.PYPI_API_TOKEN }}
# - name: Publish package
# run: poetry publish --build -r testpypi
run: uvx twine upload -u __token__ -p ${{ secrets.PYPI_API_TOKEN }} dist/* --non-interactive
4 changes: 3 additions & 1 deletion .gitignore
@@ -7,4 +7,6 @@ __pycache__/
.DS_Store
dist/
oterm.rb
photos/
photos/
.vscode
/site/
397 changes: 396 additions & 1 deletion CHANGES.txt

Large diffs are not rendered by default.

65 changes: 27 additions & 38 deletions README.md
@@ -1,61 +1,50 @@
# oterm
the text-based terminal client for [Ollama](https://github.com/jmorganca/ollama).

the text-based terminal client for [Ollama](https://github.com/ollama/ollama).

## Features

* intuitive and simple terminal UI, no need to run servers, frontends, just type `oterm` in your terminal.
* multiple persistent chat sessions, stored together with the context embeddings and template/system prompt customizations in sqlite.
* multiple persistent chat sessions, stored together with system prompt & parameter customizations in sqlite.
* can use any of the models you have pulled in Ollama, or your own custom models.
* allows for easy customization of the model's template, system prompt and parameters.

## Installation

Using `brew` for MacOS:

```bash
brew tap ggozad/formulas
brew install ggozad/formulas/oterm
```
* allows for easy customization of the model's system prompt and parameters.
* supports tools integration for providing external information to the model.

Using `pip`:
## Quick install

```bash
pip install oterm
uvx oterm
```
See [Installation](https://ggozad.github.io/oterm/installation) for more details.

## Using
## Documentation

In order to use `oterm` you will need to have the Ollama server running. By default it expects to find the Ollama API running on `http://localhost:11434/api`. If you are running Ollama inside docker or on a different host/port, use the `OLLAMA_URL` environment variable to customize the API url. Setting `OTERM_VERIFY_SSL` to `False` will disable SSL verification.
[oterm Documentation](https://ggozad.github.io/oterm/)

```bash
OLLAMA_URL=http://host:port/api
```

The following keyboard shortcuts are available:
## What's new

* `ctrl+n` - create a new chat session
* `ctrl+r` - rename the current chat session
* `ctrl+x` - delete the current chat session
* `ctrl+t` - toggle between dark/light theme
* `ctrl+q` - quit

* `ctrl+l` - switch to multiline input mode
* `ctrl+p` - select an image to include with the next message
* Create custom commands that can be run from the terminal using oterm. Each of these commands is a chat, customized to your liking and connected to the tools of your choice.
* Support for Model Context Protocol (MCP) tools. You can now use any of the MCP tools to provide external information to the model.
* Support for the `<thinking/>` tag in reasoning models.

### Screenshots
![Splash](https://raw.githubusercontent.com/ggozad/oterm/refs/heads/main/docs/img/splash.gif)
The splash screen animation that greets users when they start oterm.

### Customizing models
![Chat](https://raw.githubusercontent.com/ggozad/oterm/main/docs/img/chat.png)
A view of the chat interface, showcasing the conversation between the user and the model.

When creating a new chat, you may not only select the model, but also customize the `template` as well as the `system` instruction to pass to the model. Checking the `JSON output` checkbox will cause the model to reply in JSON format. Please note that `oterm` will not (yet) pull models for you; use `ollama` to do that. All the models you have pulled or created will be available to `oterm`.
![Model selection](https://raw.githubusercontent.com/ggozad/oterm/main/docs/img/customizations.svg)
The model selection screen, allowing users to choose and customize available models.

### Chat session storage
![Tool support](https://raw.githubusercontent.com/ggozad/oterm/main/docs/img/mcp.svg)
oterm using the `git` MCP server to access its own repo.

All your chat sessions are stored locally in a sqlite database.
You can find the location of the database by running `oterm --db`.
![Image selection](https://raw.githubusercontent.com/ggozad/oterm/main/docs/img/image_selection.png)
The image selection interface, demonstrating how users can include images in their conversations.

### Screenshots
![Chat](screenshots/chat.png)
![Model selection](./screenshots/model_selection.png)
![Image selection](./screenshots/image_selection.png)
![Theme](https://raw.githubusercontent.com/ggozad/oterm/main/docs/img/theme.svg)
oterm supports multiple themes, allowing users to customize the appearance of the interface.

## License

38 changes: 38 additions & 0 deletions docs/app_config.md
@@ -0,0 +1,38 @@
### App configuration

The app configuration is stored in a directory specific to your operating system, by default:

* Linux: `~/.local/share/oterm/config.json`
* macOS: `~/Library/Application Support/oterm/config.json`
* Windows: `C:/Users/<USER>/AppData/Roaming/oterm/config.json`

If in doubt, you can find the directory containing `config.json` by running `oterm --data-dir`.

You can set the following options in the configuration file:
```json
{ "splash-screen": true }
```

`splash-screen` controls whether the splash screen is shown on startup.

### Key bindings

We strive to have sane default key bindings, but there will always be cases where your terminal emulator or shell will interfere. You can customize selected key bindings by editing the app config `config.json` file. The following are the defaults:

```json
{
...
"keymap": {
"next.chat": "ctrl+tab",
"prev.chat": "ctrl+shift+tab",
"quit": "ctrl+q",
"newline": "shift+enter"
}
}
```

### Chat storage

All your chat sessions are stored locally in a sqlite database. You can customize the directory where the database is stored by setting the `OTERM_DATA_DIR` environment variable.

You can find the location of the database by running `oterm --db`.
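The per-OS defaults listed above, together with the `OTERM_DATA_DIR` override, could be resolved along these lines (a rough sketch under stated assumptions; the real app may rely on a platform-directories library, and `default_oterm_data_dir` is an invented name):

```python
import os
import sys
from pathlib import Path

def default_oterm_data_dir() -> Path:
    # OTERM_DATA_DIR, when set, overrides the per-OS default.
    override = os.environ.get("OTERM_DATA_DIR")
    if override:
        return Path(override)
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / "oterm"
    if sys.platform.startswith("win"):
        # %APPDATA% resolves to C:/Users/<USER>/AppData/Roaming
        return Path(os.environ.get("APPDATA", "")) / "oterm"
    return Path.home() / ".local" / "share" / "oterm"
```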
40 changes: 40 additions & 0 deletions docs/cli_commands.md
@@ -0,0 +1,40 @@
### Creating a command

!!! Note
Do you find yourself running `oterm` to get to this one chat that you use all the time? You can create a custom command to get there faster.

You can create custom commands that can be run from the terminal using oterm. Each of these commands is a chat, customized to your liking and connected to the tools of your choice.

To create a custom command, you can call the `oterm-command` command:


```bash
oterm-command create <command-name>
```
which will present you with the same interface as when creating a chat. You can choose the model, the system prompt, the tools you want to use, etc.

`oterm-command` will create a self-managed command in `~/.local/bin` (make sure the directory is in your `PATH`) that you can call anytime.

For example, here is the command `ogit` that I use to chat about my repositories:

![ogit](../img/ogit.svg)

It is a chat with the `git` MCP tool, running using the `qwen2.5` model.

### Listing commands

You can list all the commands (id, name & path) you have created with the `oterm-command` command:

```bash
oterm-command list
Commands found:
64: ogit -> /Users/ggozad/.local/bin/ogit
```

### Deleting a command

You can delete a command with the `oterm-command` command:

```bash
$ oterm-command delete <command-id>
```
52 changes: 52 additions & 0 deletions docs/commands.md
@@ -0,0 +1,52 @@
### Commands
By pressing <kbd>^ Ctrl</kbd>+<kbd>p</kbd> you can access the command palette from where you can perform most of the chat actions. The following commands are available:

* `New chat` - create a new chat session
* `Edit chat parameters` - edit the current chat session (change system prompt, parameters or format)
* `Rename chat` - rename the current chat session
* `Export chat` - export the current chat session as markdown
* `Delete chat` - delete the current chat session
* `Clear chat` - clear the chat history, preserving model and system prompt customizations
* `Regenerate last Ollama message` - regenerates the last message from Ollama (will override the `seed` for the specific message with a random one.) Useful if you want to change the system prompt or parameters or just want to try again.
* `Pull model` - pull a model or update an existing one.
* `Change theme` - choose among the available themes.

### Keyboard shortcuts

The following keyboard shortcuts are supported:

* <kbd>^ Ctrl</kbd>+<kbd>q</kbd> - quit

* <kbd>^ Ctrl</kbd>+<kbd>l</kbd> - switch to multiline input mode
* <kbd>^ Ctrl</kbd>+<kbd>i</kbd> - select an image to include with the next message
* <kbd>↑</kbd> - navigate through history of previous prompts

* <kbd>^ Ctrl</kbd>+<kbd>n</kbd> - open a new chat
* <kbd>^ Ctrl</kbd>+<kbd>Backspace</kbd> - close the current chat

* <kbd>^ Ctrl</kbd>+<kbd>Tab</kbd> - open the next chat
* <kbd>^ Ctrl</kbd>+<kbd>Shift</kbd>+<kbd>Tab</kbd> - open the previous chat

In multiline mode, you can press <kbd>Enter</kbd> to send the message, or <kbd>Shift</kbd>+<kbd>Enter</kbd> to add a new line at the cursor.

While Ollama is inferring the next message, you can press <kbd>Esc</kbd> to cancel the inference.

!!! note
Some of the shortcuts may not work in certain contexts if they are overridden by the widget in focus, for example pressing <kbd>↑</kbd> while the prompt is in multi-line mode.

If the key bindings clash with your terminal, it is possible to change them by editing the configuration file. See [Configuration](/oterm/app_config).

### Copy / Paste

It is difficult to properly support copy/paste in terminal applications. You can copy blocks to your clipboard as follows:

* clicking a message will copy it to the clipboard.
* clicking a code block will only copy the code block to the clipboard.

For most terminals there exists a key modifier you can use to click and drag to manually select text. For example:
* `iTerm` <kbd>Option</kbd> key.
* `Gnome Terminal` <kbd>Shift</kbd> key.
* `Windows Terminal` <kbd>Shift</kbd> key.

![Image selection](./img/image_selection.png)
The image selection interface.
Binary file added docs/img/chat.png
232 changes: 232 additions & 0 deletions docs/img/customizations.svg
Binary file added docs/img/image_selection.png
223 changes: 223 additions & 0 deletions docs/img/mcp.svg
152 changes: 152 additions & 0 deletions docs/img/ogit.svg
Binary file added docs/img/splash.gif
232 changes: 232 additions & 0 deletions docs/img/theme.svg
43 changes: 43 additions & 0 deletions docs/index.md
@@ -0,0 +1,43 @@
# oterm

the text-based terminal client for [Ollama](https://github.com/ollama/ollama).

## Features

* intuitive and simple terminal UI, no need to run servers or frontends; just type `oterm` in your terminal.
* multiple persistent chat sessions, stored together with system prompt & parameter customizations in sqlite.
* can use any of the models you have pulled in Ollama, or your own custom models.
* allows for easy customization of the model's system prompt and parameters.
* supports tools integration for providing external information to the model.

## Installation

See the [Installation](installation.md) section.

## Using oterm

In order to use `oterm` you will need to have the Ollama server running. By default it expects to find the Ollama API running on `http://127.0.0.1:11434`. If you are running Ollama inside docker or on a different host/port, use the `OLLAMA_HOST` environment variable to customize the host/port. Alternatively you can use `OLLAMA_URL` to specify the full http(s) url. Setting `OTERM_VERIFY_SSL` to `False` will disable SSL verification.

```bash
OLLAMA_URL=http://host:port
```

To start `oterm` simply run:

```bash
oterm
```

### Screenshots
![Splash](img/splash.gif)
The splash screen animation that greets users when they start oterm.

![Chat](img/chat.png)
A view of the chat interface, showcasing the conversation between the user and the model.

![Theme](./img/theme.svg)
oterm supports multiple themes, allowing users to customize the appearance of the interface.

## License

This project is licensed under the [MIT License](LICENSE).
55 changes: 55 additions & 0 deletions docs/installation.md
@@ -0,0 +1,55 @@
## Installation

!!! note
Ollama needs to be installed and running in order to use `oterm`. Please follow the [Ollama Installation Guide](https://github.com/ollama/ollama?tab=readme-ov-file#ollama).

Using `uvx`:

```bash
uvx oterm
```

Using `brew` for MacOS:

```bash
brew tap ggozad/formulas
brew install ggozad/formulas/oterm
```

Using `yay` (or any AUR helper) for Arch Linux:

```bash
yay -S oterm
```

Using `pip`:

```bash
pip install oterm
```

## Updating oterm

To update oterm to the latest version, you can use the same method you used for installation:

Using `uvx`:

```bash
uvx oterm@latest
```

Using `brew` for MacOS:

```bash
brew upgrade ggozad/formulas/oterm
```
Using `yay` (or any AUR helper) for Arch Linux:

```bash
yay -Syu oterm
```
Using `pip`:

```bash
pip install --upgrade oterm
```
1 change: 1 addition & 0 deletions docs/oracle/.python-version
@@ -0,0 +1 @@
3.10
File renamed without changes.
16 changes: 16 additions & 0 deletions docs/oracle/pyproject.toml
@@ -0,0 +1,16 @@
[project]
name = "oracle"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
authors = [
{ name = "Yiorgis Gozadinos", email = "ggozadinos@gmail.com" }
]
requires-python = ">=3.10"
dependencies = [
"ollama>=0.4.4,<0.5",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
File renamed without changes.
22 changes: 22 additions & 0 deletions docs/oracle/src/oracle/tool.py
@@ -0,0 +1,22 @@
from ollama._types import Tool

OracleTool = Tool(
type="function",
function=Tool.Function(
name="oracle",
description="Function to return the Oracle's answer to any question.",
parameters=Tool.Function.Parameters(
type="object",
properties={
"question": Tool.Function.Parameters.Property(
type="str", description="The question to ask."
),
},
required=["question"],
),
),
)


def oracle(question: str):
return "oterm"
11 changes: 11 additions & 0 deletions docs/parameters.md
@@ -0,0 +1,11 @@
When creating a new chat, you may not only select the model, but also customize the following:

- `system` instruction prompt
- `tools` used. See [Tools](tools.md) for more information on how to make tools available.
- chat `parameters` (such as context length, seed, temperature etc) passed to the model. For a list of all supported parameters refer to the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values).
- Output `format`/structured output. In the format field you can use Ollama's [Structured Output](https://ollama.com/blog/structured-outputs), specifying the full format as a JSON schema. Leaving the field empty (the default) will return the output as text.

You can also "edit" an existing chat to change the system prompt, parameters, tools or format. Note that the model cannot be changed once the chat has started.

![Model selection](./img/customizations.svg)
The model selection screen, allowing users to choose and customize available models.
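As an illustration, a `format` value for structured output might be a JSON schema like the following (a hypothetical example, not taken from the oterm docs):

```json
{
  "type": "object",
  "properties": {
    "answer": { "type": "string" },
    "confidence": { "type": "number" }
  },
  "required": ["answer"]
}
```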
80 changes: 80 additions & 0 deletions docs/tools/custom_tools.md
@@ -0,0 +1,80 @@
# Built-in example tools

The following example tools are currently built into `oterm`:

* `fetch_url` - allows your models access to the web, fetches a URL and provides the content as input to the model.
* `date_time` - provides the current date and time in ISO format.
* `current_location` - provides the current location of the user (longitude, latitude, city, region, country). Uses [ipinfo.io](https://ipinfo.io) to determine the location.
* `current_weather` - provides the current weather in the user's location. Uses [OpenWeatherMap](https://openweathermap.org) to determine the weather. You need to provide your (free) API key in the `OPEN_WEATHER_MAP_API_KEY` environment variable.
* `shell` - allows you to run shell commands and use the output as input to the model. Obviously this can be dangerous, so use with caution.

These tools are defined in `src/oterm/tools`. You can make those tools available and enable them for selection when creating or editing a chat, by adding them to the `tools` section of the `oterm` configuration file. You can find the location of the configuration file's directory by running `oterm --data-dir`. So for example to enable the `shell` tool, you would add the following to the configuration file:

```json
{
...
"tools": [{
"tool": "oterm.tools.shell:ShellTool",
"callable": "oterm.tools.shell:shell_command"
}]
}
```

# Custom tools with oterm

You can create your own custom tools and integrate them with `oterm`.

### Create a python package.

You will need to create a python package that exports a `Tool` definition as well as a *callable* function that will be called when the tool is invoked.

Here is an [example](https://github.com/ggozad/oterm/tree/main/docs/oracle){:target="_blank"} of a simple tool that implements an Oracle. The tool is defined in the `oracle` package which exports the `OracleTool` tool definition and an `oracle` callable function.

```python
from ollama._types import Tool

OracleTool = Tool(
type="function",
function=Tool.Function(
name="oracle",
description="Function to return the Oracle's answer to any question.",
parameters=Tool.Function.Parameters(
type="object",
properties={
"question": Tool.Function.Parameters.Property(
type="str", description="The question to ask."
),
},
required=["question"],
),
),
)


def oracle(question: str):
return "oterm"
```

You need to install the package in the same environment where `oterm` is installed so that `oterm` can resolve it.

```bash
cd oracle
uv pip install . # or pip install .
```

### Register the tool with oterm

You can register the tool with `oterm` by adding the tool definition and callable to the `tools` section of the `oterm` configuration file. You can find the location of the configuration file's directory by running `oterm --data-dir`.

```json
{
  ...
  "tools": [
    {
      "tool": "oracle.tool:OracleTool",
      "callable": "oracle.tool:oracle"
    }
  ]
}
```
Note the notation `module:object` for the tool and callable.
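Such a `module:object` string can be resolved with standard `importlib` machinery. The helper below is a hypothetical sketch of the idea, not oterm's actual loader:

```python
from importlib import import_module


def resolve(spec: str):
    """Resolve a 'module:object' string such as 'oracle.tool:oracle'.

    Hypothetical sketch of the module:object convention, not oterm's code.
    """
    module_name, _, attr = spec.partition(":")
    return getattr(import_module(module_name), attr)


# e.g. resolve("oracle.tool:OracleTool") would return the tool definition,
# and resolve("oracle.tool:oracle") the callable.
```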

That's it! You can now use the tool in `oterm` with models that support it.
11 changes: 11 additions & 0 deletions docs/tools/index.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,11 @@
# Tools

`oterm` supports integration with tools. Tools are special "functions" that can provide the model with external information it does not otherwise have access to.

With tools, you can provide the model with access to the web, run shell commands, perform RAG and more.

[Use existing Model Context Protocol servers](./mcp.md)

or

[create your own custom tools](./custom_tools.md).
27 changes: 27 additions & 0 deletions docs/tools/mcp.md
@@ -0,0 +1,27 @@
### Model Context Protocol support

`oterm` has support for Anthropic's open-source [Model Context Protocol](https://modelcontextprotocol.io). While Ollama does not yet directly support the protocol, `oterm` bridges [MCP servers](https://github.com/modelcontextprotocol/servers) with Ollama by transforming MCP tools into Ollama tools.

To add an MCP server to `oterm`, simply add the server shim to oterm's `config.json`. For example, for the [git](https://github.com/modelcontextprotocol/servers/tree/main/src/git) MCP server, you would add something like the following to the `mcpServers` section of the `oterm` configuration file:

```json
{
  ...
  "mcpServers": {
    "git": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--mount",
        "type=bind,src=/Users/ggozad/dev/open-source/oterm,dst=/oterm",
        "mcp/git"
      ]
    }
  }
}
```

![Tool support](../img/mcp.svg)
oterm using the `git` MCP server to access its own repo.
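The transformation described above can be pictured as a mapping from an MCP tool description to an Ollama-style tool schema. The sketch below is illustrative only; it follows the published JSON shapes of the two protocols, not oterm's internal code:

```python
def mcp_tool_to_ollama(mcp_tool: dict) -> dict:
    """Map an MCP tool description onto an Ollama-style tool schema.

    Illustrative sketch: field names follow the public MCP ("name",
    "description", "inputSchema") and Ollama tool formats, not oterm's
    internal implementation.
    """
    return {
        "type": "function",
        "function": {
            "name": mcp_tool["name"],
            "description": mcp_tool.get("description", ""),
            "parameters": mcp_tool.get("inputSchema", {"type": "object"}),
        },
    }
```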
79 changes: 79 additions & 0 deletions mkdocs.yml
@@ -0,0 +1,79 @@
site_name: oterm
site_description: the text-based terminal client for Ollama.
site_url: https://ggozad.github.io/oterm/
theme:
  name: material
  palette:
    - media: "(prefers-color-scheme)"
      toggle:
        icon: material/lightbulb-auto
        name: Switch to light mode
    - media: '(prefers-color-scheme: light)'
      scheme: default
      primary: deep purple
      accent: amber
      toggle:
        icon: material/lightbulb
        name: Switch to dark mode
    - media: '(prefers-color-scheme: dark)'
      scheme: slate
      primary: deep purple
      accent: amber
      toggle:
        icon: material/lightbulb-outline
        name: Switch to system preference
  features:
    - content.code.annotate
    - content.code.copy
    - content.code.select
    - content.footnote.tooltips
    - content.tabs.link
    - content.tooltips
    - navigation.footer
    - navigation.indexes
    - navigation.instant
    - navigation.instant.prefetch
    - navigation.instant.progress
    - navigation.path
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.top
    - navigation.tracking
    - search.highlight
    - search.share
    - search.suggest
    - toc.follow

  icon:
    repo: fontawesome/brands/github-alt
  # logo: img/icon-white.svg
  # favicon: img/favicon.png
  language: en
repo_name: ggozad/oterm
repo_url: https://github.com/ggozad/oterm
plugins:
  # Material for MkDocs
  search:
nav:
  - oterm:
      - index.md
      - Installation: installation.md
      - Commands & shortcuts: commands.md
      - Chat parameters: parameters.md
      - Tools:
          - tools/index.md
          - MCP tools: tools/mcp.md
          - Custom tools: tools/custom_tools.md
      - CLI commands: cli_commands.md
      - Configuration: app_config.md
markdown_extensions:
  - admonition
  - attr_list
  - pymdownx.details
  - pymdownx.highlight:
      anchor_linenums: true
      line_spans: __span
      pygments_lang_class: true
  - pymdownx.inlinehilite
  - pymdownx.snippets
  - pymdownx.superfences
240 changes: 0 additions & 240 deletions oterm/app/chat.py

This file was deleted.

171 changes: 0 additions & 171 deletions oterm/app/model_selection.py

This file was deleted.

93 changes: 0 additions & 93 deletions oterm/app/oterm.py

This file was deleted.

53 changes: 0 additions & 53 deletions oterm/app/splash.py

This file was deleted.

65 changes: 0 additions & 65 deletions oterm/config.py

This file was deleted.

131 changes: 0 additions & 131 deletions oterm/ollama.py

This file was deleted.

24 changes: 0 additions & 24 deletions oterm/store/chat.py

This file was deleted.

28 changes: 0 additions & 28 deletions oterm/store/setup.py

This file was deleted.

198 changes: 0 additions & 198 deletions oterm/store/store.py

This file was deleted.

4 changes: 0 additions & 4 deletions oterm/store/upgrades/__init__.py

This file was deleted.

1,248 changes: 0 additions & 1,248 deletions poetry.lock

This file was deleted.

97 changes: 54 additions & 43 deletions pyproject.toml
@@ -1,12 +1,10 @@
[tool.poetry]
[project]
name = "oterm"
version = "0.1.17"
version = "0.9.3"
description = "A text-based terminal client for Ollama."
authors = ["Yiorgis Gozadinos <ggozadinos@gmail.com>"]
homepage = "https://github.com/ggozad/oterm"
repository = "https://github.com/ggozad/oterm"
license = "MIT"
readme = "README.md"
authors = [{ name = "Yiorgis Gozadinos", email = "ggozadinos@gmail.com" }]
license = { text = "MIT" }
readme = { file = "README.md", content-type = "text/markdown" }
classifiers = [
"Development Status :: 4 - Beta",
"Environment :: Console",
@@ -20,43 +18,55 @@ classifiers = [
"Programming Language :: Python :: 3.12",
"Typing :: Typed",
]
requires-python = ">=3.10"
dependencies = [
"textual>=1.0.0,<1.1",
"typer>=0.15.1,<0.16",
"python-dotenv>=1.0.1",
"aiosql>=13.2,<14",
"aiosqlite>=0.21.0,<0.22",
"packaging>=24.2,<25",
"rich-pixels>=3.0.1",
"pillow>=11.0.0,<12",
"ollama>=0.4.7,<0.5",
"textualeffects>=0.1.3",
"pydantic>=2.10.1,<2.11",
"mcp[cli]>=1.3.0",
"jinja2>=3.1.5",
]

[tool.poetry.scripts]
oterm = "oterm.cli.oterm:cli"

[tool.poetry.urls]
"Bug Tracker" = "https://github.com/ggozad/oterm/issues"
[project.urls]
Homepage = "https://github.com/ggozad/oterm"
Repository = "https://github.com/ggozad/oterm"
Issues = "https://github.com/ggozad/oterm/issues"
Documentation = "https://ggozad.github.io/oterm/"

[tool.poetry.dependencies]
python = "^3.10"
textual = "^0.41.0"
typer = "^0.9.0"
python-dotenv = "^1.0.0"
httpx = "^0.25.0"
aiosql = "^9.0"
aiosqlite = "^0.19.0"
pyperclip = "^1.8.2"
packaging = "^23.2"
rich-pixels = "^2.1.1"
[project.scripts]
oterm = "oterm.cli.oterm:cli"
oterm-command = "oterm.cli.command:cli"

[tool.poetry.group.dev.dependencies]
pdbpp = "^0.10.3"
ruff = "^0.1.3"
black = "^23.10.1"
pytest = "^7.4.3"
pytest-asyncio = "^0.21.1"
textual-dev = "^1.2.1"
aiohttp = { version = ">=3.9.0b0", python = ">=3.12" }
[tool.uv]
dev-dependencies = [
"ruff>=0.9.4",
"pdbpp",
"pytest>=8.3.4",
"pytest-asyncio>=0.25.3",
"textual-dev>=1.7.0",
"homebrew-pypi-poet>=0.10.0",
"mkdocs>=1.6.1",
"mkdocs-material>=9.6.5",
]

[tool.black]
line-length = 88
target-versions = ["py310"]
[tool.uv.sources]

[tool.ruff]
line-length = 88
# Enable Flake8's "E" and "F" codes by default and "I" for sorting imports.
select = ["E", "F", "I"]
ignore = ["E501", "E741"] # E741 should not be ignored
lint.select = ["E", "F", "I"]
lint.ignore = ["E501", "E741"] # E741 should not be ignored
lint.per-file-ignores = { "__init__.py" = ["F401", "F403"] }
# Allow unused variables when underscore-prefixed.
lint.dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
# Exclude a variety of commonly ignored directories.
exclude = [
".direnv",
@@ -72,12 +82,13 @@ exclude = [
"dist",
"venv",
]
per-file-ignores = { "__init__.py" = ["F401", "F403"] }
# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
# Assume Python 3.10.
target-version = "py310"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.build.targets.sdist]
exclude = ["/screenshots", "/examples"]

[tool.hatch.build.targets.wheel]
only-packages = true
Binary file removed screenshots/chat.png
Binary file not shown.
Binary file removed screenshots/image_selection.png
Binary file not shown.
Binary file removed screenshots/model_selection.png
Binary file not shown.
File renamed without changes.
File renamed without changes.
273 changes: 273 additions & 0 deletions src/oterm/app/chat_edit.py
@@ -0,0 +1,273 @@
import json
from typing import Optional, Sequence

from ollama import Options, ShowResponse
from pydantic import ValidationError
from rich.text import Text
from textual import on
from textual.app import ComposeResult
from textual.containers import (
    Container,
    Horizontal,
    ScrollableContainer,
    Vertical,
)
from textual.reactive import reactive
from textual.screen import ModalScreen
from textual.widgets import Button, Checkbox, Input, Label, OptionList, TextArea

from oterm.ollamaclient import (
    OllamaLLM,
    jsonify_options,
    parse_format,
    parse_ollama_parameters,
)
from oterm.tools import available as available_tool_defs
from oterm.types import Tool


class OtermOllamaOptions(Options):
    # Patch stop to allow for a single string.
    # This is an issue with the gemma model which has a single stop parameter.
    # Remove when fixed upstream and close #187
    stop: Optional[Sequence[str] | str] = None

    class Config:
        extra = "forbid"


class ChatEdit(ModalScreen[str]):
    models = []
    models_info: dict[str, ShowResponse] = {}

    model_name: reactive[str] = reactive("")
    tag: reactive[str] = reactive("")
    bytes: reactive[int] = reactive(0)
    model_info: ShowResponse
    system: reactive[str] = reactive("")
    parameters: reactive[Options] = reactive(Options())
    format: reactive[str] = reactive("")
    keep_alive: reactive[int] = reactive(5)
    last_highlighted_index = None
    tools: reactive[list[Tool]] = reactive([])
    edit_mode: reactive[bool] = reactive(False)

    BINDINGS = [
        ("escape", "cancel", "Cancel"),
        ("enter", "save", "Save"),
    ]

    def __init__(
        self,
        model: str = "",
        system: str = "",
        parameters: Options = Options(),
        format="",
        keep_alive: int = 5,
        edit_mode: bool = False,
        tools: list[Tool] = [],
    ) -> None:
        super().__init__()
        self.model_name, self.tag = model.split(":") if model else ("", "")
        self.system = system
        self.parameters = parameters
        self.format = format
        self.keep_alive = keep_alive
        self.edit_mode = edit_mode
        self.tools = tools

    def _return_chat_meta(self) -> None:
        model = f"{self.model_name}:{self.tag}"
        system = self.query_one(".system", TextArea).text
        system = system if system != self.model_info.get("system", "") else None
        keep_alive = int(self.query_one(".keep-alive", Input).value)
        p_area = self.query_one(".parameters", TextArea)
        try:
            parameters = OtermOllamaOptions.model_validate_json(
                p_area.text, strict=True
            ).model_dump(exclude_unset=True)
            if isinstance(parameters.get("stop"), str):
                parameters["stop"] = [parameters["stop"]]

        except ValidationError:
            p_area = self.query_one(".parameters", TextArea)
            p_area.styles.animate("opacity", 0.0, final_value=1.0, duration=0.5)
            return
        f_area = self.query_one(".format", TextArea)

        # Try parsing the format
        try:
            parse_format(f_area.text)
            format = f_area.text
        except Exception:
            f_area.styles.animate("opacity", 0.0, final_value=1.0, duration=0.5)
            return

        result = json.dumps(
            {
                "name": model,
                "system": system,
                "format": format,
                "keep_alive": keep_alive,
                "parameters": parameters,
                "tools": [tool.model_dump() for tool in self.tools],
            }
        )

        self.dismiss(result)

    def action_cancel(self) -> None:
        self.dismiss()

    def action_save(self) -> None:
        self._return_chat_meta()

    def select_model(self, model: str) -> None:
        select = self.query_one("#model-select", OptionList)
        for index, option in enumerate(select._options):
            if str(option.prompt) == model:
                select.highlighted = index
                break

    async def on_mount(self) -> None:
        self.models = OllamaLLM.list().models

        models = [model.model or "" for model in self.models]
        for model in models:
            info = OllamaLLM.show(model)
            self.models_info[model] = info
        option_list = self.query_one("#model-select", OptionList)
        option_list.clear_options()
        for model in models:
            option_list.add_option(item=self.model_option(model))
        option_list.highlighted = self.last_highlighted_index
        if self.model_name and self.tag:
            self.select_model(f"{self.model_name}:{self.tag}")

        # Disable the model select widget if we are in edit mode.
        widget = self.query_one("#model-select", OptionList)
        widget.disabled = self.edit_mode

    @on(Checkbox.Changed)
    def on_tool_toggled(self, ev: Checkbox.Changed):
        tool_name = ev.control.label
        checked = ev.value
        for tool_def in available_tool_defs:
            if tool_def["tool"].function.name == str(tool_name):  # type: ignore
                tool = tool_def["tool"]
                if checked:
                    self.tools.append(tool)
                else:
                    self.tools.remove(tool)
                break

    def on_option_list_option_highlighted(
        self, option: OptionList.OptionHighlighted
    ) -> None:
        model = option.option.prompt
        model_meta = next((m for m in self.models if m.model == str(model)), None)
        if model_meta:
            name, tag = (model_meta.model or "").split(":")
            self.model_name = name
            widget = self.query_one(".name", Label)
            widget.update(f"{self.model_name}")

            self.tag = tag
            widget = self.query_one(".tag", Label)
            widget.update(f"{self.tag}")

            self.bytes = model_meta["size"]
            widget = self.query_one(".size", Label)
            widget.update(f"{(self.bytes / 1.0e9):.2f} GB")

            meta = self.models_info.get(model_meta.model or "")
            self.model_info = meta  # type: ignore
            if not self.edit_mode:
                self.parameters = parse_ollama_parameters(
                    self.model_info.parameters or ""
                )
                widget = self.query_one(".parameters", TextArea)
                widget.load_text(jsonify_options(self.parameters))
            widget = self.query_one(".system", TextArea)

            # XXX Does not work as expected, there is no longer system in model_info
            widget.load_text(self.system or self.model_info.get("system", ""))

            # Deduce from the model's template if the model is tool-capable.
            tools_supported = ".Tools" in self.model_info["template"]
            widgets = self.query(".tool")
            for widget in widgets:
                widget.disabled = not tools_supported
                if not tools_supported:
                    widget.value = False  # type: ignore

            # Now that there is a model selected we can save the chat.
            save_button = self.query_one("#save-btn", Button)
            save_button.disabled = False
        ChatEdit.last_highlighted_index = option.option_index

    def on_button_pressed(self, event: Button.Pressed) -> None:
        if event.button.name == "save":
            self._return_chat_meta()
        else:
            self.dismiss()

    @staticmethod
    def model_option(model: str) -> Text:
        return Text(model)

    def compose(self) -> ComposeResult:
        with Container(id="edit-chat-container"):
            with Horizontal():
                with Vertical():
                    with Horizontal(id="model-info"):
                        yield Label("Model:", classes="title")
                        yield Label(f"{self.model_name}", classes="name")
                        yield Label("Tag:", classes="title")
                        yield Label(f"{self.tag}", classes="tag")
                        yield Label("Size:", classes="title")
                        yield Label(f"{self.size}", classes="size")

                    yield OptionList(id="model-select")
                    yield Label("Tools:", classes="title")
                    with ScrollableContainer(id="tool-list"):
                        for tool_def in available_tool_defs:
                            yield Checkbox(
                                label=f"{tool_def['tool']['function']['name']}",
                                tooltip=f"{tool_def['tool']['function']['description']}",
                                value=tool_def["tool"] in self.tools,
                                classes="tool",
                            )

                with Vertical():
                    yield Label("System:", classes="title")
                    yield TextArea(self.system, classes="system log")
                    yield Label("Parameters:", classes="title")
                    yield TextArea(
                        jsonify_options(self.parameters),
                        classes="parameters log",
                        language="json",
                    )
                    yield Label("Format:", classes="title")
                    yield TextArea(
                        self.format or "",
                        classes="format log",
                        language="json",
                    )

            with Horizontal():
                with Horizontal():
                    yield Label(
                        "Keep-alive (min)", classes="title keep-alive-label"
                    )
                    yield Input(classes="keep-alive", value="5")

                with Horizontal(classes="button-container"):
                    yield Button(
                        "Save",
                        id="save-btn",
                        name="save",
                        disabled=True,
                        variant="primary",
                    )
                    yield Button("Cancel", name="cancel")
59 changes: 59 additions & 0 deletions src/oterm/app/chat_export.py
@@ -0,0 +1,59 @@
import re
import unicodedata
from typing import Sequence

from textual import on
from textual.app import ComposeResult
from textual.containers import Container
from textual.screen import ModalScreen
from textual.widgets import Input, Label

from oterm.store.store import Store
from oterm.types import Author


def slugify(value):
    """
    Taken from https://github.com/django/django/blob/master/django/utils/text.py
    """
    value = str(value)
    value = (
        unicodedata.normalize("NFKD", value).encode("ascii", "ignore").decode("ascii")
    )
    value = re.sub(r"[^\w\s-]", "", value.lower())
    return re.sub(r"[-\s]+", "-", value).strip("-_")


class ChatExport(ModalScreen[str]):
    chat_id: int
    file_name: str = ""
    BINDINGS = [
        ("escape", "cancel", "Cancel"),
    ]

    def action_cancel(self) -> None:
        self.dismiss()

    @on(Input.Submitted)
    async def on_submit(self, event: Input.Submitted) -> None:
        store = await Store.get_store()

        if not event.value:
            return

        messages: Sequence[tuple[int, Author, str, list[str]]] = (
            await store.get_messages(self.chat_id)
        )
        with open(event.value, "w", encoding="utf-8") as file:
            for message in messages:
                _, author, text, images = message
                file.write(f"*{author.value}*\n")
                file.write(f"{text}\n")
                file.write("\n---\n")

        self.dismiss()

    def compose(self) -> ComposeResult:
        with Container(id="chat-export-container"):
            yield Label("Export chat", classes="title")
            yield Input(id="chat-name-input", value=self.file_name)
7 changes: 6 additions & 1 deletion oterm/app/chat_rename.py → src/oterm/app/chat_rename.py
@@ -7,10 +7,15 @@

class ChatRename(ModalScreen[str]):
    old_name: str = ""

    BINDINGS = [
        ("escape", "cancel", "Cancel"),
    ]

    def __init__(self, old_name: str) -> None:
        super().__init__()
        self.old_name = old_name

    def action_cancel(self) -> None:
        self.dismiss()

@@ -22,4 +27,4 @@ async def on_submit(self, event: Input.Submitted) -> None:
    def compose(self) -> ComposeResult:
        with Container(id="chat-rename-container"):
            yield Label("Rename chat", classes="title")
            yield Input(id="chat-rename-input", value=self.old_name)
            yield Input(id="chat-name-input", value=self.old_name)
5 changes: 5 additions & 0 deletions src/oterm/app/css.py
@@ -0,0 +1,5 @@
from pathlib import Path

tcss = ""
with open(Path(__file__).parent / "oterm.tcss") as f:
    tcss = f.read()
66 changes: 66 additions & 0 deletions src/oterm/app/image_browser.py
@@ -0,0 +1,66 @@
from base64 import b64encode
from io import BytesIO
from pathlib import Path

import PIL.Image as PILImage
from PIL import UnidentifiedImageError
from textual import on
from textual.app import ComposeResult
from textual.containers import Container, Horizontal, Vertical
from textual.screen import ModalScreen
from textual.widgets import DirectoryTree, Input, Label

from oterm.app.widgets.image import IMAGE_EXTENSIONS, Image, ImageDirectoryTree


class ImageSelect(ModalScreen[tuple[Path, str]]):
    BINDINGS = [
        ("escape", "cancel", "Cancel"),
    ]

    def action_cancel(self) -> None:
        self.dismiss()

    async def on_mount(self) -> None:
        dt = self.query_one(ImageDirectoryTree)
        dt.show_guides = False
        dt.focus()

    @on(DirectoryTree.FileSelected)
    async def on_image_selected(self, ev: DirectoryTree.FileSelected) -> None:
        try:
            buffer = BytesIO()
            image = PILImage.open(ev.path)
            if image.mode != "RGB":
                image = image.convert("RGB")
            image.save(buffer, format="JPEG")
            b64 = b64encode(buffer.getvalue()).decode("utf-8")
            self.dismiss((ev.path, b64))
        except UnidentifiedImageError:
            self.dismiss()

    @on(DirectoryTree.NodeHighlighted)
    async def on_image_highlighted(self, ev: DirectoryTree.NodeHighlighted) -> None:
        path = ev.node.data.path  # type: ignore
        if path.suffix in IMAGE_EXTENSIONS:
            image = self.query_one(Image)
            image.path = path.as_posix()

    @on(Input.Changed)
    async def on_root_changed(self, ev: Input.Changed) -> None:
        dt = self.query_one(ImageDirectoryTree)
        path = Path(ev.value)
        if not path.exists() or not path.is_dir():
            return
        dt.path = path

    def compose(self) -> ComposeResult:
        with Container(id="image-select-container"):
            with Horizontal():
                with Vertical(id="image-directory-tree"):
                    yield Label("Select an image:", classes="title")
                    yield Label("Root:")
                    yield Input(Path("./").resolve().as_posix())
                    yield ImageDirectoryTree("./")
                with Container(id="image-preview"):
                    yield Image(id="image")
266 changes: 266 additions & 0 deletions src/oterm/app/oterm.py
@@ -0,0 +1,266 @@
import json
from typing import Iterable

from ollama import Options, Tool
from textual import on, work
from textual.app import App, ComposeResult, SystemCommand
from textual.binding import Binding
from textual.screen import Screen
from textual.widgets import Footer, Header, TabbedContent, TabPane

from oterm.app.chat_edit import ChatEdit
from oterm.app.chat_export import ChatExport, slugify
from oterm.app.pull_model import PullModel
from oterm.app.splash import splash
from oterm.app.widgets.chat import ChatContainer
from oterm.config import appConfig
from oterm.store.store import Store
from oterm.tools.mcp import setup_mcp_servers, teardown_mcp_servers
from oterm.utils import is_up_to_date


class OTerm(App):
    TITLE = "oterm"
    SUB_TITLE = "A terminal-based Ollama client."
    CSS_PATH = "oterm.tcss"
    BINDINGS = [
        Binding("ctrl+tab", "cycle_chat(+1)", "next chat", id="next.chat"),
        Binding("ctrl+shift+tab", "cycle_chat(-1)", "prev chat", id="prev.chat"),
        Binding("ctrl+backspace", "delete_chat", "delete chat", id="delete.chat"),
        Binding("ctrl+n", "new_chat", "new chat", id="new.chat"),
        Binding("ctrl+q", "quit", "quit", id="quit"),
    ]

    def get_system_commands(self, screen: Screen) -> Iterable[SystemCommand]:
        yield from super().get_system_commands(screen)
        yield SystemCommand("New chat", "Creates a new chat", self.action_new_chat)
        yield SystemCommand(
            "Edit chat parameters",
            "Allows to redefine model parameters and system prompt",
            self.action_edit_chat,
        )
        yield SystemCommand(
            "Rename chat", "Renames the current chat", self.action_rename_chat
        )
        yield SystemCommand(
            "Clear chat", "Clears the current chat", self.action_clear_chat
        )
        yield SystemCommand(
            "Delete chat", "Deletes the current chat", self.action_delete_chat
        )
        yield SystemCommand(
            "Export chat",
            "Exports the current chat as Markdown (in the current working directory)",
            self.action_export_chat,
        )
        yield SystemCommand(
            "Regenerate last Ollama message",
            "Regenerates the last Ollama message (setting a random seed for the message)",
            self.action_regenerate_last_message,
        )

        yield SystemCommand(
            "Pull model",
            "Pulls (or updates) the model from the Ollama server",
            self.action_pull_model,
        )

    async def action_quit(self) -> None:
        self.log("Quitting...")
        await teardown_mcp_servers()
        return self.exit()

    async def action_cycle_chat(self, change: int) -> None:
        tabs = self.query_one(TabbedContent)
        store = await Store.get_store()
        saved_chats = await store.get_chats()
        if tabs.active_pane is None:
            return
        active_id = int(str(tabs.active_pane.id).split("-")[1])
        for _chat in saved_chats:
            if _chat[0] == active_id:
                next_index = (saved_chats.index(_chat) + change) % len(saved_chats)
                next_id = saved_chats[next_index][0]
                tabs.active = f"chat-{next_id}"
                break

    @work
    async def action_new_chat(self) -> None:
        store = await Store.get_store()
        model_info: str | None = await self.push_screen_wait(ChatEdit())
        if not model_info:
            return
        model: dict = json.loads(model_info)
        tabs = self.query_one(TabbedContent)
        tab_count = tabs.tab_count
        name = f"chat #{tab_count+1} - {model['name']}"
        id = await store.save_chat(
            id=None,
            name=name,
            model=model["name"],
            system=model["system"],
            format=model["format"],
            parameters=model["parameters"],
            keep_alive=model["keep_alive"],
            tools=model["tools"],
        )
        pane = TabPane(name, id=f"chat-{id}")
        pane.compose_add_child(
            ChatContainer(
                db_id=id,
                chat_name=name,
                model=model["name"],
                system=model["system"],
                format=model["format"],
                parameters=Options(**model.get("parameters", {})),
                keep_alive=model["keep_alive"],
                messages=[],
                tools=[Tool(**t) for t in model.get("tools", [])],
            )
        )
        await tabs.add_pane(pane)
        tabs.active = f"chat-{id}"

    async def action_edit_chat(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            return
        chat = tabs.active_pane.query_one(ChatContainer)
        chat.action_edit_chat()

    async def action_rename_chat(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            return
        chat = tabs.active_pane.query_one(ChatContainer)
        chat.action_rename_chat()

    async def action_clear_chat(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            return
        chat = tabs.active_pane.query_one(ChatContainer)
        await chat.action_clear_chat()

    async def action_delete_chat(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            return
        chat = tabs.active_pane.query_one(ChatContainer)
        store = await Store.get_store()
        await store.delete_chat(chat.db_id)
        await tabs.remove_pane(tabs.active)

    async def action_export_chat(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            return
        chat = tabs.active_pane.query_one(ChatContainer)
        screen = ChatExport()
        screen.chat_id = chat.db_id
        screen.file_name = f"{slugify(chat.chat_name)}.md"
        self.push_screen(screen)

    async def action_regenerate_last_message(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            return
        chat = tabs.active_pane.query_one(ChatContainer)
        await chat.action_regenerate_llm_message()

    async def action_pull_model(self) -> None:
        tabs = self.query_one(TabbedContent)
        if tabs.active_pane is None:
            screen = PullModel("")
        else:
            chat = tabs.active_pane.query_one(ChatContainer)
            screen = PullModel(chat.ollama.model)
        self.push_screen(screen)

    async def load_mcp(self):
        from oterm.tools import available

        mcp_tool_defs = await setup_mcp_servers()
        available += mcp_tool_defs

    async def on_mount(self) -> None:
        store = await Store.get_store()
        theme = appConfig.get("theme")
        if theme:
            if theme == "dark":
                self.theme = "textual-dark"
            elif theme == "light":
                self.theme = "textual-light"
            else:
                self.theme = theme
        self.dark = appConfig.get("theme") == "dark"
        self.watch(self.app, "theme", self.on_theme_change, init=False)

        saved_chats = await store.get_chats()
        # Apply any remap of key bindings.
        keymap = appConfig.get("keymap")
        if keymap:
            self.set_keymap(keymap)

        await self.load_mcp()

        async def on_splash_done(message) -> None:
            if not saved_chats:
                # Pyright suggests awaiting here which has bitten me twice
                # so I'm ignoring it
                self.action_new_chat()  # type: ignore
            else:
                tabs = self.query_one(TabbedContent)
                for (
                    id,
                    name,
                    model,
                    system,
                    format,
                    parameters,
                    keep_alive,
                    tools,
                ) in saved_chats:
                    messages = await store.get_messages(id)
                    container = ChatContainer(
                        db_id=id,
                        chat_name=name,
                        model=model,
                        messages=messages,
                        system=system,
                        format=format,
                        parameters=parameters,
                        keep_alive=keep_alive,
                        tools=tools,
                    )
                    pane = TabPane(name, container, id=f"chat-{id}")
                    tabs.add_pane(pane)
            up_to_date, current_version, latest = await is_up_to_date()
            if not up_to_date:
                self.notify(
                    f"[b]oterm[/b] version [i]{latest}[/i] is available, please update.",
                    severity="warning",
                )

        if appConfig.get("splash-screen"):
            self.push_screen(splash, callback=on_splash_done)
        else:
            await on_splash_done("")

    def on_theme_change(self, old_value: str, new_value: str) -> None:
        if appConfig.get("theme") != new_value:
            appConfig.set("theme", new_value)

    @work
    @on(TabbedContent.TabActivated)
    async def on_tab_activated(self, event: TabbedContent.TabActivated) -> None:
        container = event.pane.query_one(ChatContainer)
        await container.load_messages()

    def compose(self) -> ComposeResult:
        yield Header()
        yield TabbedContent(id="tabs")
        yield Footer()


app = OTerm()
133 changes: 89 additions & 44 deletions oterm/app/oterm.tcss → src/oterm/app/oterm.tcss
@@ -4,13 +4,10 @@ ChatContainer {
#messageContainer {
overflow-y: auto;
padding-bottom: 1;
padding-right: 1;
height: 100%;
}

ChatItem {
padding: 1;
padding-bottom: 0;
height: auto;
}

@@ -21,8 +18,7 @@ ChatItem .chatItem {
ChatItem .author {
width: 8%;
padding: 1;
color: $secondary;

color: $foreground;
}

ChatItem .USER {
@@ -40,16 +36,12 @@ ChatItem .text{

#prompt {
background: $panel;
margin: 1;
margin-bottom: 0;
dock: bottom;
padding: 1;
}

#prompt.singleline {
height: 5;
margin: 1;
padding: 1;
margin-bottom: 0;
}

#prompt.multiline #promptInput {
@@ -66,13 +58,12 @@ ChatItem .text{
}

#prompt #promptInput #promptArea {
width: 70%;
dock: left;
}

#prompt #button-container {
padding-left: 1;
width: 35;
width: 24;
dock: right;
}

@@ -91,66 +82,98 @@ LoadingIndicator {
height: 3;
}

#model-select-container {
#edit-chat-container {
background: $panel;
border: $panel-lighten-2;
margin: 2;
padding: 1;
}

#model-select-container .title {
color: $secondary;
#edit-chat-container .title {
color: $primary;
}

#edit-chat-container .system , .parameters {
margin: 1;
}

#model-select-container .button-container {
#edit-chat-container .button-container {
height: 3;
}
#model-select-container .button-container Button {

#edit-chat-container .button-container Button {
margin-right: 1;
}

#model-select {
margin-top: 1;
max-height: 60%;
}

#model-details {
margin: 1;
#model-info {
height: 1;
}

#model-info .title, .name, .tag, .size {
margin-right: 1;
}

#model-select-container .button-container {
#edit-chat-container .button-container {
margin: 1;
}

#model-select-container .json-format {
#edit-chat-container .json-format {
background: $panel;
color: $secondary;
}

#chat-rename-container {
#tool-list {
margin: 1;
padding-right:3;
}

#chat-rename-container, #chat-export-container, #prompt-history-container, #pull-model-container{
width: 80%;
height: 10;
background: $panel;
border: $panel-lighten-2;
margin-top: 2;
margin-left: 2;
padding: 1;
}

#chat-rename-container .title {
color: $secondary;
#chat-rename-container, #chat-export-container {
height: 10;
}

#chat-rename-input {
margin: 2;

#chat-rename-container .title, #chat-export-container .title, #prompt-history-container, #pull-model-container .title {
color: $primary;
}

#image-select-container .title {
color: $secondary;
#prompt-history-container .title {
margin-bottom: 1;
}

#prompt-history-container #prompt-history {
height: 90%;
}
#prompt-history-container {
height: 70%;
}


#chat-name-input {
margin: 2;
}
#image-select-container {
background: $panel;
}
#image-select-container .title {
color: $primary;
width: 100%;
padding-top: 1;
padding-bottom: 1;

text-align: center;
}
#image-select-container #image-directory-tree {
width: 30%;
@@ -162,24 +185,46 @@ LoadingIndicator {
overflow-y: auto;
}

Notification {
background: $panel;
color: $secondary;
#image-select-container Input {
margin: 1;
border: $panel-lighten-2;
padding-left: 1;
padding: 0;
}

#pull-model-container {
max-height: 80%;
}

#pull-model-container Horizontal {
margin-top: 1;
height: 3;
}

#splash-container {
height: 100%;
width: 100%;
background: $panel;
#pull-model-container Input {
width: 60;
}

#splash {
color: $secondary;
height: 100%;
width: 100%;
content-align: center middle;
TabbedContent {
padding-top: 2;
layer: below;
}

MarkdownFence {
max-height: 50;
}

Button.icon {
min-width: 5;
}

Input.keep-alive {
width: 10;
}

Label.keep-alive-label {
margin-top: 1;
}

#app-root {
height: 50vh;
max-height: 50vh;
}
34 changes: 34 additions & 0 deletions src/oterm/app/prompt_history.py
@@ -0,0 +1,34 @@
from rich.text import Text
from textual.app import ComposeResult
from textual.containers import Container
from textual.screen import ModalScreen
from textual.widgets import Label, OptionList


class PromptHistory(ModalScreen[str]):

    history: list[str] = []
    BINDINGS = [
        ("escape", "cancel", "Cancel"),
    ]

    def __init__(self, history=[]) -> None:
        self.history = history
        super().__init__()

    def action_cancel(self) -> None:
        self.dismiss()

    def on_mount(self) -> None:
        option_list = self.query_one("#prompt-history", OptionList)
        option_list.clear_options()
        for prompt in self.history:
            option_list.add_option(item=Text(prompt))

    def on_option_list_option_selected(self, option: OptionList.OptionSelected) -> None:
        self.dismiss(str(option.option.prompt))

    def compose(self) -> ComposeResult:
        with Container(id="prompt-history-container"):
            yield Label("Prompt history", classes="title")
            yield OptionList(id="prompt-history")
54 changes: 54 additions & 0 deletions src/oterm/app/pull_model.py
@@ -0,0 +1,54 @@
import asyncio

from ollama import ResponseError
from textual import on, work
from textual.app import ComposeResult
from textual.containers import Container, Horizontal
from textual.screen import ModalScreen
from textual.widgets import Button, Input, Label, TextArea

from oterm.ollamaclient import OllamaLLM


class PullModel(ModalScreen[str]):

model: str = ""
BINDINGS = [
("escape", "cancel", "Cancel"),
]

def __init__(self, model: str) -> None:
self.model = model
super().__init__()

def action_cancel(self) -> None:
self.dismiss()

@work
async def pull_model(self) -> None:
log = self.query_one(".log", TextArea)
stream = OllamaLLM.pull(self.model)
try:
for response in stream:
log.text += response.model_dump_json() + "\n"
await asyncio.sleep(0.1)
await asyncio.sleep(1.0)
except ResponseError as e:
log.text += f"Error: {e}\n"

@on(Input.Changed)
async def on_model_change(self, ev: Input.Changed) -> None:
self.model = ev.value

@on(Button.Pressed)
@on(Input.Submitted)
async def on_pull(self, ev: Button.Pressed) -> None:
self.pull_model()

def compose(self) -> ComposeResult:
with Container(id="pull-model-container"):
yield Label("Pull model", classes="title")
with Horizontal():
yield Input(self.model)
yield Button("Pull", variant="primary")
yield TextArea(classes="parameters log", read_only=True)
98 changes: 98 additions & 0 deletions src/oterm/app/splash.py
@@ -0,0 +1,98 @@
import random
from typing import Any, Tuple

from textualeffects.effects import EffectType
from textualeffects.widgets import SplashScreen

logo = """
@@@@@@. :@@@@@@
@@@@@@@@@ @@@@@@@@@
@@@@= @@@@@ @@@@@ =@@@@
%@@@% @@@@. .@@@@ %@@@%
@@@@ .@@@@ @@@@. @@@@
@@@@ @@@@ @@@@@@@@@@@@ @@@@ @@@@
@@@@ @@@@@@@@@@@@@@@@@@@@@@@@ @@@@
@@@@ @@@@@@@@ @@@@@@@@ @@@@
@@@@ .@@@@@ @@@@@. @@@@
@@@@@@@@@@@@. .@@@@@@@@@@@@
@@@@@@@@@@@@ @@@@@@@@@@@@
#@@@@@* *@@@@@#
@@@@@ @@@@@
@@@@@ @@@@@
=@@@@ @@@@=
@@@@ @@@@
@@@@ - +@@@@@@+ - @@@@
@@@@ .@@@@@ :@@@@@@@@@@@@@@: @@@@@. @@@@
@@@@ %@@@@@ @@@@ @@@@ @@@@@% @@@@
=@@@@ @@ .@@@ @@@. @@ @@@@=
@@@@@ @@@ *@@@@: @@@ @@@@@
%@@@@ @@@ @@ @@@ @@@@%
@@@@: @@@ @@ @@@ :@@@@
@@@@. @@@@ @@@@ .@@@@
:@@@@ @@@@@@@@@@@@@@@@ @@@@.
@@@@- =@@@@@@@@= -@@@@
@@@@ @@@@
@@@@: :@@@@
*@@@ @@@*
@@@@ @@@@
#@@@@ @@@@#
%@@@@ @@@@@
#@@@@ @@@@#
@@@@ @@@@
@@@@= =@@@@
@@@@ @@@@
@@@@ @@@@
@@@@ @@@@
@. .@
"""

effects: list[Tuple[EffectType, dict[str, Any]]] = [
(
"Beams",
{
"beam_delay": 3,
"beam_gradient_steps": 2,
"beam_gradient_frames": 2,
"final_gradient_steps": 2,
"final_gradient_frames": 2,
"final_wipe_speed": 5,
},
),
(
"BouncyBalls",
{
"ball_delay": 1,
},
),
(
"Expand",
{
"movement_speed": 0.1,
},
),
(
"Pour",
{
"pour_speed": 3,
},
),
(
"Rain",
{},
),
(
"RandomSequence",
{},
),
(
"Scattered",
{},
),
(
"Slide",
{},
),
]

effect = random.choice(effects)
splash = SplashScreen(text=logo, effect=effect[0], config=effect[1])
1 change: 1 addition & 0 deletions src/oterm/app/widgets/__init__.py
@@ -0,0 +1 @@
import oterm.app.widgets.monkey # noqa: F401
423 changes: 423 additions & 0 deletions src/oterm/app/widgets/chat.py

Large diffs are not rendered by default.

49 changes: 1 addition & 48 deletions oterm/app/image_browser.py → src/oterm/app/widgets/image.py
@@ -1,18 +1,13 @@
from base64 import b64encode
from io import BytesIO
from pathlib import Path
from typing import Iterable

from PIL import Image as PILImage
from PIL import UnidentifiedImageError
from rich_pixels import Pixels
from textual.app import ComposeResult
from textual.containers import Container, Horizontal, Vertical
from textual.message import Message
from textual.reactive import reactive
from textual.screen import ModalScreen
from textual.widget import Widget
from textual.widgets import DirectoryTree

IMG_MAX_SIZE = 80
IMAGE_EXTENSIONS = PILImage.registered_extensions()
@@ -64,45 +59,3 @@ def filter_paths(self, paths: Iterable[Path]) -> Iterable[Path]:
return [
path for path in paths if path.suffix in IMAGE_EXTENSIONS or path.is_dir()
]


class ImageSelect(ModalScreen[tuple[Path, str]]):
BINDINGS = [
("escape", "cancel", "Cancel"),
]

def action_cancel(self) -> None:
self.dismiss()

async def on_mount(self) -> None:
dt = self.query_one(ImageDirectoryTree)
dt.show_guides = False

async def on_directory_tree_file_selected(
self, ev: DirectoryTree.FileSelected
) -> None:
try:
buffer = BytesIO()
image = PILImage.open(ev.path)
if image.mode != "RGB":
image = image.convert("RGB")
image.save(buffer, format="JPEG")
b64 = b64encode(buffer.getvalue()).decode("utf-8")
self.dismiss((ev.path, b64))
except UnidentifiedImageError:
self.dismiss()

async def on_tree_node_highlighted(self, ev: DirectoryTree.NodeHighlighted) -> None:
path = ev.node.data.path
if path.suffix in IMAGE_EXTENSIONS:
image = self.query_one(Image)
image.path = path.as_posix()

def compose(self) -> ComposeResult:
with Container(id="image-select-container"):
with Horizontal():
with Vertical(id="image-directory-tree"):
yield Label("Select an image:", classes="title")
yield ImageDirectoryTree("./")
with Container(id="image-preview"):
yield Image(id="image")
15 changes: 15 additions & 0 deletions src/oterm/app/widgets/monkey.py
@@ -0,0 +1,15 @@
import textual.widgets._markdown as markdown
from textual import on
from textual.events import Click


class MarkdownFence(markdown.MarkdownFence):
@on(Click)
async def on_click(self, event: Click) -> None:
event.stop()
self.app.copy_to_clipboard(self.code)
self.styles.animate("opacity", 0.5, duration=0.1)
self.styles.animate("opacity", 1.0, duration=0.1, delay=0.1)


markdown.MarkdownFence = MarkdownFence
74 changes: 70 additions & 4 deletions oterm/app/prompt.py → src/oterm/app/widgets/prompt.py
@@ -11,7 +11,53 @@
from textual.widget import Widget
from textual.widgets import Button, Input, TextArea

from oterm.app.image_browser import ImageSelect
from oterm.app.widgets.image import ImageAdded


class PostableTextArea(TextArea):
"""
A text area that submits on Enter.
"""

BINDINGS = TextArea.BINDINGS + [
Binding(
key="enter",
action="submit",
description="submit",
show=True,
key_display=None,
priority=True,
),
Binding(
key="shift+enter",
action="newline",
description="newline",
show=True,
key_display=None,
priority=True,
id="newline",
),
]

@dataclass
class Submitted(Message):
input: "PostableTextArea"
value: str

@property
def control(self) -> "PostableTextArea":
return self.input

def action_submit(self) -> None:
self.post_message(PostableTextArea.Submitted(self, self.text))

def action_newline(self) -> None:
cur = self.cursor_location
lines = self.text.split("\n")
lines[cur[0]] = lines[cur[0]][: cur[1]] + "\n" + lines[cur[0]][cur[1] :]
self.text = "\n".join(lines)
self.cursor_location = (cur[0] + 1, 0)
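The cursor arithmetic in `action_newline` can be checked in isolation. A standalone sketch (a hypothetical `insert_newline` helper, not part of oterm) mirroring the same split/insert/join logic on a (row, column) cursor:

```python
def insert_newline(text: str, cursor: tuple[int, int]) -> tuple[str, tuple[int, int]]:
    """Insert a newline at a (row, column) cursor and move the cursor
    to the start of the new line, as action_newline does above."""
    row, col = cursor
    lines = text.split("\n")
    # Split the current line at the column and splice in a newline.
    lines[row] = lines[row][:col] + "\n" + lines[row][col:]
    return "\n".join(lines), (row + 1, 0)
```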


class PastableInput(Input):
@@ -23,6 +69,7 @@ class PastableInput(Input):
show=True,
key_display=None,
priority=True,
id="toggle.multiline",
),
]

@@ -47,7 +94,7 @@ class FlexibleInput(Widget):
text = reactive("")

BINDINGS = [
Binding("ctrl+i", "add_image", "add image", id="add.image"),
]

@dataclass
@@ -106,11 +153,22 @@ def watch_text(self):
self.query_one("#toggle-multiline", Button).disabled = True
else:
self.query_one("#toggle-multiline", Button).disabled = False

input = self.query_one("#promptInput", PastableInput)
textarea = self.query_one("#promptArea", TextArea)
if self.is_multiline:
if textarea.text != self.text:
textarea.text = self.text
else:
if input.value != self.text:
input.value = self.text
except NoMatches:
pass

def action_add_image(self) -> None:
async def on_image_selected(image) -> None:
if image is None:
return
path, b64 = image
self.post_message(ImageAdded(path, b64))

@@ -123,6 +181,12 @@ def on_input_submitted(self, event: PastableInput.Submitted):
event.stop()
event.prevent_default()

@on(PostableTextArea.Submitted, "#promptArea")
def on_textarea_submitted(self, event: PostableTextArea.Submitted):
self.post_message(self.Submitted(self, event.input.text))
event.stop()
event.prevent_default()

@on(Button.Pressed, "#toggle-multiline")
def on_toggle_multiline_pressed(self):
self.toggle_multiline()
@@ -150,7 +214,9 @@ def compose(self) -> ComposeResult:
id="promptInput",
placeholder="Message Ollama…",
)
yield PostableTextArea(id="promptArea")
with Horizontal(id="button-container"):
yield Button("post", id="post", variant="primary")
yield Button(
"↕", id="toggle-multiline", classes="icon", variant="success"
)
Empty file added src/oterm/cli/__init__.py
Empty file.
60 changes: 60 additions & 0 deletions src/oterm/cli/command.py
@@ -0,0 +1,60 @@
import asyncio
from pathlib import Path

import typer
from typing_extensions import Annotated

from oterm.command.create import app
from oterm.store.store import Store

cli = typer.Typer()


@cli.command("create")
def create(
name: Annotated[str, typer.Argument(help="The name of the command to create.")]
):
app.run(name)


@cli.command("list")
def list_commands():
async def get_commands():
store = await Store.get_store()
commands = await store.get_chats(type="command")
return commands

commands = asyncio.run(get_commands())
if not commands:
typer.echo("No commands found.")
return

typer.echo("Commands found:")
for command in commands:
id, name, *rest = command
path = Path.home() / ".local" / "bin" / name
exists = path.exists()
typer.echo(f"{id}: {name} -> {exists and str(path) or 'Not found'}")


@cli.command("delete")
def delete_command(
id: Annotated[int, typer.Argument(help="The id of the command to delete.")]
):
async def delete_comm():
store = await Store.get_store()
command = await store.get_chat(id=id)
if not command:
typer.echo("Command not found.")
return
_, name, *rest = command
path = Path.home() / ".local" / "bin" / name
path.unlink(missing_ok=True)
await store.delete_chat(id=id)
typer.echo("Command deleted.")

asyncio.run(delete_comm())


if __name__ == "__main__":
cli()
19 changes: 15 additions & 4 deletions oterm/cli/oterm.py → src/oterm/cli/oterm.py
@@ -2,22 +2,27 @@
from importlib import metadata

import typer
from rich.pretty import pprint

from oterm.app.oterm import app
from oterm.config import envConfig
from oterm.store.store import Store

cli = typer.Typer(context_settings={"help_option_names": ["-h", "--help"]})


async def upgrade_db():
await Store.get_store()


@cli.command()
def oterm(
version: bool = typer.Option(None, "--version", "-v"),
upgrade: bool = typer.Option(None, "--upgrade"),
config: bool = typer.Option(None, "--config"),
sqlite: bool = typer.Option(None, "--db"),
data_dir: bool = typer.Option(None, "--data-dir"),
):
if version:
typer.echo(f"oterm v{metadata.version('oterm')}")
@@ -26,7 +31,13 @@ def oterm(
asyncio.run(upgrade_db())
exit(0)
if sqlite:
typer.echo(envConfig.OTERM_DATA_DIR / "store.db")
exit(0)
if data_dir:
typer.echo(envConfig.OTERM_DATA_DIR)
exit(0)
if config:
pprint(envConfig)
exit(0)
app.run()

Empty file added src/oterm/command/__init__.py
Empty file.
69 changes: 69 additions & 0 deletions src/oterm/command/command_template.py.jinja
@@ -0,0 +1,69 @@
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
# "textual>=1.0.0,<1.1",
# "oterm"
# ]
# ///
from textual.app import App, ComposeResult
from textual.binding import Binding
from textual.containers import Vertical
from oterm.app.css import tcss
from oterm.app.widgets.chat import ChatContainer
from oterm.store.store import Store
from oterm.tools.mcp import setup_mcp_servers, teardown_mcp_servers
db_id = {{db_id}}
name = "{{name}}"
class InlineApp(App):
CSS = tcss
TITLE = f"{name}"
BINDINGS = [
Binding("ctrl+q", "quit", "quit", id="quit"),
]
async def load_mcp(self):
from oterm.tools import available
mcp_tool_defs = await setup_mcp_servers()
available += mcp_tool_defs
async def action_quit(self) -> None:
self.log("Quitting...")
await teardown_mcp_servers()
return self.exit()
async def on_mount(self) -> None:
await self.load_mcp()
store = await Store.get_store()
chat_meta = await store.get_chat(db_id)
if chat_meta is None:
return
id, name, model, system, format, parameters, keep_alive, tools, _ = chat_meta
messages = await store.get_messages(id)
chat = ChatContainer(
db_id=id,
chat_name=name,
model=model,
messages=messages,
system=system,
format=format,
parameters=parameters,
keep_alive=keep_alive,
tools=tools,
)
await self.get_child_by_id("app-root").mount(chat)
await chat.load_messages()
def compose(self) -> ComposeResult:
yield Vertical(id="app-root")
if __name__ == "__main__":
InlineApp().run(inline=True)
123 changes: 123 additions & 0 deletions src/oterm/command/create.py
@@ -0,0 +1,123 @@
import json
import stat
from pathlib import Path
from typing import Iterable

from jinja2 import Environment, FileSystemLoader
from textual.app import App, ComposeResult, SystemCommand
from textual.binding import Binding
from textual.screen import Screen
from textual.widgets import Footer, Header, TabbedContent

from oterm.app.chat_edit import ChatEdit
from oterm.app.pull_model import PullModel
from oterm.app.splash import splash
from oterm.app.widgets.chat import ChatContainer
from oterm.config import appConfig
from oterm.store.store import Store
from oterm.tools.mcp import setup_mcp_servers, teardown_mcp_servers


class CreateCommandApp(App):
TITLE = "oterm - Create Command"
SUB_TITLE = "Create custom LLM chats as commands."
CSS_PATH = "../app/oterm.tcss"
BINDINGS = [
Binding("ctrl+q", "quit", "quit", id="quit"),
]

def get_system_commands(self, screen: Screen) -> Iterable[SystemCommand]:
yield from super().get_system_commands(screen)

yield SystemCommand(
"Pull model",
"Pulls (or updates) a model from the Ollama server",
self.action_pull_model,
)

async def create_command(self) -> None:
async def on_done(model_info) -> None:
if model_info is None:
await self.action_quit()
return
model = json.loads(model_info)
store = await Store.get_store()

db_id = await store.save_chat(
id=None,
name=self.command_name,
model=model["name"],
system=model["system"],
format=model["format"],
parameters=model["parameters"],
keep_alive=model["keep_alive"],
tools=model["tools"],
type="command",
)
# Load the template from the package
environment = Environment(loader=FileSystemLoader(Path(__file__).parent))
template = environment.get_template("command_template.py.jinja")
path = Path.home() / ".local/bin" / self.command_name
path.parent.mkdir(parents=True, exist_ok=True)
with open(path, "w") as f:
f.write(template.render(db_id=db_id, name=self.command_name))
path.chmod(path.stat().st_mode | stat.S_IEXEC)
await self.action_quit()

await self.push_screen(ChatEdit(), callback=on_done)

async def action_quit(self) -> None:
self.log("Quitting...")
await teardown_mcp_servers()
return self.exit()

async def action_pull_model(self) -> None:
tabs = self.query_one(TabbedContent)
if tabs.active_pane is None:
screen = PullModel("")
else:
chat = tabs.active_pane.query_one(ChatContainer)
screen = PullModel(chat.ollama.model)
self.push_screen(screen)

async def load_mcp(self):
from oterm.tools import available

mcp_tool_defs = await setup_mcp_servers()
available += mcp_tool_defs

async def on_mount(self) -> None:
theme = appConfig.get("theme")
if theme:
if theme == "dark":
self.theme = "textual-dark"
elif theme == "light":
self.theme = "textual-light"
else:
self.theme = theme
self.dark = appConfig.get("theme") == "dark"
self.watch(self.app, "theme", self.on_theme_change, init=False)

await self.load_mcp()

async def on_splash_done(message) -> None:
await self.create_command()

if appConfig.get("splash-screen"):
self.push_screen(splash, callback=on_splash_done)
else:
await on_splash_done("")

def on_theme_change(self, old_value: str, new_value: str) -> None:
if appConfig.get("theme") != new_value:
appConfig.set("theme", new_value)

def compose(self) -> ComposeResult:
yield Header()
yield Footer()

def run(self, name: str):
self.command_name = name
return super().run()


app = CreateCommandApp()
58 changes: 58 additions & 0 deletions src/oterm/config.py
@@ -0,0 +1,58 @@
import json
import os
from pathlib import Path

from dotenv import load_dotenv
from pydantic import BaseModel

from oterm.utils import get_default_data_dir

load_dotenv()


class EnvConfig(BaseModel):

ENV: str = "development"
OLLAMA_HOST: str = "127.0.0.1:11434"
OLLAMA_URL: str = ""
OTERM_VERIFY_SSL: bool = True
OTERM_DATA_DIR: Path = get_default_data_dir()
OPEN_WEATHER_MAP_API_KEY: str = ""


envConfig = EnvConfig.model_validate(os.environ)
if envConfig.OLLAMA_URL == "":
envConfig.OLLAMA_URL = f"http://{envConfig.OLLAMA_HOST}"


class AppConfig:
def __init__(self, path: Path | None = None):
if path is None:
path = envConfig.OTERM_DATA_DIR / "config.json"
self._path = path
self._data = {
"theme": "textual-dark",
"splash-screen": True,
}
try:
with open(self._path, "r") as f:
saved = json.load(f)
self._data = self._data | saved
except FileNotFoundError:
Path.mkdir(self._path.parent, parents=True, exist_ok=True)
self.save()

def set(self, key, value):
self._data[key] = value
self.save()

def get(self, key):
return self._data.get(key)

def save(self):
with open(self._path, "w") as f:
json.dump(self._data, f)


# Expose AppConfig object for app to import
appConfig = AppConfig()
219 changes: 219 additions & 0 deletions src/oterm/ollamaclient.py
@@ -0,0 +1,219 @@
import inspect
import json
from ast import literal_eval
from pathlib import Path
from typing import (
Any,
AsyncGenerator,
AsyncIterator,
Iterator,
Literal,
Mapping,
Sequence,
)

from ollama import (
AsyncClient,
ChatResponse,
Client,
ListResponse,
Message,
Options,
ProgressResponse,
ShowResponse,
)
from pydantic.json_schema import JsonSchemaValue
from textual import log

from oterm.config import envConfig
from oterm.types import ToolDefinition


def parse_format(format_text: str) -> JsonSchemaValue | Literal["", "json"]:
try:
jsn = json.loads(format_text)
if isinstance(jsn, dict):
return jsn
except json.JSONDecodeError:
if format_text in ("", "json"):
return format_text
raise Exception(f"Invalid Ollama format: '{format_text}'")
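A self-contained restatement of this parsing rule (raising `ValueError` here rather than the bare `Exception` above): a JSON object is accepted as a format schema, the literals `""` and `"json"` pass through, and anything else is rejected.

```python
import json


def parse_format(format_text: str):
    """Accept a JSON-object schema, or the literal '' / 'json' strings."""
    try:
        jsn = json.loads(format_text)
        if isinstance(jsn, dict):
            return jsn
    except json.JSONDecodeError:
        # Not JSON at all -- only the two literal strings are allowed.
        if format_text in ("", "json"):
            return format_text
    raise ValueError(f"Invalid Ollama format: '{format_text}'")
```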


class OllamaLLM:
def __init__(
self,
model="llama3.2",
system: str | None = None,
history: list[Mapping[str, Any] | Message] | None = None,
format: str = "",
options: Options = Options(),
keep_alive: int = 5,
tool_defs: Sequence[ToolDefinition] = [],
):
self.model = model
self.system = system
self.history = history if history is not None else []
self.format = format
self.keep_alive = keep_alive
self.options = options
self.tool_defs = tool_defs
self.tools = [tool["tool"] for tool in tool_defs]

if system:
system_prompt: Message = Message(role="system", content=system)
self.history = [system_prompt] + self.history

async def completion(
self,
prompt: str = "",
images: list[Path | bytes | str] = [],
tool_call_messages=[],
) -> str:
client = AsyncClient(
host=envConfig.OLLAMA_URL, verify=envConfig.OTERM_VERIFY_SSL
)
if prompt:
user_prompt: Message = Message(role="user", content=prompt)
if images:
# This is a bug in Ollama: the images should be a list of Image objects
# user_prompt.images = [Image(value=image) for image in images]
user_prompt.images = images # type: ignore
self.history.append(user_prompt)
response: ChatResponse = await client.chat(
model=self.model,
messages=self.history + tool_call_messages,
keep_alive=f"{self.keep_alive}m",
options=self.options,
format=parse_format(self.format),
tools=self.tools,
)
message = response.message
tool_calls = message.tool_calls
if tool_calls:
tool_messages = [message]
for tool_call in tool_calls:

tool_name = tool_call["function"]["name"]
for tool_def in self.tool_defs:
log.debug("Calling tool: %s", tool_name)
if tool_def["tool"]["function"]["name"] == tool_name:
tool_callable = tool_def["callable"]
tool_arguments = tool_call["function"]["arguments"]
try:
if inspect.iscoroutinefunction(tool_callable):
tool_response = await tool_callable(**tool_arguments) # type: ignore
else:
tool_response = tool_callable(**tool_arguments) # type: ignore
log.debug(f"Tool response: {tool_response}", tool_response)
except Exception as e:
log.error(f"Error calling tool {tool_name}", e)
tool_response = str(e)
tool_messages.append(
{ # type: ignore
"role": "tool",
"content": tool_response,
"name": tool_name,
}
)
return await self.completion(
tool_call_messages=tool_messages,
)

self.history.append(message)
text_response = message.content
return text_response or ""

async def stream(
self,
prompt: str,
images: list[Path | bytes | str] = [],
additional_options: Options = Options(),
tool_defs: Sequence[ToolDefinition] = [],
) -> AsyncGenerator[str, Any]:

# stream() should not be called with tools till Ollama supports streaming with tools.
# See https://github.com/ollama/ollama-python/issues/279
if tool_defs:
raise NotImplementedError(
"stream() should not be called with tools till Ollama supports streaming with tools."
)

client = AsyncClient(
host=envConfig.OLLAMA_URL, verify=envConfig.OTERM_VERIFY_SSL
)
user_prompt: Message = Message(role="user", content=prompt)
if images:
user_prompt.images = images # type: ignore

self.history.append(user_prompt)
options = {
k: v for k, v in self.options.model_dump().items() if v is not None
} | {k: v for k, v in additional_options.model_dump().items() if v is not None}

stream: AsyncIterator[ChatResponse] = await client.chat(
model=self.model,
messages=self.history,
stream=True,
options=options,
keep_alive=f"{self.keep_alive}m",
format=parse_format(self.format),
tools=self.tools,
)
text = ""
async for response in stream:
text = text + response.message.content if response.message.content else text
yield text

self.history.append(Message(role="assistant", content=text))
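The option-merging expression in `stream()` above drops unset (`None`) values from both sides and lets the per-call options win on conflicts. A plain-dict sketch of the same rule:

```python
def merge_options(base: dict, extra: dict) -> dict:
    """Drop None-valued entries from both dicts, then merge with
    `extra` taking precedence, as stream() does above."""
    strip = lambda d: {k: v for k, v in d.items() if v is not None}
    return strip(base) | strip(extra)
```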

@staticmethod
def list() -> ListResponse:
client = Client(host=envConfig.OLLAMA_URL, verify=envConfig.OTERM_VERIFY_SSL)
return client.list()

@staticmethod
def show(model: str) -> ShowResponse:
client = Client(host=envConfig.OLLAMA_URL, verify=envConfig.OTERM_VERIFY_SSL)
return client.show(model)

@staticmethod
def pull(model: str) -> Iterator[ProgressResponse]:
client = Client(host=envConfig.OLLAMA_URL, verify=envConfig.OTERM_VERIFY_SSL)
stream: Iterator[ProgressResponse] = client.pull(model, stream=True)
for response in stream:
yield response


def parse_ollama_parameters(parameter_text: str) -> Options:
lines = parameter_text.split("\n")
params = Options()
valid_params = set(Options.model_fields.keys())
for line in lines:
if line:
key, value = line.split(maxsplit=1)
try:
value = literal_eval(value)
except (SyntaxError, ValueError):
pass
if key not in valid_params:
continue
if params.get(key) is not None:
if not isinstance(params[key], list):
params[key] = [params[key], value]
else:
params[key].append(value)
else:
params[key] = value
return params
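The repeated-key behavior of `parse_ollama_parameters` (several `stop` lines accumulate into a list) can be sketched with a plain dict; this hypothetical `parse_parameters` omits the `Options` field filtering:

```python
from ast import literal_eval


def parse_parameters(parameter_text: str) -> dict:
    """Parse 'key value' lines; literal_eval values when possible and
    collect repeated keys into a list."""
    params: dict = {}
    for line in parameter_text.split("\n"):
        if not line:
            continue
        key, value = line.split(maxsplit=1)
        try:
            value = literal_eval(value)
        except (SyntaxError, ValueError):
            pass  # keep the raw string
        if key in params:
            if not isinstance(params[key], list):
                params[key] = [params[key], value]
            else:
                params[key].append(value)
        else:
            params[key] = value
    return params
```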


def jsonify_options(options: Options) -> str:
return json.dumps(
{
key: value
for key, value in options.model_dump().items()
if value is not None
},
indent=2,
)
Empty file added src/oterm/store/__init__.py
Empty file.
270 changes: 270 additions & 0 deletions src/oterm/store/store.py
@@ -0,0 +1,270 @@
import json
from importlib import metadata
from pathlib import Path

import aiosqlite
from ollama import Options
from packaging.version import parse

from oterm.config import envConfig
from oterm.store.upgrades import upgrades
from oterm.types import Author, Tool
from oterm.utils import int_to_semantic_version, semantic_version_to_int


class Store(object):
db_path: Path

_store: "Store | None" = None

@classmethod
async def get_store(cls) -> "Store":
if cls._store is not None:
return cls._store
self = Store()
data_path = envConfig.OTERM_DATA_DIR
data_path.mkdir(parents=True, exist_ok=True)
self.db_path = data_path / "store.db"

if not self.db_path.exists():
# Create tables and set user_version
async with aiosqlite.connect(self.db_path) as connection:
await connection.executescript(
"""
CREATE TABLE IF NOT EXISTS "chat" (
"id" INTEGER,
"name" TEXT,
"model" TEXT NOT NULL,
"system" TEXT,
"format" TEXT,
"parameters" TEXT DEFAULT "{}",
"keep_alive" INTEGER DEFAULT 5,
"tools" TEXT DEFAULT "[]",
"type" TEXT DEFAULT "chat",
PRIMARY KEY("id" AUTOINCREMENT)
);
CREATE TABLE IF NOT EXISTS "message" (
"id" INTEGER,
"chat_id" INTEGER NOT NULL,
"author" TEXT NOT NULL,
"text" TEXT NOT NULL,
"images" TEXT DEFAULT "[]",
PRIMARY KEY("id" AUTOINCREMENT),
FOREIGN KEY("chat_id") REFERENCES "chat"("id") ON DELETE CASCADE
);
"""
)
await self.set_user_version(metadata.version("oterm"))
else:
# Upgrade database
current_version: str = metadata.version("oterm")
db_version = await self.get_user_version()
for version, steps in upgrades:
if parse(current_version) >= parse(version) and parse(version) > parse(
db_version
):
for step in steps:
await step(self.db_path)
await self.set_user_version(current_version)
cls._store = self
return self
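The version-gating loop above runs the steps for every upgrade version that is newer than the stored DB version but not newer than the running app. A sketch using plain `(major, minor, patch)` tuples instead of `packaging.version` objects:

```python
def applicable_upgrades(upgrades, db_version, current_version):
    """Collect migration steps where current_version >= version > db_version,
    mirroring the gate in Store.get_store above."""
    steps = []
    for version, version_steps in upgrades:
        if current_version >= version > db_version:
            steps.extend(version_steps)
    return steps
```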

async def get_user_version(self) -> str:
async with aiosqlite.connect(self.db_path) as connection:
res = await connection.execute("PRAGMA user_version;")
res = await res.fetchone()
return int_to_semantic_version(res[0] if res else 0)

async def set_user_version(self, version: str) -> None:
async with aiosqlite.connect(self.db_path) as connection:
await connection.execute(
f"PRAGMA user_version = {semantic_version_to_int(version)};"
)
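`semantic_version_to_int` and `int_to_semantic_version` come from `oterm.utils` and are not part of this diff; a hypothetical packing that would satisfy the round-trip, assuming 8 bits per version field:

```python
def semantic_version_to_int(version: str) -> int:
    # Assumed layout: major/minor/patch each fit in 8 bits.
    major, minor, patch = (int(p) for p in version.split("."))
    return (major << 16) | (minor << 8) | patch


def int_to_semantic_version(value: int) -> str:
    return f"{(value >> 16) & 0xFF}.{(value >> 8) & 0xFF}.{value & 0xFF}"
```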

async def save_chat(
self,
id: int | None,
name: str,
model: str,
system: str | None,
format: str,
parameters: Options,
keep_alive: int,
tools: list[Tool],
type: str = "chat",
) -> int:
async with aiosqlite.connect(self.db_path) as connection:
res = await connection.execute_insert(
"""
INSERT OR REPLACE
INTO chat(id, name, model, system, format, parameters, keep_alive, tools, type)
VALUES(:id, :name, :model, :system, :format, :parameters, :keep_alive, :tools, :type) RETURNING id;""",
{
"id": id,
"name": name,
"model": model,
"system": system,
"format": format,
"parameters": json.dumps(parameters),
"keep_alive": keep_alive,
"tools": json.dumps(tools),
"type": type,
},
)
await connection.commit()

return res[0] if res else 0

async def rename_chat(self, id: int, name: str) -> None:
async with aiosqlite.connect(self.db_path) as connection:
await connection.execute(
"UPDATE chat SET name = :name WHERE id = :id;", {"id": id, "name": name}
)
await connection.commit()

async def edit_chat(
self,
id: int,
name: str,
system: str | None,
format: str,
parameters: Options,
keep_alive: int,
tools: list[Tool],
) -> None:
async with aiosqlite.connect(self.db_path) as connection:
await connection.execute(
"""
UPDATE chat
SET name = :name,
system = :system,
format = :format,
parameters = :parameters,
keep_alive = :keep_alive,
tools = :tools
WHERE id = :id;
""",
{
"id": id,
"name": name,
"system": system,
"format": format,
"parameters": json.dumps(parameters),
"keep_alive": keep_alive,
"tools": json.dumps(tools),
},
)
await connection.commit()

async def get_chats(
self, type="chat"
) -> list[tuple[int, str, str, str | None, str, Options, int, list[Tool]]]:
async with aiosqlite.connect(self.db_path) as connection:
chats = await connection.execute_fetchall(
"""
SELECT id, name, model, system, format, parameters, keep_alive, tools
FROM chat WHERE type = :type;
""",
{"type": type},
)
return [
(
id,
name,
model,
system,
format,
Options(**json.loads(parameters)),
keep_alive,
[Tool(**t) for t in json.loads(tools)],
)
for id, name, model, system, format, parameters, keep_alive, tools in chats
]

async def get_chat(
self, id: int
) -> tuple[int, str, str, str | None, str, Options, int, list[Tool], str] | None:
async with aiosqlite.connect(self.db_path) as connection:
chat = await connection.execute_fetchall(
"""
SELECT id, name, model, system, format, parameters, keep_alive, tools, type
FROM chat
WHERE id = :id;
""",
{"id": id},
)
chat = next(iter(chat), None)
if chat:
id, name, model, system, format, parameters, keep_alive, tools, type = (
chat
)
return (
id,
name,
model,
system,
format,
Options(**json.loads(parameters)),
keep_alive,
[Tool(**t) for t in json.loads(tools)],
type,
)

async def delete_chat(self, id: int) -> None:
async with aiosqlite.connect(self.db_path) as connection:
await connection.execute("PRAGMA foreign_keys = on;")
await connection.execute("DELETE FROM chat WHERE id = :id;", {"id": id})
await connection.commit()

async def save_message(
self,
id: int | None,
chat_id: int,
author: str,
text: str,
images: list[str] = [],
) -> int:
async with aiosqlite.connect(self.db_path) as connection:
res = await connection.execute_insert(
"""
INSERT OR REPLACE
INTO message(id, chat_id, author, text, images)
VALUES(:id, :chat_id, :author, :text, :images) RETURNING id;
""",
{
"id": id,
"chat_id": chat_id,
"author": author,
"text": text,
"images": json.dumps(images),
},
)
await connection.commit()
return res[0] if res else 0

async def get_messages(
self, chat_id: int
) -> list[tuple[int, Author, str, list[str]]]:

async with aiosqlite.connect(self.db_path) as connection:
messages = await connection.execute_fetchall(
"""
SELECT id, author, text, images
FROM message
WHERE chat_id = :chat_id;
""",
{"chat_id": chat_id},
)
messages = [
(id, Author(author), text, json.loads(images))
for id, author, text, images in messages
]
return messages

async def clear_chat(self, chat_id: int) -> None:
async with aiosqlite.connect(self.db_path) as connection:
await connection.execute(
"DELETE FROM message WHERE chat_id = :chat_id;", {"chat_id": chat_id}
)
await connection.commit()
25 changes: 25 additions & 0 deletions src/oterm/store/upgrades/__init__.py
@@ -0,0 +1,25 @@
from oterm.store.upgrades.v0_1_6 import upgrades as v0_1_6_upgrades
from oterm.store.upgrades.v0_1_11 import upgrades as v_0_1_11_upgrades
from oterm.store.upgrades.v0_2_0 import upgrades as v0_2_0_upgrades
from oterm.store.upgrades.v0_2_4 import upgrades as v0_2_4_upgrades
from oterm.store.upgrades.v0_2_8 import upgrades as v0_2_8_upgrades
from oterm.store.upgrades.v0_3_0 import upgrades as v0_3_0_upgrades
from oterm.store.upgrades.v0_4_0 import upgrades as v0_4_0_upgrades
from oterm.store.upgrades.v0_5_1 import upgrades as v0_5_1_upgrades
from oterm.store.upgrades.v0_6_0 import upgrades as v0_6_0_upgrades
from oterm.store.upgrades.v0_7_0 import upgrades as v0_7_0_upgrades
from oterm.store.upgrades.v0_9_0 import upgrades as v0_9_0_upgrades

upgrades = (
v0_1_6_upgrades
+ v_0_1_11_upgrades
+ v0_2_0_upgrades
+ v0_2_4_upgrades
+ v0_2_8_upgrades
+ v0_3_0_upgrades
+ v0_4_0_upgrades
+ v0_5_1_upgrades
+ v0_6_0_upgrades
+ v0_7_0_upgrades
+ v0_9_0_upgrades
)
File renamed without changes.
File renamed without changes.
21 changes: 21 additions & 0 deletions src/oterm/store/upgrades/v0_2_0.py
@@ -0,0 +1,21 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def drop_template(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE chat DROP COLUMN template;
"""
)
except aiosqlite.OperationalError:
pass


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.2.0", [drop_template])
]
21 changes: 21 additions & 0 deletions src/oterm/store/upgrades/v0_2_4.py
@@ -0,0 +1,21 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def update_format(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
UPDATE chat SET format = '' WHERE format is NULL;
"""
)
except aiosqlite.OperationalError:
pass


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.2.4", [update_format])
]
21 changes: 21 additions & 0 deletions src/oterm/store/upgrades/v0_2_8.py
@@ -0,0 +1,21 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def keep_alive(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE chat ADD COLUMN keep_alive INTEGER DEFAULT 5;
"""
)
except aiosqlite.OperationalError:
pass


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.2.8", [keep_alive])
]
35 changes: 35 additions & 0 deletions src/oterm/store/upgrades/v0_3_0.py
@@ -0,0 +1,35 @@
import json
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite

from oterm.ollamaclient import OllamaLLM, parse_ollama_parameters


async def parameters(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE chat ADD COLUMN parameters TEXT DEFAULT "{}";
"""
)
except aiosqlite.OperationalError:
pass

# Update with default parameters
chat_models = await connection.execute_fetchall("SELECT id, model FROM chat")
for chat_id, model in chat_models:
info = OllamaLLM.show(model)
parameters = parse_ollama_parameters(info["parameters"])
await connection.execute(
"UPDATE chat SET parameters = ? WHERE id = ?",
(json.dumps(parameters), chat_id),
)
await connection.commit()


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.3.0", [parameters])
]
21 changes: 21 additions & 0 deletions src/oterm/store/upgrades/v0_4_0.py
@@ -0,0 +1,21 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def context(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE chat DROP COLUMN context;
"""
)
except aiosqlite.OperationalError:
pass


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.4.0", [context])
]
29 changes: 29 additions & 0 deletions src/oterm/store/upgrades/v0_5_1.py
@@ -0,0 +1,29 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def add_id_to_messages(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
CREATE TABLE message_temp (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chat_id INTEGER NOT NULL,
author TEXT NOT NULL,
text TEXT NOT NULL
);
INSERT INTO message_temp (chat_id, author, text) SELECT chat_id, author, text FROM message;
DROP TABLE message;
ALTER TABLE message_temp RENAME TO message;
"""
)
except aiosqlite.OperationalError:
pass


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.5.1", [add_id_to_messages])
]
23 changes: 23 additions & 0 deletions src/oterm/store/upgrades/v0_6_0.py
@@ -0,0 +1,23 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def tools(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE chat ADD COLUMN tools TEXT DEFAULT "[]";
"""
)
except aiosqlite.OperationalError:
pass

await connection.commit()


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.6.0", [tools])
]
37 changes: 37 additions & 0 deletions src/oterm/store/upgrades/v0_7_0.py
@@ -0,0 +1,37 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def images(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE message ADD COLUMN images TEXT DEFAULT "[]";
"""
)
except aiosqlite.OperationalError:
pass

await connection.commit()


async def orphan_messages(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
DELETE FROM message WHERE chat_id NOT IN (SELECT id FROM chat);
"""
)
except aiosqlite.OperationalError:
pass

await connection.commit()


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.7.0", [images, orphan_messages]),
]
23 changes: 23 additions & 0 deletions src/oterm/store/upgrades/v0_9_0.py
@@ -0,0 +1,23 @@
from pathlib import Path
from typing import Awaitable, Callable

import aiosqlite


async def chat_type(db_path: Path) -> None:
async with aiosqlite.connect(db_path) as connection:
try:
await connection.executescript(
"""
ALTER TABLE chat ADD COLUMN type TEXT DEFAULT "chat";
"""
)
except aiosqlite.OperationalError:
pass

await connection.commit()


upgrades: list[tuple[str, list[Callable[[Path], Awaitable[None]]]]] = [
("0.9.0", [chat_type]),
]
42 changes: 42 additions & 0 deletions src/oterm/tools/__init__.py
@@ -0,0 +1,42 @@
from importlib import import_module
from typing import Awaitable, Callable, Sequence

from ollama._types import Tool

from oterm.config import appConfig
from oterm.types import ExternalToolDefinition, ToolDefinition


def load_tools(tool_defs: Sequence[ExternalToolDefinition]) -> Sequence[ToolDefinition]:
tools = []
for tool_def in tool_defs:
tool_path = tool_def["tool"]

try:
module, tool = tool_path.split(":")
module = import_module(module)
tool = getattr(module, tool)
if not isinstance(tool, Tool):
raise Exception(f"Expected Tool, got {type(tool)}")
except ModuleNotFoundError as e:
raise Exception(f"Error loading tool {tool_path}: {str(e)}")

callable_path = tool_def["callable"]
try:
module, function = callable_path.split(":")
module = import_module(module)
callable = getattr(module, function)
if not isinstance(callable, (Callable, Awaitable)):
raise Exception(f"Expected Callable, got {type(callable)}")
except ModuleNotFoundError as e:
raise Exception(f"Error loading callable {callable_path}: {str(e)}")
tools.append({"tool": tool, "callable": callable})

return tools


available: list[ToolDefinition] = []

external_tools = appConfig.get("tools")
if external_tools:
available.extend(load_tools(external_tools))
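The `load_tools` function above resolves both the `tool` and `callable` entries from `"module:attribute"` specifier strings. A minimal, self-contained sketch of that resolution step (the helper name `load_callable` is hypothetical, and the `Tool` type check is omitted for brevity), demonstrated against a stdlib callable:

```python
from importlib import import_module


def load_callable(path: str):
    # Resolve a "module:attribute" specifier, the same convention used
    # for the `tool` and `callable` entries in the oterm tools config.
    module_name, attr = path.split(":")
    module = import_module(module_name)
    return getattr(module, attr)


loads = load_callable("json:loads")
print(loads("[1, 2, 3]"))
# → [1, 2, 3]
```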
20 changes: 20 additions & 0 deletions src/oterm/tools/date_time.py
@@ -0,0 +1,20 @@
from datetime import datetime

from oterm.types import Tool

DateTimeTool = Tool(
type="function",
function=Tool.Function(
name="date_time",
description="Function to get the current date and time",
parameters=Tool.Function.Parameters(
type="object",
properties={},
required=[],
),
),
)


def date_time() -> str:
return datetime.now().isoformat()
53 changes: 53 additions & 0 deletions src/oterm/tools/location.py
@@ -0,0 +1,53 @@
import json

import httpx

from oterm.types import Tool

LocationTool = Tool(
type="function",
function=Tool.Function(
name="current_location",
description="Function to return the current location, city, region, country, latitude, and longitude.",
parameters=Tool.Function.Parameters(
type="object",
properties={},
required=[],
),
),
)


async def current_location():

async with httpx.AsyncClient() as client:
try:
response = await client.get("https://ipinfo.io/")
if response.status_code == 200:
data = response.json()

# Extract latitude and longitude from the location information
city = data.get("city", "N/A")
region = data.get("region", "N/A")
country = data.get("country", "N/A")
loc = data.get("loc", "N/A").split(",")
if len(loc) == 2:
latitude, longitude = float(loc[0]), float(loc[1])
else:
latitude, longitude = None, None

return json.dumps(
{
"city": city,
"region": region,
"country": country,
"latitude": latitude,
"longitude": longitude,
}
)
else:
return json.dumps(
{"error": f"{response.status_code}: {response.reason_phrase}"}
)
except httpx.HTTPError as e:
return json.dumps({"error": str(e)})
185 changes: 185 additions & 0 deletions src/oterm/tools/mcp.py
@@ -0,0 +1,185 @@
import asyncio
from contextlib import AsyncExitStack
from typing import Any

from mcp import ClientSession, StdioServerParameters
from mcp import Tool as MCPTool
from mcp.client.stdio import stdio_client
from mcp.types import CallToolResult, TextContent
from textual import log

from oterm.config import appConfig
from oterm.types import Tool, ToolDefinition

mcp_clients = []


# adapted from mcp-python-sdk/examples/clients/simple-chatbot/mcp_simple_chatbot/main.py
class MCPClient:
"""Manages MCP server connections and tool execution."""

def __init__(self, name: str, server_params: StdioServerParameters, errlog=None):
self.name = name
self.server_params = server_params
self.errlog = errlog
self.stdio_context: Any | None = None
self.session: ClientSession | None = None
self._cleanup_lock: asyncio.Lock = asyncio.Lock()
self.exit_stack: AsyncExitStack = AsyncExitStack()

async def initialize(self) -> None:
"""Initialize the server connection."""

try:
stdio_transport = await self.exit_stack.enter_async_context(
stdio_client(self.server_params)
)
read, write = stdio_transport
session = await self.exit_stack.enter_async_context(
ClientSession(read, write)
)
await session.initialize()
self.session = session
except Exception as e:
await self.cleanup()
log.error(f"Error initializing MCP server {self.name}: {e}")

async def get_available_tools(self) -> list[MCPTool]:
"""List available tools from the server.
Returns:
A list of available tools.
Raises:
RuntimeError: If the server is not initialized.
"""
if not self.session:
raise RuntimeError(f"Server {self.name} not initialized")

tools_response = await self.session.list_tools()

# Let's just ignore pagination for now
return tools_response.tools

async def call_tool(
self,
tool_name: str,
arguments: dict[str, Any],
retries: int = 2,
delay: float = 1.0,
) -> Any:
"""Execute a tool with retry mechanism.
Args:
tool_name: Name of the tool to execute.
arguments: Tool arguments.
retries: Number of retry attempts.
delay: Delay between retries in seconds.
Returns:
Tool execution result.
Raises:
RuntimeError: If server is not initialized.
Exception: If tool execution fails after all retries.
"""
if not self.session:
raise RuntimeError(f"Server {self.name} not initialized")

attempt = 0
while attempt < retries:
try:
result = await self.session.call_tool(tool_name, arguments)
return result

except Exception as e:
attempt += 1
log.warning(
f"Error executing tool: {e}. Attempt {attempt} of {retries}."
)
if attempt < retries:
log.info(f"Retrying in {delay} seconds...")
await asyncio.sleep(delay)
else:
log.error("Max retries reached. Failing.")
raise

async def cleanup(self) -> None:
"""Clean up server resources."""
async with self._cleanup_lock:
try:
await self.exit_stack.aclose()
self.session = None
self.stdio_context = None
except Exception as e:
log.error(f"Error during cleanup of MCP server {self.name}.")
raise e


class MCPToolCallable:
def __init__(self, name, server_name, client):
self.name = name
self.server_name = server_name
self.client = client

async def call(self, **kwargs):
log.info(f"Calling Tool {self.name} in {self.server_name} with {kwargs}")
res: CallToolResult = await self.client.call_tool(self.name, kwargs)
if res.isError:
            log.error(f"Error calling MCP tool {self.name}.")
            raise Exception(f"Error calling MCP tool {self.name}.")
text_content = [m.text for m in res.content if type(m) is TextContent]
return "\n".join(text_content)


async def setup_mcp_servers():
mcp_servers = appConfig.get("mcpServers")
tool_defs: list[ToolDefinition] = []

if mcp_servers:
for server, config in mcp_servers.items():
# Patch the MCP server environment with the current environment
# This works around https://github.com/modelcontextprotocol/python-sdk/issues/99
from os import environ

config = StdioServerParameters.model_validate(config)
if config.env is not None:
config.env.update(dict(environ))

client = MCPClient(server, config)
await client.initialize()
if not client.session:
continue
mcp_clients.append(client)

log.info(f"Initialized MCP server {server}")

mcp_tools: list[MCPTool] = await client.get_available_tools()

for mcp_tool in mcp_tools:
tool = mcp_tool_to_ollama_tool(mcp_tool)
mcpToolCallable = MCPToolCallable(mcp_tool.name, server, client)
tool_defs.append({"tool": tool, "callable": mcpToolCallable.call})
log.info(f"Loaded MCP tool {mcp_tool.name} from {server}")

return tool_defs


async def teardown_mcp_servers():
log.info("Tearing down MCP servers")
# Important to tear down in reverse order
mcp_clients.reverse()
for client in mcp_clients:
await client.cleanup()


def mcp_tool_to_ollama_tool(mcp_tool: MCPTool) -> Tool:
"""Convert an MCP tool to an Ollama tool"""

return Tool(
function=Tool.Function(
name=mcp_tool.name,
description=mcp_tool.description,
parameters=Tool.Function.Parameters.model_validate(mcp_tool.inputSchema),
),
)
26 changes: 26 additions & 0 deletions src/oterm/tools/shell.py
@@ -0,0 +1,26 @@
import subprocess

from oterm.types import Tool

ShellTool = Tool(
type="function",
function=Tool.Function(
name="shell",
description="Function to execute commands in the user's shell and return the output.",
parameters=Tool.Function.Parameters(
type="object",
properties={
"command": Tool.Function.Parameters.Property(
type="string", description="The shell command to execute."
)
},
required=["command"],
),
),
)


def shell_command(command="") -> str:
return subprocess.run(command, shell=True, capture_output=True).stdout.decode(
"utf-8"
)
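Note that `shell_command` above returns only stdout, silently discarding stderr and the exit status. A hedged variant (the name `shell_command_verbose` is hypothetical, not part of the diff) that surfaces both, so the model can see why a command failed:

```python
import subprocess


def shell_command_verbose(command: str = "") -> str:
    # Like `shell_command`, but reports the exit status and stderr on
    # failure instead of returning an empty string.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        return f"exit {result.returncode}: {result.stderr}"
    return result.stdout


print(shell_command_verbose("echo hello"))
# → hello
```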
48 changes: 48 additions & 0 deletions src/oterm/tools/weather.py
@@ -0,0 +1,48 @@
import json

import httpx

from oterm.config import envConfig
from oterm.types import Tool

WeatherTool = Tool(
type="function",
function=Tool.Function(
name="current_weather",
description="Function to return the current weather for the given location in Standard Units.",
parameters=Tool.Function.Parameters(
type="object",
properties={
"latitude": Tool.Function.Parameters.Property(
type="float", description="The latitude of the location."
),
"longitude": Tool.Function.Parameters.Property(
type="float", description="The longitude of the location."
),
},
required=["latitude", "longitude"],
),
),
)


async def current_weather(latitude: float, longitude: float) -> str:
async with httpx.AsyncClient() as client:
try:
api_key = envConfig.OPEN_WEATHER_MAP_API_KEY
if not api_key:
raise Exception("OpenWeatherMap API key not found")

response = await client.get(
f"https://api.openweathermap.org/data/2.5/weather?lat={latitude}&lon={longitude}&appid={api_key}"
)

if response.status_code == 200:
data = response.json()
return json.dumps(data)
else:
return json.dumps(
{"error": f"{response.status_code}: {response.reason_phrase}"}
)
except Exception as e:
return json.dumps({"error": str(e)})
51 changes: 51 additions & 0 deletions src/oterm/tools/web.py
@@ -0,0 +1,51 @@
import json
from html.parser import HTMLParser

import httpx

from oterm.types import Tool

WebTool = Tool(
type="function",
function=Tool.Function(
name="fetch_url",
description="Function to return the contents of a website in text format.",
parameters=Tool.Function.Parameters(
type="object",
properties={
"url": Tool.Function.Parameters.Property(
type="str", description="The URL of the website to fetch."
),
},
required=["url"],
),
),
)


class HTML2Text(HTMLParser):
text = ""

def handle_data(self, data):
self.text += data


async def fetch_url(url: str) -> str:
async with httpx.AsyncClient() as client:
try:
response = await client.get(url)

if response.status_code == 200:
html = response.text
parser = HTML2Text()
parser.feed(html)
return parser.text

else:
return json.dumps(
{
"error": f"Failed to fetch URL: {url}. Status code: {response.status_code}"
}
)
except Exception as e:
return json.dumps({"error": str(e)})
25 changes: 25 additions & 0 deletions src/oterm/types.py
@@ -0,0 +1,25 @@
from enum import Enum
from typing import Awaitable, Callable, TypedDict

from ollama._types import Image, Tool # noqa


class Author(Enum):
USER = "me"
OLLAMA = "ollama"


class ParsedResponse(TypedDict):
thought: str
response: str
formatted_output: str


class ToolDefinition(TypedDict):
tool: Tool
callable: Callable | Awaitable


class ExternalToolDefinition(TypedDict):
tool: str
callable: str
119 changes: 119 additions & 0 deletions src/oterm/utils.py
@@ -0,0 +1,119 @@
import sys
from importlib import metadata
from pathlib import Path

import httpx
from packaging.version import Version, parse

from oterm.types import ParsedResponse


def parse_response(input_text: str) -> ParsedResponse:
"""
Parse a response from the chatbot.
"""

thought = ""
response = input_text
formatted_output = input_text

# If the response contains a think tag, split the response into the thought process and the actual response
if input_text.startswith("<think>") and input_text.find("</think>") != -1:
thought = (
input_text[input_text.find("<think>") + 7 : input_text.find("</think>")]
.lstrip("\n")
.rstrip("\n")
.strip()
)
response = (
input_text[input_text.find("</think>") + 8 :].lstrip("\n").rstrip("\n")
)

# transform the think tag into a markdown blockquote (for clarity)
if len(thought) == 0:
formatted_output = response
else:
formatted_output = (
"> ### \<think\>\n"
+ "\n".join([f"> {line}" for line in thought.split("\n")])
+ "\n> ### \</think\>\n"
+ response
)

return ParsedResponse(
thought=thought, response=response, formatted_output=formatted_output
)


def get_default_data_dir() -> Path:
"""
Get the user data directory for the current system platform.
Linux: ~/.local/share/oterm
macOS: ~/Library/Application Support/oterm
Windows: C:/Users/<USER>/AppData/Roaming/oterm
:return: User Data Path
:rtype: Path
"""
home = Path.home()

system_paths = {
"win32": home / "AppData/Roaming/oterm",
"linux": home / ".local/share/oterm",
"darwin": home / "Library/Application Support/oterm",
}

data_path = system_paths[sys.platform]
return data_path


def semantic_version_to_int(version: str) -> int:
"""
Convert a semantic version string to an integer.
:param version: Semantic version string
:type version: str
:return: Integer representation of semantic version
:rtype: int
"""
major, minor, patch = version.split(".")
major = int(major) << 16
minor = int(minor) << 8
patch = int(patch)
return major + minor + patch


def int_to_semantic_version(version: int) -> str:
"""
Convert an integer to a semantic version string.
:param version: Integer representation of semantic version
:type version: int
:return: Semantic version string
:rtype: str
"""
major = version >> 16
minor = (version >> 8) & 255
patch = version & 255
return f"{major}.{minor}.{patch}"


async def is_up_to_date() -> tuple[bool, Version, Version]:
"""
Checks whether oterm is current.
:return: A tuple containing a boolean indicating whether oterm is current, the running version and the latest version
:rtype: tuple[bool, Version, Version]
"""

async with httpx.AsyncClient() as client:
running_version = parse(metadata.version("oterm"))
try:
            response = await client.get("https://pypi.org/pypi/oterm/json")
            data = response.json()
pypi_version = parse(data["info"]["version"])
except Exception:
# If no network connection, do not raise alarms.
pypi_version = running_version
return running_version >= pypi_version, running_version, pypi_version
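The `semantic_version_to_int` / `int_to_semantic_version` pair above packs each version component into 8 bits, so the round-trip holds only while every component stays below 256. Restating the two helpers makes the worked example self-contained:

```python
def semantic_version_to_int(version: str) -> int:
    # Pack "major.minor.patch" into one integer: 8 bits per component.
    major, minor, patch = (int(part) for part in version.split("."))
    return (major << 16) + (minor << 8) + patch


def int_to_semantic_version(version: int) -> str:
    # Unpack the 8-bit fields back into a version string.
    return f"{version >> 16}.{(version >> 8) & 255}.{version & 255}"


print(semantic_version_to_int("0.9.0"))  # 9 << 8 == 2304
```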
15 changes: 6 additions & 9 deletions tests/conftest.py
@@ -1,26 +1,23 @@
from base64 import b64encode
from io import BytesIO

import ollama
import pytest
import pytest_asyncio
from PIL import Image

from oterm.ollama import OllamaAPI, OllamaError


@pytest_asyncio.fixture(autouse=True)
async def load_test_models():
api = OllamaAPI()
try:
await api.get_model_info("nous-hermes:13b")
except OllamaError:
await api.pull_model("nous-hermes:13b")
ollama.show("llama3.2")
except ollama.ResponseError:
ollama.pull("llama3.2")
yield


@pytest.fixture(scope="session")
def llama_image():
def llama_image() -> bytes:
buffered = BytesIO()
image = Image.open("tests/data/lama.jpg")
image.save(buffered, format="JPEG")
return b64encode(buffered.getvalue()).decode("utf-8")
return buffered.getvalue()
19 changes: 0 additions & 19 deletions tests/test_api_client.py

This file was deleted.

36 changes: 29 additions & 7 deletions tests/test_llm_client.py
@@ -1,23 +1,24 @@
import pytest
from ollama import ResponseError

from oterm.ollama import OllamaError, OllamaLLM
from oterm.ollamaclient import OllamaLLM
from oterm.tools.location import LocationTool


@pytest.mark.asyncio
async def test_generate():
llm = OllamaLLM()
res = await llm.completion("Please add 2 and 2")
assert "4" in res
res = await llm.completion(prompt="Please add 42 and 42")
assert "84" in res


@pytest.mark.asyncio
async def test_llm_context():
llm = OllamaLLM()
await llm.completion("I am testing oterm, a python client for Ollama.")
# There should now be a context saved for the conversation.
assert llm.context
res = await llm.completion("Do you remember what I am testing?")
assert "oterm" in res
assert "oterm" in res.lower()


@pytest.mark.asyncio
@@ -32,8 +33,8 @@ async def test_errors():
llm = OllamaLLM(model="non-existent-model")
try:
await llm.completion("This should fail.")
except OllamaError as e:
assert "model 'non-existent-model' not found" in str(e)
except ResponseError as e:
assert 'model "non-existent-model" not found' in str(e)


@pytest.mark.asyncio
@@ -43,3 +44,24 @@ async def test_iterator():
async for text in llm.stream("Please add 2 and 2"):
response = text
assert "4" in response


@pytest.mark.skip(
reason="Skipped till https://github.com/ollama/ollama-python/issues/279 is fixed."
)
@pytest.mark.asyncio
async def test_tool_streaming():
# This test will fail until Ollama supports streaming with tools.
# See https://github.com/ollama/ollama-python/issues/279

llm = OllamaLLM(
tool_defs=[
{"tool": LocationTool, "callable": lambda: "New York"},
],
)
response = ""
async for text in llm.stream(
"In which city am I currently located?. Reply with no other text, just the city."
):
response = text
assert "New York" in response
50 changes: 50 additions & 0 deletions tests/test_ollama_api.py
@@ -0,0 +1,50 @@
import pytest
from ollama import ResponseError

from oterm.ollamaclient import OllamaLLM, jsonify_options, parse_ollama_parameters


def test_list():
llm = OllamaLLM()
response = llm.list()
models = response.get("models", [])
assert [model for model in models if model.model == "llama3.2:latest"]


def test_show():
llm = OllamaLLM()
response = llm.show("llama3.2")
assert response
assert response.modelfile
assert response.parameters
assert response.template
assert response.details
assert response.modelinfo

params = parse_ollama_parameters(response.parameters)
assert params.stop == ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"]
assert params.temperature is None
json = jsonify_options(params)
assert json == (
"{\n"
' "stop": [\n'
' "<|start_header_id|>",\n'
' "<|end_header_id|>",\n'
' "<|eot_id|>"\n'
" ]\n"
"}"
)


def test_pull():
llm = OllamaLLM()
stream = llm.pull("llama3.2:latest")
entries = [entry.status for entry in stream]
assert "pulling manifest" in entries
assert "success" in entries

with pytest.raises(ResponseError) as excinfo:
stream = llm.pull("non-existing:latest")
entries = [entry for entry in stream]
    assert "pull model manifest: file does not exist" in str(excinfo.value)
assert "success" not in entries
2 changes: 1 addition & 1 deletion tests/test_store.py
@@ -1,4 +1,4 @@
from oterm.store.store import int_to_semantic_version, semantic_version_to_int
from oterm.utils import int_to_semantic_version, semantic_version_to_int


def test_sqlite_user_version():
Empty file added tests/tools/__init__.py
Empty file.
13 changes: 13 additions & 0 deletions tests/tools/mcp_servers.py
@@ -0,0 +1,13 @@
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("Oracle")


@mcp.resource("config://app")
def get_config() -> str:
return "Oracle MCP server"


@mcp.tool()
async def oracle(query: str, ctx: Context) -> str:
return "Oracle says: oterm"
22 changes: 22 additions & 0 deletions tests/tools/test_custom_tool.py
@@ -0,0 +1,22 @@
import pytest

from oterm.tools import load_tools
from oterm.tools.date_time import DateTimeTool, date_time


@pytest.mark.asyncio
async def test_loading_custom_tool():

# Test loading a callable from a well-defined module
tools = load_tools(
[
{
"tool": "oterm.tools.date_time:DateTimeTool",
"callable": "oterm.tools.date_time:date_time",
}
]
)

assert len(tools) == 1
assert tools[0]["tool"] == DateTimeTool
assert tools[0]["callable"] == date_time
18 changes: 18 additions & 0 deletions tests/tools/test_date_time_tool.py
@@ -0,0 +1,18 @@
from datetime import datetime

import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.date_time import DateTimeTool, date_time


@pytest.mark.asyncio
async def test_date_time():
llm = OllamaLLM(
model="mistral-nemo", tool_defs=[{"tool": DateTimeTool, "callable": date_time}]
)
res = await llm.completion(
"What is the current date in YYYY-MM-DD format?. Reply with no other text, just the date."
)
date = datetime.date(datetime.now())
assert f"{date.year}-{date.month:02d}-{date.day:02d}" in res
21 changes: 21 additions & 0 deletions tests/tools/test_location_tool.py
@@ -0,0 +1,21 @@
import json

import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.location import LocationTool, current_location


@pytest.mark.asyncio
async def test_location_tool():
llm = OllamaLLM(
model="mistral-nemo",
tool_defs=[
{"tool": LocationTool, "callable": current_location},
],
)
res = await llm.completion(
"In which city am I currently located?. Reply with no other text, just the city."
)
curr_loc = json.loads(await current_location()).get("city")
assert curr_loc in res
45 changes: 45 additions & 0 deletions tests/tools/test_mcp_tools.py
@@ -0,0 +1,45 @@
from pathlib import Path

import pytest
from mcp import StdioServerParameters

from oterm.ollamaclient import OllamaLLM
from oterm.tools.mcp import MCPClient, MCPToolCallable
from oterm.types import Tool

mcp_server_executable = Path(__file__).parent / "mcp_servers.py"

server_config = {
"oracle": {
"command": "mcp",
"args": ["run", str(mcp_server_executable.absolute())],
}
}


@pytest.mark.asyncio
async def test_mcp():
client = MCPClient(
"oracle",
StdioServerParameters.model_validate(server_config["oracle"]),
)
await client.initialize()
tools = await client.get_available_tools()

tool = tools[0]
oterm_tool = Tool(
function=Tool.Function(
name=tool.name,
description=tool.description,
parameters=Tool.Function.Parameters.model_validate(tool.inputSchema),
),
)

mcpToolCallable = MCPToolCallable(tool.name, "oracle", client)
llm = OllamaLLM(
tool_defs=[{"tool": oterm_tool, "callable": mcpToolCallable.call}],
)

res = await llm.completion("Ask the oracle what is the best client for Ollama.")
assert "oterm" in res
await client.cleanup()
16 changes: 16 additions & 0 deletions tests/tools/test_shell_tool.py
@@ -0,0 +1,16 @@
import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.shell import ShellTool, shell_command


@pytest.mark.asyncio
async def test_shell():
llm = OllamaLLM(
model="mistral-nemo",
tool_defs=[
{"tool": ShellTool, "callable": shell_command},
],
)
res = await llm.completion("What is the current directory")
assert "oterm" in res
47 changes: 47 additions & 0 deletions tests/tools/test_weather_tool.py
@@ -0,0 +1,47 @@
import json

import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.location import LocationTool, current_location
from oterm.tools.weather import WeatherTool, current_weather


@pytest.mark.asyncio
async def test_weather():
llm = OllamaLLM(
tool_defs=[
{"tool": WeatherTool, "callable": current_weather},
],
)
weather = json.loads(await current_weather(latitude=59.2675, longitude=10.4076))
temperature = weather.get("main").get("temp") - 273.15

res = await llm.completion(
"What is the current temperature at my location latitude 59.2675, longitude 10.4076?"
)

assert "temperature" in res or "Temperature" in res
assert str(round(temperature)) in res or str(round(temperature, 1)) in res


@pytest.mark.asyncio
async def test_weather_with_location():
llm = OllamaLLM(
tool_defs=[
{"tool": LocationTool, "callable": current_location},
{"tool": WeatherTool, "callable": current_weather},
],
)
location = json.loads(await current_location())
weather = json.loads(
await current_weather(
latitude=location.get("latitude"),
longitude=location.get("longitude"),
)
)
temperature = weather.get("main").get("temp") - 273.15

res = await llm.completion("What is the current temperature in my city?")
assert "temperature" in res or "Temperature" in res
assert str(round(temperature)) in res or str(round(temperature, 1)) in res
15 changes: 15 additions & 0 deletions tests/tools/test_web_tool.py
@@ -0,0 +1,15 @@
import pytest

from oterm.ollamaclient import OllamaLLM
from oterm.tools.web import WebTool, fetch_url


@pytest.mark.asyncio
async def test_web():
llm = OllamaLLM(
tool_defs=[{"tool": WebTool, "callable": fetch_url}],
)
res = await llm.completion(
"What's oterm in a single phrase? oterm is hosted at https://github.com/ggozad/oterm."
)
assert "Ollama" in res
1,815 changes: 1,815 additions & 0 deletions uv.lock

Large diffs are not rendered by default.