Experiments with the DSPy framework using Ollama as the default local language model backend.
- Install Ollama (recommended for local models):

  ```bash
  curl -fsSL https://ollama.ai/install.sh | sh
  ```
- Install Poetry (for dependency management):

  ```bash
  curl -sSL https://install.python-poetry.org | python3 -
  ```
- Clone and install dependencies:

  ```bash
  git clone https://github.com/DoMaLi94/dspy-experiments.git
  cd dspy-experiments
  poetry install
  ```
- Set up Ollama:

  ```bash
  # Start Ollama server
  ollama serve

  # Install recommended model
  ollama pull gemma3:1b
  ```
- Run experiments:

  ```bash
  # Basic QA example
  poetry run python experiments/basic_qa.py

  # Image-to-text example (requires the llava model)
  ollama pull llava:7b
  poetry run python experiments/image_to_text.py

  # Jupyter notebook (normally you want to run this notebook cell by cell)
  poetry run jupyter notebook notebooks/getting_started.ipynb
  ```
```
dspy-experiments/
├── experiments/               # Python experiment scripts
│   ├── basic_qa.py            # Basic question-answering example
│   ├── image_to_text.py       # Image-to-text description using LLaVA
├── notebooks/                 # Jupyter notebooks
│   └── getting_started.ipynb
├── scripts/                   # Utility scripts
│   ├── check_code.sh          # Code quality checks
│   └── format_code.sh         # Code formatting
├── images/                    # Sample images for experiments
├── pyproject.toml             # Poetry configuration
└── README.md
```
- Default model: `gemma3:1b`
- Server: `http://localhost:11434`
- Advantages: Privacy, no API costs, offline usage
Additional models supported:

- `llava:7b` – for image-to-text experiments

For more available models, see the Ollama models library.
Simple question-answering system demonstrating DSPy basics with Ollama.
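A minimal sketch of this style of experiment, assuming a recent DSPy release with the `dspy.LM` interface; the actual code in `experiments/basic_qa.py` may differ:

```python
import dspy

# Point DSPy at the local Ollama server (model and URL match the defaults listed above).
lm = dspy.LM("ollama_chat/gemma3:1b", api_base="http://localhost:11434")
dspy.configure(lm=lm)

# A one-line signature: the module takes a question and produces an answer.
qa = dspy.Predict("question -> answer")
prediction = qa(question="What is the capital of France?")
print(prediction.answer)
```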
Image description using DSPy with LLaVA multimodal model. Demonstrates how to process images and generate text descriptions.
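A hedged sketch of the idea, assuming a DSPy version that ships `dspy.Image`; the signature, field names, and image path below are illustrative, not necessarily what `experiments/image_to_text.py` contains:

```python
import dspy

# Use the multimodal LLaVA model served by the local Ollama instance.
lm = dspy.LM("ollama_chat/llava:7b", api_base="http://localhost:11434")
dspy.configure(lm=lm)

class DescribeImage(dspy.Signature):
    """Describe the contents of an image in one or two sentences."""
    image: dspy.Image = dspy.InputField()
    description: str = dspy.OutputField()

describe = dspy.Predict(DescribeImage)
# The path is a placeholder; sample images live under images/ in this repository.
result = describe(image=dspy.Image.from_file("images/example.jpg"))
print(result.description)
```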
Interactive notebook covering:
- Language model setup with Ollama
- DSPy signatures
- Basic QA system
- Sentiment classification
- Chain of thought reasoning (see the sketch after this list)
- Optimization techniques
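For flavor, here is a hedged sketch combining a signature with chain-of-thought reasoning for sentiment classification; the class and field names are illustrative, not the notebook's exact code:

```python
import dspy

dspy.configure(lm=dspy.LM("ollama_chat/gemma3:1b", api_base="http://localhost:11434"))

class ClassifySentiment(dspy.Signature):
    """Classify the sentiment of a sentence."""
    sentence: str = dspy.InputField()
    sentiment: str = dspy.OutputField(desc="positive, negative, or neutral")

# ChainOfThought inserts an intermediate `reasoning` field before the output.
classify = dspy.ChainOfThought(ClassifySentiment)
result = classify(sentence="The new release fixed every bug I reported.")
print(result.reasoning)
print(result.sentiment)
```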
The experiments use Ollama as the local language model backend:
- Primary model: `gemma3:1b` for text generation and QA
- Multimodal model: `llava:7b` for image-to-text tasks
- Server: `http://localhost:11434`
Copy `.env.example` to `.env` and modify settings if needed for your setup.
- Server not running: start it with `ollama serve`
- Model not found: `ollama pull gemma3:1b`
- Connection refused: check that Ollama is running on port 11434 (a small programmatic check is sketched below)
- Lock file outdated: `poetry lock`
- Dependencies not installed: `poetry install`
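For a quick programmatic check that the server is reachable and the default model is installed, something like the following works; it queries Ollama's `/api/tags` endpoint, which lists locally available models:

```python
import json
import urllib.request

# Ask the local Ollama server which models are installed.
# A connection error here means the server is not running on port 11434.
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("Installed models:", models)
if not any(name.startswith("gemma3:1b") for name in models):
    print("gemma3:1b is missing; run: ollama pull gemma3:1b")
```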
This project uses several tools to maintain code quality:
- Black: Code formatting
- isort: Import sorting
- flake8: Linting (PEP 8 compliance)
- mypy: Static type checking
```bash
# Format code and run all checks
./scripts/format_code.sh

# Check code quality (without making changes)
./scripts/check_code.sh

# Or run pre-commit directly
poetry run pre-commit run --all-files

# Individual tools (if needed for debugging)
poetry run black experiments/ notebooks/
poetry run isort experiments/ notebooks/
poetry run flake8 experiments/
poetry run mypy experiments/
```
You can set up automatic code formatting and linting on commit:
```bash
# Install git hooks
poetry run pre-commit install

# Run hooks manually on all files (optional)
poetry run pre-commit run --all-files

# Update hooks to latest versions
poetry run pre-commit autoupdate
```
Once installed, pre-commit will automatically run code quality checks on your staged files before each commit, ensuring consistent code quality across the project.