
Second Opinion

Get a medical second opinion without ANY data leaving your device.

Uses Ollama, SQLite, and React Router.


Local Development Setup

  1. Copy the .env.sample file to .env:
cp .env.sample .env

# Optional: Customize values if you want
  2. Download the local LLM and serve it via Ollama (a quick connectivity check is shown after these steps):
# Start Ollama locally via the desktop app,
# or via the CLI:
ollama serve

# Pull the model of your choice
ollama pull gemma3:4b
  3. In a separate terminal, install dependencies, initialize the database, and start the development server:
# Install dependencies
npm install

# Initialize the local database
npm run db:push

# Start the development server
npm run dev

The application will be available at http://localhost:5173.
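
If something fails, a quick way to confirm the Ollama server is reachable is to query its HTTP API, which listens on port 11434 by default:

# List the models your local Ollama server has pulled;
# an error here means Ollama isn't running or isn't reachable
curl http://localhost:11434/api/tags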

Prerequisites

  • Ollama installed locally
  • Node.js 20+ installed (includes npm for package management)
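
You can sanity-check both prerequisites from a terminal:

# Each command should print a version number
node --version
ollama --version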

Models

The application uses the gemma3:4b model by default, which offers a good balance of speed and intelligence. You might also try deepseek-r1:7b, a reasoning model that is a bit smarter. Or try any model you'd like from https://ollama.com/library!

To change your model, update the OLLAMA_MODEL value in your .env file (created in step 1 from .env.sample).

Make sure you have each model pulled locally with Ollama:

ollama pull gemma3:4b
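
For example, switching to deepseek-r1:7b takes two steps (assuming the usual KEY=value dotenv format in .env):

# Pull the alternate model
ollama pull deepseek-r1:7b

# Then update .env to point at it:
# OLLAMA_MODEL=deepseek-r1:7b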

Troubleshooting

If you notice that the LLM completion is getting cut off, the conversation may have outgrown the context window, which is constrained by your laptop's memory and by Ollama's default configuration.

This is a fundamental limitation that I'm not sure how to overcome at the moment. As LLMs get more efficient and laptops get more powerful, this should become less of an issue over time.
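
That said, if your machine has memory headroom, one possible mitigation is to raise the context window with a custom Ollama Modelfile (num_ctx is Ollama's context-length parameter; the gemma3-8k name below is just an example):

# Build a variant of gemma3:4b with a larger context window.
# A larger num_ctx uses more RAM, so tune it to your machine.
cat > Modelfile <<'EOF'
FROM gemma3:4b
PARAMETER num_ctx 8192
EOF
ollama create gemma3-8k -f Modelfile

# Then point the app at the new variant in .env:
# OLLAMA_MODEL=gemma3-8k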
