Copilot Assistant is a chat app that schedules meetings and manages tasks through a chatbot connected to your O365 account. It provides tools to find other people's availability, email them to confirm times, and send invites to their calendars.
Copilot Assistant integrates with Microsoft 365 to assist you with office tasks.
The first step is to create an App Registration in Microsoft Azure and register it with the app.
The App Registration lets Microsoft perform identity checks and issue access tokens on your behalf, so the app can act with the permissions you grant.
Follow Microsoft's documentation on creating an App Registration.
Make sure the Redirect URL is configured to `http://localhost:8080`.
This ensures Microsoft redirects back to the localhost server (where the Copilot server listens) to complete the OAuth workflow.
After creating the app, create a client secret and copy its value.
Make sure you obtain the following values, which are needed in the next step:
| Key | Value | Description |
|---|---|---|
| `CLIENT_ID` | `${CLIENT_ID}` | Microsoft client ID |
| `CLIENT_SECRET` | `${CLIENT_SECRET}` | Microsoft client secret |
| `TENANT_ID` | `${TENANT_ID}` | Microsoft tenant ID |
| Key | Value | Description |
|---|---|---|
| `OPENAI_API_KEY` | `${OPENAI_API_KEY}` | Your OpenAI API key. |
| `MICROSOFT_JWT_KEY` | `${MICROSOFT_JWT_KEY}` | A secret value used as a JWT key, for signing JWT tokens issued on behalf of a user. Keep it secret. You can generate a random value with `openssl rand -base64 32`. |
| `PUBLIC_URL` | `${PUBLIC_URL}` | Required for webhook notifications to work. Since everything runs locally, you need to expose your app server publicly so that webhook events can be delivered to the app. The easiest way is to run ngrok; check the ngrok docs on how to forward your local port publicly. |
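The two values in the last rows can be produced as follows (the ngrok command is shown as a comment because it runs in the foreground; use the https forwarding URL it prints as `PUBLIC_URL`):

```shell
# Generate a random 32-byte, base64-encoded value for MICROSOFT_JWT_KEY.
MICROSOFT_JWT_KEY=$(openssl rand -base64 32)
echo "MICROSOFT_JWT_KEY=${MICROSOFT_JWT_KEY}"

# For PUBLIC_URL, expose the local app server with ngrok, e.g.:
#   ngrok http 8080
```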
```shell
git clone https://github.com/StrongMonkey/ethan.git
cd ethan
```
Replace the placeholders in `.env` with the values you obtained in the previous step:
```shell
DB_NAME=copilot
DB_USER=admin
DB_PASSWORD=admin123
DB_HOST=db
MICROSOFT_CLIENT_ID=${CLIENT_ID}
MICROSOFT_CLIENT_SECRET=${CLIENT_SECRET}
MICROSOFT_JWT_KEY=${MICROSOFT_JWT_KEY}
MICROSOFT_TENANT_ID=${TENANT_ID}
DEVELOPMENT=true
PUBLIC_URL=${PUBLIC_URL}
UI_SERVER=http://ui:3000
OPENAI_API_KEY=${OPENAI_API_KEY}
OPENAI_BASE_URL=https://api.openai.com/v1/
DEFAULT_MODEL=gpt-4o
```
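A quick sanity check before starting the stack (this assumes `.env` is in the current directory): list any `${...}` placeholders that were left unreplaced.

```shell
# Print any lines still containing an unreplaced ${...} placeholder.
if grep -n '\${' .env; then
  echo "Fill in the placeholders listed above before starting the stack."
else
  echo ".env looks fully populated."
fi
```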
Then run Docker Compose:
```shell
docker compose -f docker-compose.yaml up
```
Open `http://localhost:8080` in your browser, log in, and start using the app.
The default model is `gpt-4o`. To use a different OpenAI model, update `DEFAULT_MODEL` in the `.env` file.

To connect to an OpenAI-compatible local model server, such as llama.cpp, Ollama, or Rubra's tool.cpp, update `OPENAI_API_KEY`, `OPENAI_BASE_URL`, and `DEFAULT_MODEL` accordingly. For example:
```shell
OPENAI_API_KEY=sk-123
OPENAI_BASE_URL=http://host.docker.internal:1234/v1
DEFAULT_MODEL=rubra-meta-llama-3-8b-instruct.Q8_0.gguf
```
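As another sketch, here is what the same three values might look like when pointing at an Ollama server (this assumes Ollama's default port `11434`, its OpenAI-compatible `/v1` endpoint, and a pulled `llama3` model; the API key is unused by Ollama, so any non-empty placeholder works):

```shell
# Hypothetical Ollama configuration -- adjust host, port, and model
# name to match what your local server actually serves.
OPENAI_API_KEY=ollama
OPENAI_BASE_URL=http://host.docker.internal:11434/v1
DEFAULT_MODEL=llama3
```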