LLMonitor helps AI devs monitor their apps in production, with features such as:
- 💵 Cost, token & latency analytics
- 👪 Track users
- 🐛 Traces to debug easily
- 🔍 Inspect full requests
- 🖲️ Collect feedback from users (soon)
- 🧪 Unit tests for your agents (soon)
- 🏷️ Label and create fine-tuning datasets (soon)
It is also designed to be:
- 🤖 Usable with any model, not just OpenAI
- 📦 Easy to integrate (2 minutes)
- 🧑‍💻 Simple to self-host (deploy to Vercel & Supabase)
Demo video: demo720.mp4
LLMonitor natively supports:
- LangChain (JS & Python)
- The OpenAI SDK
- LiteLLM
Additionally, you can use it with any framework by wrapping the relevant methods.
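As a rough illustration of what "wrapping the relevant methods" means, the sketch below decorates an arbitrary completion function to report start/end events with latency and token usage. The `track_llm_call` decorator and `send_event` reporter are illustrative names invented for this example, not the actual llmonitor SDK API — refer to the documentation for the real integration:

```python
import time
from typing import Any, Callable, Dict

def track_llm_call(send_event: Callable[[Dict[str, Any]], None]):
    """Report start/end events around any LLM call.

    `send_event` stands in for a monitoring backend; here it can be
    as simple as appending to a list.
    """
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.time()
            send_event({"event": "start", "input": kwargs.get("messages")})
            try:
                result = fn(*args, **kwargs)
            except Exception as err:
                send_event({"event": "error", "error": str(err)})
                raise
            send_event({
                "event": "end",
                "output": result.get("output"),
                "tokens": result.get("usage"),
                "duration_ms": round((time.time() - start) * 1000),
            })
            return result
        return wrapper
    return decorator

# Usage: wrap a stand-in completion function with an in-memory reporter.
events = []

@track_llm_call(events.append)
def fake_completion(**kwargs):
    return {"output": "Hello!", "usage": {"total_tokens": 12}}

fake_completion(messages=[{"role": "user", "content": "Hi"}])
```

A real integration would swap the in-memory reporter for the SDK's event client and wrap your framework's actual completion method.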
Full documentation is available on the website.
We offer a hosted version with a free plan of up to 1k requests / day.
With the hosted version:
- 👷 don't worry about devops or managing updates
- 🙋 get priority 1:1 support with our team
- 🇪🇺 your data is stored safely in Europe
Chat with us on Discord or email one of the founders: vince [at] llmonitor.com.
This project is licensed under the Apache 2.0 License.