Open WebUI Integration
Integrate Helicone with Open WebUI, an extensible, offline-capable interface for various LLM runners. Monitor interactions across Ollama, OpenAI-compatible APIs, and custom LLM setups.
Introduction
Open WebUI (formerly Ollama WebUI) is a feature-rich, self-hosted web interface designed for interacting with various LLM runners. It supports Ollama, OpenAI-compatible APIs, and custom LLM setups.
Integrating Helicone with Open WebUI enables comprehensive observability across all your LLM interactions, providing:
- Consolidated view across model types: Monitor both local Ollama models and cloud LLM APIs (like OpenAI) through a single interface
- Request visualization and replay: See exactly what prompts were sent to each model in Open WebUI and the outputs generated by the LLMs for evaluation
- Local LLM performance tracking: Measure response times and throughput of your self-hosted models
- Usage analytics by model: Compare usage patterns between different models in your Open WebUI setup
Integration Steps
Create a Helicone account + Generate an API Key
Create a Helicone account and log in to generate an API key.
Make sure to generate a write-only API key. This allows logging data to Helicone without granting read access to your private data.
Create an OpenAI account + Generate an API Key
Create an OpenAI account and log into OpenAI’s Developer Portal to generate an API key.
Run your Open WebUI application using Helicone's base URL
Set the OpenAI API base URL in the Docker command when launching your Open WebUI container. With Helicone's base URL in place, your queries are sent to OpenAI and monitored automatically, as shown in the sketch below.
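A minimal sketch of such a launch command, assuming the official `ghcr.io/open-webui/open-webui` image and Open WebUI's `OPENAI_API_BASE_URL` / `OPENAI_API_KEY` environment variables; substitute your own keys, port mapping, and volume names:

```bash
# Launch Open WebUI with OpenAI traffic routed through Helicone.
# YOUR_HELICONE_API_KEY and YOUR_OPENAI_API_KEY are placeholders.
docker run -d -p 3000:8080 \
  -e OPENAI_API_BASE_URL="https://oai.helicone.ai/v1/YOUR_HELICONE_API_KEY" \
  -e OPENAI_API_KEY="YOUR_OPENAI_API_KEY" \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```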
If you already have an Open WebUI application deployed, go to Admin Panel > Settings > Connections and click the + sign for "Managing OpenAI API Connections". Set the API Base URL to https://oai.helicone.ai/v1/YOUR_HELICONE_API_KEY and the API Key to your OpenAI API key.
Make sure monitoring is working
To make sure your requests are being tracked, log in to your Helicone dashboard and review the "Requests" tab. You should see the requests made through your Open WebUI interface logged there.
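If you want to trigger a test request outside the Open WebUI interface, a rough sketch using curl against the same gateway URL is shown below; it assumes the base URL accepts the standard OpenAI paths appended to it, and the model name `gpt-4o-mini` is only an example:

```bash
# Send a test chat completion through the Helicone gateway.
# Replace YOUR_HELICONE_API_KEY and YOUR_OPENAI_API_KEY with your own keys.
curl https://oai.helicone.ai/v1/YOUR_HELICONE_API_KEY/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from my Open WebUI monitoring test"}]
  }'
```

The request should appear in the "Requests" tab within a few seconds, alongside the requests generated by Open WebUI itself.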
For more advanced setups, including GPU support or custom Ollama configurations, refer to the Open WebUI GitHub repository and its documentation.