Introduction
The Helicone LLM for LlamaIndex lets you send OpenAI-compatible requests through the Helicone AI Gateway, with no provider keys needed. You gain centralized routing, observability, and control across many models and providers. This integration uses a dedicated LlamaIndex package: `llama-index-llms-helicone`.
Install
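Install the dedicated package from PyPI:

```bash
pip install llama-index-llms-helicone
```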
Usage
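A minimal sketch is shown below; the `Helicone` class name and the model slug are assumptions based on the package name and the model registry, so check the package for the exact import path.

```python
from llama_index.llms.helicone import Helicone  # class name assumed from the package name
from llama_index.core.llms import ChatMessage

llm = Helicone(
    model="openai/gpt-4o-mini",      # any OpenAI-compatible model slug from the model registry
    api_key="your-helicone-api-key",
)

# Simple completion
response = llm.complete("What is the capital of France?")
print(response.text)

# Chat-style call using LlamaIndex ChatMessage objects
messages = [ChatMessage(role="user", content="Say hello in one sentence.")]
print(llm.chat(messages))
```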
Parameters
- `model`: OpenAI-compatible model name routed via Helicone. See the model registry.
- `api_base` (optional): Base URL for the Helicone AI Gateway. Defaults to the package's `DEFAULT_API_BASE`; can also be set via `HELICONE_API_BASE`.
- `api_key`: Your Helicone API key. Pass it to the constructor or set `HELICONE_API_KEY`.
- `default_headers` (optional): Additional headers to send with each request; the `Authorization: Bearer <api_key>` header is set automatically.
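A sketch combining these parameters (the `Helicone` class name, model slug, gateway URL, and header name are illustrative):

```python
from llama_index.llms.helicone import Helicone

llm = Helicone(
    model="openai/gpt-4o-mini",                    # OpenAI-compatible model routed via Helicone
    api_key="your-helicone-api-key",               # or rely on HELICONE_API_KEY
    api_base="https://your-gateway-url/v1",        # illustrative; omit to use the package's DEFAULT_API_BASE
    default_headers={"X-Example-Header": "demo"},  # extra headers; Authorization is added automatically
)
```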
Environment Variables
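Both the API key and the gateway base URL can come from the environment instead of the constructor; a sketch (class name assumed as above):

```python
import os

# Set before constructing the LLM (or export them in your shell)
os.environ["HELICONE_API_KEY"] = "your-helicone-api-key"
os.environ["HELICONE_API_BASE"] = "https://your-gateway-url/v1"  # optional override

from llama_index.llms.helicone import Helicone

# With the variables set, api_key and api_base can be omitted here
llm = Helicone(model="openai/gpt-4o-mini")
```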
Advanced Configuration
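One common setup is to register the gateway-backed LLM as the default for an entire LlamaIndex application and attach extra headers; a sketch under the same assumptions as above (the custom header name is illustrative, not confirmed by this page):

```python
from llama_index.core import Settings
from llama_index.llms.helicone import Helicone

llm = Helicone(
    model="openai/gpt-4o-mini",
    api_key="your-helicone-api-key",
    # Extra headers are sent alongside the automatically added Authorization header.
    # "Helicone-Property-Environment" is an illustrative name; check Helicone's
    # header documentation for the headers you actually need.
    default_headers={"Helicone-Property-Environment": "production"},
)

# Use this LLM everywhere LlamaIndex needs one (query engines, agents, etc.)
Settings.llm = llm
```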
Notes
- Authentication uses your Helicone API key; provider keys are not required when using the AI Gateway.
- All requests appear in the Helicone dashboard with full request/response visibility and cost tracking.
- Learn more about routing and model coverage in the Helicone model registry and AI Gateway documentation.