Introduction
Get started with Helicone, the open-source LLM observability platform for developers to monitor, debug, and optimize their applications.
Tip: Press Ctrl / Cmd + K to search the docs.
Quick Start
Integrate with Helicone and send your first events in seconds. A minimal JavaScript example follows the provider list below.
OpenAI
JavaScript, Python, Langchain, Async logging, cURL
Azure
JavaScript, Python, Langchain, cURL
Anthropic
JavaScript, Python, Langchain
Gemini
JavaScript, cURL
Anyscale
JavaScript, Python, cURL
Together AI
JavaScript, Python, cURL
Hyperbolic
JavaScript, Python, cURL
Groq
JavaScript, Python, cURL
Deepinfra
JavaScript, Python, cURL
OpenRouter
JavaScript, Python, cURL
LiteLLM
JavaScript, Python, cURL
OpenLLMetry
Log directly to Helicone without going through our proxy.
Gateway
Don’t see your provider above? Try Helicone’s universal Gateway.
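For example, here is a minimal sketch of the JavaScript integration with OpenAI through Helicone's proxy. It assumes the `openai` Node SDK and the environment variables OPENAI_API_KEY and HELICONE_API_KEY; the model name is illustrative.

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // Route requests through Helicone's OpenAI proxy instead of api.openai.com
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // illustrative model name
  messages: [{ role: "user", content: "Hello, Helicone!" }],
});

console.log(completion.choices[0].message.content);
```

The only changes from a stock OpenAI setup are the base URL and the Helicone-Auth header; everything else in your application stays the same.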
New to Helicone?
To help you get the most out of Helicone, we curated a list of actions you can take next. Our users typically find themselves doing the majority of the following, but you're welcome to explore the product on your own!
Add a custom property
Label your requests. We will walk you through how to segment, analyze, and visualize them.
Create your first prompt
Version your prompt and inputs as they evolve.
Cache responses
Cache responses and watch how much time and cost you save, as sketched below.
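When you use the proxy, both custom properties and caching are enabled with per-request headers. A minimal sketch, reusing the `openai` client from the Quick Start example above; the property name and value are illustrative.

```typescript
// Custom properties use the Helicone-Property-<Name> header family;
// caching is opt-in per request.
const completion = await openai.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "What is LLM observability?" }],
  },
  {
    headers: {
      "Helicone-Property-Environment": "staging", // label for segmenting requests
      "Helicone-Cache-Enabled": "true",           // serve repeat requests from cache
    },
  }
);
```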
Recommendations
The following guides are optional, but we think you’ll find them useful (and fun).
Run an experiment
How much better can the output be, if you tweak your prompt or use a different model? The answer is in Experiments.
Bring it to PostHog
Helicone has teamed up with PostHog to bring your LLM analytics closer to all your other dashboards.
Explore Features
Discover features for monitoring and experimenting with your prompts. Many are enabled with a single request header, as sketched after this list.
Prompts (Beta)
Effortlessly monitor prompt versions and inputs.
Sessions
Automatically track sessions and traces with Helicone.
Custom Properties
Label and segment your requests.
Caching
Save cost and improve latency.
Omit Logs
Remove requests and responses from your logs.
User Metrics
Get insights into your users' usage.
Feedback
Collect user feedback on model outputs.
Gateway Fallback (Beta)
Utilize any provider through a single endpoint.
Retries
Smartly retry requests.
Rate Limiting
Easily rate-limit power users.
Key Vault
Manage and distribute your provider API keys securely.
Moderation Integration
Integrate OpenAI moderation to safeguard your chat completions.
LLM Security
Secure OpenAI chat completions against prompt injections.
Customer Portal
Easily manage your customers and their usage.
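Several of the features above (Sessions, Omit Logs, User Metrics, Retries, Rate Limiting) are driven by request headers when you use the proxy. A combined sketch, reusing the Quick Start client; the session path, user ID, and rate-limit policy values are illustrative, and the policy string follows the quota;w=window;s=segment scheme described in the rate-limiting docs.

```typescript
import { randomUUID } from "node:crypto";

const completion = await openai.chat.completions.create(
  { model: "gpt-4o-mini", messages: [{ role: "user", content: "Hi" }] },
  {
    headers: {
      "Helicone-Session-Id": randomUUID(),        // group related traces into a session
      "Helicone-Session-Path": "/onboarding",     // position within the session tree
      "Helicone-User-Id": "user_123",             // powers per-user metrics
      "Helicone-Omit-Response": "true",           // drop the response body from logs
      "Helicone-Retry-Enabled": "true",           // retry failed requests automatically
      "Helicone-RateLimit-Policy": "100;w=3600;s=user", // 100 requests/hour per user (illustrative)
    },
  }
);
```

Each header toggles one feature for that request only, so you can mix and match them per call site.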
Further Reading
Proxy or Async?
Determine when you should use a proxy or async function in Helicone.
How We Calculate Costs
An explanation of our process for calculating cost per request.
Understanding Helicone Headers
Every header you need to know to access Helicone features.
Questions?
Although we designed the docs to be as self-serve as possible, you are welcome to join our Discord or contact help@helicone.ai with any questions or feedback you have.
Interested in deploying Helicone on-prem? Schedule a call with us.