
How-to Guides
Practical, task-oriented guides for solving specific problems with Helicone. These guides assume you already know the basics and need to accomplish a particular task.
Data Management & Analytics
Debug your LLM app
Identify and fix errors in your LLM application using Helicone’s debugging tools.
ETL / Data extraction
Extract and export your LLM data for analysis and reporting.
Segment data with Custom Properties
Track costs and behaviors by environment, user type, and custom dimensions (see the sketch after this group).
Label request data
Add labels to requests for easier searching and filtering.
Get user requests
Retrieve user-specific requests for monitoring and cost tracking.
Get session data
Access conversation threads and session history.
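The custom properties and user request guides above both come down to attaching Helicone headers to each call. Here is a minimal sketch, assuming the Helicone proxy for OpenAI and its documented header names (Helicone-Auth, Helicone-User-Id, Helicone-Property-&lt;Name&gt;); the property names and values are illustrative.

```python
import os
from openai import OpenAI

# Route OpenAI traffic through the Helicone proxy and tag each request so it
# can later be segmented by user and by custom dimensions.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",  # Helicone proxy for OpenAI
    default_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        "Helicone-User-Id": "user-123",              # enables per-user request and cost lookups
        "Helicone-Property-Environment": "staging",  # custom property: deployment environment
        "Helicone-Property-Plan": "free",            # custom property: any dimension you care about
    },
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our refund policy in one sentence."}],
)
print(response.choices[0].message.content)
```

Every request sent through this client is logged with those properties attached, so dashboards and filters can segment on them.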
Advanced Features
Replay LLM sessions
Replay and analyze past LLM sessions for optimization.
Run experiments
A/B test prompts and model configurations.
Fine-tune models
Prepare datasets and track fine-tuning workflows.
Predefine request IDs
Set custom request IDs for better tracking (see the sketch below).
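For predefined request IDs, a minimal sketch, assuming the Helicone-Request-Id header accepts a caller-supplied UUID as described in the guide above; the client setup mirrors the earlier example.

```python
import os
import uuid
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://oai.helicone.ai/v1",
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

# Generate the ID before the call and persist it with your own records so the
# Helicone log for this request can be looked up later.
request_id = str(uuid.uuid4())

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Ping"}],
    extra_headers={"Helicone-Request-Id": request_id},  # per-request header override
)
```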
Integration & Environment
Track environments
Separate dev, staging, and production environments.
GitHub Actions integration
Monitor LLM calls in your CI/CD pipelines.
Manual logger streaming
Implement custom streaming with the logger SDK.
Tutorials
Step-by-step guides for learning by building complete applications with Helicone. Perfect for understanding how different features work together.
Build an AI Agent System
Create a complete AI agent with tool calling, memory, and observability.
Customer Support Assistant
Build a multi-model assistant that routes queries based on complexity.
AI Debate Simulator
Create an interactive debate app showcasing different integration methods.
Evaluation System with Ragas
Implement comprehensive LLM evaluation using Helicone and Ragas.
Chatbot with Structured Outputs
Build a chatbot using OpenAI’s structured outputs and function calling (see the sketch after this list).
Thinking Models Implementation
Work with reasoning models like DeepSeek R1 and OpenAI o1/o3.
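As a reference point for the structured-outputs tutorial, here is a minimal sketch of the underlying technique using the OpenAI Python SDK’s parse helper (openai >= 1.40); the Pydantic model and prompt are illustrative, not the tutorial’s actual code.

```python
import os
from openai import OpenAI
from pydantic import BaseModel

# Illustrative schema the model must conform to.
class SupportReply(BaseModel):
    answer: str
    needs_human_followup: bool

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a support chatbot."},
        {"role": "user", "content": "Can I change my shipping address after ordering?"},
    ],
    response_format=SupportReply,  # the SDK converts the Pydantic model into a JSON schema
)

reply = completion.choices[0].message.parsed  # a validated SupportReply instance
print(reply.answer, reply.needs_human_followup)
```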
Knowledge Base
Educational resources to deepen your understanding of LLM concepts and best practices.
Prompt Engineering
Master the art of crafting effective prompts for optimal LLM performance.
Prompt thinking models
Learn how to effectively prompt thinking models like DeepSeek R1 and OpenAI o1/o3.
Be specific and clear
Create concise prompts for better LLM responses.
Use structured formats
Format the generated output for easier parsing and interpretation.
Role-playing
Assign specific roles in system prompts to set the style, tone, and content.
Few-shot learning
Provide examples of desired outputs to guide the LLM towards better responses (see the sketch at the end of this list).
Use constrained outputs
Set clear rules for the model’s responses to improve accuracy and consistency.
Chain-of-thought prompting
Encourage the model to generate intermediate reasoning steps before arriving at a final answer.
Thread-of-thought prompting
Build on previous ideas to maintain a coherent line of reasoning between interactions.
Least-to-most prompting
Break down complex problems into smaller parts, gradually increasing in complexity.
Meta-Prompting
Use LLMs to create and refine prompts dynamically.
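Several of the techniques above combine naturally in a single prompt. Below is a minimal sketch mixing few-shot examples, a chain-of-thought instruction, and a constrained output format; the triage task and labels are illustrative.

```python
# Messages for a chat completion request; pass these to any chat-style API.
few_shot_messages = [
    {
        "role": "system",
        "content": (
            "You are a support triage assistant. Think through the request step "
            "by step, then end with exactly one label on its own line: "
            "BILLING, BUG, or FEATURE_REQUEST."  # constrained output
        ),
    },
    # Few-shot examples demonstrating the desired reasoning style and final label.
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "A duplicate charge is a payment issue.\nBILLING"},
    {"role": "user", "content": "The export button does nothing when I click it."},
    {"role": "assistant", "content": "A control that does not respond is broken behavior.\nBUG"},
    # The new request to classify.
    {"role": "user", "content": "Could you add dark mode to the dashboard?"},
]
```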
Need help choosing a guide?
Not sure which guide to start with? Check out our Getting Started guide to begin your journey with Helicone.