Why use User Feedback
- Improve response quality: Identify patterns in poorly-rated responses to refine prompts and model selection
- Catch regressions early: Monitor feedback trends to detect when changes negatively impact user experience
- Build training datasets: Use highly-rated responses as examples for fine-tuning or few-shot prompting
Quick Start
1. Make a request and capture the ID
Capture the Helicone request ID when making your LLM request:
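A minimal TypeScript sketch, assuming the OpenAI SDK routed through Helicone's gateway; the `Helicone-Request-Id` request header (which lets you supply your own UUID as the request ID) is an assumption to verify against the current Helicone docs:

```typescript
import OpenAI from "openai";
import { randomUUID } from "node:crypto";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1", // Helicone gateway
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

// Generate the ID client-side so it is always available for the feedback step.
// Assumption: Helicone honors a "Helicone-Request-Id" request header.
const heliconeId = randomUUID();

const completion = await openai.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Summarize our refund policy." }],
  },
  { headers: { "Helicone-Request-Id": heliconeId } }
);

// Store heliconeId alongside the rendered response for the feedback step.
console.log(completion.choices[0].message.content);
```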
Alternative: Getting request ID from response
You can also try to get the Helicone ID from response headers, though this may not always be available:
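For example, with the OpenAI Node SDK you can read the raw HTTP response via its `.withResponse()` helper; the `helicone-id` response header name follows Helicone's proxy convention, and the client is reused from the previous sketch:

```typescript
// .withResponse() exposes the raw HTTP response alongside the parsed body.
const { data: completion, response } = await openai.chat.completions
  .create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Summarize our refund policy." }],
  })
  .withResponse();

// May be null if your integration (e.g., async logging) does not return proxy headers.
const heliconeId = response.headers.get("helicone-id");
```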
2. Submit feedback rating
Send a positive or negative rating for the response:
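A sketch of the feedback call; the endpoint path and `Helicone-Auth` header are assumptions based on Helicone's REST API conventions, so confirm them against the current API reference:

```typescript
// Assumed endpoint: POST /v1/request/{heliconeId}/feedback
await fetch(`https://api.helicone.ai/v1/request/${heliconeId}/feedback`, {
  method: "POST",
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Content-Type": "application/json",
  },
  // rating: true = positive (helpful), false = negative (not helpful)
  body: JSON.stringify({ rating: true }),
});
```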
3. View feedback analytics
Access feedback metrics in your Helicone dashboard to analyze response quality trends and identify areas for improvement.
Configuration Options
Feedback collection requires minimal configuration:

| Parameter | Type | Description | Default | Example |
|---|---|---|---|---|
| `rating` | boolean | User's feedback on the response | N/A | `true` (positive) or `false` (negative) |
| `helicone-id` | string | Request ID to attach feedback to | N/A | UUID |
Processing multiple feedback ratings
When you need to submit feedback for multiple requests, use parallel API calls:
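A sketch using `Promise.allSettled` so one failed call does not drop the rest; `submitFeedback` is a hypothetical helper wrapping the single-feedback call from step 2:

```typescript
// Hypothetical helper wrapping the assumed feedback endpoint from step 2.
async function submitFeedback(heliconeId: string, rating: boolean): Promise<void> {
  const res = await fetch(
    `https://api.helicone.ai/v1/request/${heliconeId}/feedback`,
    {
      method: "POST",
      headers: {
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ rating }),
    }
  );
  if (!res.ok) throw new Error(`Feedback failed for ${heliconeId}: ${res.status}`);
}

const ratings = [
  { heliconeId: "id-1", rating: true },
  { heliconeId: "id-2", rating: false },
];

// Fire all feedback calls in parallel.
const results = await Promise.allSettled(
  ratings.map(({ heliconeId, rating }) => submitFeedback(heliconeId, rating))
);
```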
Use Cases
Track user satisfaction with AI assistant responses:
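For example, you might wire thumbs-up/thumbs-down buttons to the feedback call; this sketch assumes a hypothetical chat UI where each rendered message stores its `heliconeId`, and reuses the `submitFeedback` helper above:

```typescript
// Hypothetical UI handler: called when a user clicks thumbs up/down on a message.
async function onThumbClick(message: { heliconeId: string }, thumbsUp: boolean) {
  try {
    await submitFeedback(message.heliconeId, thumbsUp);
  } catch (err) {
    // Feedback is best-effort; never block the chat experience on it.
    console.warn("Could not record feedback", err);
  }
}
```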
Understanding User Feedback
How it works
User feedback creates a continuous improvement loop for your AI application:
- Each LLM request gets a unique Helicone ID
- Users rate responses as positive (helpful) or negative (not helpful)
- Feedback is linked to the original request for analysis
- Dashboard aggregates feedback to show quality trends
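Putting the loop together, a condensed end-to-end sketch (reusing the client, the assumed `Helicone-Request-Id` header, and the hypothetical `submitFeedback` helper from the Quick Start):

```typescript
// 1. Request: assign an ID so the response can be rated later.
const heliconeId = randomUUID();
const completion = await openai.chat.completions.create(
  { model: "gpt-4o-mini", messages: [{ role: "user", content: "Hi!" }] },
  { headers: { "Helicone-Request-Id": heliconeId } }
);

// 2-3. Rate: link the user's verdict back to the original request.
await submitFeedback(heliconeId, /* userFoundHelpful */ true);

// 4. Aggregation and trend analysis happen in the Helicone dashboard.
```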
Explicit vs Implicit Feedback
Explicit feedback is when users directly rate responses (thumbs up/down, star ratings). While valuable, it has low response rates since users must take deliberate action.

Implicit feedback is derived from user behavior and is much more valuable since it reflects actual usage patterns. Track user actions that indicate response quality, as in the sketch below.
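A sketch mapping behavioral signals to ratings; the signal names and their mapping here are illustrative assumptions for your own product, not part of Helicone's API:

```typescript
// Illustrative behavioral signals; choose ones that fit your product.
type ImplicitSignal =
  | "copied_response"
  | "accepted_suggestion"
  | "regenerated"
  | "abandoned_session";

// Assumption: this mapping is product-specific, not defined by Helicone.
const SIGNAL_RATING: Record<ImplicitSignal, boolean> = {
  copied_response: true,      // user reused the answer -> likely helpful
  accepted_suggestion: true,  // user kept the suggestion -> likely helpful
  regenerated: false,         // user asked for another answer -> likely unhelpful
  abandoned_session: false,   // user gave up -> likely unhelpful
};

async function recordImplicitFeedback(heliconeId: string, signal: ImplicitSignal) {
  await submitFeedback(heliconeId, SIGNAL_RATING[signal]);
}
```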
Related Features
Custom Properties
Segment feedback by feature, user type, or experiment for deeper insights
User Metrics
Combine feedback with usage data to understand user satisfaction trends
Sessions
Track feedback across multi-turn conversations and workflows
Alerts
Set up notifications when feedback rates drop below thresholds