Why Prompt Management?
Traditional prompt development means hardcoding prompts in application code, messy string substitution, and rebuilding and redeploying for every iteration. This friction slows down experimentation and your team's ability to ship.
Iterate Without Code Changes
Test and deploy prompt changes instantly without rebuilding or redeploying your application
Version Control Built-In
Track every change, compare versions, and roll back instantly if something goes wrong
Dynamic Variables
Use variables anywhere - system prompts, messages, even tool schemas - for truly reusable prompts
Environment Management
Deploy different versions to production, staging, and development environments independently
Quick Start
1. Create a Prompt
Build a prompt in the Playground. Save any prompt with a clear commit history and tags.
2. Test and Iterate
Experiment with different variables, inputs, and models until you reach the desired output. Variables can be used anywhere, even in tool schemas.
3. Run Prompt with AI Gateway
Use your prompt instantly by referencing its ID in your AI Gateway requests. No code changes, no rebuilds.
Prompt Management is available for Chat Completions on the AI Gateway. Simply include prompt_id and inputs in your chat completion requests. Your prompt is automatically compiled with the provided inputs and sent to your chosen model. Update prompts in the dashboard and changes take effect immediately!
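As a minimal sketch of what that request body might look like, assuming an OpenAI-compatible Chat Completions payload (the prompt ID and input names below are placeholders, not values from this document):

```python
import json

def build_gateway_request(prompt_id: str, inputs: dict) -> dict:
    # prompt_id and inputs are the two fields described in the docs;
    # any other Chat Completions fields would ride along unchanged.
    return {"prompt_id": prompt_id, "inputs": inputs}

# Hypothetical 6-character prompt ID and inputs -- substitute your own.
body = json.dumps(build_gateway_request("abc123", {"customer_name": "Ada"}))
print(body)
```

The gateway compiles the saved prompt with these inputs before forwarding the request to the model provider, so the application never needs the prompt text itself.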
Variables
Variables make your prompts dynamic and reusable. Define them once in your prompt template, then provide different values at runtime without changing your code.
Variable Syntax
Variables use the format {{hc:name:type}} where:
- name is your variable identifier
- type defines the expected data type
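For instance, the tags in a template can be picked out with a simple pattern match (a sketch; the template text here is made up for illustration):

```python
import re

template = "Summarize this for {{hc:audience:string}} in {{hc:word_count:number}} words."

# Each match yields the (name, type) pair declared by a {{hc:name:type}} tag.
variables = re.findall(r"\{\{hc:(\w+):(\w+)\}\}", template)
print(variables)
```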
Supported Types
| Type | Description | Example Values | Validation |
|---|---|---|---|
| string | Text values | "John Doe", "Hello world" | None |
| number | Numeric values | 25, 3.14, -10 | AI Gateway type-checking |
| boolean | True/false values | true, false, "yes", "no" | AI Gateway type-checking |
| your_type_name | Any data type | Objects, arrays, strings | None |
Only number and boolean types are validated by the Helicone AI Gateway, which accepts strings for any input as long as they can be converted to valid values. For booleans, that means true/false (boolean), "yes"/"no" (string), and "true"/"false" (string) are all accepted.
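The substitution and type-checking behavior described above can be sketched roughly as follows. This is an illustration of the documented rules, not Helicone's actual implementation:

```python
import re

TAG = re.compile(r"\{\{hc:(\w+):(\w+)\}\}")

def coerce(value, type_name):
    # Only number and boolean are validated; string and custom types pass through.
    if type_name == "number":
        return float(value)                  # "25" -> 25.0; raises if not convertible
    if type_name == "boolean":
        if isinstance(value, bool):
            return value
        text = str(value).lower()
        if text in ("true", "yes"):
            return True
        if text in ("false", "no"):
            return False
        raise ValueError(f"not a boolean: {value!r}")
    return value

def compile_text(template: str, inputs: dict) -> str:
    # Replace every {{hc:name:type}} tag with the coerced input value.
    return TAG.sub(lambda m: str(coerce(inputs[m.group(1)], m.group(2))), template)

print(compile_text("Hello {{hc:name:string}}! Premium: {{hc:premium:boolean}}",
                   {"name": "Ada", "premium": "yes"}))
```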
Schema Variables
Variables can be used within JSON schemas for tools and response formatting. This enables dynamic schema generation based on runtime inputs.
Replacement Behavior
Value Replacement: When a variable tag is the only content in a string, it is replaced with the input's actual value, preserving its data type rather than being converted to text.
Managing Environments
You can easily manage different deployment environments for your prompts directly in the Helicone dashboard. Create and deploy prompts to production, staging, development, or any custom environment you need.
Prompt Partials
When building multiple prompts, you often need to reuse the same message blocks across different prompts. Prompt partials allow you to reference messages from other prompts, eliminating duplication and making your prompt library more maintainable.
Syntax
Prompt partials use the format {{hcp:prompt_id:index:environment}} where:
- prompt_id - The 6-character alphanumeric identifier of the prompt to reference
- index - The message index (0-based) to extract from that prompt
- environment - Optional environment identifier (defaults to production if omitted)
How It Works
When a prompt containing a partial is compiled:
- Partial Resolution: The partial tag {{hcp:prompt_id:index:environment}} is replaced with the actual message content from the referenced prompt at the specified index
- Variable Substitution: After partials are resolved, variables in both the main prompt and the resolved partials are substituted with their values
Variables from partials are automatically extracted and shown in the prompt editor. You can provide values for these variables just like any other prompt variable, giving you full control over the partial’s content.
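The two-pass compilation described above can be sketched like this. The in-memory prompt store, message shape, and prompt ID are assumptions for illustration, not Helicone's internal representation:

```python
import re

# Hypothetical saved prompts: id -> environment -> list of messages.
PROMPTS = {
    "abc123": {"production": [{"role": "system",
                               "content": "You are {{hc:persona:string}}."}]},
}

PARTIAL = re.compile(r"\{\{hcp:(\w{6}):(\d+)(?::(\w+))?\}\}")
VARIABLE = re.compile(r"\{\{hc:(\w+):\w+\}\}")

def resolve_partials(content: str) -> str:
    # Pass 1: swap each partial tag for the referenced message's content.
    def repl(m):
        prompt_id, index = m.group(1), int(m.group(2))
        env = m.group(3) or "production"     # environment defaults to production
        return PROMPTS[prompt_id][env][index]["content"]
    return PARTIAL.sub(repl, content)

def substitute_variables(content: str, inputs: dict) -> str:
    # Pass 2: fill variables from both the main prompt and resolved partials.
    return VARIABLE.sub(lambda m: str(inputs[m.group(1)]), content)

compiled = substitute_variables(resolve_partials("{{hcp:abc123:0}} Greet the user."),
                                {"persona": "a helpful assistant"})
print(compiled)
```

Note that the variable inside the partial ({{hc:persona:string}}) is filled from the same inputs as the main prompt, which is why partial variables surface in the prompt editor alongside your own.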
Using Prompts
Helicone provides two ways to use prompts:
- AI Gateway Integration - The recommended approach. Use prompts through the Helicone AI Gateway for automatic compilation, input tracing, and lower latency.
- SDK Integration - An alternative for users who need direct access to compiled prompt bodies without using the AI Gateway.