Automatically version prompts from your codebase via our proxy, without changing your workflow. Run experiments using historical datasets to test, evaluate, and improve prompts over time while preventing regressions in production systems.
Who can use this feature: Prompts is an add-on feature for our Pro plan.
Helicone’s prompt management provides a seamless way for users to track the prompts used in their generative AI applications. With Helicone, you can effortlessly monitor versions and inputs as they evolve.
Example: A Prompt Template designed for a rap battle between two people.
As you modify your prompt in code, Helicone automatically tracks the new version and maintains a record of the old prompt. Additionally, a dataset of input/output keys is preserved for each version.
For TypeScript / JavaScript users:
By prefixing your prompt with `hpf` and enclosing your input variables in an additional `{}`, Helicone can easily detect your prompt and inputs. We designed this for minimal code change, to keep Prompts as easy as possible to use.
```typescript
const location = "space";
const character = "two brothers";

const promptInput = hpf`Compose a movie scene involving ${{ character }}, set in ${{ location }}`;
```
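Under the hood, `hpf` renders each nested variable wrapped in Helicone's input tags (the same `<helicone-prompt-input>` syntax described later for other languages), which is how the proxy recovers both the template and its inputs. The template above would render to something like:

```
Compose a movie scene involving <helicone-prompt-input key="character">two brothers</helicone-prompt-input>, set in <helicone-prompt-input key="location">space</helicone-prompt-input>
```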
Static Prompts with hpstatic
In addition to hpf, Helicone provides hpstatic for creating static prompts that don’t change between requests. This is useful for system prompts or other constant text that you don’t want to be treated as variable input.
To use hpstatic, import it along with hpf:
```typescript
import { hpf, hpstatic } from "@helicone/prompts";
```
Then, you can use it like this:
```typescript
const systemPrompt = hpstatic`You are a helpful assistant.`;
const userPrompt = hpf`Write a story about ${{ character }}`;

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userPrompt },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);
```
The hpstatic function wraps the entire text in <helicone-prompt-static> tags, indicating to Helicone that this part of the prompt should not be treated as variable input.
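So the `systemPrompt` above passes through the proxy as the tagged form below, with the tags presumably stripped before the text reaches your LLM, just as with input tags:

```
<helicone-prompt-static>You are a helpful assistant.</helicone-prompt-static>
```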
For other users:
Currently, we only offer packages for TypeScript and JavaScript for easy integration. For other languages, we use an adapted version of JSX to manage prompts and their input variables.
When you send a prompt with the following syntax, we extract the child text of the `helicone-prompt-input` JSX element, then place it back into the prompt that is sent to your LLM.
```
<helicone-prompt-input key="my_input_key">Input to your LLM</helicone-prompt-input>
```
Only the following will be sent to your LLM:
```
Input to your LLM
```
It is crucial that each key is a unique identifier; otherwise, the same variable will be substituted in multiple places.
Each input variable is then mapped to its key (here, `my_input_key`) as a dictionary in Helicone:
{"my_input_key":"Input to your LLM"}
For static prompts, you can manually wrap static parts of your prompt in <helicone-prompt-static> tags:
```
<helicone-prompt-static>You are a helpful assistant.</helicone-prompt-static>
Write a story about <helicone-prompt-input key="character">a secret agent</helicone-prompt-input>
```
This tells Helicone that the first part of the prompt is static and should not be treated as variable input.
Let’s say we have an app that generates a short story, where users are able to input their own character. For example, the prompt is “Write a story about a secret agent”, where the character is “a secret agent”.
1. Import `hpf`

```typescript
import { hpf } from "@helicone/prompts";
```
2. Add `hpf` and identify input variables

Using JavaScript's backtick string formatter (template literals), add `hpf` in front of the backtick so that your text is automatically formatted and Helicone can determine where your variables are.
Next, nest your input variable in an additional set of braces `{}`; this is what lets Helicone determine the input key.
```typescript
content: hpf`Write a story about ${{ character }}`,
```
If you want to rename your input or use a custom input name, change the key-value pair in the dictionary passed to the string formatter, like this:
```typescript
content: hpf`Write a story about ${{ "my_magical_input": character }}`,
```
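With the renamed key, the recorded inputs would presumably follow the same dictionary format shown earlier (assuming `character` holds "a secret agent" as in the running example):

```
{ "my_magical_input": "a secret agent" }
```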
3. Assign an id to your prompt

Assign a `Helicone-Prompt-Id` header to your LLM request.
Assigning an id allows us to associate your prompt with its future versions and automatically manage versioning on your behalf.
Depending on the package you are using, you will need to add a header. For more information on adding headers to packages, please see Header Directory.
headers:{"Helicone-Prompt-Id":"prompt_story",},
Here’s what your code would look like:
```typescript
// 1. Add these lines
import { hpf, hpstatic } from "@helicone/prompts";

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      {
        role: "system",
        // 2. Use hpstatic for static prompts
        content: hpstatic`You are a creative storyteller.`,
      },
      {
        role: "user",
        // 3. Add hpf to any string, and nest any variable in additional brackets `{}`
        content: hpf`Write a story about ${{ character }}`,
      },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    // 4. Add the Prompt Id header
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);
```
Often in development, you'll want to test your prompt locally before deploying it to production, without Helicone tracking new prompt versions.
To do this, set the Helicone-Prompt-Mode header to testing in your LLM request. This prevents Helicone from tracking new prompt versions.
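For example, here is a sketch reusing the story prompt from above; the `Helicone-Prompt-Mode` header is the only addition:

```typescript
const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      { role: "user", content: hpf`Write a story about ${{ character }}` },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
      // Prevents Helicone from tracking this run as a new prompt version
      "Helicone-Prompt-Mode": "testing",
    },
  }
);
```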