Who can use this feature: Anyone on any plan.

This feature is currently in beta. While you’re welcome to try it out, please know that our team is still working to refine it. Your feedback is valuable to help us improve!

Introduction

Helicone’s prompt management provides a seamless way for users to track the prompts used in their generative AI applications. With Helicone, you can effortlessly monitor versions and inputs as they evolve.

Example: A Prompt Template designed for a rap battle between two people.

Why Prompts

Requests are matched to a `Helicone-Prompt-Id` header, allowing you to:

  • version and track iterations of your prompt over time, without losing any previous versions.
  • maintain a dataset of inputs and outputs for each prompt version.

Quick Start

Prerequisites

To use Prompts, you must set up Helicone in proxy mode. Please ensure you use one of the methods in our Starter Guide.

Not sure if proxy is for you? We created a guide explaining the difference between the Helicone Proxy and Helicone Async integrations.
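For reference, a typical proxy setup with the OpenAI SDK routes traffic through Helicone’s base URL and authenticates with your Helicone API key. A minimal sketch, assuming both keys are set as environment variables:

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  // Route requests through the Helicone proxy.
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});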

How Prompt Templates Work

As you modify your prompt in code, Helicone automatically tracks the new version and keeps a record of the old prompt. A dataset of inputs and outputs is also preserved for each version.
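For instance, if you keep the same Helicone-Prompt-Id but edit the template text, Helicone stores the change as a new version rather than overwriting the old one. A sketch using the `hpf` tag introduced in the example below (the wording change is illustrative):

// Version 1 — logged under the id "prompt_story"
content: hpf`Write a story about ${{ character }}`,

// Version 2 — the edited template is tracked as a new version, and version 1 is preserved
content: hpf`Write a short, upbeat story about ${{ character }}`,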

Example

Let’s say we have an app that generates a short story, where users are able to input their own character. For example, the prompt is “Write a story about a secret agent”, where the character is “a secret agent”.

1. Import `hpf`

import { hpf } from "@helicone/prompts";
2. Add `hpf` and identify input variables

Using JavaScript tagged template literals, add `hpf` in front of your backtick string so that Helicone can automatically detect where your variables are.

Next, wrap each interpolated variable in an extra pair of braces, writing `${{ character }}` instead of `${character}`. The extra braces tell Helicone which input key to record for that variable.

content: hpf`Write a story about ${{ character }}`,
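A template with more than one variable works the same way; each double-braced variable becomes its own input key. A minimal sketch (the `hero` and `setting` keys are illustrative, not part of this example):

import { hpf } from "@helicone/prompts";

// Hypothetical inputs — Helicone would record "hero" and "setting"
// as the input keys for this prompt version.
const hero = "a detective";
const setting = "1920s Paris";

const content = hpf`Write a story about ${{ hero }} set in ${{ setting }}`;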
3. Assign an id to your prompt

Assign a Helicone-Prompt-Id header to your LLM request.

Assigning an id lets Helicone associate your prompt with its future versions and manage versioning on your behalf.

How you add this header depends on the package you are using. For more information on adding headers to packages, please see the Header Directory.

headers: {
  "Helicone-Prompt-Id": "prompt_story",
},

Here’s what your code would look like:

// 1. Add these lines
import { hpf, hpstatic } from "@helicone/prompts";

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      {
        role: "system",
        // 2. Use hpstatic for static prompts
        content: hpstatic`You are a creative storyteller.`,
      },
      {
        role: "user",
        // 3. Add hpf to any string, and nest any variable in additional braces `{}`
        content: hpf`Write a story about ${{ character }}`,
      },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    // 4. Add the Helicone-Prompt-Id header
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);

Using Prompts created on the UI

If you’ve created a prompt in the UI, you can pull it into your codebase by calling the prompt template API endpoint. The helper below sketches this; the `Result` and `PromptVersionCompiled` types are assumed response shapes.

import OpenAI from "openai";

// Assumed shapes for the endpoint's response — adjust to match the actual API.
type Result<T, E> = { data: T | null; error: E | null };
type PromptVersionCompiled = { filled_helicone_template: any };

const YOUR_HELICONE_API_KEY = process.env.HELICONE_API_KEY;

export async function getPrompt(
  id: string,
  variables: Record<string, any>
): Promise<any> {
  const getHeliconePrompt = async (id: string) => {
    const res = await fetch(
      `https://api.helicone.ai/v1/prompt/${id}/template`,
      {
        headers: {
          Authorization: `Bearer ${YOUR_HELICONE_API_KEY}`,
          "Content-Type": "application/json",
        },
        method: "POST",
        body: JSON.stringify({
          inputs: variables,
        }),
      }
    );

    return (await res.json()) as Result<PromptVersionCompiled, any>;
  };

  const heliconePrompt = await getHeliconePrompt(id);
  if (heliconePrompt.error) {
    throw new Error(heliconePrompt.error);
  }
  return heliconePrompt.data?.filled_helicone_template;
}

async function pullPromptAndRunCompletion() {
  const prompt = await getPrompt("my-prompt-id", {
    color: "red",
  });
  console.log(prompt);

  const openai = new OpenAI({
    apiKey: "YOUR_OPENAI_API_KEY",
    baseURL: "https://oai.helicone.ai/v1",
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${YOUR_HELICONE_API_KEY}`,
    },
  });
  const response = await openai.chat.completions.create(
    prompt as OpenAI.Chat.Completions.ChatCompletionCreateParamsNonStreaming
  );
  console.log(response);
}

Running Experiments

Once you’ve set up prompt management, you can leverage Helicone’s experimentation features to test and improve your prompts.

To learn more about running experiments with your prompts, including step-by-step guides and best practices, visit our Experiments guide.

Local Testing

During development, you often want to test a prompt locally before deploying it to production, without Helicone tracking each change as a new prompt version.

To do this, set the Helicone-Prompt-Mode header to testing in your LLM request. This prevents Helicone from recording new prompt versions.

headers: {
  "Helicone-Prompt-Mode": "testing",
},
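Putting it together, a local test of the earlier request might carry both headers. A sketch reusing the prompt_story example from above:

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      {
        role: "user",
        content: hpf`Write a story about ${{ character }}`,
      },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
      // Prevents this run from being tracked as a new prompt version.
      "Helicone-Prompt-Mode": "testing",
    },
  }
);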

Questions?

Questions or feedback? Reach out to help@helicone.ai or schedule a call with us.