When building LLM applications, you sometimes need direct control over prompt compilation without routing through the AI Gateway. The SDK provides an alternative integration method that allows you to pull and compile prompts directly in your application.

SDK vs AI Gateway

We provide SDKs for both TypeScript and Python that offer two ways to use Helicone prompts:
  1. AI Gateway Integration - Use prompts through the Helicone AI Gateway (recommended)
  2. Direct SDK Integration - Pull prompts directly via SDK (this page)
Prompts through the AI Gateway come with several benefits:
  • Cleaner code: Automatically performs compilation and substitution in the router.
  • Input traces: Inputs are traced on each request for better observability in Helicone.
  • Faster TTFT: The AI Gateway adds significantly less latency than the direct SDK approach.
The SDK is a great option for users who need to work with compiled prompt bodies directly, without routing through the AI Gateway.

Installation

npm install @helicone/helpers

Types and Classes

The SDK provides types for both integration methods when using the OpenAI SDK:
Type | Description | Use Case
HeliconeChatCreateParams | Standard chat completions with prompts | Non-streaming requests
HeliconeChatCreateParamsStreaming | Streaming chat completions with prompts | Streaming requests
Both types extend the OpenAI SDK’s chat completion parameters and add:
  • prompt_id - Your saved prompt identifier
  • environment - Optional environment to target (e.g., “production”, “staging”)
  • version_id - Optional specific version (defaults to production version)
  • inputs - Variable values
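For example, a gateway call with a prompt might look like the following sketch. The prompt_id, environment, and input values are placeholders, and the cast is needed because the OpenAI SDK's own types don't know about the extra Helicone fields:
import OpenAI from 'openai';
import { HeliconeChatCreateParams } from '@helicone/helpers';

const openai = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

// messages is omitted: the saved prompt template supplies it (see the note below)
const params: HeliconeChatCreateParams = {
  model: "gpt-4o-mini",
  prompt_id: "abc123",
  environment: "production",
  inputs: {
    customer_name: "Alice Johnson",
    product: "AI Gateway"
  }
};

const response = await openai.chat.completions.create(params as any);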
Important: These types make messages optional because Helicone prompts are expected to contain the required message structure. If your prompt template is empty or doesn’t include messages, you’ll need to provide them at runtime.

For direct SDK integration:
import { HeliconePromptManager } from '@helicone/helpers';

const promptManager = new HeliconePromptManager({
  apiKey: "your-helicone-api-key"
});

Methods

Both SDKs provide the HeliconePromptManager with these main methods:
Method | Description | Returns
pullPromptVersion() | Determine which prompt version to use | Prompt version object
pullPromptBody() | Fetch raw prompt from storage | Raw prompt body
pullPromptBodyByVersionId() | Fetch prompt by specific version ID | Raw prompt body
mergePromptBody() | Merge prompt with inputs and validation | Compilation result
getPromptBody() | Complete compile process with inputs | Compiled body + validation errors
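As a rough illustration of how these compose, getPromptBody() is essentially a pull followed by a merge. The snippet below is a hypothetical sketch; the parameter and return shapes are assumptions based on the table above, so consult the SDK's type definitions for the exact signatures:
// Hypothetical sketch: fetch the raw template, then substitute inputs.
// Signatures are assumed from the table above, not copied from the SDK.
const raw = await promptManager.pullPromptBody({ prompt_id: "abc123" });

const { body, errors } = await promptManager.mergePromptBody(raw, {
  customer_name: "Alice Johnson"
});

// Pinning an exact version instead of the production default
const pinned = await promptManager.pullPromptBodyByVersionId("your-version-id");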

Usage Examples

import OpenAI from 'openai';
import { HeliconePromptManager } from '@helicone/helpers';

const openai = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

const promptManager = new HeliconePromptManager({
  apiKey: "your-helicone-api-key"
});

async function generateWithPrompt() {
  // Get compiled prompt with variable substitution
  const { body, errors } = await promptManager.getPromptBody({
    prompt_id: "abc123",
    model: "gpt-4o-mini",
    inputs: {
      customer_name: "Alice Johnson",
      product: "AI Gateway"
    }
  });

  // Check for validation errors
  if (errors.length > 0) {
    console.warn("Validation errors:", errors);
  }

  // Use compiled prompt with OpenAI SDK
  const response = await openai.chat.completions.create(body);
  console.log(response.choices[0].message.content);
}
Both approaches are fully compatible with all OpenAI SDK features, including function calling, response formats, and advanced parameters. The HeliconePromptManager does not provide input traces, but it does surface validation errors from prompt compilation.
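Because the compiled body is a standard chat completion request, one common pattern is to layer extra OpenAI parameters on top of it before sending. The overrides below are illustrative, not required:
const { body, errors } = await promptManager.getPromptBody({
  prompt_id: "abc123",
  model: "gpt-4o-mini",
  inputs: { customer_name: "Alice Johnson", product: "AI Gateway" }
});

// Spread the compiled body, then override or extend with standard
// OpenAI parameters (temperature, tools, response_format, ...)
const response = await openai.chat.completions.create({
  ...body,
  temperature: 0.2
});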