Use prompts directly via SDK without the AI Gateway
When building LLM applications, you sometimes need direct control over prompt compilation without routing through the AI Gateway. The SDK provides an alternative integration method that allows you to pull and compile prompts directly in your application.
The SDK provides types for both request modes when using the OpenAI SDK:
| Type | Description | Use Case |
|------|-------------|----------|
| `HeliconeChatCreateParams` | Standard chat completions with prompts | Non-streaming requests |
| `HeliconeChatCreateParamsStreaming` | Streaming chat completions with prompts | Streaming requests |
Both types extend the OpenAI SDK's chat completion parameters and add the following fields (sketched in the example after this list):

- `prompt_id` - Your saved prompt identifier
- `environment` - Optional environment to target (e.g., `"production"`, `"staging"`)
- `version_id` - Optional specific version (defaults to the production version)
- `inputs` - Variable values to substitute into the prompt template
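As a rough illustration of how these fields combine with the standard OpenAI parameters, here is a minimal sketch. It assumes the `HeliconeChatCreateParams` type is exported from `@helicone/helpers` alongside `HeliconePromptManager`; check the package's exports for the exact path.

```typescript
import type { HeliconeChatCreateParams } from '@helicone/helpers'; // assumed export path

// Illustrative values only.
const params: HeliconeChatCreateParams = {
  model: "gpt-4o-mini",                        // standard OpenAI chat completion field
  prompt_id: "abc123",                         // your saved prompt identifier
  environment: "staging",                      // optional environment to target
  inputs: { customer_name: "Alice Johnson" },  // variables substituted into the template
};
```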
Important: These types make `messages` optional because Helicone prompts are expected to contain the required message structure. If your prompt template is empty or doesn't include messages, you'll need to provide them at runtime (see the sketch after the full example below).

For direct SDK integration:
First, initialize the `HeliconePromptManager`:

```typescript
import { HeliconePromptManager } from '@helicone/helpers';

const promptManager = new HeliconePromptManager({
  apiKey: "your-helicone-api-key"
});
```
Then pull and compile the prompt, and pass the compiled body to the OpenAI SDK:

```typescript
import OpenAI from 'openai';
import { HeliconePromptManager } from '@helicone/helpers';

const openai = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

const promptManager = new HeliconePromptManager({
  apiKey: "your-helicone-api-key"
});

async function generateWithPrompt() {
  // Get compiled prompt with variable substitution
  const { body, errors } = await promptManager.getPromptBody({
    prompt_id: "abc123",
    model: "gpt-4o-mini",
    inputs: {
      customer_name: "Alice Johnson",
      product: "AI Gateway"
    }
  });

  // Check for validation errors
  if (errors.length > 0) {
    console.warn("Validation errors:", errors);
  }

  // Use compiled prompt with OpenAI SDK
  const response = await openai.chat.completions.create(body);
  console.log(response.choices[0].message.content);
}
```
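As noted above, `messages` only needs to be supplied when the prompt template doesn't define any. A minimal sketch of that case, assuming `getPromptBody` accepts runtime `messages` and merges them into the compiled body:

```typescript
// Sketch: the prompt template is empty, so messages are provided at request time.
const { body, errors } = await promptManager.getPromptBody({
  prompt_id: "abc123", // assumed: a prompt whose template defines no messages
  model: "gpt-4o-mini",
  messages: [
    { role: "user", content: "Summarize the key features of the AI Gateway." }
  ],
  inputs: {}
});

if (errors.length > 0) {
  console.warn("Validation errors:", errors);
}

const response = await openai.chat.completions.create(body);
console.log(response.choices[0].message.content);
```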
Both approaches are fully compatible with all OpenAI SDK features, including function calling, response formats, and advanced parameters. The `HeliconePromptManager` does not record input traces, but it does surface validation errors for your prompt inputs.
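For streaming requests, the same flow applies with `HeliconeChatCreateParamsStreaming`. The sketch below assumes `getPromptBody` passes `stream: true` through to the compiled body unchanged:

```typescript
// Sketch: streaming variant; assumes `stream: true` survives compilation.
const { body, errors } = await promptManager.getPromptBody({
  prompt_id: "abc123",
  model: "gpt-4o-mini",
  stream: true,
  inputs: { customer_name: "Alice Johnson" }
});

if (errors.length > 0) {
  console.warn("Validation errors:", errors);
}

const stream = await openai.chat.completions.create(body);
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```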