**Important Notice:** As of April 25th, 2025, the `@helicone/generate` SDK has been deprecated. We launched a new prompts feature with improved composability and versioning on July 20th, 2025. The SDK and the legacy prompts feature will continue to function until August 20th, 2025.

## Installation
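Install the package from npm:

```bash
npm install @helicone/generate
```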
## Usage
### Simple usage with just a prompt ID
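A minimal sketch, assuming the `generate` export described in the API Reference below (`"my-prompt-id"` is a placeholder):

```typescript
import { generate } from "@helicone/generate";

// The model and prompt template are configured in the Helicone Prompt Editor,
// so only the prompt ID is needed here.
const response = await generate("my-prompt-id");

console.log(response);
```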
### With variables
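Pass an `inputs` object to fill the variables defined in the prompt template. The variable names below (`location`, `time`) are hypothetical:

```typescript
import { generate } from "@helicone/generate";

// `inputs` supplies values for the variables defined in the prompt template.
const response = await generate({
  promptId: "my-prompt-id",
  inputs: {
    location: "Portugal", // hypothetical template variable
    time: "2:43",         // hypothetical template variable
  },
});

console.log(response);
```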
### With Helicone properties
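A sketch using the tracking and caching fields from the parameters object (the IDs are placeholders):

```typescript
import { generate } from "@helicone/generate";

// userId and sessionId tag the request for Helicone tracking and Sessions;
// cache opts the request into Helicone's LLM Caching.
const response = await generate({
  promptId: "my-prompt-id",
  userId: "user-123",
  sessionId: "session-456",
  cache: true,
});

console.log(response);
```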
### In a chat
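For multi-turn conversations, pass the running history via the `chat` parameter. This sketch assumes the raw provider response can be stringified into the assistant's message; extracting the text properly depends on the provider:

```typescript
import { generate } from "@helicone/generate";

const promptId = "my-prompt-id";
const chat: string[] = [];

// User turn
chat.push("Can you help me with my homework?");

// Assistant turn: pass the accumulated history via `chat`.
// generate() resolves with the raw provider response, so String() here
// is a stand-in for extracting the assistant's message text.
const firstReply = await generate({ promptId, chat });
chat.push(String(firstReply));

// User turn
chat.push("Thanks! First question: what is 2 + 2?");

// Assistant turn
const secondReply = await generate({ promptId, chat });
chat.push(String(secondReply));

console.log(chat);
```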
## Supported Providers and Required Environment Variables
Ensure all required environment variables are correctly defined in your `.env` file before making a request. `HELICONE_API_KEY` is always required, in addition to the provider-specific variables below.
| Provider | Required Environment Variables |
|---|---|
| OpenAI | `OPENAI_API_KEY` |
| Azure OpenAI | `AZURE_API_KEY`, `AZURE_ENDPOINT`, `AZURE_DEPLOYMENT` |
| Anthropic | `ANTHROPIC_API_KEY` |
| AWS Bedrock | `BEDROCK_API_KEY`, `BEDROCK_REGION` |
| Google Gemini | `GOOGLE_GEMINI_API_KEY` |
| Google Vertex AI | `GOOGLE_VERTEXAI_API_KEY`, `GOOGLE_VERTEXAI_REGION`, `GOOGLE_VERTEXAI_PROJECT`, `GOOGLE_VERTEXAI_LOCATION` |
| OpenRouter | `OPENROUTER_API_KEY` |
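For example, a minimal `.env` for OpenAI (values are placeholders):

```bash
HELICONE_API_KEY=your-helicone-api-key
OPENAI_API_KEY=your-openai-api-key
```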
## API Reference
### `generate(input)`
Generates a response using a Helicone prompt.
#### Parameters

- `input` (string | object): Either a prompt ID string or a parameters object with the following fields:
  - `promptId` (string): The ID of the prompt to use, created in the Prompt Editor
  - `version` (number | `"production"`, optional): The version of the prompt to use. Defaults to `"production"`
  - `inputs` (object, optional): Variable inputs to use in the prompt, if any
  - `chat` (string[], optional): Chat history for chat-based prompts
  - `userId` (string, optional): User ID for tracking in Helicone
  - `sessionId` (string, optional): Session ID for tracking in Helicone Sessions
  - `cache` (boolean, optional): Whether to use Helicone's LLM Caching
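For instance, a parameters object combining most of the optional fields (all values are placeholders; `chat` is shown in the chat example above):

```typescript
import { generate } from "@helicone/generate";

const response = await generate({
  promptId: "my-prompt-id",
  version: 2, // or "production" (the default)
  inputs: { location: "Portugal" }, // hypothetical template variable
  userId: "user-123",
  sessionId: "session-456",
  cache: true,
});
```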
#### Returns

`Promise<object>`: The raw response from the LLM provider.