Overview

The Generate API allows you to use predefined prompts to generate content through a unified endpoint. This feature simplifies the process of using prompts across different LLM providers while maintaining Helicone’s observability features.

Prerequisites

  • A Helicone account and API key
  • At least one predefined prompt in Helicone
  • API keys for the LLM providers you want to use
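
A quick preflight check for these prerequisites (a minimal sketch; the environment variable names follow the examples later on this page):

// Sketch: fail fast if required keys are missing.
// HELICONE_API_KEY is required; at least one provider key must be set.
const providerKeys = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "GOOGLE_GENERATIVE_API_KEY",
    "COHERE_API_KEY",
    "MISTRAL_API_KEY",
];

if (!process.env.HELICONE_API_KEY) {
    throw new Error("HELICONE_API_KEY is not set");
}
if (!providerKeys.some((key) => process.env[key])) {
    throw new Error("Set an API key for at least one LLM provider");
}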

API Endpoint

Send a POST request with a JSON body to:

https://generate.helicone.ai

Request Format

The request body should include:

  • promptId (required): the ID of the predefined prompt to run
  • version (optional): the prompt version to use; defaults to "production", or pass a specific version number
  • inputs (optional): key-value pairs for the prompt's input variables
  • userId (optional): a user ID logged as a Helicone property for request tracking
  • sessionId (optional): a session ID logged as a Helicone property for request tracking

Required headers:

  • Content-Type: application/json
  • Helicone-Auth: Bearer <your Helicone API key>

Provider API keys (at least one required):

  • OPENAI_API_KEY
  • ANTHROPIC_API_KEY
  • GOOGLE_API_KEY
  • COHERE_API_KEY
  • MISTRAL_API_KEY

Example Usage

const generateUrl = "https://generate.helicone.ai";

const payload = {
    // Prompt Info
    promptId: "new-prompt",
    version: "production", // optional; defaults to "production", or pass a specific version number
    inputs: {
        num: "10",
    },
    // Helicone Properties
    userId: "test-user",
    sessionId: "session-123",
};

const headers = {
    // Helicone Auth
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    // Provider Keys
    "OPENAI_API_KEY": process.env.OPENAI_API_KEY,
    "ANTHROPIC_API_KEY": process.env.ANTHROPIC_API_KEY,
    "GOOGLE_API_KEY": process.env.GOOGLE_GENERATIVE_API_KEY,
    "COHERE_API_KEY": process.env.COHERE_API_KEY,
    "MISTRAL_API_KEY": process.env.MISTRAL_API_KEY,
};

const response = await fetch(generateUrl, {
    method: "POST",
    headers: headers,
    body: JSON.stringify(payload),
});

const data = await response.json();
console.log(data);
Or with cURL:
curl -X POST https://generate.helicone.ai/v1/generate \
  -H "Content-Type: application/json" \
  -H "Helicone-Auth: Bearer $HELICONE_API_KEY" \
  -H "OPENAI_API_KEY: $OPENAI_API_KEY" \
  -H "ANTHROPIC_API_KEY: $ANTHROPIC_API_KEY" \
  -H "GOOGLE_API_KEY: $GOOGLE_GENERATIVE_API_KEY" \
  -H "COHERE_API_KEY: $COHERE_API_KEY" \
  -H "MISTRAL_API_KEY: $MISTRAL_API_KEY" \
  -d '{
    "promptId": "new-prompt",
    "version": "production",
    "inputs": {
      "num": "10"
    },
    "userId": "test-user",
    "sessionId": "session-123"
  }'
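
Whichever client you use, it is worth checking the HTTP status before consuming the body. A minimal guard around the fetch call above (a sketch; the exact error response body is not documented here, so it is surfaced as raw text):

const response = await fetch(generateUrl, {
    method: "POST",
    headers: headers,
    body: JSON.stringify(payload),
});

if (!response.ok) {
    // The error payload shape is an assumption, so log it unparsed.
    const errorBody = await response.text();
    throw new Error(`Generate API request failed (${response.status}): ${errorBody}`);
}

const data = await response.json();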

Response Format

The response matches the standard response format of whichever LLM provider handles the request. All responses are logged in Helicone and can be viewed in your dashboard.
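
For example, if the prompt targets OpenAI, the response follows OpenAI's chat completion shape and the generated text can be read like this (a sketch; the field path below applies to OpenAI-style responses only, and other providers use different shapes):

// Sketch for an OpenAI-backed prompt: the generated text lives at
// choices[0].message.content in OpenAI's chat completion format.
const data = await response.json();
const text = data.choices?.[0]?.message?.content;
console.log(text);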

Features

  • Unified Endpoint: Use a single endpoint for multiple LLM providers
  • Version Control: Specify prompt versions for testing and production (see the sketch after this list)
  • Input Variables: Pass dynamic inputs to your prompts
  • User Tracking: Include user and session IDs for request tracking
  • Full Observability: All requests are logged in Helicone’s dashboard
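
For instance, to test a specific prompt version rather than the production tag, change the version field in the request body (a sketch; "2" is a hypothetical version number, passed as a string to match the example above):

// Sketch: pin a specific prompt version instead of the "production" tag.
const pinnedPayload = {
    promptId: "new-prompt",
    version: "2", // hypothetical version number for illustration
    inputs: {
        num: "10",
    },
    userId: "test-user",
    sessionId: "session-123",
};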

Upcoming Features

  • Reverse mapper for Anthropic and other API paths
  • Support for additional LLM providers
  • Enhanced prompt template key-value store

FAQ