Why Use Prompt Integration?

Instead of hardcoding prompts in your application, reference them by ID.

Gateway vs SDK Integration

Without the AI Gateway, using managed prompts requires multiple steps: fetch the prompt, compile it with your inputs, then send the result to the model yourself.

Why the gateway is better:
- No extra packages - Works with your existing OpenAI SDK
- Single API call - Gateway fetches and compiles automatically
- Lower latency - Everything happens server-side in one request
- Automatic error handling - Invalid inputs return clear error messages
- Cleaner code - No prompt management logic in your application
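A minimal sketch of the single-call flow, assuming your existing OpenAI Python SDK client; the `build_prompt_request` helper, the prompt ID `support-triage`, and the placeholder base URL are illustrative, not a documented API:

```python
# Sketch: one gateway request replaces the fetch-compile-call sequence.
# The helper, prompt ID, and base URL below are illustrative assumptions.

def build_prompt_request(prompt_id: str, inputs: dict, model: str) -> dict:
    """Build kwargs for a chat completion that references a saved prompt.

    No `messages` array here: the gateway looks up the prompt by ID and
    compiles it with `inputs` server-side, within the same request.
    """
    return {
        "model": model,
        # The OpenAI SDK forwards non-standard fields via `extra_body`.
        "extra_body": {"prompt_id": prompt_id, "inputs": inputs},
    }

kwargs = build_prompt_request(
    prompt_id="support-triage",  # hypothetical ID from the dashboard
    inputs={"customer_name": "John", "issue_type": "billing"},
    model="gpt-4o-mini",
)

# With a configured client this becomes a single API call:
# from openai import OpenAI
# client = OpenAI(base_url="<your gateway URL>", api_key="<your key>")
# response = client.chat.completions.create(**kwargs)
```

Because the request goes through the OpenAI client you already have, no separate prompt-management package is involved.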
Integration Steps

1. Create prompts in Helicone: build and test prompts with variables in the dashboard.
2. Use prompt_id in your code: replace messages with prompt_id and inputs in your gateway calls.
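The two steps above amount to a small change in the request body. A sketch with hypothetical values (the prompt ID and variable names are placeholders):

```python
# Before: the prompt text lives in application code.
hardcoded = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Help John with his billing issue."},
    ],
}

# After: the prompt lives in Helicone; the code sends only the prompt's
# ID and the variables to fill in. ("support-triage" is a hypothetical ID.)
managed = {
    "model": "gpt-4o-mini",
    "prompt_id": "support-triage",
    "inputs": {"customer_name": "John", "issue_type": "billing"},
}
```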
API Parameters

Use these parameters in your chat completions request to integrate with saved prompts:

- prompt_id - The ID of your saved prompt from the Helicone dashboard
- environment - Which environment version to use: development, staging, or production
- inputs - Variables to fill in your prompt template (e.g., {"customer_name": "John", "issue_type": "billing"})
- model - Any supported model; works with the unified gateway format
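A sketch combining the parameters above into one request body; the parameter names follow the list above, and the client-side environment check simply mirrors the allowed values (the gateway also validates inputs server-side and returns clear error messages):

```python
# Allowed values per the parameter list above.
ALLOWED_ENVIRONMENTS = ("development", "staging", "production")

def prompt_request(prompt_id: str, inputs: dict, model: str,
                   environment: str = "production") -> dict:
    """Assemble a chat completions body that references a saved prompt.

    The "production" default is an assumption for this sketch.
    """
    if environment not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"environment must be one of {ALLOWED_ENVIRONMENTS}")
    return {
        "model": model,
        "prompt_id": prompt_id,
        "environment": environment,
        "inputs": inputs,
    }

payload = prompt_request(
    "support-triage",  # hypothetical prompt ID
    {"customer_name": "John", "issue_type": "billing"},
    "gpt-4o-mini",
    environment="staging",
)
```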