1

Create an account and generate an API key

Log into Helicone or create an account. Once you have an account, you can generate an API key from your Helicone dashboard.
2

Set up your Helicone API key in your .env file

AZURE_OPENAI_API_KEY=<YOUR_AZURE_OPENAI_API_KEY>
HELICONE_API_KEY=<YOUR_HELICONE_API_KEY>
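If you load these values from a local .env file at runtime, a minimal sketch of the setup, assuming the dotenv package is installed:

// Load variables from .env into process.env (assumes the dotenv package)
import "dotenv/config";

// Fail fast if either key is missing so a misconfigured environment is caught early
if (!process.env.AZURE_OPENAI_API_KEY || !process.env.HELICONE_API_KEY) {
  throw new Error("AZURE_OPENAI_API_KEY and HELICONE_API_KEY must be set");
}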
3

Modify the base URL path and set up authentication

Make sure to include the api-version in all of your requests.

// Import path assumes a classic LangChain JS setup; newer releases export ChatOpenAI from "@langchain/openai"
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiDeploymentName: "[DEPLOYMENT_NAME]", // "gpt-35-turbo"
  azureOpenAIApiVersion: "[API_VERSION]", // "2024-12-15-preview"
  azureOpenAIBasePath: "https://oai.helicone.ai",
  configuration: {
    organization: "[YOUR_ORGANIZATION]", // "my-org"
    baseOptions: {
      headers: {
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
        "Helicone-OpenAI-Api-Base":
          "https://[YOUR_AZURE_DOMAIN].openai.azure.com",
        "Helicone-Model-Override": "[MODEL_NAME]",  // "gpt-35-turbo"
        // additional headers
      }
    }
  }
});
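The "// additional headers" comment marks where other optional Helicone headers can be added. As a sketch only, two commonly used ones are user attribution and custom properties (the bracketed values are placeholders you would supply):

headers: {
  "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  // ...headers from the example above, plus any optional extras, e.g.:
  "Helicone-User-Id": "[USER_ID]", // attribute requests to a specific user
  "Helicone-Property-Environment": "[ENVIRONMENT]" // custom property; any Helicone-Property-<Name> header works
}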
Recommendation: Model Override

When using Azure, the model name sometimes does not display as expected. We have implemented logic to parse out the model, but if you want to guarantee that your model is reported consistently, we highly recommend using model override:

Helicone-Model-Override: [MODEL_NAME]

Learn more about model override in the Helicone documentation.
4

Start using Azure OpenAI with Helicone

const response = await model.invoke("What is the meaning of life?");
console.log(response);
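If you prefer to consume tokens as they are generated, the same model instance can also be streamed; requests are still logged by Helicone. A minimal sketch, assuming a LangChain version whose chat models expose the Runnable .stream() method alongside .invoke():

const stream = await model.stream("What is the meaning of life?");
for await (const chunk of stream) {
  // Each chunk carries an incremental piece of the assistant's reply
  process.stdout.write(String(chunk.content));
}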
5

Verify your requests in Helicone

With the above setup, any calls to Azure OpenAI will automatically be logged and monitored by Helicone. Review them in your Helicone dashboard.