
Introduction

Semantic Kernel is Microsoft’s open-source SDK for building AI agents and orchestrating LLM workflows across multiple languages (.NET, Python, Java). By integrating the Helicone AI Gateway with Semantic Kernel, you can:
  • Route to different models and providers through a single endpoint, with automatic failover
  • Consolidate billing, using either pass-through billing or your own provider keys (BYOK)
  • Monitor all requests with automatic cost tracking in one dashboard
This integration requires only a one-line change to your existing Semantic Kernel code: adding the AI Gateway endpoint.

Integration Steps

1. Create an account and generate an API key

Sign up at helicone.ai and generate an API key.
You’ll also need to configure your provider API keys (OpenAI, Anthropic, etc.) at Helicone Providers for BYOK (Bring Your Own Keys).
2. Set environment variables

# Your Helicone API key
export HELICONE_API_KEY=<your-helicone-api-key>
Alternatively, create a .env file in your project root (loaded in the code below via DotNetEnv):
HELICONE_API_KEY=sk-helicone-...
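The snippets below rely on the Microsoft.SemanticKernel and DotNetEnv NuGet packages. A minimal project-setup sketch, assuming the .NET SDK is installed (the project name is just an example):

```shell
# Create a new console project
dotnet new console -n HeliconeSkDemo
cd HeliconeSkDemo

# Add Semantic Kernel and the .env loader used in the snippets
dotnet add package Microsoft.SemanticKernel
dotnet add package DotNetEnv
```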
3. Add the AI Gateway endpoint to your Semantic Kernel configuration

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using DotNetEnv;

// Load environment variables
Env.Load();
var heliconeApiKey = Environment.GetEnvironmentVariable("HELICONE_API_KEY");

// Create kernel builder
var builder = Kernel.CreateBuilder();

// Add OpenAI chat completion with Helicone AI Gateway endpoint
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4.1-mini",                                // Any model from Helicone registry
    apiKey: heliconeApiKey,                                 // Your Helicone API key
    endpoint: new Uri("https://ai-gateway.helicone.ai/v1")  // Helicone AI Gateway
);

var kernel = builder.Build();
The only change from a standard Semantic Kernel setup is adding the endpoint parameter. Everything else stays the same!
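If you want richer segmentation in the dashboard, Helicone also reads optional metadata headers such as Helicone-Session-Id and Helicone-User-Id. A sketch of attaching them via a custom HttpClient, assuming the AddOpenAIChatCompletion overload in your Semantic Kernel version also accepts an httpClient parameter:

```csharp
using Microsoft.SemanticKernel;

// Attach Helicone metadata headers to every request through this client.
// The httpClient parameter is assumed to exist on this overload in your
// Semantic Kernel version; check the signature before relying on it.
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Add("Helicone-Session-Id", Guid.NewGuid().ToString());
httpClient.DefaultRequestHeaders.Add("Helicone-User-Id", "user-123");

var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4.1-mini",
    apiKey: heliconeApiKey,                                 // loaded earlier from the environment
    endpoint: new Uri("https://ai-gateway.helicone.ai/v1"),
    httpClient: httpClient
);
```

Requests made through this kernel will then be grouped by session and user in the Helicone dashboard.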
4. Use the chat service normally

Your existing Semantic Kernel code continues to work without any changes:
using Microsoft.SemanticKernel.ChatCompletion;

// Get the chat service
var chatService = kernel.GetRequiredService<IChatCompletionService>();

// Create chat history
var chatHistory = new ChatHistory();
chatHistory.AddUserMessage("What is the capital of France?");

// Get response
var response = await chatService.GetChatMessageContentAsync(chatHistory);
Console.WriteLine(response.Content);
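Streaming also works unchanged through the gateway. A short sketch using Semantic Kernel's streaming API, with the same chatService and chatHistory as above:

```csharp
// Stream tokens to the console as they arrive instead of
// waiting for the complete response
await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(chatHistory))
{
    Console.Write(chunk.Content);
}
Console.WriteLine();
```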
5. View requests in the Helicone dashboard

All your Semantic Kernel requests are now visible in your Helicone dashboard:
  • Request/response bodies
  • Latency metrics
  • Token usage and costs
  • Model performance analytics
  • Error tracking

Migration Example

Here’s what migrating an existing Semantic Kernel application looks like:

Before (Direct OpenAI)

var builder = Kernel.CreateBuilder();

builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: openAiApiKey
);

var kernel = builder.Build();

After (Helicone AI Gateway)

var builder = Kernel.CreateBuilder();

builder.AddOpenAIChatCompletion(
    modelId: "gpt-4.1-mini",                                // Use Helicone model names
    apiKey: heliconeApiKey,                                 // Your Helicone API key
    endpoint: new Uri("https://ai-gateway.helicone.ai/v1")  // Add this line!
);

var kernel = builder.Build();
That’s it! Just one additional parameter and you’re routing through Helicone’s AI Gateway.

Complete Working Example

Here’s a full example that tests multiple models:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using DotNetEnv;

// Load environment
Env.Load();
var apiKey = Environment.GetEnvironmentVariable("HELICONE_API_KEY");

if (string.IsNullOrEmpty(apiKey))
{
    Console.WriteLine("❌ HELICONE_API_KEY not found in environment");
    return;
}

Console.WriteLine("🚀 Testing multiple models through Helicone AI Gateway\n");

// Test different models
await TestModel("gpt-4.1-mini", "OpenAI GPT-4.1 Mini");
await TestModel("claude-opus-4-1", "Anthropic Claude Opus 4.1");
await TestModel("gemini-2.5-flash-lite", "Google Gemini 2.5 Flash Lite");

Console.WriteLine("\n✅ All models tested!");
Console.WriteLine("🔍 Check your dashboard: https://us.helicone.ai/dashboard");

async Task TestModel(string modelId, string modelName)
{
    try
    {
        var builder = Kernel.CreateBuilder();

        // Configure with Helicone AI Gateway
        builder.AddOpenAIChatCompletion(
            modelId: modelId,
            apiKey: apiKey,
            endpoint: new Uri("https://ai-gateway.helicone.ai/v1")
        );

        var kernel = builder.Build();
        var chatService = kernel.GetRequiredService<IChatCompletionService>();

        var chatHistory = new ChatHistory();
        chatHistory.AddUserMessage("Say hello in one sentence.");

        Console.Write($"🤖 Testing {modelName}... ");
        var response = await chatService.GetChatMessageContentAsync(chatHistory);
        Console.WriteLine("✅");
        Console.WriteLine($"   Response: {response.Content}\n");
    }
    catch (Exception ex)
    {
        Console.WriteLine("❌");
        Console.WriteLine($"   Error: {ex.Message}\n");
    }
}