Overview

Helicone’s OpenLLMetry integration lets you log your LiteLLM API calls to Helicone without modifying your application code.

1. Install Helicone Async

pip install helicone-async

2. Initialize Logger


import os

from helicone_async import HeliconeAsyncLogger
from litellm import completion

# Route OpenLLMetry traces to Helicone
logger = HeliconeAsyncLogger(
    api_key=os.environ["HELICONE_API_KEY"],
)

logger.init()

# OpenAI call (LiteLLM reads OPENAI_API_KEY from the environment)
response = completion(
  model="gpt-3.5-turbo",
  messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
  metadata={
    "Helicone-Property-Hello": "World"  # custom Helicone property
  }
)

# Cohere call (LiteLLM reads COHERE_API_KEY from the environment)
response = completion(
  model="command-r",
  messages=[{"role": "user", "content": "Hi 👋 - i'm cohere"}],
  metadata={
    "Helicone-Property-Hello": "World"
  }
)
print(response.choices[0])
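Custom properties passed through `metadata` use the `Helicone-Property-` key prefix, as in the calls above. If you attach the same properties to many calls, a small helper can build that dict for you (the helper name `helicone_properties` is our own, not part of the Helicone SDK):

```python
def helicone_properties(**props):
    """Prefix each keyword argument with 'Helicone-Property-' so it is
    picked up as a Helicone custom property."""
    return {f"Helicone-Property-{key}": str(value) for key, value in props.items()}

# Build the metadata dict once and reuse it across completion() calls
metadata = helicone_properties(Hello="World", environment="staging")
# → {"Helicone-Property-Hello": "World", "Helicone-Property-environment": "staging"}
```

You can then pass `metadata=metadata` to any `completion()` call, keeping the property names consistent across providers.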

Read more about OpenLLMetry