This guide covers integrating OpenAI's API with OwlMetric for comprehensive cost tracking and analytics. Both proxy and direct SDK methods are supported.
| Model Family | Models | Token Tracking | Cost Tracking |
|---|---|---|---|
| GPT-4o | gpt-4o, gpt-4o-mini | ✅ Full | ✅ Real-time |
| GPT-4 | gpt-4, gpt-4-turbo | ✅ Full | ✅ Real-time |
| GPT-3.5 | gpt-3.5-turbo | ✅ Full | ✅ Real-time |
| Embeddings | text-embedding-3-small/large | ✅ Full | ✅ Real-time |
| Vision | gpt-4-vision-preview | ✅ Full | ✅ Real-time |
| Audio | whisper-1, tts-1 | ✅ Full | ✅ Real-time |
### Get Your OwlMetric API Key

```bash
# From your OwlMetric project dashboard
OWLMETRIC_API_KEY=pk_your_project_key_here
```
### Update Your OpenAI Client

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});
```
### Use as Normal

```javascript
// All your existing OpenAI code works unchanged
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ],
});
```
Custom Headers:

```javascript
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
    'x-owlmetric-user-id': 'user-123',       // Optional: user tracking
    'x-owlmetric-session-id': 'session-456', // Optional: session tracking
  },
});
```
With Custom Organization:

```javascript
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  organization: process.env.OPENAI_ORG_ID, // Optional
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});
```
Install the tracker package:

```bash
npm install @owlmetric/tracker
```

```javascript
import OpenAI from "openai";
import { createTrackedClient } from "@owlmetric/tracker";

const client = createTrackedClient(OpenAI, {
  apiKey: process.env.OPENAI_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_OPENAI_TOKEN,
});

// Use exactly like the regular OpenAI client
const completion = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});
```
With Custom Provider Name:

```javascript
const client = createTrackedClient(OpenAI, {
  apiKey: process.env.OPENAI_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_OPENAI_TOKEN,
  provider: "OpenAI-Custom", // Custom provider name in dashboard
});
```
With Azure OpenAI:

```javascript
const client = createTrackedClient(OpenAI, {
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseURL: process.env.AZURE_OPENAI_ENDPOINT,
  owlmetricToken: process.env.OWLMETRIC_AZURE_TOKEN,
  provider: "Azure-OpenAI", // Important: specify provider
});
```
Basic Chat:

```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain quantum computing in simple terms." }
  ],
  max_tokens: 150,
  temperature: 0.7,
});

console.log(response.choices[0].message.content);
// Costs and tokens automatically tracked in OwlMetric dashboard
```
Streaming Chat:

```javascript
const stream = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "user", content: "Write a short story about a robot." }
  ],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}
// Final usage statistics tracked when stream completes
```
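With OpenAI's streaming API, the usage object arrives only on the final chunk (when you pass `stream_options: { include_usage: true }`). A minimal sketch of accumulating streamed text and capturing that final usage; the `collectStream` helper is illustrative and not part of either SDK, and assumes chunks shaped like the OpenAI SDK's stream deltas:

```javascript
// Accumulates delta text from streamed chunks and captures the usage
// object, which (when requested) is present only on the last chunk.
function collectStream(chunks) {
  let text = "";
  let usage = null;
  for (const chunk of chunks) {
    text += chunk.choices?.[0]?.delta?.content ?? "";
    if (chunk.usage) usage = chunk.usage;
  }
  return { text, usage };
}
```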
Function Calling:

```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "user", content: "What's the weather like in San Francisco?" }
  ],
  functions: [
    {
      name: "get_weather",
      description: "Get the current weather in a location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city name"
          }
        },
        required: ["location"]
      }
    }
  ],
  function_call: "auto"
});
// Function call tokens tracked separately
```
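The model returns function arguments as a JSON string that you must parse yourself. A hedged sketch of extracting them from the legacy `function_call` response shape used above (`parseFunctionCall` is illustrative, not part of either SDK; newer OpenAI SDK versions expose the same data under `tool_calls`):

```javascript
// Pulls the function name and parsed JSON arguments out of a
// chat completion response that used the legacy `functions` API.
function parseFunctionCall(response) {
  const call = response.choices?.[0]?.message?.function_call;
  if (!call) return null; // model answered in plain text instead
  return { name: call.name, args: JSON.parse(call.arguments) };
}
```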
Vision:

```javascript
const response = await openai.chat.completions.create({
  model: "gpt-4-vision-preview",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What's in this image?" },
        {
          type: "image_url",
          image_url: {
            url: "https://example.com/image.jpg"
          }
        }
      ]
    }
  ],
  max_tokens: 300,
});
// Vision tokens tracked with special pricing
```
Embeddings:

```javascript
const embedding = await openai.embeddings.create({
  model: "text-embedding-3-large",
  input: "The quick brown fox jumps over the lazy dog",
});

// Embedding tokens and costs tracked
console.log(embedding.data[0].embedding);
```
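Embedding vectors are typically compared with cosine similarity. A self-contained sketch (not OwlMetric-specific) of the standard formula:

```javascript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 (opposite) to 1 (identical direction).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```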
Speech-to-Text:

```javascript
import fs from "fs";

const transcription = await openai.audio.transcriptions.create({
  file: fs.createReadStream("audio.mp3"),
  model: "whisper-1",
});
// Audio processing time and costs tracked
```
Text-to-Speech:

```javascript
const mp3 = await openai.audio.speech.create({
  model: "tts-1",
  voice: "alloy",
  input: "Hello, this is a test of the text-to-speech system.",
});
// TTS character count and costs tracked
```
Proxy method:

```bash
# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key
OPENAI_ORG_ID=org-your-org-id  # Optional

# OwlMetric Configuration
OWLMETRIC_API_KEY=pk_your_project_key_here
```

SDK method:

```bash
# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key
OPENAI_ORG_ID=org-your-org-id  # Optional

# OwlMetric Tracking Token
OWLMETRIC_OPENAI_TOKEN=pt_your_openai_tracking_token
```
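A missing or malformed variable is the most common setup mistake, so it can help to fail fast at startup. A minimal sanity check, assuming the key prefixes shown in the examples (`sk-`, `pk_`, `pt_`); `checkEnv` is a hypothetical helper, not part of either SDK:

```javascript
// Returns the names of required variables that are missing or malformed.
// Prefix checks assume the formats shown in the configuration examples.
function checkEnv(env) {
  const problems = [];
  if (!env.OPENAI_API_KEY?.startsWith("sk-")) problems.push("OPENAI_API_KEY");
  const hasProxyKey = env.OWLMETRIC_API_KEY?.startsWith("pk_");
  const hasSdkToken = env.OWLMETRIC_OPENAI_TOKEN?.startsWith("pt_");
  if (!hasProxyKey && !hasSdkToken) {
    problems.push("OWLMETRIC_API_KEY or OWLMETRIC_OPENAI_TOKEN");
  }
  return problems;
}
```

Call it with `process.env` during startup and throw if the returned list is non-empty.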
OwlMetric automatically tracks detailed token usage:

```json
{
  "prompt_tokens": 45,
  "completion_tokens": 128,
  "total_tokens": 173,
  "prompt_tokens_details": {
    "cached_tokens": 12,
    "audio_tokens": 0
  },
  "completion_tokens_details": {
    "reasoning_tokens": 0,
    "audio_tokens": 0,
    "accepted_prediction_tokens": 0,
    "rejected_prediction_tokens": 0
  }
}
```
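The same usage object is available on each SDK response (`completion.usage`), so you can also inspect it in application code. A small illustrative sketch that reads the fields shown above; `summarizeUsage` is a hypothetical helper, and OwlMetric records these fields for you automatically:

```javascript
// Summarizes a usage object like the one above: total tokens,
// prompt tokens that were not served from cache, and the cache hit rate.
function summarizeUsage(usage) {
  const cached = usage.prompt_tokens_details?.cached_tokens ?? 0;
  return {
    total: usage.total_tokens,
    uncachedPrompt: usage.prompt_tokens - cached,
    cacheHitRate: usage.prompt_tokens ? cached / usage.prompt_tokens : 0,
  };
}
```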
Costs are calculated in real time from current OpenAI pricing for each model.
Both methods handle OpenAI API errors gracefully:

```javascript
try {
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello!" }],
  });
} catch (error) {
  if (error instanceof OpenAI.APIError) {
    console.error('OpenAI API Error:', error.status, error.message);
    // Error details tracked in OwlMetric for debugging
  }
}
```
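Rate-limit errors (HTTP 429) are usually transient and worth retrying with backoff. A minimal sketch of such a wrapper; `withRetry` is illustrative and not part of either SDK:

```javascript
// Retries an async call when it fails with HTTP 429,
// doubling the delay after each failed attempt.
async function withRetry(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429 || attempt >= retries) throw error;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

You would wrap a call as `await withRetry(() => openai.chat.completions.create({ ... }))`.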
Rate Limit Errors:

```text
Error: Rate limit exceeded
```

Authentication Errors:

```text
Error: Invalid API key
```
Tracking Not Working:

- Confirm the `x-owlmetric` header is included (proxy method).
- Confirm your tracking token starts with `pt_` (SDK method).

Test Basic Functionality:

```javascript
const testResponse = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Say 'integration working'" }],
  max_tokens: 10,
});

console.log('Response:', testResponse.choices[0].message.content);
// Check OwlMetric dashboard for this request
```
Test Streaming:

```javascript
const stream = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Count from 1 to 5" }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  if (content) process.stdout.write(content);
}
// Verify final usage appears in dashboard
```
To keep costs predictable, set explicit `max_tokens` limits on your requests.

Migrating to the Proxy Method:

```javascript
// Before
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// After
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',    // Add this
  defaultHeaders: {                              // Add this
    'x-owlmetric': process.env.OWLMETRIC_API_KEY, // Add this
  },                                             // Add this
});
```
Migrating from the Proxy to the SDK:

```javascript
// Before (Proxy)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});

// After (SDK)
import { createTrackedClient } from "@owlmetric/tracker";

const openai = createTrackedClient(OpenAI, {
  apiKey: process.env.OPENAI_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_OPENAI_TOKEN,
});
```