Get up and running with OwlMetric in under 10 minutes. Choose between Proxy Integration (easiest) or Direct SDK (lowest latency).
| Step | Proxy Method (Recommended) | Direct SDK Method |
|---|---|---|
| Setup Time | 5 minutes | 8 minutes |
| Code Changes | Minimal (just URL + headers) | Wrap your client |
| Latency | +50-100ms | Near-zero |
| Best For | Getting started quickly | Production apps |
Option 1: Proxy Integration. Perfect for getting started quickly with minimal code changes.
OpenAI Integration:

```typescript
// Before
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// After - just change the baseURL and add a header!
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});

// Use exactly as before - costs are automatically tracked!
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});
```
Anthropic Integration:

```typescript
// Before
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// After
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});

// Use exactly as before!
const message = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello, Claude!" }],
});
```
Option 2: Direct SDK. Perfect for production apps that need minimal latency.
Install the tracker package:

```bash
npm install @owlmetric/tracker
```

Then add one tracking token per provider to your environment:

```
OWLMETRIC_OPENAI_TOKEN=pt_your_openai_tracking_token
OWLMETRIC_ANTHROPIC_TOKEN=pt_your_anthropic_tracking_token
OWLMETRIC_GEMINI_TOKEN=pt_your_gemini_tracking_token
```
OpenAI:
```typescript
import OpenAI from "openai";
import { createTrackedClient } from "@owlmetric/tracker";

const client = createTrackedClient(OpenAI, {
  apiKey: process.env.OPENAI_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_OPENAI_TOKEN,
});

// Use exactly as before - automatic tracking!
const completion = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});

// Streaming works too!
const stream = await client.chat.completions.create({
  model: "gpt-4",
  stream: true,
  messages: [{ role: "user", content: "Count from 1 to 5" }],
});

for await (const chunk of stream) {
  const delta = chunk.choices?.[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}
```
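The per-chunk delta access above can be factored into a tiny helper so the loop body stays readable. A sketch based on the chunk shape shown, not part of the tracker package:

```typescript
// Minimal shape of the fields we read from an OpenAI chat-completion stream chunk.
type ChatChunk = { choices?: Array<{ delta?: { content?: string } }> };

// Returns the text delta carried by a chunk, or "" when the chunk has none
// (e.g. role-only or finish chunks).
function chunkText(chunk: ChatChunk): string {
  return chunk.choices?.[0]?.delta?.content ?? "";
}

// In the loop: process.stdout.write(chunkText(chunk));
```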
Anthropic:
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { createTrackedClient } from "@owlmetric/tracker";

const client = createTrackedClient(Anthropic, {
  apiKey: process.env.ANTHROPIC_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_ANTHROPIC_TOKEN,
});

// Regular completion
const completion = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 50,
  messages: [{ role: "user", content: "Hello!" }],
});

// Streaming completion
const streamResponse = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 100,
  messages: [{ role: "user", content: "Count from 1 to 5" }],
  stream: true,
});

for await (const event of streamResponse) {
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
```
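The event filtering in the loop above can be expressed as a pure function, which is handy for exercising stream handling without a live request. A sketch based only on the event shapes shown above:

```typescript
// Minimal shape of the stream events we care about.
type AnthropicEvent = {
  type: string;
  delta?: { type?: string; text?: string };
};

// Concatenates the text carried by content_block_delta / text_delta events,
// ignoring every other event type (message_start, ping, etc.).
function collectText(events: AnthropicEvent[]): string {
  let out = "";
  for (const event of events) {
    if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
      out += event.delta.text ?? "";
    }
  }
  return out;
}
```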
Next.js (Vercel AI SDK) Integration:

Install dependencies:

```bash
npm install @owlmetric/tracker @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation
```
1. Set up instrumentation (instrumentation.ts) in your root directory:

```typescript
import { registerOTel } from "@vercel/otel";
import { OwlMetricTraceExporter } from "@owlmetric/tracker/owlmetric_trace_exporter";

export function register() {
  registerOTel({
    serviceName: "next-app",
    traceExporter: new OwlMetricTraceExporter(),
  });
}
```
2. API route (app/api/chat/route.ts):

```typescript
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        xOwlToken: process.env.OWLMETRIC_TOKEN,
      },
    },
  });

  return result.toDataStreamResponse();
}
```
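If OWLMETRIC_TOKEN may be unset in some environments (e.g. local development), the experimental_telemetry value can be built conditionally. Disabling telemetry when the token is missing is an assumption about your preferred behavior, not something OwlMetric prescribes:

```typescript
// Hypothetical guard: builds the experimental_telemetry option for streamText,
// enabled with the OwlMetric token when present, disabled otherwise.
function telemetryOptions(token: string | undefined) {
  return token
    ? { isEnabled: true, metadata: { xOwlToken: token } }
    : { isEnabled: false };
}

// In the route:
// experimental_telemetry: telemetryOptions(process.env.OWLMETRIC_TOKEN),
```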
Create a .env.local file (or update your existing one).

For the proxy method:

```
# Your provider API keys (stored securely on OwlMetric)
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key

# OwlMetric API key (one per project)
OWLMETRIC_API_KEY=your_owlmetric_api_key
```

For the Direct SDK method:

```
# Your provider API keys (in your environment)
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GEMINI_API_KEY=your_gemini_key

# OwlMetric tracking tokens (one per provider)
OWLMETRIC_OPENAI_TOKEN=pt_your_openai_token
OWLMETRIC_ANTHROPIC_TOKEN=pt_your_anthropic_token
OWLMETRIC_GEMINI_TOKEN=pt_your_gemini_token
```
Make your first tracked AI request:

```typescript
// This works with both methods!
const completion = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello, OwlMetric! This is my first tracked request.' }],
});

console.log(completion.choices[0].message.content);
```
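While OwlMetric computes costs for you, it can be useful to sanity-check the dashboard numbers locally from the usage object the provider returns. The per-1K-token prices in this sketch are placeholder parameters, not real rates; look up current pricing for your model:

```typescript
// Rough local cost estimate from an OpenAI-style usage object.
// inputPer1K / outputPer1K are USD prices per 1,000 tokens -- these are
// caller-supplied assumptions, NOT values provided by OwlMetric.
function estimateCostUSD(
  usage: { prompt_tokens: number; completion_tokens: number },
  inputPer1K: number,
  outputPer1K: number,
): number {
  return (
    (usage.prompt_tokens / 1000) * inputPer1K +
    (usage.completion_tokens / 1000) * outputPer1K
  );
}

// e.g. estimateCostUSD(completion.usage, 0.03, 0.06)
```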
You're all set! Your AI requests are now being tracked automatically.
Troubleshooting:

- "Invalid API key" error: double-check that the right key is set in your environment and sent in the right place (the `x-owlmetric` header for the proxy method).
- Requests not showing in dashboard: confirm the baseURL or tracking token is configured, then allow a short delay for requests to appear.
- Next.js integration not working: make sure `instrumentation.ts` is in your root directory and that instrumentation is enabled in `next.config.js`.