This guide covers integrating Anthropic's Claude API with OwlMetric for comprehensive cost tracking and analytics. Both proxy and direct SDK methods are supported.
| Model Family | Models | Token Tracking | Cost Tracking |
|---|---|---|---|
| Claude 4 | claude-sonnet-4-20250514 | ✅ Full | ✅ Real-time |
| Claude 3.5 | claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022 | ✅ Full | ✅ Real-time |
| Claude 3 | claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307 | ✅ Full | ✅ Real-time |
### Get Your OwlMetric API Key

```bash
# From your OwlMetric project dashboard
OWLMETRIC_API_KEY=pk_your_project_key_here
```
### Update Your Anthropic Client

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});
```
### Use as Normal

```typescript
// All your existing Anthropic code works unchanged
const message = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Hello, Claude!" }
  ]
});
```
**Custom Headers:**

```typescript
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
    'x-owlmetric-user-id': 'user-123',       // Optional: user tracking
    'x-owlmetric-session-id': 'session-456', // Optional: session tracking
  },
});
```
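If user or session IDs vary per request, they can also be supplied per call through the Anthropic SDK's request options (the second argument to `create`) rather than `defaultHeaders`. This is a sketch; the header names match the proxy examples above, and the model ID is just an example.

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});

// Attach tracking headers for a single request only
const message = await client.messages.create(
  {
    model: 'claude-3-5-haiku-20241022',
    max_tokens: 100,
    messages: [{ role: 'user', content: 'Hello!' }],
  },
  { headers: { 'x-owlmetric-user-id': 'user-789' } },
);
```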
```bash
npm install @owlmetric/tracker
```
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { createTrackedClient } from "@owlmetric/tracker";

const client = createTrackedClient(Anthropic, {
  apiKey: process.env.ANTHROPIC_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_ANTHROPIC_TOKEN,
});

// Use exactly like the regular Anthropic client
const completion = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 50,
  messages: [{ role: "user", content: "Hello!" }],
});
```
**With Custom Provider Name:**

```typescript
const client = createTrackedClient(Anthropic, {
  apiKey: process.env.ANTHROPIC_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_ANTHROPIC_TOKEN,
  provider: "Anthropic-Custom", // Custom provider name in dashboard
});
```
**Simple Conversation:**

```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 150,
  messages: [
    { role: "user", content: "Explain the concept of machine learning in simple terms." }
  ]
});

console.log(response.content[0].text);
// Costs and tokens automatically tracked in OwlMetric dashboard
```
**Multi-turn Conversation:**

```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 300,
  messages: [
    { role: "user", content: "What is quantum computing?" },
    { role: "assistant", content: "Quantum computing is a type of computation that harnesses quantum mechanics..." },
    { role: "user", content: "Can you give me a simple analogy?" }
  ]
});
```
```typescript
const streamResponse = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 100,
  messages: [{ role: "user", content: "Write a short poem about programming." }],
  stream: true,
});

for await (const event of streamResponse) {
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
// Final usage statistics tracked when stream completes
```
```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 200,
  system: "You are a helpful coding assistant. Always provide clean, well-commented code examples.",
  messages: [
    { role: "user", content: "Show me how to create a simple REST API in Node.js" }
  ]
});
```
```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 300,
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a location",
      input_schema: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "City name"
          }
        },
        required: ["location"]
      }
    }
  ],
  messages: [
    { role: "user", content: "What's the weather like in San Francisco?" }
  ]
});
// Tool use tokens tracked separately
```
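When the model decides to call a tool, the response contains `tool_use` content blocks, and the result goes back in a `tool_result` block on the next request. The helper below is a minimal sketch of that round trip; `runTool` is a hypothetical dispatcher you would implement yourself.

```typescript
// Minimal content-block shapes for this sketch (the SDK exports fuller types)
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: Record<string, unknown> };

// Build the follow-up user message carrying tool results.
// `runTool` executes the named tool and returns its output as a string.
function buildToolResults(
  content: ContentBlock[],
  runTool: (name: string, input: Record<string, unknown>) => string,
) {
  const results = content
    .filter((b): b is Extract<ContentBlock, { type: "tool_use" }> => b.type === "tool_use")
    .map((b) => ({
      type: "tool_result" as const,
      tool_use_id: b.id, // must echo the id from the tool_use block
      content: runTool(b.name, b.input),
    }));
  // Append this message to `messages` and call messages.create again
  return { role: "user" as const, content: results };
}
```

The follow-up request (with the tool result appended) is tracked by OwlMetric like any other call.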
```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 300,
  messages: [
    {
      role: "user",
      content: [
        {
          type: "image",
          source: {
            type: "base64",
            media_type: "image/jpeg",
            data: base64Image
          }
        },
        {
          type: "text",
          text: "What do you see in this image?"
        }
      ]
    }
  ]
});
// Vision tokens tracked with appropriate pricing
```
Proxy method:

```bash
# Anthropic Configuration
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key

# OwlMetric Configuration
OWLMETRIC_API_KEY=pk_your_project_key_here
```

SDK method:

```bash
# Anthropic Configuration
ANTHROPIC_API_KEY=sk-ant-your-anthropic-api-key

# OwlMetric Tracking Token
OWLMETRIC_ANTHROPIC_TOKEN=pt_your_anthropic_tracking_token
```
OwlMetric automatically tracks detailed token usage for Anthropic:
```json
{
  "prompt_tokens": 52,
  "completion_tokens": 145,
  "total_tokens": 197,
  "prompt_tokens_details": {
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 12
  }
}
```
Costs are calculated in real time from these token counts, using current Anthropic pricing for each model.
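For intuition, the calculation reduces to per-million-token rates applied to input and output counts. The sketch below uses illustrative placeholder prices, not an authoritative rate card; OwlMetric applies current Anthropic pricing automatically.

```typescript
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
}

// Illustrative per-million-token prices in USD (placeholders, not a rate card)
const PRICING: Record<string, { input: number; output: number }> = {
  "claude-3-5-sonnet-20241022": { input: 3.0, output: 15.0 },
};

// Estimate the cost of one request from its token usage
function estimateCost(model: string, usage: Usage): number {
  const p = PRICING[model];
  if (!p) throw new Error(`No pricing entry for ${model}`);
  return (
    (usage.prompt_tokens / 1_000_000) * p.input +
    (usage.completion_tokens / 1_000_000) * p.output
  );
}
```

With the example usage above (52 prompt, 145 completion tokens), the input side dominates far less than the output side because output rates are typically several times higher.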
OwlMetric tracks Anthropic's prompt caching feature:
```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 100,
  system: [
    {
      type: "text",
      text: "You are an AI assistant...", // Long system prompt
      cache_control: { type: "ephemeral" }
    }
  ],
  messages: [
    { role: "user", content: "What is AI?" }
  ]
});
// Cache creation and read tokens tracked separately
```
Both methods handle Anthropic API errors gracefully:
```typescript
try {
  const message = await client.messages.create({
    model: "claude-3-sonnet-20240229",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hello!" }],
  });
} catch (error) {
  if (error instanceof Anthropic.APIError) {
    console.error('Anthropic API Error:', error.status, error.message);
    // Error details tracked in OwlMetric for debugging
  }
}
```
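For transient failures such as rate limits, a retry with exponential backoff usually resolves the error. The helper below is a generic sketch; the retryable status codes and delay values are assumptions to tune for your workload.

```typescript
// Retry an async operation with exponential backoff on retryable errors
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 500 }: { retries?: number; baseDelayMs?: number } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const status = (err as { status?: number }).status;
      // 429 = rate limited, 529 = overloaded (assumed retryable)
      const isRetryable = status === 429 || status === 529;
      if (!isRetryable || attempt === retries) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Usage: `await withRetry(() => client.messages.create({ ... }))`. Each attempt is tracked as its own request in OwlMetric.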
**Rate Limit Errors:** `Error: Rate limit exceeded` — slow your request rate or add retry logic with backoff.

**Authentication Errors:** `Error: Invalid API key` — verify your Anthropic key is set and starts with `sk-ant-`.

**Model Not Found:** `Error: Model not found` — check the model ID against the supported models table above.

**Tracking Not Working:** verify the `x-owlmetric` header is included (proxy method) and that your tracking token (`pt_...`) is set correctly (SDK method).

**Test Basic Functionality:**
```typescript
const testResponse = await client.messages.create({
  model: "claude-3-haiku-20240307",
  max_tokens: 10,
  messages: [{ role: "user", content: "Say 'integration working'" }],
});

console.log('Response:', testResponse.content[0].text);
// Check OwlMetric dashboard for this request
```
**Test Streaming:**

```typescript
const stream = await client.messages.create({
  model: "claude-3-haiku-20240307",
  max_tokens: 50,
  messages: [{ role: "user", content: "Count from 1 to 5" }],
  stream: true,
});

for await (const event of stream) {
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
// Verify final usage appears in dashboard
```
Set appropriate `max_tokens` limits to keep costs predictable.

Migrating from a direct Anthropic client takes only a few added lines:

```typescript
// Before
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// After
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',     // Add this
  defaultHeaders: {                               // Add this
    'x-owlmetric': process.env.OWLMETRIC_API_KEY, // Add this
  },                                              // Add this
});
```
Switching from the proxy method to the SDK method:

```typescript
// Before (Proxy)
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});

// After (SDK)
import { createTrackedClient } from "@owlmetric/tracker";

const client = createTrackedClient(Anthropic, {
  apiKey: process.env.ANTHROPIC_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_ANTHROPIC_TOKEN,
});
```
Anthropic's built-in safety features are preserved:
```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 100,
  messages: [
    { role: "user", content: "Tell me about content safety in AI." }
  ]
});
// Safety filtering applied before response generation
// All safety-related processing tracked in costs
```
The Messages API accepts alternating `user` and `assistant` roles (system prompts go in the top-level `system` parameter, not the messages array):

```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 200,
  messages: [
    { role: "user", content: "Hello" },
    { role: "assistant", content: "Hello! How can I help you today?" },
    { role: "user", content: "What's the weather like?" }
  ]
});
// All message tokens tracked accurately
```
Sampling parameters such as `temperature` and `top_p` pass through unchanged:

```typescript
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 150,
  temperature: 0.7,
  top_p: 0.9,
  messages: [
    { role: "user", content: "Write a creative story opening." }
  ]
});
// Sampling parameters don't affect token counting
```