This guide covers the most common issues users encounter when integrating OwlMetric, along with solutions for each.
Error Message:
Error: Invalid API key provided
Possible Causes:
- Using the wrong token type: proxy calls need a pk_ project token, while SDK tracking needs a pt_ tracking token (see the example below)
- A typo or stray whitespace in the key
- An environment variable that never loaded (see the environment variable checks later in this guide)
Solutions:
# Proxy tokens start with 'pk_'
OWLMETRIC_API_KEY=pk_your_project_token_here
# Tracking tokens start with 'pt_'
OWLMETRIC_OPENAI_TOKEN=pt_your_openai_tracking_token
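To catch a mixed-up token before the first request fails, a startup check helps. A minimal sketch, assuming the variable names from the example above and the pk_/pt_ prefix convention it describes:

// Fail fast if a token carries the wrong prefix for its role
function assertPrefix(name, value, prefix) {
  if (!value) throw new Error(`${name} is not set`);
  if (!value.startsWith(prefix)) {
    throw new Error(`${name} should start with '${prefix}'`);
  }
}

assertPrefix('OWLMETRIC_API_KEY', process.env.OWLMETRIC_API_KEY, 'pk_');
assertPrefix('OWLMETRIC_OPENAI_TOKEN', process.env.OWLMETRIC_OPENAI_TOKEN, 'pt_');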
Error Message:
Error: Unauthorized - insufficient permissions
Solutions:
Symptoms:
Troubleshooting Steps:
// Correct header format
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY, // Required!
  },
});
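To test the header outside your application, you can call the proxy directly. This sketch assumes the proxy mirrors OpenAI's path layout, which the baseURL above implies; adjust the path if your setup differs:

curl https://owlmetric.com/api/proxy/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "x-owlmetric: $OWLMETRIC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "ping"}]}'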
// Ensure provider is specified for custom providers
const client = createTrackedClient(OpenAI, {
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com",
  owlmetricToken: process.env.OWLMETRIC_DEEPSEEK_TOKEN,
  provider: "DeepSeek", // Required for non-standard providers
});
Problem: Streaming responses don't work or token counts are incorrect
// Correct streaming implementation
const stream = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Count to 5" }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  if (content) process.stdout.write(content);
}
// Usage tracked automatically when stream completes
// Correct Anthropic streaming
const streamResponse = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 100,
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
});

for await (const event of streamResponse) {
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    process.stdout.write(event.delta.text);
  }
}
// Final usage appears after stream ends
Error Message:
Warning: Instrumentation hook not enabled
Solution:
// next.config.ts
const nextConfig = {
  experimental: {
    instrumentationHook: true, // Required!
  },
};

export default nextConfig;
Check instrumentation.ts Location:
your-project/
├── instrumentation.ts ← Must be in root directory
├── next.config.ts
├── package.json
└── app/
Correct instrumentation.ts:
import { registerOTel } from "@vercel/otel";
import { OwlMetricTraceExporter } from "@owlmetric/tracker/owlmetric_trace_exporter";

export function register() {
  registerOTel({
    serviceName: "my-next-app",
    traceExporter: new OwlMetricTraceExporter(),
  });
}
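To confirm the hook actually fires, a temporary log inside register() should print exactly once at server startup. This is a debugging aid only, not part of the required setup:

export function register() {
  console.log("instrumentation.ts loaded"); // should appear once on startup
  registerOTel({
    serviceName: "my-next-app",
    traceExporter: new OwlMetricTraceExporter(),
  });
}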
Verify Telemetry Configuration:
// streamText and openai come from the Vercel AI SDK ("ai" and "@ai-sdk/openai")
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const result = streamText({
  model: openai("gpt-4o"),
  messages,
  experimental_telemetry: {
    isEnabled: true, // Must be true
    metadata: {
      xOwlToken: process.env.OWLMETRIC_TOKEN, // Must be set
    },
  },
});
Error Message:
Error: Rate limit exceeded
Solutions:
// Proxy automatically handles rate limits with retries
const openai = new OpenAI({
  baseURL: 'https://owlmetric.com/api/proxy',
  defaultHeaders: {
    'x-owlmetric': process.env.OWLMETRIC_API_KEY,
  },
});
// No additional configuration needed
import pRetry from 'p-retry';

async function makeAIRequest(messages) {
  return pRetry(
    async () => {
      return await client.chat.completions.create({
        model: "gpt-4",
        messages,
      });
    },
    {
      retries: 3,
      factor: 2,
      minTimeout: 1000,
      onFailedAttempt: (error) => {
        if (error.status === 429) {
          console.log(`Rate limited, ${error.retriesLeft} retries left`);
        }
      },
    }
  );
}
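Calling the wrapper looks like any other request; retries happen transparently:

const completion = await makeAIRequest([{ role: "user", content: "Hello" }]);
console.log(completion.choices[0].message.content);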
Error Message:
Error: OwlMetric rate limit exceeded
Current Limits:
Solutions:
Common Causes:
Verification Steps:
// Check if provider usage matches OwlMetric tracking
const completion = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello" }],
});

console.log("OpenAI usage:", completion.usage);
// Compare this against the OwlMetric dashboard

// For the SDK method, enable debug mode
const client = createTrackedClient(OpenAI, {
  apiKey: process.env.OPENAI_API_KEY,
  owlmetricToken: process.env.OWLMETRIC_OPENAI_TOKEN,
  debug: true, // Logs usage extraction
});
Verify Pricing Model:
Example Price Verification:
// Check current OpenAI pricing (as of 2024)
const expectedCost = {
  'gpt-4': {
    input: 30 / 1000000,  // $30 per 1M tokens
    output: 60 / 1000000, // $60 per 1M tokens
  },
  'gpt-3.5-turbo': {
    input: 0.5 / 1000000,  // $0.50 per 1M tokens
    output: 1.5 / 1000000, // $1.50 per 1M tokens
  },
};
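To sanity-check a single request, multiply the usage the provider reports by these per-token rates and compare the result with the dashboard. A sketch reusing the completion object from the verification snippet above:

// Estimate the cost of one gpt-4 completion from its reported usage
const { prompt_tokens, completion_tokens } = completion.usage;
const cost =
  prompt_tokens * expectedCost['gpt-4'].input +
  completion_tokens * expectedCost['gpt-4'].output;
console.log(`Expected cost: $${cost.toFixed(6)}`);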
Configuration:
const client = createTrackedClient(OpenAI, {
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseURL: process.env.AZURE_OPENAI_ENDPOINT,
  owlmetricToken: process.env.OWLMETRIC_AZURE_TOKEN,
  provider: "Azure-OpenAI", // Important: specify provider
  defaultQuery: { 'api-version': '2023-12-01-preview' },
});
Reasoning Tokens Not Tracked:
// Ensure provider is specified
const client = createTrackedClient(OpenAI, {
  apiKey: process.env.DEEPSEEK_API_KEY,
  baseURL: "https://api.deepseek.com",
  owlmetricToken: process.env.OWLMETRIC_DEEPSEEK_TOKEN,
  provider: "DeepSeek", // Required for reasoning token tracking
});
Message Format:
// Correct message format for Anthropic
const response = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 1000, // Required for Anthropic
  messages: [
    { role: "user", content: "Hello" },
    // Note: no system role in the messages array
  ],
  system: "You are a helpful assistant", // System message goes in its own field
});
Check .env File Location:
your-project/
├── .env.local ← Next.js
├── .env ← Node.js
├── .env.production ← Production specific
└── package.json
Load Environment Variables:
// For Node.js projects
import dotenv from 'dotenv';
dotenv.config();
// Verify variables loaded
console.log('OpenAI Key:', process.env.OPENAI_API_KEY ? 'Set' : 'Missing');
console.log('OwlMetric Token:', process.env.OWLMETRIC_OPENAI_TOKEN ? 'Set' : 'Missing');
Dockerfile Environment Variables:
# Pass environment variables at runtime
FROM node:18-alpine
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
# Don't copy .env files in production
ENV NODE_ENV=production
CMD ["node", "server.js"]
Docker Compose:
version: '3.8'
services:
  app:
    build: .
    environment:
      # Environment variables from host
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OWLMETRIC_OPENAI_TOKEN=${OWLMETRIC_OPENAI_TOKEN}
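Compose resolves ${OPENAI_API_KEY} and ${OWLMETRIC_OPENAI_TOKEN} from the shell environment or from a .env file next to the compose file, so either of the following works:

# Variables exported in the shell
export OPENAI_API_KEY=sk-...
export OWLMETRIC_OPENAI_TOKEN=pt_...
docker compose up

# Or keep them in a .env file beside docker-compose.yml
docker compose up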
Set Environment Variables:
Environment Variable Naming:
# Use same names as local development
OPENAI_API_KEY=sk-...
OWLMETRIC_TOKEN=pt_...
Proxy Method Latency:
SDK Method Latency:
Optimize Performance:
// Use Promise.all for parallel requests
const [completion1, completion2] = await Promise.all([
  client.chat.completions.create({ model: "gpt-3.5-turbo", messages: messages1 }),
  client.chat.completions.create({ model: "gpt-3.5-turbo", messages: messages2 }),
]);
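Parallel requests make it easier to trip provider rate limits, so it can be worth capping concurrency. A sketch using the p-limit package (one option among many; allMessageSets is a placeholder for your batched inputs):

import pLimit from 'p-limit';

const limit = pLimit(5); // at most 5 requests in flight
const completions = await Promise.all(
  allMessageSets.map((messages) =>
    limit(() => client.chat.completions.create({ model: "gpt-3.5-turbo", messages }))
  )
);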
Streaming Memory Issues:
// Proper stream cleanup
const stream = await client.chat.completions.create({
  model: "gpt-4",
  messages,
  stream: true,
});

try {
  for await (const chunk of stream) {
    // Process each chunk here
  }
} finally {
  // openai-node releases the underlying connection when the iterator
  // exits, even if the loop throws or breaks early
}
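To stop reading a stream early, openai-node exposes an abort controller on the stream object; aborting cancels the request and frees the connection. Note that usage is tracked when a stream completes, so usage for an aborted stream may not be recorded. shouldStop here is a placeholder for your own condition:

const stream = await client.chat.completions.create({
  model: "gpt-4",
  messages,
  stream: true,
});

for await (const chunk of stream) {
  if (shouldStop(chunk)) {
    stream.controller.abort(); // cancels the request, releases the connection
    break;
  }
}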
Error Message:
Error: connect ECONNREFUSED
Solutions:
curl -I https://owlmetric.com/api/health
Error Message:
Error: certificate verify failed
Solution (Development Only):
// NOT for production: disables TLS certificate verification entirely
process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = "0";
Proper Solution:
// Update certificates or use a proper CA bundle
import fs from 'fs';
import https from 'https';

const agent = new https.Agent({
  ca: fs.readFileSync('path/to/ca-bundle.crt'),
});
// Pass the agent to your HTTP client, e.g. new OpenAI({ httpAgent: agent })
When contacting support, include:
// Diagnostic script
console.log('Environment:', process.env.NODE_ENV);
console.log('Node.js version:', process.version);
console.log('OpenAI key set:', !!process.env.OPENAI_API_KEY);
console.log('OwlMetric token set:', !!process.env.OWLMETRIC_OPENAI_TOKEN);
console.log('Timestamp:', new Date().toISOString());
// Test connectivity
fetch('https://owlmetric.com/api/health')
  .then(r => console.log('OwlMetric reachable:', r.status))
  .catch(e => console.log('OwlMetric error:', e.message));
Enable debug logging:
# Enable debug logs
DEBUG=owlmetric:* node your-app.js
# Or set environment variable
export DEBUG=owlmetric:*
Include in your support request: