# Quickstart
Start capturing LLM traces in 2 minutes.
## 1. Install

**Python**

```shell
pip install kalibr
```

**TypeScript**

```shell
npm install @kalibr/sdk
```
## 2. Set API Key

Get your key from dashboard.kalibr.systems:

```shell
export KALIBR_API_KEY=sk_live_...
export KALIBR_TENANT_ID=my-tenant
```
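If either variable is missing, the SDK has nothing to authenticate with, so failing fast at startup saves a confusing silent failure later. A minimal sketch using standard environment lookup — the `require_env` helper is hypothetical, not part of the Kalibr SDK:

```python
import os


def require_env(name: str) -> str:
    """Return an environment variable's value, or fail fast if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before starting the app")
    return value


# Usage: api_key = require_env("KALIBR_API_KEY")
```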
## 3. Use

### Python (Auto-Instrumentation)

Import kalibr before your provider libraries. All OpenAI, Anthropic, and Google calls are then traced automatically:

```python
import kalibr  # Must be imported before provider libraries

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
# ✅ Trace captured automatically
```
### TypeScript (SpanBuilder)

TypeScript requires explicit span creation:

```typescript
import { Kalibr, SpanBuilder } from '@kalibr/sdk';
import OpenAI from 'openai';

// Initialize once
Kalibr.init({
  apiKey: process.env.KALIBR_API_KEY!,
  tenantId: process.env.KALIBR_TENANT_ID!,
});

const openai = new OpenAI();

// Create a span before the call
const span = new SpanBuilder()
  .setProvider('openai')
  .setModel('gpt-4o-mini')
  .setOperation('chat')
  .start();

const response = await openai.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});

// Finish with token counts
await span.finish({
  inputTokens: response.usage?.prompt_tokens ?? 0,
  outputTokens: response.usage?.completion_tokens ?? 0,
});
// ✅ Trace captured
```
## View Traces

See your traces at dashboard.kalibr.systems.

What you'll see:
- Provider & Model: openai/gpt-4o-mini
- Tokens: Input/output counts
- Cost: USD (auto-calculated)
- Latency: Response time
- Trace ID: For debugging
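The Cost field is derived from the captured token counts and a per-model price table. The sketch below mirrors that arithmetic; the prices and the `estimate_cost_usd` helper are illustrative assumptions for this example, not Kalibr's actual pricing data or API:

```python
# Illustrative per-1M-token prices (assumed for this example).
PRICES_PER_1M_USD = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}


def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate call cost in USD from token counts and a price table."""
    prices = PRICES_PER_1M_USD[model]
    return (
        input_tokens * prices["input"] + output_tokens * prices["output"]
    ) / 1_000_000


# estimate_cost_usd("gpt-4o-mini", 1000, 500)  # ≈ 0.00045
```

In the Python SDK the token counts come from auto-instrumentation; in TypeScript they are the same values you pass to `finish()`.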
## Python vs TypeScript

| | Python | TypeScript |
|---|---|---|
| Auto-instrumentation | ✓ Yes | ✗ No |
| Pattern | `import kalibr` | `SpanBuilder` |
| Tokens | Auto-captured | Pass to `finish()` |
| Initialization | Env vars only | `Kalibr.init()` |
## Next Steps
- Python SDK — Full documentation
- TypeScript SDK — SpanBuilder details
- Frameworks — LangChain, CrewAI, OpenAI Agents
- Intelligence — Model recommendations