# Python SDK

Auto-instrumentation and manual tracing for OpenAI, Anthropic, and Google LLM calls.

- **Package:** `kalibr`
- **Version:** 1.3.0
- **Python:** >=3.9
## Installation

```bash
pip install kalibr
```

With framework integrations:

```bash
# LangChain
pip install kalibr[langchain]

# CrewAI
pip install kalibr[crewai]

# OpenAI Agents SDK
pip install kalibr[openai-agents]

# All integrations
pip install kalibr[integrations]
```
## Auto-Instrumentation

Just import `kalibr`; the OpenAI, Anthropic, and Google SDKs are instrumented automatically on import:

```python
import kalibr  # Auto-instruments on import

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
# ✅ Trace automatically captured and sent
```

Auto-instrumentation runs when `KALIBR_AUTO_INSTRUMENT=true` (the default). To disable it:

```bash
export KALIBR_AUTO_INSTRUMENT=false
```
## Manual Instrumentation

For explicit control:

```python
from kalibr import auto_instrument, get_instrumented_providers

# Instrument specific providers
auto_instrument(["openai", "anthropic", "google"])

# Check what's instrumented
print(get_instrumented_providers())  # ['openai', 'anthropic', 'google']
```
## @trace Decorator

For custom functions, or when you need more control:

```python
from openai import OpenAI
from kalibr import trace

client = OpenAI()

@trace(operation="summarize", provider="openai", model="gpt-4o")
def summarize_text(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Summarize: {text}"}]
    )
    return response.choices[0].message.content
```
### Decorator Parameters

| Parameter | Type | Description |
|---|---|---|
| `operation` | `str` | **Required.** Operation name (e.g., `"summarize"`, `"analyze"`) |
| `provider` | `str` | **Required.** LLM provider (`"openai"`, `"anthropic"`, `"google"`) |
| `model` | `str` | **Required.** Model name (e.g., `"gpt-4o"`, `"claude-3-sonnet"`) |
| `input_tokens` | `int` | Optional. Override token count (estimated if not provided) |
| `output_tokens` | `int` | Optional. Override token count (estimated if not provided) |
The decorator captures duration, calculates cost, and handles errors automatically.
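To make the duration-and-error behavior concrete, here is a minimal sketch of how a `@trace`-style decorator can work. This is an illustration only, not Kalibr's actual implementation: the `traced` name and the printed span dict are hypothetical, and a real tracer would export the span instead of printing it.

```python
import functools
import time

def traced(operation: str, provider: str, model: str):
    """Hypothetical sketch of a trace decorator: records duration,
    marks success or failure, and re-raises any error unchanged."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            span = {"operation": operation, "provider": provider, "model": model}
            try:
                result = fn(*args, **kwargs)
                span["status"] = "ok"
                return result
            except Exception as exc:
                span["status"] = "error"
                span["error"] = repr(exc)
                raise
            finally:
                # Runs on both paths, so duration is always captured
                span["duration_ms"] = (time.perf_counter() - start) * 1000
                print(span)  # a real tracer would export the span here
        return wrapper
    return decorator

@traced(operation="summarize", provider="openai", model="gpt-4o")
def shout(text: str) -> str:
    return text.upper()
```

Because errors are re-raised after the span is recorded, the decorator stays transparent to the caller's own error handling.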
## Intelligence API

Query Kalibr for optimal model, tool, and parameter recommendations based on historical outcomes:

```python
from kalibr import get_policy, report_outcome, get_trace_id

# 1. Get best model, tool, and params for your goal
policy = get_policy(
    goal="book_meeting",
    include_tools=True,
    include_params=["temperature"],
)
model = policy["recommended_model"]        # e.g., "gpt-4o"
provider = policy["recommended_provider"]  # e.g., "openai"
tool = policy.get("recommended_tool")      # e.g., "calendar_api"
params = policy.get("recommended_params")  # e.g., {"temperature": "0.3"}

# 2. Execute with recommended configuration
response = openai.chat.completions.create(model=model, ...)

# 3. Report outcome with full context
report_outcome(
    trace_id=get_trace_id(),
    goal="book_meeting",
    success=True,
    tool_id=tool,
    execution_params=params,
)
```
### get_policy()

```python
policy = get_policy(
    goal="book_meeting",             # Required
    task_type="scheduling",          # Optional filter
    constraints={                    # Optional
        "max_cost_usd": 0.05,
        "max_latency_ms": 3000,
        "min_confidence": 0.7,
    },
    window_hours=168,                # Default: 1 week
    include_tools=True,              # Get tool recommendations
    include_params=["temperature"],  # Get param recommendations
)

# Response includes model + tool + param recommendations
{
    "recommended_model": "gpt-4o",
    "recommended_provider": "openai",
    "outcome_success_rate": 0.87,
    "recommended_tool": "calendar_api",
    "tool_success_rate": 0.91,
    "recommended_params": {"temperature": "0.3"},
    ...
}
```
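The constraints are applied by the Intelligence backend, but their semantics can be illustrated with a small client-side sketch: filter candidates that violate any bound, then pick the best success rate. The candidate stats below are made up for the example, and `pick_model` is a hypothetical helper, not part of the SDK.

```python
def pick_model(candidates, max_cost_usd=None, max_latency_ms=None, min_confidence=None):
    """Illustrative only: apply constraints the way the fields above read,
    then choose the surviving candidate with the highest success rate."""
    def within_bounds(c):
        return (
            (max_cost_usd is None or c["avg_cost_usd"] <= max_cost_usd)
            and (max_latency_ms is None or c["avg_latency_ms"] <= max_latency_ms)
            and (min_confidence is None or c["confidence"] >= min_confidence)
        )
    viable = [c for c in candidates if within_bounds(c)]
    return max(viable, key=lambda c: c["success_rate"]) if viable else None

# Made-up per-model stats for demonstration
candidates = [
    {"model": "gpt-4o",      "avg_cost_usd": 0.04, "avg_latency_ms": 2100,
     "confidence": 0.8, "success_rate": 0.87},
    {"model": "gpt-4o-mini", "avg_cost_usd": 0.01, "avg_latency_ms": 900,
     "confidence": 0.6, "success_rate": 0.74},
]
best = pick_model(candidates, max_cost_usd=0.05, max_latency_ms=3000, min_confidence=0.7)
```

Here the cheaper model is excluded because its confidence falls below `min_confidence`, so the constrained pick is `gpt-4o`.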
### report_outcome()

```python
report_outcome(
    trace_id="abc123",          # Required
    goal="book_meeting",        # Required
    success=True,               # Required
    score=0.9,                  # Optional: quality 0-1
    failure_reason="timeout",   # Optional: if failed
    tool_id="calendar_api",     # Optional: tool used
    execution_params={          # Optional: params used
        "temperature": "0.3",
        "timeout": "30"
    },
    metadata={"key": "value"},  # Optional
)
```
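A common pattern is to report both outcomes from a single try/except so that failures carry a `failure_reason`. The sketch below is self-contained: the in-memory `report_outcome` stub and the `run_with_outcome` helper are stand-ins for illustration, using the same keyword interface as the real call.

```python
outcomes = []  # stand-in for the Intelligence backend

def report_outcome(**fields):
    """Stub with the same keyword interface as kalibr.report_outcome."""
    outcomes.append(fields)

def run_with_outcome(trace_id: str, goal: str, task):
    """Run task(); report success on return, failure with a reason on error."""
    try:
        result = task()
        report_outcome(trace_id=trace_id, goal=goal, success=True, score=1.0)
        return result
    except TimeoutError:
        report_outcome(trace_id=trace_id, goal=goal, success=False,
                       failure_reason="timeout")
        raise

result = run_with_outcome("abc123", "book_meeting", lambda: "booked")
```

Reporting failures as well as successes is what lets `get_policy()` learn which configurations actually work for a goal.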
### get_recommendation()

```python
from kalibr import get_recommendation

rec = get_recommendation(
    task_type="summarization",
    optimize_for="balanced",  # cost, quality, latency, balanced, cost_efficiency
)
```
## Context Functions

Access trace context for linking spans:

```python
from kalibr import get_trace_id, get_parent_span_id, new_trace_id

# Get current trace ID (from active span)
trace_id = get_trace_id()

# Get parent span ID (for nesting)
parent_id = get_parent_span_id()

# Generate a new trace ID
new_id = new_trace_id()
```
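Trace context like this is typically carried in context variables so it follows the call stack (and async tasks) automatically. The sketch below shows the general `contextvars` pattern, not Kalibr's internals; the UUID-hex ID format and the `with_trace` helper are assumptions for illustration.

```python
import contextvars
import uuid

# ContextVar holds the active trace ID for the current execution context
_trace_id = contextvars.ContextVar("trace_id", default=None)

def new_trace_id() -> str:
    return uuid.uuid4().hex  # ID format is illustrative

def current_trace_id():
    return _trace_id.get()

def with_trace(fn):
    """Run fn() with a fresh trace ID in context, restoring the old one after."""
    token = _trace_id.set(new_trace_id())
    try:
        return fn()
    finally:
        _trace_id.reset(token)

tid = with_trace(current_trace_id)  # inside: a trace ID is set
```

Because `ContextVar` values are restored via the reset token, nested calls can each see their own trace ID without interfering with the caller's.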
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `KALIBR_API_KEY` | — | **Required.** Your API key |
| `KALIBR_TENANT_ID` | `default` | Tenant identifier |
| `KALIBR_COLLECTOR_URL` | `https://api.kalibr.systems/api/ingest` | Backend endpoint |
| `KALIBR_ENVIRONMENT` | `prod` | Environment (`prod`/`staging`/`dev`) |
| `KALIBR_SERVICE` | `kalibr-app` | Service name |
| `KALIBR_AUTO_INSTRUMENT` | `true` | Enable auto-instrumentation on import |
| `KALIBR_CONSOLE_EXPORT` | `false` | Print spans to console (debugging) |
| `KALIBR_INTELLIGENCE_URL` | `https://kalibr-intelligence.fly.dev` | Intelligence API endpoint |
| `KALIBR_COLLECTOR_FORMAT` | `ndjson` | Payload format (`ndjson` or `json`) |
| `KALIBR_WORKFLOW_ID` | `default-workflow` | Workflow identifier for grouping traces |
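The difference between the two `KALIBR_COLLECTOR_FORMAT` values can be sketched with the standard `json` module (the span fields below are made up; the real span schema is not shown here):

```python
import json

spans = [
    {"span_id": "a1", "operation": "summarize"},
    {"span_id": "b2", "operation": "analyze"},
]

# ndjson (the default): one JSON object per line, streaming-friendly
ndjson_payload = "\n".join(json.dumps(s) for s in spans)

# json: the whole batch as a single JSON array
json_payload = json.dumps(spans)
```

NDJSON lets a receiver parse and process spans line by line without buffering the whole batch, which is why it is the default for ingest payloads.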
## CLI

Run Python scripts with auto-tracing:

```bash
# Run with tracing
kalibr run my_script.py

# Show version
kalibr version
```
## All Exports

| Export | Description |
|---|---|
| `trace` | Decorator for manual tracing |
| `auto_instrument` | Instrument LLM SDKs |
| `get_instrumented_providers` | List instrumented providers |
| `get_trace_id` | Get current trace ID |
| `get_parent_span_id` | Get parent span ID |
| `new_trace_id` | Generate new trace ID |
| `trace_context` | Context var for trace propagation |
| `get_policy` | Get model/tool/param recommendation for goal |
| `report_outcome` | Report execution outcome with tool/params |
| `get_recommendation` | Get model recommendation for task |
| `KalibrIntelligence` | Intelligence API client class |
| `KalibrClient` | Low-level API client |
| `TraceCapsule` | Cross-service context propagation |
| `setup_collector` | Configure OpenTelemetry collector |
| `Tracer` | Advanced tracer class |
| `SpanContext` | Span context class |
## OpenTelemetry

The SDK uses OpenTelemetry for tracing. Configure the collector:

```python
from kalibr import setup_collector

setup_collector(
    service_name="my-service",
    otlp_endpoint="http://localhost:4317",  # Optional OTLP collector
    file_export=True,                       # Write to /tmp/kalibr_otel_spans.jsonl
    console_export=False,                   # Print to console (debugging)
)
```
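With `file_export=True`, spans are written as JSON Lines. Assuming one JSON object per line (the exact span schema is not documented here), reading an export back is straightforward. The demo below writes its own stand-in file rather than touching the real `/tmp/kalibr_otel_spans.jsonl`:

```python
import json
import tempfile
from pathlib import Path

# Stand-in export file; the real SDK writes /tmp/kalibr_otel_spans.jsonl
export = Path(tempfile.gettempdir()) / "kalibr_otel_spans_demo.jsonl"
export.write_text('{"name": "summarize"}\n{"name": "analyze"}\n')

# One JSON object per non-empty line
spans = [json.loads(line) for line in export.read_text().splitlines() if line.strip()]
```

This is handy for spot-checking locally that spans are being emitted before wiring up an OTLP collector.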
## Next Steps
- Framework Integrations — LangChain, CrewAI, OpenAI Agents
- Intelligence API — Full execution routing (model + tool + params)
- TypeScript SDK — For Node.js applications