Quick Start Steps
Sign Up
Create your free Untrace account:
# Visit https://app.untrace.dev to sign up
# Or use the CLI
npx @untrace/cli auth signup
Get Your API Key
After signing up, get your API key:
# From the dashboard
https://app.untrace.dev/settings/api-keys
# Or via CLI
npx @untrace/cli auth login
Choose Integration Method
Select how you want to integrate Untrace:
Option 1: OpenAI Proxy (Easiest)
from openai import OpenAI

client = OpenAI(
    base_url="https://api.untrace.dev/v1/proxy",
    default_headers={"X-Untrace-Key": "your-api-key"}
)
Option 2: SDK (Most flexible)
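A minimal setup (the full SDK walkthrough is in the Integration Methods section below):

import { init } from '@untrace/sdk';

init({ apiKey: 'utr_your_api_key' });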
Configure Routing
Set up where your traces should go:
// In the dashboard or via API
{
  "rules": [{
    "name": "Route to LangSmith",
    "condition": "model == 'gpt-4'",
    "destination": "langsmith"
  }]
}
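If you prefer the API, here is a sketch of creating the same rule from TypeScript. The /v1/routing-rules endpoint path is an assumption for illustration only; check the API reference for the actual route:

// Hypothetical sketch: the endpoint path below is illustrative, not confirmed
const res = await fetch('https://api.untrace.dev/v1/routing-rules', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer utr_your_api_key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    rules: [
      {
        name: 'Route to LangSmith',
        condition: "model == 'gpt-4'",
        destination: 'langsmith',
      },
    ],
  }),
});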
Integration Methods
OpenAI Proxy (Recommended)
The fastest way to get started - just change your base URL:
Python
from openai import OpenAI

# Before: direct to OpenAI
# client = OpenAI()

# After: through Untrace
client = OpenAI(
    base_url="https://api.untrace.dev/v1/proxy",
    default_headers={"X-Untrace-Key": "utr_your_api_key"}
)

# Use OpenAI normally - traces are captured automatically
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
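TypeScript
The same proxy setup in TypeScript, a sketch using the official openai package (baseURL and defaultHeaders are standard client options; the proxy URL and header are as above):

import OpenAI from 'openai';

// Route all requests through the Untrace proxy
const client = new OpenAI({
  baseURL: 'https://api.untrace.dev/v1/proxy',
  defaultHeaders: { 'X-Untrace-Key': 'utr_your_api_key' },
});

// Use OpenAI normally - traces are captured automatically
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
});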
SDK Integration
For more control and auto-instrumentation of all providers:
import { init } from '@untrace/sdk';

// Initialize Untrace
init({
  apiKey: 'utr_your_api_key',
  serviceName: 'my-app',
  environment: 'production',
});

// Now import your LLM libraries - they're auto-instrumented
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';

const openai = new OpenAI();
const anthropic = new Anthropic();

// All calls are automatically traced
Direct API
For custom integrations or other languages:
curl -X POST https://api.untrace.dev/v1/traces \
-H "Authorization: Bearer utr_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"provider": "openai",
"prompt_tokens": 50,
"completion_tokens": 100,
"total_tokens": 150,
"latency_ms": 1234,
"timestamp": "2024-01-15T10:00:00Z"
}'
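The same request from TypeScript, using fetch with the endpoint and payload shown above:

// Send a trace event directly to the ingestion endpoint
const res = await fetch('https://api.untrace.dev/v1/traces', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer utr_your_api_key',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-4',
    provider: 'openai',
    prompt_tokens: 50,
    completion_tokens: 100,
    total_tokens: 150,
    latency_ms: 1234,
    timestamp: new Date().toISOString(),
  }),
});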
Connect Your Platform
Go to Dashboard → Integrations
Click “Add Integration”
Select your platform and provide credentials:
LangSmith
{
  "platform": "langsmith",
  "config": {
    "apiKey": "ls_...",
    "projectId": "your-project"
  }
}
Set Up Routing Rules
Configure how traces are routed:
# Basic routing by model
- name: "GPT-4 to LangSmith"
  condition:
    model: "gpt-4*"
  destination: "langsmith"

# Route errors for debugging
- name: "Errors to Langfuse"
  condition:
    status: "error"
  destination: "langfuse"

# Cost-based routing
- name: "Expensive requests"
  condition:
    cost: "> 0.10"
  destinations:
    - platform: "langsmith"
    - platform: "webhook"
      url: "https://alerts.example.com"
Verify Installation
Check Trace Flow
Make a test LLM call:
# Use with_raw_response to read the HTTP headers returned by the proxy
# (the parsed completion object itself does not expose headers)
raw = client.chat.completions.with_raw_response.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Test trace"}]
)
response = raw.parse()
print(f"Trace ID: {raw.headers.get('X-Untrace-ID')}")
View in dashboard:
Go to Traces
Find your trace by ID
Verify it was routed correctly
Debug Mode
Enable debug logging to troubleshoot:
init({
  apiKey: 'utr_your_api_key',
  debug: true, // Enables detailed logging
});
Common Patterns
Development vs Production
// Separate configurations by environment
const untrace = init({
  apiKey: process.env.UNTRACE_API_KEY,
  environment: process.env.NODE_ENV,
  // Sample less in production
  samplingRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
  // Different routing per environment
  routingRules: process.env.NODE_ENV === 'production'
    ? productionRules
    : developmentRules,
});
Multi-Provider Setup
# Untrace works with all providers simultaneously
import openai
import anthropic
from langchain.chat_models import ChatOpenAI
# All are automatically instrumented
openai_client = openai.OpenAI()
anthropic_client = anthropic.Anthropic()
langchain_model = ChatOpenAI()
# Traces from all providers flow through Untrace
Cost Control
# Route only a sample of expensive requests
- name: "Sample GPT-4"
  condition:
    model: "gpt-4"
  destination: "langsmith"
  sample_rate: 0.1 # Only 10%

# But capture all errors
- name: "All Errors"
  condition:
    error: true
  destination: "langsmith"
  sample_rate: 1.0 # 100%
Framework Examples
Next.js
// app/instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { init } = await import('@untrace/sdk');
    init({ apiKey: process.env.UNTRACE_API_KEY });
  }
}
FastAPI
# main.py
from fastapi import FastAPI
from untrace import Untrace

app = FastAPI()
untrace = Untrace(api_key="utr_your_api_key")

@app.on_event("startup")
async def startup():
    untrace.init()
LangChain
# Automatic instrumentation
from untrace import init
init( api_key = "utr_your_api_key" )
# Your LangChain code - automatically traced
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
llm = ChatOpenAI( model = "gpt-4" )
chain = ConversationChain( llm = llm)
Troubleshooting
No traces appearing?
Check that your API key is correct
Verify network connectivity to api.untrace.dev
Enable debug mode to see detailed logs
Check the dashboard for any error messages
High latency?
Untrace adds < 10ms overhead
Check your network latency to our servers
Consider using batch mode for high-volume applications
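A sketch of what batching configuration might look like; the option names below are hypothetical and for illustration only, so check the SDK reference for the actual settings:

init({
  apiKey: 'utr_your_api_key',
  // Hypothetical options - names are illustrative, not confirmed
  maxBatchSize: 512,     // flush after this many buffered traces
  flushIntervalMs: 5000, // or after this many milliseconds
});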