LLM Providers
Untrace automatically captures traces from all major LLM providers. No special configuration is needed; just use our proxy or SDK.
OpenAI
Anthropic
Google AI
Additional Providers
Mistral: Large, Medium, Small, and Embed models
Cohere: Command, Embed, and Rerank models
AWS Bedrock: all Bedrock models supported
Azure OpenAI: enterprise OpenAI deployments
Together.ai: open-source model hosting
Hugging Face: Inference API & Endpoints
Framework Support
LangChain
LlamaIndex
Vercel AI SDK
Configure where Untrace sends your traces:
LangSmith
Langfuse
Keywords.ai
More Integrations
Helicone: LLM observability & caching
Arize Phoenix: ML observability platform
Custom Webhook: send to any HTTP endpoint
Custom Integration
Send traces to your own endpoint:
{
  "platform": "webhook",
  "config": {
    "url": "https://your-api.com/traces",
    "headers": {
      "Authorization": "Bearer your-token",
      "X-Custom-Header": "value"
    },
    "retry": {
      "maxAttempts": 3,
      "backoffMs": 1000
    }
  }
}
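The retry block above can be read as "try up to maxAttempts deliveries, backing off between failures." As a minimal sketch of those semantics (exponential backoff is an assumption here; the actual delivery behavior is implemented by Untrace, not by this snippet):

```typescript
// Sketch of the retry semantics implied by the webhook config above.
// The exponential growth of the delay is an assumption.
interface RetryConfig {
  maxAttempts: number;
  backoffMs: number;
}

async function deliverWithRetry(
  send: () => Promise<void>,
  retry: RetryConfig,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<boolean> {
  for (let attempt = 1; attempt <= retry.maxAttempts; attempt++) {
    try {
      await send();
      return true; // delivered
    } catch {
      if (attempt === retry.maxAttempts) return false; // give up
      // Assumed exponential backoff: backoffMs, 2*backoffMs, 4*backoffMs, ...
      await sleep(retry.backoffMs * 2 ** (attempt - 1));
    }
  }
  return false;
}
```

With `{ maxAttempts: 3, backoffMs: 1000 }`, a transient failure on the first two attempts still delivers on the third, after roughly 1s and 2s waits.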
Payload Structure
Untrace sends standardized trace data:
interface TracePayload {
  id: string;
  timestamp: string;
  service: {
    name: string;
    version?: string;
    environment?: string;
  };
  trace: {
    model: string;
    provider: string;
    operation: 'chat' | 'completion' | 'embedding';
    input: {
      messages?: Message[];
      prompt?: string;
      parameters: Record<string, any>;
    };
    output: {
      content?: string;
      choices?: Choice[];
      embedding?: number[];
    };
    metrics: {
      latencyMs: number;
      promptTokens: number;
      completionTokens: number;
      totalTokens: number;
      cost?: {
        prompt: number;
        completion: number;
        total: number;
        currency: string;
      };
    };
    metadata?: Record<string, any>;
    error?: {
      type: string;
      message: string;
      code?: string;
    };
  };
}
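A concrete payload following this interface might look like the object below. All values are invented for illustration, and the OpenAI-style `{ role, content }` message shape is an assumption about the `Message` type:

```typescript
// Illustrative trace payload; every value here is invented.
const example = {
  id: 'trace_abc123',
  timestamp: '2024-01-15T10:30:00Z',
  service: { name: 'chat-api', environment: 'production' },
  trace: {
    model: 'gpt-4',
    provider: 'openai',
    operation: 'chat',
    input: {
      messages: [{ role: 'user', content: 'Hello' }], // assumed Message shape
      parameters: { temperature: 0.7 },
    },
    output: { content: 'Hi! How can I help?' },
    metrics: {
      latencyMs: 842,
      promptTokens: 12,
      completionTokens: 9,
      totalTokens: 21,
    },
  },
};
```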
Configuration Examples
Route different traces to different platforms:
routing:
  rules:
    # Development traces to Langfuse
    - name: "Dev to Langfuse"
      condition:
        environment: "development"
      destination: "langfuse"

    # Production GPT-4 to LangSmith
    - name: "Prod GPT-4"
      condition:
        environment: "production"
        model: "gpt-4*"
      destination: "langsmith"

    # Errors to custom webhook
    - name: "Error Handler"
      condition:
        error: true
      destinations:
        - "langsmith"
        - platform: "webhook"
          url: "https://alerts.company.com"
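Conceptually, a rule matches when every key in its condition matches the trace, with `*` acting as a trailing wildcard in model patterns. The sketch below assumes first-match-wins evaluation and trailing-`*` globs only; the real matching logic lives inside Untrace and may differ:

```typescript
// Hypothetical evaluation of routing rules; first-match-wins is an assumption.
interface Rule {
  name: string;
  condition: { environment?: string; model?: string; error?: boolean };
  destination?: string;
}

interface TraceMeta {
  environment: string;
  model: string;
  error: boolean;
}

// Supports only a trailing '*' wildcard, e.g. "gpt-4*" matches "gpt-4-turbo".
function matchGlob(pattern: string, value: string): boolean {
  return pattern.endsWith('*')
    ? value.startsWith(pattern.slice(0, -1))
    : value === pattern;
}

// Every specified condition key must match; unspecified keys match anything.
function firstMatch(rules: Rule[], t: TraceMeta): Rule | undefined {
  return rules.find((r) =>
    (r.condition.environment === undefined || r.condition.environment === t.environment) &&
    (r.condition.model === undefined || matchGlob(r.condition.model, t.model)) &&
    (r.condition.error === undefined || r.condition.error === t.error),
  );
}
```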
Provider-Specific Settings
Customize behavior per provider:
init({
  apiKey: 'your-key',
  providers: {
    openai: {
      captureStreaming: true,
      includePromptTemplates: true
    },
    anthropic: {
      captureSystemPrompts: true,
      maskSensitiveData: true
    },
    langchain: {
      traceFullChain: true,
      includeIntermediateSteps: true
    }
  }
});
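To make an option like `maskSensitiveData` concrete: masking typically means redacting PII-like patterns from captured prompts before traces leave your infrastructure. The sketch below is purely illustrative; the patterns and the actual masking rules Untrace applies are not specified here:

```typescript
// Illustrative redaction sketch; these regexes are examples, not
// the masking rules Untrace actually applies.
const patterns: [RegExp, string][] = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, '[EMAIL]'], // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, '[CARD]'],       // card-number-like digit runs
];

function maskSensitiveData(text: string): string {
  // Apply each pattern in turn, replacing matches with a placeholder label.
  return patterns.reduce((t, [re, label]) => t.replace(re, label), text);
}
```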
Best Practices
Start Simple: Begin with one provider and one destination
Test Thoroughly: Use test API keys before production
Monitor Costs: Set up sampling for expensive models
Secure Keys: Never commit API keys to version control
Use Environments: Separate dev/staging/production configs
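For "Monitor Costs", per-model sampling is one simple approach: keep every trace from cheap models but only a fraction from expensive ones. This is a hypothetical helper to illustrate the idea; the rates and the `shouldSample` function are not part of the Untrace SDK:

```typescript
// Hypothetical per-model sampling; rates here are examples only.
const sampleRates: Record<string, number> = {
  'gpt-4': 0.1,        // keep ~10% of traces from an expensive model
  'gpt-3.5-turbo': 1,  // keep everything from a cheap one
};

function shouldSample(model: string, rand: () => number = Math.random): boolean {
  const rate = sampleRates[model] ?? 1; // unknown models default to full capture
  return rand() < rate;
}
```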
Troubleshooting
Provider Not Traced?
// Check if provider is loaded
console.log(untrace.getInstrumentedProviders());

// Force instrumentation
const client = untrace.instrument('openai', openaiClient);
Integration Not Receiving?
Check API keys and permissions
Verify network connectivity
Review error logs in dashboard
Test with minimal payload
Need Help?