LLM Providers
Untrace automatically captures traces from all major LLM providers. No special configuration is needed: just use our proxy or SDK (see the sketch after the provider list).

OpenAI
Anthropic
Google AI
Additional Providers
Mistral
Models: Large, Medium, Small, Embed
Cohere
Models: Command, Embed, Rerank
AWS Bedrock
All Bedrock models supported
Azure OpenAI
Enterprise OpenAI deployments
Together.ai
Open source model hosting
Replicate
Model marketplace
Hugging Face
Inference API & Endpoints
Groq
Ultra-fast inference
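As a rough sketch of the proxy approach mentioned above, here is how an OpenAI call might be routed through Untrace with the official Node SDK. The proxy URL and `x-untrace-key` header are placeholders for illustration, not documented values; check your dashboard for the real ones.

```typescript
// Minimal sketch, assuming a hypothetical Untrace proxy URL and header.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://proxy.untrace.example/v1", // placeholder, not the documented URL
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "x-untrace-key": process.env.UNTRACE_API_KEY ?? "", // hypothetical header name
  },
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```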
Framework Support
LangChain
LlamaIndex
Vercel AI SDK
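Frameworks ultimately call the provider SDKs, so pointing a framework's provider client at the same proxy captures those calls too. A sketch with the Vercel AI SDK, reusing the placeholder proxy URL from above:

```typescript
// Sketch: route Vercel AI SDK calls through the hypothetical Untrace proxy.
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

const openai = createOpenAI({
  baseURL: "https://proxy.untrace.example/v1", // placeholder proxy URL
  apiKey: process.env.OPENAI_API_KEY,
});

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Say hello.",
});
console.log(text);
```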
Observability Platforms
Configure where Untrace sends your traces (a single-destination sketch follows the integration list below):

LangSmith
Langfuse
Keywords.ai
More Integrations
Helicone
LLM observability & caching
Arize Phoenix
ML observability platform
LangWatch
LLM quality monitoring
Custom Webhook
Send to any HTTP endpoint
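As a starting point, a single-destination setup might look like the following. The config shape and key names are assumptions for illustration, not the documented Untrace schema; the Langfuse public/secret key pair is that platform's standard credential format.

```typescript
// Hypothetical single-destination config: forward every trace to Langfuse.
const untraceConfig = {
  destinations: [
    {
      type: "langfuse",
      publicKey: process.env.LANGFUSE_PUBLIC_KEY, // Langfuse key pair
      secretKey: process.env.LANGFUSE_SECRET_KEY,
    },
  ],
};
```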
Custom Integration
Webhook Format
Send traces to your own endpoint:
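A minimal receiver sketch using Node's built-in http module. One JSON payload per POST is assumed, and the field names follow the assumed payload shape shown in the next subsection:

```typescript
// Sketch of a webhook receiver; delivery format is assumed, not documented.
import { createServer } from "node:http";

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const trace = JSON.parse(body); // one trace payload per POST (assumed)
    console.log(`received span ${trace.spanId} from ${trace.provider}`);
    res.writeHead(204).end();
  });
}).listen(3000);
```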
Payload Structure

Untrace sends standardized trace data:
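The exact schema is not reproduced here; as an illustrative assumption, a trace payload could look roughly like this:

```typescript
// Illustrative shape only: field names are assumptions based on common
// trace formats, not the documented Untrace spec.
interface UntraceSpan {
  traceId: string;
  spanId: string;
  provider: string;   // e.g. "openai", "anthropic"
  model: string;      // e.g. "gpt-4o-mini"
  startTime: string;  // ISO 8601 timestamp
  endTime: string;
  input: unknown;     // prompt / request messages
  output: unknown;    // completion returned by the provider
  usage?: { promptTokens: number; completionTokens: number };
  metadata?: Record<string, string>;
}
```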
Configuration Examples

Multi-Platform Routing
Route different traces to different platforms:
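A sketch of what routing rules could look like; the rule shape and destination names are assumptions for illustration, not the documented Untrace API:

```typescript
// Hypothetical routing rules: first match wins, empty match is a catch-all.
const routingRules = [
  // Send production OpenAI traffic to LangSmith.
  { match: { provider: "openai", env: "production" }, destination: "langsmith" },
  // Mirror all Anthropic traces to Langfuse for quality review.
  { match: { provider: "anthropic" }, destination: "langfuse" },
  // Anything unmatched falls through to the custom webhook.
  { match: {}, destination: "webhook" },
];
```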
Provider-Specific Settings

Customize behavior per provider:
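A sketch of hypothetical per-provider settings; the option names are illustrative. Sampling expensive models ties into the "Monitor Costs" practice below:

```typescript
// Hypothetical per-provider settings, not the documented schema.
const providerSettings = {
  openai: {
    sampleRate: 1.0,         // trace every request
    redactFields: ["input"], // strip prompts before forwarding
  },
  anthropic: {
    sampleRate: 0.1,         // keep 10% of high-volume traffic
  },
};
```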
Best Practices

- Start Simple: Begin with one provider and one destination
- Test Thoroughly: Use test API keys before production
- Monitor Costs: Set up sampling for expensive models
- Secure Keys: Never commit API keys to version control
- Use Environments: Separate dev/staging/production configs
Troubleshooting
Provider Not Traced?
Integration Not Receiving?
- Check API keys and permissions
- Verify network connectivity
- Review error logs in dashboard
- Test with a minimal payload (see the sketch below)
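To test with a minimal payload, you can POST a hand-built span straight to your endpoint. The URL and fields below are placeholders consistent with the assumed payload sketch above:

```typescript
// Smoke test: send one minimal, hand-built span to your webhook receiver.
await fetch("http://localhost:3000/", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    traceId: "test-trace",
    spanId: "test-span",
    provider: "openai",
    model: "gpt-4o-mini",
    startTime: new Date().toISOString(),
    endTime: new Date().toISOString(),
    input: "ping",
    output: "pong",
  }),
});
```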