LLM Providers

Untrace automatically captures traces from all major LLM providers. No special configuration is needed; just use our proxy or SDK.
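For example, a minimal Node.js setup with the SDK might look like this (the package name is illustrative; init is the helper shown later on this page):

// Package name is a placeholder; use the one from your Untrace onboarding
import { init } from '@untrace/sdk';

// One call at startup; instrumented clients are traced automatically
init({ apiKey: process.env.UNTRACE_API_KEY });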

OpenAI

GPT models, embeddings, and assistants

Anthropic

Claude model family

Google AI

Gemini models
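As a sketch, explicit instrumentation of an OpenAI client looks like this (using the instrument helper from the Troubleshooting section below; auto-instrumentation usually makes this unnecessary):

import OpenAI from 'openai';

// untrace is the SDK client, as in the Troubleshooting snippet below
const openai = untrace.instrument('openai', new OpenAI());

// Calls through the wrapped client are traced as chat operations
const reply = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello!' }],
});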

Additional Providers

Mistral

Models: Large, Medium, Small, Embed

Cohere

Models: Command, Embed, Rerank

AWS Bedrock

All Bedrock models supported

Azure OpenAI

Enterprise OpenAI deployments

Together.ai

Open source model hosting

Replicate

Model marketplace

Hugging Face

Inference API & Endpoints

Groq

Ultra-fast inference

Framework Support

LangChain

Chains, agents, and tools

LlamaIndex

Data framework for LLM apps

Vercel AI SDK

TypeScript toolkit for AI apps

Observability Platforms

Configure where Untrace sends your traces:

LangSmith

LangChain's observability platform

Langfuse

Open source LLM engineering platform

Keywords.ai

LLM gateway & monitoring
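As an illustration, a Langfuse destination might be configured with the same shape as the webhook format shown below (the exact config keys may differ; check the dashboard):

{
  "platform": "langfuse",
  "config": {
    "publicKey": "pk-lf-...",
    "secretKey": "sk-lf-...",
    "baseUrl": "https://cloud.langfuse.com"
  }
}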

More Integrations

Helicone

LLM observability & caching

Arize Phoenix

ML observability platform

LangWatch

LLM quality monitoring

Custom Webhook

Send to any HTTP endpoint

Custom Integration

Webhook Format

Send traces to your own endpoint:

{
  "platform": "webhook",
  "config": {
    "url": "https://your-api.com/traces",
    "headers": {
      "Authorization": "Bearer your-token",
      "X-Custom-Header": "value"
    },
    "retry": {
      "maxAttempts": 3,
      "backoffMs": 1000
    }
  }
}

Payload Structure

Untrace sends standardized trace data:

interface TracePayload {
  id: string;
  timestamp: string;
  service: {
    name: string;
    version?: string;
    environment?: string;
  };
  trace: {
    model: string;
    provider: string;
    operation: 'chat' | 'completion' | 'embedding';
    
    input: {
      messages?: Message[];
      prompt?: string;
      parameters: Record<string, any>;
    };
    
    output: {
      content?: string;
      choices?: Choice[];
      embedding?: number[];
    };
    
    metrics: {
      latencyMs: number;
      promptTokens: number;
      completionTokens: number;
      totalTokens: number;
      cost?: {
        prompt: number;
        completion: number;
        total: number;
        currency: string;
      };
    };
    
    metadata?: Record<string, any>;
    error?: {
      type: string;
      message: string;
      code?: string;
    };
  };
}
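As a sketch, a minimal Express receiver for this payload could look like the following (endpoint path, port, and token handling are illustrative):

import express from 'express';

const app = express();
app.use(express.json());

app.post('/traces', (req, res) => {
  // Check the bearer token configured in the webhook headers above
  if (req.headers.authorization !== `Bearer ${process.env.WEBHOOK_TOKEN}`) {
    return res.status(401).end();
  }

  const payload = req.body; // TracePayload
  console.log(
    `${payload.trace.provider}/${payload.trace.model}:`,
    `${payload.trace.metrics.totalTokens} tokens in ${payload.trace.metrics.latencyMs} ms`
  );

  // Acknowledge quickly so the retry policy does not re-send the trace
  res.status(204).end();
});

app.listen(3000);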

Configuration Examples

Multi-Platform Routing

Route different traces to different platforms:

routing:
  rules:
    # Development traces to Langfuse
    - name: "Dev to Langfuse"
      condition:
        environment: "development"
      destination: "langfuse"
    
    # Production GPT-4 to LangSmith
    - name: "Prod GPT-4"
      condition:
        environment: "production"
        model: "gpt-4*"
      destination: "langsmith"
    
    # Errors to custom webhook
    - name: "Error Handler"
      condition:
        error: true
      destinations:
        - "langsmith"
        - platform: "webhook"
          url: "https://alerts.company.com"

Provider-Specific Settings

Customize behavior per provider:

// Per-provider options are passed to the SDK's init()
init({
  apiKey: 'your-key',
  providers: {
    openai: {
      captureStreaming: true,
      includePromptTemplates: true
    },
    anthropic: {
      captureSystemPrompts: true,
      maskSensitiveData: true
    },
    langchain: {
      traceFullChain: true,
      includeIntermediateSteps: true
    }
  }
});

Best Practices

  1. Start Simple: Begin with one provider and one destination
  2. Test Thoroughly: Use test API keys before production
  3. Monitor Costs: Set up sampling for expensive models (see the sketch after this list)
  4. Secure Keys: Never commit API keys to version control
  5. Use Environments: Separate dev/staging/production configs
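For cost monitoring, sampling might be expressed as a routing rule like the one below; the sample field is hypothetical and only illustrates the idea:

routing:
  rules:
    # Hypothetical: keep 10% of production GPT-4 traces
    - name: "Sample expensive models"
      condition:
        environment: "production"
        model: "gpt-4*"
      sample: 0.1
      destination: "langsmith"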

Troubleshooting

Provider Not Traced?

// Check if provider is loaded
console.log(untrace.getInstrumentedProviders());

// Force instrumentation
const client = untrace.instrument('openai', openaiClient);

Integration Not Receiving?

  1. Check API keys and permissions
  2. Verify network connectivity
  3. Review error logs in dashboard
  4. Test with a minimal payload (see the sketch below)
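For step 4, a quick smoke test might post a hand-built payload straight to the destination endpoint (URL and token are the ones from your webhook config):

// Minimal TracePayload, per the Payload Structure section above
const res = await fetch('https://your-api.com/traces', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer your-token',
  },
  body: JSON.stringify({
    id: 'test-trace-1',
    timestamp: new Date().toISOString(),
    service: { name: 'smoke-test' },
    trace: {
      model: 'gpt-4o',
      provider: 'openai',
      operation: 'chat',
      input: { parameters: {} },
      output: { content: 'ok' },
      metrics: { latencyMs: 1, promptTokens: 0, completionTokens: 0, totalTokens: 0 },
    },
  }),
});
console.log(res.status); // a 2xx here means the endpoint accepts traces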

Need Help?