Overview
The Untrace Dashboard provides a powerful web interface for monitoring your LLM traces, configuring routing rules, analyzing performance metrics, and managing your observability integrations. Access it at https://app.untrace.dev.
Getting Started
Accessing the Dashboard
Navigate to https://app.untrace.dev
Sign in with your Untrace account
You’ll land on the overview page showing your LLM trace activity
Dashboard Layout
The dashboard is organized into several key sections:
Navigation Sidebar: Quick access to all dashboard features
Main Content Area: Displays your selected view (traces, analytics, settings, etc.)
Activity Feed: Real-time trace activity stream
Status Bar: Connection status and platform health indicators
Real-time Monitoring
Live Trace Feed
Monitor LLM traces as they flow through Untrace:
View traces in a chronological list with key information:
Timestamp
Model (GPT-4, Claude, etc.)
Provider (OpenAI, Anthropic, etc.)
Token usage
Cost
Latency
Routing destinations
Status (success/error)
Filtering and Search
Filter the trace feed to focus on what matters:
// Example filters
{
  model: "gpt-4",
  provider: "openai",
  status: "success",
  costRange: { min: 0.01, max: 0.10 },
  timeRange: "last-hour"
}
Available filter options:
Model: Filter by specific models (GPT-4, Claude-3, etc.)
Provider: OpenAI, Anthropic, Google, etc.
Status: Success, Failed, Rate Limited
Cost Range: Filter by token cost
Latency: Response time thresholds
Destinations: Filter by routing destinations
Time Range: Last hour, 24 hours, 7 days, custom range
Tags: Custom tags and metadata
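As a sketch of how a filter object like the one above might map onto a query, the helper below serializes filters into URL parameters. The parameter names (`cost_min`, `time_range`, etc.) are illustrative assumptions, not a documented Untrace API contract:

```typescript
// Hypothetical helper: serialize the dashboard filter object into URL
// query parameters for a trace-list request. Parameter names are
// illustrative only, not part of a documented API.
interface TraceFilters {
  model?: string;
  provider?: string;
  status?: "success" | "error" | "rate_limited";
  costRange?: { min: number; max: number };
  timeRange?: string;
}

function buildTraceQuery(filters: TraceFilters): string {
  const params = new URLSearchParams();
  if (filters.model) params.set("model", filters.model);
  if (filters.provider) params.set("provider", filters.provider);
  if (filters.status) params.set("status", filters.status);
  if (filters.costRange) {
    params.set("cost_min", String(filters.costRange.min));
    params.set("cost_max", String(filters.costRange.max));
  }
  if (filters.timeRange) params.set("time_range", filters.timeRange);
  return params.toString();
}
```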
Routing Configuration
Creating Routing Rules
Configure how traces are routed to different platforms:
Create Rule
Click New Routing Rule and configure:
Rule name and description
Matching conditions
Destination platforms
Priority order
Define Conditions
Set up matching conditions:
Model type (GPT-4, Claude, etc.)
Cost thresholds
Error conditions
Custom metadata
Environment tags
Select Destinations
Choose where to send matching traces:
Primary destination
Fallback destinations
Multi-destination routing
Sampling rates
Test Rule
Test your routing rule:
Send test traces
Verify routing behavior
Check destination delivery
Routing Examples
Common routing patterns:
- name: "Route GPT-4 to LangSmith"
  conditions:
    model: "gpt-4*"
  destinations:
    - platform: "langsmith"
      sample_rate: 1.0

- name: "High-cost trace analysis"
  conditions:
    cost: "> 0.10"
  destinations:
    - platform: "langfuse"
      tags: ["high-cost", "analyze"]

- name: "Failed request debugging"
  conditions:
    status: "error"
  destinations:
    - platform: "keywords-ai"
    - platform: "custom-webhook"
      url: "https://api.yourapp.com/errors"

- name: "Platform comparison"
  conditions:
    model: "claude-3-opus"
  destinations:
    - platform: "langsmith"
      sample_rate: 0.5
    - platform: "langfuse"
      sample_rate: 0.5
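Because rules carry a priority order, routing follows first-match semantics: rules are checked in order and the first matching rule decides the destinations. A minimal sketch of that behavior (the types are hypothetical; actual evaluation happens inside Untrace):

```typescript
// Sketch of first-match routing: rules are evaluated in priority order
// and the first rule whose conditions match decides the destinations.
interface Trace {
  model: string;
  cost: number;
  status: "success" | "error";
}

interface RoutingRule {
  name: string;
  match: (trace: Trace) => boolean;
  destinations: { platform: string; sampleRate?: number }[];
}

function route(trace: Trace, rules: RoutingRule[]) {
  for (const rule of rules) {
    if (rule.match(trace)) return rule.destinations; // first match wins
  }
  return []; // no rule matched: trace is not forwarded
}
```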
Analytics
Overview Metrics
View key metrics at a glance:
Total Traces: Daily, weekly, monthly counts
Token Usage: Total tokens processed
Total Cost: Aggregate costs across all models
Average Latency: P50, P95, P99 response times
Error Rate: Failed requests and error types
Model Distribution: Usage by model type
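For intuition, percentile latencies like the P50/P95/P99 figures above are commonly computed with the nearest-rank method over a sorted sample; this is a standard approach, not necessarily the exact aggregation Untrace uses:

```typescript
// Nearest-rank percentile over a sorted, non-empty latency sample.
// Shown for intuition only.
function percentile(sortedMs: number[], p: number): number {
  const rank = Math.ceil((p / 100) * sortedMs.length); // 1-based rank
  return sortedMs[Math.max(0, rank - 1)];
}
```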
Cost Analysis
Deep dive into your LLM costs:
Cost views can be broken down by model, by provider, or by application. The by-model view shows:
Cost breakdown by model
Token usage per model
Average cost per request
Cost trends over time
Performance Metrics
Monitor LLM performance:
Latency Distribution: Response time histograms
Token/Second: Throughput metrics
Queue Depth: Pending requests
Error Analysis: Error types and frequencies
Rate Limit Tracking: Provider limit utilization
Custom Reports
Generate custom analytics reports:
Select metrics to include
Choose aggregation period
Apply filters
Export as CSV or PDF
Schedule automated reports
Integrations
Configure connections to observability platforms:
Add Integration
Click New Integration and select platform:
LangSmith
Langfuse
Keywords.ai
Helicone
Custom webhook
Configure Authentication
Provide platform credentials:
API keys
OAuth tokens
Webhook URLs
Custom headers
Set Defaults
Configure default settings:
Default tags
Metadata mapping
Retry policies
Timeout settings
Test Connection
Verify the integration:
Send test trace
Check delivery status
Verify data format
Platform-Specific Settings
Configure platform-specific features:
LangSmith
Project mapping
Environment tags
Custom metadata fields
Feedback integration
Langfuse
Session tracking
User identification
Score mappings
Public link generation
Keywords.ai
Cost tracking settings
Alert thresholds
Custom dashboards
API quota management
Custom Webhooks
Payload transformation
Authentication headers
Retry configuration
Response validation
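Retry configuration for custom webhooks typically means bounded attempts with exponential backoff. The sketch below illustrates that pattern with an injectable sender and sleep function; it is a conceptual illustration, not Untrace's delivery implementation:

```typescript
// Bounded retries with exponential backoff for webhook delivery.
// The sender and sleep are injected so the policy is easy to test.
async function deliverWithRetry(
  send: () => Promise<boolean>, // resolves true on a 2xx response
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) =>
    new Promise<void>((resolve) => setTimeout(resolve, ms)),
): Promise<number> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (await send()) return attempt; // attempt number that succeeded
    if (attempt < maxAttempts) {
      await sleep(2 ** attempt * 100); // 200 ms, 400 ms, 800 ms, ...
    }
  }
  return -1; // exhausted all attempts
}
```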
Team Management
Access Control
Manage team access and permissions.
API Keys
Manage API keys for different environments:
# Production key with full access
UNTRACE_API_KEY=utr_prod_xxx

# Development key with limited access
UNTRACE_API_KEY=utr_dev_xxx

# CI/CD key for automated testing
UNTRACE_API_KEY=utr_ci_xxx
Advanced Features
Trace Sampling
Configure intelligent sampling to reduce costs:
// Sampling configuration
{
  "default_sample_rate": 0.1,  // 10% default
  "rules": [
    {
      "condition": "model == 'gpt-4'",
      "sample_rate": 0.05  // 5% for expensive models
    },
    {
      "condition": "error == true",
      "sample_rate": 1.0  // 100% for errors
    },
    {
      "condition": "cost > 0.50",
      "sample_rate": 1.0  // 100% for high-cost requests
    }
  ]
}
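One way to read a sampling configuration like this: the first matching rule decides the sample rate, falling back to the default, and a trace is kept when a uniform random draw falls under that rate. A sketch under those assumptions (the evaluation semantics are inferred, not documented):

```typescript
// Assumed evaluation of sampling rules: first matching rule wins,
// otherwise the default rate applies; a trace is kept when a uniform
// draw falls under the chosen rate.
interface TraceInfo {
  model: string;
  cost: number;
  error: boolean;
}

interface SampleRule {
  matches: (t: TraceInfo) => boolean;
  rate: number;
}

function sampleRate(
  trace: TraceInfo,
  rules: SampleRule[],
  defaultRate: number,
): number {
  const rule = rules.find((r) => r.matches(trace));
  return rule ? rule.rate : defaultRate;
}

function shouldKeep(
  trace: TraceInfo,
  rules: SampleRule[],
  defaultRate: number,
  draw = Math.random(), // injectable for deterministic tests
): boolean {
  return draw < sampleRate(trace, rules, defaultRate);
}
```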
PII Detection
Configure privacy protection:
Automatic Detection: Identify potential PII
Redaction Rules: Define what to redact
Allowlist: Specify safe patterns
Audit Trail: Track redaction events
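As a toy illustration of redaction with an allowlist, the function below masks email-like strings unless they match a safe pattern. Real PII detection is configured in the dashboard and covers far more than this single regex:

```typescript
// Toy redaction pass: mask email-like strings unless allowlisted.
// Illustrates the redaction-rule + allowlist shape only.
const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function redact(text: string, allowlist: RegExp[] = []): string {
  return text.replace(EMAIL, (match) =>
    allowlist.some((pattern) => pattern.test(match)) ? match : "[REDACTED]",
  );
}
```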
Alerting
Set up alerts for important events:
Alerts fall into three categories: Cost Alerts, Performance Alerts, and Custom Alerts. Cost alert options include:
Daily spend thresholds
Unusual cost spikes
Budget warnings
Model-specific limits
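An "unusual cost spike" check often reduces to comparing today's spend against a multiple of the trailing daily average. A naive sketch (the threshold factor is an assumption, not an Untrace default):

```typescript
// Naive "unusual cost spike" check: flag today's spend when it exceeds
// a multiple of the trailing daily average.
function isSpike(dailyHistory: number[], today: number, factor = 2): boolean {
  if (dailyHistory.length === 0) return false; // no baseline yet
  const avg = dailyHistory.reduce((a, b) => a + b, 0) / dailyHistory.length;
  return today > factor * avg;
}
```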
Troubleshooting
Common Issues
Traces not appearing
Verify your API key is correct
Check network connectivity
Ensure proper SDK initialization
Verify routing rules are active
Integration delivery failures
Check platform credentials
Verify network access
Review error logs
Test with minimal payload
High latency
Check routing rule complexity
Review sampling configuration
Monitor platform status
Consider regional deployment
Debug Mode
Enable debug mode for detailed diagnostics:
Go to Settings → Advanced
Toggle Debug Mode
View detailed trace logs
Export diagnostic bundle
API Access
Access dashboard functionality programmatically:
# Get trace history
curl -X GET https://api.untrace.dev/v1/traces \
-H "Authorization: Bearer YOUR_API_KEY"
# Get analytics data
curl -X GET https://api.untrace.dev/v1/analytics \
-H "Authorization: Bearer YOUR_API_KEY"
# Update routing rules
curl -X PUT https://api.untrace.dev/v1/routing/rules \
-H "Authorization: Bearer YOUR_API_KEY" \
-d @routing-rules.json
See the API Reference for complete documentation.
Next Steps