
Overview
The Untrace Dashboard provides a powerful web interface for monitoring your LLM traces, configuring routing rules, analyzing performance metrics, and managing your observability integrations. Access it at https://untrace.dev/app.

Real-time Monitoring
View LLM traces as they flow through your system
Routing Configuration
Set up intelligent routing rules for your traces
Analytics
Analyze costs, performance, and usage patterns
Integrations
Configure connections to observability platforms
Getting Started
Accessing the Dashboard
- Navigate to https://untrace.dev/app
- Sign in with your Untrace account
- You’ll land on the overview page showing your LLM trace activity
Dashboard Layout
The dashboard is organized into several key sections:
- Navigation Sidebar: Quick access to all dashboard features
- Main Content Area: Displays your selected view (traces, analytics, settings, etc.)
- Activity Feed: Real-time trace activity stream
- Status Bar: Connection status and platform health indicators
Real-time Monitoring
Live Trace Feed
Monitor LLM traces as they flow through Untrace in either a List View or a Detail View.
List View
View traces in a chronological list with key information:
- Timestamp
- Model (GPT-4, Claude, etc.)
- Provider (OpenAI, Anthropic, etc.)
- Token usage
- Cost
- Latency
- Routing destinations
- Status (success/error)
Filtering and Search
Filter the trace feed to focus on what matters:
- Model: Filter by specific models (GPT-4, Claude-3, etc.)
- Provider: OpenAI, Anthropic, Google, etc.
- Status: Success, Failed, Rate Limited
- Cost Range: Filter by token cost
- Latency: Response time thresholds
- Destinations: Filter by routing destinations
- Time Range: Last hour, 24 hours, 7 days, custom range
- Tags: Custom tags and metadata
Routing Configuration
Creating Routing Rules
Configure how traces are routed to different platforms:
1. Create Rule
Click New Routing Rule and configure:
- Rule name and description
- Matching conditions
- Destination platforms
- Priority order
2. Define Conditions
Set up matching conditions:
- Model type (GPT-4, Claude, etc.)
- Cost thresholds
- Error conditions
- Custom metadata
- Environment tags
3. Select Destinations
Choose where to send matching traces:
- Primary destination
- Fallback destinations
- Multi-destination routing
- Sampling rates
4. Test Rule
Test your routing rule:
- Send test traces
- Verify routing behavior
- Check destination delivery
Routing Examples
Common routing patterns:
- Route by Model
- Cost-based Routing
- Error Routing
- A/B Testing
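As a sketch of what a model-based routing rule can express, the snippet below pairs matching conditions with prioritized destinations. The schema (field names like `conditions`, `destinations`, `sampling_rate`) is illustrative, not Untrace's actual rule format.

```python
# Illustrative routing-rule schema (not Untrace's actual format).
# A rule pairs matching conditions with one or more destinations.
rule = {
    "name": "route-gpt4-to-langsmith",
    "conditions": {"model": "gpt-4", "max_cost_usd": 0.50},
    "destinations": [
        {"platform": "langsmith", "sampling_rate": 1.0},  # primary
        {"platform": "langfuse", "sampling_rate": 0.1},   # secondary sample
    ],
    "priority": 10,
}

def matches(rule: dict, trace: dict) -> bool:
    """Return True if a trace satisfies the rule's conditions."""
    cond = rule["conditions"]
    return (
        trace.get("model") == cond["model"]
        and trace.get("cost_usd", 0.0) <= cond["max_cost_usd"]
    )

print(matches(rule, {"model": "gpt-4", "cost_usd": 0.02}))  # True
```

Cost-based, error, and A/B patterns follow the same shape: swap the condition (cost threshold, error status, traffic split) while keeping the destination list.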
Analytics
Overview Metrics
View key metrics at a glance:
- Total Traces: Daily, weekly, monthly counts
- Token Usage: Total tokens processed
- Total Cost: Aggregate costs across all models
- Average Latency: P50, P95, P99 response times
- Error Rate: Failed requests and error types
- Model Distribution: Usage by model type
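Percentile figures like P50/P95/P99 can be reproduced from raw latencies; a minimal nearest-rank sketch (the sample values are made up):

```python
# Compute nearest-rank percentiles from a list of response times (ms).
def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile; p in [0, 100]."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

latencies = [120, 85, 90, 300, 110, 95, 105, 100, 2500, 115]
for p in (50, 95, 99):
    print(f"P{p}: {percentile(latencies, p)} ms")
```

Note how a single slow outlier (2500 ms) dominates P95/P99 while leaving P50 unchanged, which is why the dashboard reports all three.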
Cost Analysis
Deep dive into your LLM costs by model, provider, or application. The By Model view includes:
- Cost breakdown by model
- Token usage per model
- Average cost per request
- Cost trends over time
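The per-model breakdown amounts to a group-by over trace records; a sketch with illustrative field names (`model`, `cost_usd`, `tokens`):

```python
from collections import defaultdict

# Aggregate per-model cost and token totals from a trace list
# (field names are illustrative, not Untrace's export schema).
traces = [
    {"model": "gpt-4", "cost_usd": 0.12, "tokens": 4000},
    {"model": "gpt-4", "cost_usd": 0.08, "tokens": 2500},
    {"model": "claude-3", "cost_usd": 0.05, "tokens": 3000},
]

totals = defaultdict(lambda: {"cost_usd": 0.0, "tokens": 0, "requests": 0})
for t in traces:
    agg = totals[t["model"]]
    agg["cost_usd"] += t["cost_usd"]
    agg["tokens"] += t["tokens"]
    agg["requests"] += 1

for model, agg in totals.items():
    avg = agg["cost_usd"] / agg["requests"]
    print(f"{model}: ${agg['cost_usd']:.2f} total, ${avg:.3f}/request")
```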
Performance Metrics
Monitor LLM performance:
- Latency Distribution: Response time histograms
- Token/Second: Throughput metrics
- Queue Depth: Pending requests
- Error Analysis: Error types and frequencies
- Rate Limit Tracking: Provider limit utilization
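Throughput and error rate derive directly from the same trace records; a quick sketch (field names illustrative):

```python
# Tokens/second throughput and error rate from trace records
# (field names are illustrative).
traces = [
    {"tokens": 1200, "latency_ms": 800, "status": "success"},
    {"tokens": 600, "latency_ms": 400, "status": "success"},
    {"tokens": 0, "latency_ms": 1200, "status": "error"},
]

total_tokens = sum(t["tokens"] for t in traces)
total_seconds = sum(t["latency_ms"] for t in traces) / 1000
throughput = total_tokens / total_seconds
error_rate = sum(t["status"] == "error" for t in traces) / len(traces)
print(f"{throughput:.0f} tokens/s, {error_rate:.0%} errors")
```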
Custom Reports
Generate custom analytics reports:
- Select metrics to include
- Choose aggregation period
- Apply filters
- Export as CSV or PDF
- Schedule automated reports
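A CSV export of selected metrics is straightforward to reproduce programmatically; the column names below are illustrative:

```python
import csv
import io

# Write selected daily metrics as CSV (column names illustrative).
rows = [
    {"date": "2024-01-01", "traces": 1520, "cost_usd": 12.40},
    {"date": "2024-01-02", "traces": 1710, "cost_usd": 14.05},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["date", "traces", "cost_usd"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```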
Integrations
Managing Platform Connections
Configure connections to observability platforms:
1. Add Integration
Click New Integration and select a platform:
- LangSmith
- Langfuse
- Keywords.ai
- Helicone
- Custom webhook
2. Configure Authentication
Provide platform credentials:
- API keys
- OAuth tokens
- Webhook URLs
- Custom headers
3. Set Defaults
Configure default settings:
- Default tags
- Metadata mapping
- Retry policies
- Timeout settings
4. Test Connection
Verify the integration:
- Send test trace
- Check delivery status
- Verify data format
Platform-Specific Settings
Configure platform-specific features:
LangSmith
- Project mapping
- Environment tags
- Custom metadata fields
- Feedback integration
Langfuse
- Session tracking
- User identification
- Score mappings
- Public link generation
Keywords.ai
- Cost tracking settings
- Alert thresholds
- Custom dashboards
- API quota management
Custom Webhooks
- Payload transformation
- Authentication headers
- Retry configuration
- Response validation
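Payload transformation plus an authentication header can be sketched as a small builder function; the payload shape and `Bearer` scheme here are assumptions, not Untrace's actual webhook contract:

```python
import json

# Transform a trace into a custom webhook payload and attach an
# auth header (payload shape and auth scheme are illustrative).
def build_webhook_request(trace: dict, secret: str) -> tuple[dict, bytes]:
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {secret}",  # swap for your auth scheme
    }
    payload = {
        "event": "trace.completed",
        "model": trace["model"],
        "cost_usd": trace["cost_usd"],
        "latency_ms": trace["latency_ms"],
    }
    return headers, json.dumps(payload).encode()

headers, body = build_webhook_request(
    {"model": "gpt-4", "cost_usd": 0.02, "latency_ms": 850}, "my-secret"
)
print(headers["Authorization"], body)
```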
Team Management
Access Control
Manage team access and permissions.
API Keys
Manage API keys for different environments.
Advanced Features
Trace Sampling
Configure intelligent sampling to reduce costs.
PII Detection
Configure privacy protection:
- Automatic Detection: Identify potential PII
- Redaction Rules: Define what to redact
- Allowlist: Specify safe patterns
- Audit Trail: Track redaction events
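Redaction rules with an allowlist can be sketched as pattern substitution; the two patterns below are a starting point, not an exhaustive or production-grade PII detector:

```python
import re

# Redact common PII patterns before data leaves your system.
# These patterns are illustrative, not exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str, allowlist: frozenset[str] = frozenset()) -> str:
    """Replace each match with a tag unless it is allowlisted."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(
            lambda m: m.group() if m.group() in allowlist else f"[{name.upper()}]",
            text,
        )
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789"))
```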
Alerting
Set up alerts for important events, grouped into cost, performance, and custom alerts. Cost alerts include:
- Daily spend thresholds
- Unusual cost spikes
- Budget warnings
- Model-specific limits
Troubleshooting
Common Issues
Traces not appearing
- Verify your API key is correct
- Check network connectivity
- Ensure proper SDK initialization
- Verify routing rules are active
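A quick preflight check that the API key is actually available often resolves the first item; the env var name `UNTRACE_API_KEY` is an assumption here — use whatever your setup defines:

```python
import os

# Sanity-check that the API key is present before initializing the SDK.
# The env var name UNTRACE_API_KEY is an assumption, not a documented name.
def check_api_key(env_var: str = "UNTRACE_API_KEY") -> bool:
    key = os.environ.get(env_var, "")
    if not key:
        print(f"{env_var} is not set")
        return False
    print(f"{env_var} is set ({len(key)} chars)")
    return True

check_api_key()
```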
Integration delivery failures
- Check platform credentials
- Verify network access
- Review error logs
- Test with minimal payload
High latency
- Check routing rule complexity
- Review sampling configuration
- Monitor platform status
- Consider regional deployment
Debug Mode
Enable debug mode for detailed diagnostics:
- Go to Settings → Advanced
- Toggle Debug Mode
- View detailed trace logs
- Export diagnostic bundle
