LLM Observability Stack Setup

Set up a comprehensive LLM observability stack covering logging, metrics, tracing, and alerting with tool recommendations and configurations.

Set up a comprehensive observability stack for LLM applications.

## Application Profile
{{application_profile}}

## Existing Infrastructure
{{existing_infra}}

## Observability Goals
{{observability_goals}}

Design the observability stack:

**Logging Layer**
- Request/response logging (PII-safe)
- Token usage tracking
- Error categorization
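As a reference point for the logging layer, here is a minimal PII-safe logging sketch in Python. The redaction patterns, field names, and `log_llm_call` helper are illustrative assumptions, not a prescribed schema:

```python
import json
import logging
import re

# Hypothetical redaction patterns -- extend for your own PII categories.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "<PHONE>"),
]

def redact(text: str) -> str:
    """Mask PII in free text before it reaches the log sink."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def log_llm_call(logger, prompt, response, usage, error=None):
    """Emit one structured, PII-safe record per LLM call."""
    record = {
        "prompt": redact(prompt),
        "response": redact(response),
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        # A coarse error category keeps dashboards queryable.
        "error_category": error or "none",
    }
    logger.info(json.dumps(record))
    return record

logger = logging.getLogger("llm")
rec = log_llm_call(logger, "Email bob@example.com", "Done.",
                   {"prompt_tokens": 12, "completion_tokens": 3})
```

Redacting before the record is serialized ensures raw PII never touches disk or a downstream log pipeline.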

**Metrics Layer**
- Latency distributions
- Throughput metrics
- Cost metrics
- Quality proxies
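To make the metrics layer concrete, a stdlib-only sketch of latency percentiles and per-request cost. The prices and window sizes are placeholder assumptions; in production these would be Prometheus-style histograms and counters rather than in-memory lists:

```python
import statistics

# Hypothetical per-1K-token prices -- substitute your provider's rates.
PRICE_PER_1K = {"prompt": 0.003, "completion": 0.015}

def latency_summary(samples_ms):
    """p50/p95/p99 from raw latency samples (histogram buckets in prod)."""
    qs = statistics.quantiles(samples_ms, n=100)
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

def request_cost(prompt_tokens, completion_tokens):
    """Dollar cost of one LLM call from its token usage."""
    return (prompt_tokens / 1000) * PRICE_PER_1K["prompt"] + \
           (completion_tokens / 1000) * PRICE_PER_1K["completion"]

samples = list(range(100, 1100, 10))   # 100 fake latency samples, in ms
summary = latency_summary(samples)
cost = request_cost(1000, 500)          # 0.003 + 0.0075 = 0.0105
```

Tracking cost per request (not just per day) is what lets you attribute spend to individual features or tenants later.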

**Tracing Layer**
- End-to-end request tracing
- LLM call attribution
- Dependency mapping
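The tracing layer can be sketched with a minimal span tree; in practice you would use OpenTelemetry, but this stdlib-only version (all names hypothetical) shows how LLM calls get attributed to their parent request:

```python
import contextvars
import time
import uuid

_current_span = contextvars.ContextVar("current_span", default=None)
SPANS = []  # in production, spans export to a tracing backend instead

class span:
    """Minimal span context manager: records parent, name, and duration."""
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        parent = _current_span.get()
        self.record = {
            "span_id": uuid.uuid4().hex[:8],
            # Parent linkage is what produces the dependency map.
            "parent_id": parent["span_id"] if parent else None,
            "name": self.name,
            "start": time.monotonic(),
        }
        self._token = _current_span.set(self.record)
        return self.record

    def __exit__(self, *exc):
        self.record["duration_ms"] = (time.monotonic() - self.record["start"]) * 1000
        _current_span.reset(self._token)
        SPANS.append(self.record)

with span("handle_request"):
    with span("llm.chat_completion"):  # LLM call attributed to its parent
        pass
```

Because inner spans close first, `SPANS[0]` is the LLM call and its `parent_id` points at the enclosing request span.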

**Alerting Layer**
- SLO-based alerts
- Anomaly detection
- Cost overrun alerts
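For the alerting layer, a sketch of a multi-window SLO burn-rate check in the style popularized by the Google SRE Workbook. The target, windows, and 14.4x threshold are example values, not recommendations:

```python
# Hypothetical SLO: 99.9% of LLM requests succeed.
SLO_TARGET = 0.999
ERROR_BUDGET = 1 - SLO_TARGET  # 0.001

def burn_rate(error_rate):
    """How many times faster than allowed the error budget is burning."""
    return error_rate / ERROR_BUDGET

def should_page(long_window_error_rate, short_window_error_rate,
                threshold=14.4):
    """Page only when BOTH windows burn fast -- reduces alert flapping."""
    return (burn_rate(long_window_error_rate) >= threshold and
            burn_rate(short_window_error_rate) >= threshold)

# 2% errors in both the 1h and 5m windows -> burn rate 20x -> page.
print(should_page(0.02, 0.02))    # True
# Short window recovered -> incident is over, don't page.
print(should_page(0.02, 0.0005))  # False
```

The same two-window pattern applies to cost overrun alerts: page on a sustained spend rate, not a single expensive request.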

Provide:
- Tool selection recommendations
- Configuration templates
- Dashboard specifications
- Runbook templates

## Details

**Category:** Coding

**Use Cases:** Observability setup, Monitoring infrastructure, LLM operations

**Works Best With:** claude-sonnet-4-20250514, gpt-4o


LLM Observability Stack Setup | Promptsy