LLM Response Streaming Handler

Build a production-ready LLM streaming response handler with backpressure management, error recovery, and real-time metrics collection.

Implement a robust streaming response handler for LLM APIs.

## Requirements
{{streaming_requirements}}

## Client Framework
{{client_framework}}

## Error Handling Needs
{{error_handling_needs}}

Implement a complete streaming solution:

```typescript
class StreamHandler {
  // Handle SSE/chunked responses
  // Implement backpressure
  // Handle partial token assembly
  // Manage connection lifecycle
  // Collect usage metrics
}
```

Include:
- Reconnection logic
- Timeout handling
- Progress tracking
- Token counting during stream
- Error recovery strategies
- Unit and integration tests
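As a starting point for the "partial token assembly" requirement, here is a minimal sketch of an SSE event assembler that tolerates events split across network reads. All names (`SseAssembler`, `push`) are illustrative, not part of any required API; it assumes the common `data: ...` / `data: [DONE]` wire format used by OpenAI-style streaming endpoints.

```typescript
// Hypothetical sketch: reassemble SSE "data:" events from arbitrary
// network chunks, where one event may be split across reads.
class SseAssembler {
  private buffer = "";

  // Feed a raw chunk; returns any complete data payloads found so far.
  push(chunk: string): string[] {
    this.buffer += chunk;
    const events: string[] = [];
    let idx: number;
    // SSE events are delimited by a blank line ("\n\n").
    while ((idx = this.buffer.indexOf("\n\n")) !== -1) {
      const raw = this.buffer.slice(0, idx);
      this.buffer = this.buffer.slice(idx + 2);
      for (const line of raw.split("\n")) {
        if (line.startsWith("data: ")) {
          const payload = line.slice(6);
          // "[DONE]" is a sentinel, not a token payload.
          if (payload !== "[DONE]") events.push(payload);
        }
      }
    }
    return events;
  }
}
```

Keeping the undelivered tail in `buffer` is what makes the handler safe against chunk boundaries landing mid-event; a full solution would layer JSON parsing, token counting, and backpressure on top of this.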

Details

Category: Coding

Use Cases: Streaming implementation, Real-time responses, API integration

Works Best With: claude-sonnet-4-20250514, gpt-4o