Implement a smart context window manager for LLM applications.

## Model Context Limit

{{context_limit}} tokens

## Content Types

{{content_types}}

## Priority Rules

{{priority_rules}}

```python
from dataclasses import dataclass
from typing import List

# Placeholder types; replace with your application's document/message models.
@dataclass
class Document:
    content: str

@dataclass
class Message:
    role: str
    content: str

class ContextWindowManager:
    """
    Implement:
    - Token counting per content type
    - Priority-based content selection
    - Compression strategies
    - Overflow handling
    - History management
    """

    def build_context(
        self,
        system_prompt: str,
        retrieved_docs: List[Document],
        conversation_history: List[Message],
        user_query: str,
    ) -> str:
        # Fit everything within the context limit
        pass
```

Include:
- Sliding window for history
- Document summarization triggers
- Token budget allocation
- Metrics for context utilization
# Context Window Manager
Build a smart context window manager that handles token budgets, content prioritization, and overflow strategies for optimal LLM context utilization.
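Two of the requested pieces, the sliding window for history and the token budget allocation, can be sketched roughly as follows. The `estimate_tokens` heuristic, the `Message` type, and the budget percentages are illustrative assumptions, not part of the prompt; a real implementation would count tokens with the target model's tokenizer (e.g. tiktoken for OpenAI models).

```python
from dataclasses import dataclass
from typing import Dict, List

def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. Swap in a real
    # tokenizer for the target model in production code.
    return max(1, len(text) // 4)

@dataclass
class Message:
    role: str
    content: str

def sliding_window_history(history: List[Message], budget: int) -> List[Message]:
    """Keep the most recent messages that fit within a token budget."""
    kept: List[Message] = []
    used = 0
    for msg in reversed(history):  # walk newest -> oldest
        cost = estimate_tokens(msg.content)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

def allocate_budget(context_limit: int) -> Dict[str, int]:
    # Fixed-fraction split across content types; the percentages here
    # are assumptions and would normally come from {{priority_rules}}.
    return {
        "system": int(context_limit * 0.10),
        "documents": int(context_limit * 0.45),
        "history": int(context_limit * 0.30),
        "query": int(context_limit * 0.15),
    }
```

A fuller solution would also track actual versus allocated usage per content type, which gives the context-utilization metrics the prompt asks for almost for free.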
## Details
Category: Coding

Use cases: Context management, Token optimization, LLM integration
Works best with: claude-sonnet-4-20250514, gpt-4o