MiraBridge carefully manages context to give the AI the best understanding of your project.
Context Window
Each conversation is bounded by:
- A history limit of 40 messages
- A model-aware token budget based on the selected provider and model
When limits are reached, MiraBridge preserves the most recent turns, summarizes older ones, and injects that summary back into the prompt so the conversation can continue without losing key decisions.
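The truncation strategy above can be sketched as follows. This is an illustrative outline, not MiraBridge's actual code: the 40-message cap comes from the docs, but the `summarize` helper is a placeholder for the real condensation step.

```python
MAX_MESSAGES = 40  # history limit from the docs

def summarize(messages):
    # Placeholder: a real implementation would ask the model to condense
    # the dropped turns into a short summary of key decisions.
    return f"Summary of {len(messages)} earlier turns."

def build_history(messages, max_messages=MAX_MESSAGES):
    """Keep the most recent turns; fold older ones into one summary message."""
    if len(messages) <= max_messages:
        return messages
    older, recent = messages[:-max_messages], messages[-max_messages:]
    summary = {"role": "system", "content": summarize(older)}
    return [summary] + recent
```

The key property is that the returned history never exceeds the cap by more than the single injected summary message, so key decisions survive truncation.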
Prompt Caching
MiraBridge uses Anthropic's prompt caching feature to achieve a 90% cache hit rate. This means:
- 4x faster time to first token
- Lower cost per request
- Consistent performance across conversations
The system prompt is 100% static to maximize caching efficiency, while dynamic context is injected into the user prompt in a structured XML-like format.
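A minimal sketch of that split, assuming hypothetical prompt-building helpers (the tag names and function signatures are illustrative, not MiraBridge's API): the system prompt stays byte-identical across requests so the cached prefix always matches, while per-request context is wrapped in XML-like tags inside the user prompt.

```python
# Static system prompt: never changes, so the provider's prompt cache
# can reuse it on every request.
SYSTEM_PROMPT = "You are MiraBridge, an AI coding assistant."

def build_user_prompt(question, open_files, git_branch):
    """Inject dynamic context into the user prompt in an XML-like format."""
    parts = ["<context>"]
    for path, content in open_files.items():
        parts.append(f'<file path="{path}">\n{content}\n</file>')
    parts.append(f'<git branch="{git_branch}"/>')
    parts.append("</context>")
    parts.append(question)
    return "\n".join(parts)
```

Because only the user prompt varies, cache hits depend solely on the static prefix, which is what makes the high hit rate achievable.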
Context Enrichment
Before each AI request, MiraBridge enriches the context with:
- Open files – contents of files you are editing
- Git context – current branch, recent changes, status
- Diagnostics – compiler errors, linting warnings
- Project type – detected framework and language
- Workspace rules – content from MIRABRIDGE.md and similar files
- Conversation summary – condensed state from older truncated turns
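The enrichment step can be pictured as gathering those sources into a single payload before the request is built. The shape below is a sketch under the assumption that each source is fetched independently and missing sources default to empty; the field names are illustrative.

```python
def enrich_context(workspace):
    """Collect the enrichment sources into one payload.

    `workspace` is a hypothetical dict of whatever each collector
    produced; absent sources fall back to empty defaults so the
    prompt builder never has to special-case them.
    """
    return {
        "open_files": workspace.get("open_files", {}),   # path -> contents
        "git": workspace.get("git", {}),                 # branch, changes, status
        "diagnostics": workspace.get("diagnostics", []), # errors and warnings
        "project_type": workspace.get("project_type", "unknown"),
        "rules": workspace.get("rules", ""),             # MIRABRIDGE.md content
        "summary": workspace.get("summary", ""),         # truncated-turn summary
    }
```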
Token Estimation
MiraBridge uses content-aware token estimation to ensure requests fit within provider limits. The estimation accounts for code density, language, formatting, and multilingual text. File context is injected once to avoid duplicate token spend.
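One way content-aware estimation can work is to adjust the characters-per-token ratio by how code-dense the text is, since symbol-heavy code usually tokenizes to more tokens per character than prose. The heuristic and the ratios below are illustrative assumptions, not MiraBridge's actual estimator.

```python
def estimate_tokens(text):
    """Rough token estimate that adapts to code density.

    Symbol-heavy text (braces, operators) is assumed to yield fewer
    characters per token than plain prose. Ratios are illustrative.
    """
    symbols = sum(text.count(c) for c in "{}()[];=<>")
    density = symbols / max(len(text), 1)
    chars_per_token = 3.2 if density > 0.05 else 4.0
    return int(len(text) / chars_per_token) + 1
```

Injecting file context once, rather than repeating it in every turn, keeps the estimate (and the actual spend) from growing with conversation length.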
Next Steps
- Workspace Rules – customize AI behavior
- Supported Models – model context windows