MiraBridge carefully manages context to give the AI the best understanding of your project.
## Context Window
Each conversation maintains up to:
- 40 messages of history
- 64K tokens of context
When limits are reached, older messages are summarized to preserve the most relevant context.
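The trimming step can be sketched with a simple drop-and-summarize policy. Everything below is illustrative: the `Message` type, the halving heuristic, and the placeholder summary are assumptions, not MiraBridge's actual implementation.

```python
from dataclasses import dataclass

# Limits from the docs above.
MAX_MESSAGES = 40
MAX_TOKENS = 64_000

@dataclass
class Message:
    role: str
    text: str
    tokens: int

def trim_history(history: list[Message]) -> list[Message]:
    """If either limit is exceeded, fold the oldest half of the
    conversation into a single summary message (sketch only)."""
    total = sum(m.tokens for m in history)
    if len(history) <= MAX_MESSAGES and total <= MAX_TOKENS:
        return history
    cut = len(history) // 2
    dropped, keep = history[:cut], history[cut:]
    # A real system would call the model to summarize `dropped`;
    # here we stand in a placeholder summary message.
    summary_text = f"[summary of {len(dropped)} earlier messages]"
    summary = Message("system", summary_text, tokens=len(summary_text) // 4)
    return [summary] + keep
```

The key property is that recent messages survive verbatim while older ones collapse into a compact summary, keeping the window under both limits.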
## Prompt Caching
MiraBridge uses Anthropic's prompt caching feature to achieve a 90% cache hit rate. This means:
- 4x faster time to first token
- Lower cost per request
- Consistent performance across conversations
The system prompt is 100% static to maximize caching efficiency.
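With Anthropic's Messages API, caching is opted into by attaching `cache_control` to the static prefix of the request. The sketch below shows how such a request body might be assembled; the function, prompt text, and model name are placeholders, not MiraBridge's actual code.

```python
# Kept byte-for-byte identical across requests so the cached prefix matches.
SYSTEM_PROMPT = "You are MiraBridge's coding assistant."

def build_request(user_text: str) -> dict:
    """Build a Messages API request body with a cacheable system prompt."""
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model name
        "max_tokens": 1024,
        # cache_control on the last system block asks the API to cache
        # everything up to and including that block.
        "system": [
            {
                "type": "text",
                "text": SYSTEM_PROMPT,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }
```

Because only the trailing user message varies, every request after the first reuses the cached system-prompt prefix, which is where the faster time to first token comes from.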
## Context Enrichment
Before each AI request, MiraBridge enriches the context with:
- Open files — contents of files you are editing
- Git context — current branch, recent changes, status
- Diagnostics — compiler errors, linting warnings
- Project type — detected framework and language
- Workspace rules — content from MIRABRIDGE.md and similar files
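As a rough sketch, the sources listed above might be rendered into a prompt preamble like this. The function name, field names, and output format are illustrative assumptions, not MiraBridge's real internals.

```python
from typing import Optional

def render_enrichment(open_files: dict[str, str],
                      git: dict[str, str],
                      diagnostics: list[str],
                      project_type: str,
                      rules: Optional[str] = None) -> str:
    """Flatten the enrichment sources into one text preamble (sketch)."""
    parts = [f"Project type: {project_type}"]
    parts.append(f"Git branch: {git.get('branch', '?')} "
                 f"({git.get('status') or 'clean'})")
    if diagnostics:
        parts.append("Diagnostics:\n" + "\n".join(f"- {d}" for d in diagnostics))
    for path, text in open_files.items():
        parts.append(f"--- {path} ---\n{text}")
    if rules:
        parts.append("Workspace rules:\n" + rules)
    return "\n\n".join(parts)
```

Each source stays a separate labeled section, so the model can tell an open file apart from a diagnostic or a workspace rule.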
## Token Estimation
MiraBridge uses content-aware token estimation to ensure requests fit within provider limits. The estimation accounts for code density, language, and formatting.
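One way a content-aware estimate can work is to weight symbol density, since symbol-heavy code typically yields more tokens per character than prose. The ratios below are assumptions for illustration; MiraBridge's actual constants and method are not documented here.

```python
def estimate_tokens(text: str) -> int:
    """Heuristic token estimate: shift the chars-per-token divisor
    with symbol density (assumed ratios, not MiraBridge's constants)."""
    if not text:
        return 0
    symbols = sum(1 for c in text if not c.isalnum() and not c.isspace())
    density = symbols / len(text)
    # Prose averages ~4 chars/token; dense code closer to ~3.
    chars_per_token = 4.0 - min(density * 4, 1.0)
    return max(1, round(len(text) / chars_per_token))
```

Overestimating slightly is the safer failure mode here: a request that fits the provider limit with room to spare beats one that is rejected for being too large.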
## Next Steps
- Workspace Rules — customize AI behavior
- Supported Models — model context windows