MiraBridge carefully manages context to give the AI the best understanding of your project.
Context Window
Each conversation maintains up to:
- 40 messages of history
- 64K tokens of context
When either limit is reached, older messages are summarized to preserve the most relevant context.
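The trimming step described above can be sketched roughly as follows. The limits (40 messages, 64K tokens) come from this page; the `summarize()` helper and the 4-characters-per-token estimate are hypothetical stand-ins for MiraBridge's actual summarization and counting logic, not its real implementation.

```python
MAX_MESSAGES = 40
MAX_TOKENS = 64_000

def summarize(messages):
    """Hypothetical stand-in: collapse old messages into one summary message."""
    text = " ".join(m["content"] for m in messages)
    return {"role": "system",
            "content": f"[Summary of {len(messages)} earlier messages] {text[:200]}"}

def estimate_tokens(message):
    # Rough assumption: ~4 characters per token.
    return max(1, len(message["content"]) // 4)

def trim_context(messages):
    """Keep the newest messages that fit the limits; summarize the overflow."""
    total = sum(estimate_tokens(m) for m in messages)
    if len(messages) <= MAX_MESSAGES and total <= MAX_TOKENS:
        return messages
    kept, used = [], 0
    for m in reversed(messages):  # walk newest-first
        t = estimate_tokens(m)
        if len(kept) + 1 > MAX_MESSAGES or used + t > MAX_TOKENS:
            break
        kept.append(m)
        used += t
    kept.reverse()
    older = messages[: len(messages) - len(kept)]
    return ([summarize(older)] if older else []) + kept
```

With 50 short messages, `trim_context` keeps the newest 40 and prepends one summary of the 10 oldest.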
Prompt Caching
MiraBridge uses Anthropic's prompt caching feature to achieve a 90% cache hit rate. This means:
- 4x faster time to first token
- Lower cost per request
- Consistent performance across conversations
The system prompt is 100% static to maximize caching efficiency.
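A cache-friendly request keeps the static system prompt as an identical, cacheable prefix on every call, with per-request content after it. The sketch below builds a request body in the shape Anthropic's Messages API uses for prompt caching (a `cache_control` breakpoint on the system block); the `SYSTEM_PROMPT` text and the function itself are illustrative, not MiraBridge's actual code.

```python
# Placeholder for MiraBridge's fully static system prompt.
SYSTEM_PROMPT = "You are a coding assistant..."

def build_request(user_messages, model):
    """Build a Messages API request body with a cacheable static prefix."""
    return {
        "model": model,
        "max_tokens": 1024,
        # Static prefix, marked cacheable: identical bytes on every request,
        # so Anthropic can reuse the cached prompt prefix.
        "system": [
            {
                "type": "text",
                "text": SYSTEM_PROMPT,
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Dynamic, per-request content stays after the cache breakpoint.
        "messages": user_messages,
    }
```

Any change to the system prompt invalidates the cached prefix, which is why keeping it fully static matters.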
Context Enrichment
Before each AI request, MiraBridge enriches the context with:
- Open files: contents of files you are editing
- Git context: current branch, recent changes, status
- Diagnostics: compiler errors, linting warnings
- Project type: detected framework and language
- Workspace rules: content from MIRABRIDGE.md and similar files
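Gathering these sources can be sketched as below. This is a minimal illustration, assuming the enrichment payload is a plain dictionary; the function names and shape are hypothetical, and git information is collected best-effort so enrichment still works outside a repository.

```python
from pathlib import Path
import subprocess

def git_context(repo_dir="."):
    """Best-effort git info; returns None when not in a git repository."""
    try:
        branch = subprocess.run(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        ).stdout.strip()
        status = subprocess.run(
            ["git", "status", "--porcelain"],
            cwd=repo_dir, capture_output=True, text=True, check=True,
        ).stdout
        return {"branch": branch, "dirty_files": len(status.splitlines())}
    except (OSError, subprocess.CalledProcessError):
        return None

def enrich_context(open_files, diagnostics, project_type,
                   rules_file="MIRABRIDGE.md"):
    """Assemble the enrichment payload attached to each AI request."""
    rules = Path(rules_file)
    return {
        "open_files": {p: Path(p).read_text()
                       for p in open_files if Path(p).is_file()},
        "git": git_context(),
        "diagnostics": diagnostics,
        "project_type": project_type,
        "workspace_rules": rules.read_text() if rules.is_file() else None,
    }
```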
Token Estimation
MiraBridge uses content-aware token estimation to ensure requests fit within provider limits. The estimation accounts for code density, language, and formatting.
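One way content-aware estimation can work is to vary the characters-per-token ratio by content type, since dense code typically yields more tokens per character than prose. The heuristic and ratios below are illustrative assumptions, not MiraBridge's actual calibration.

```python
def looks_like_code(text):
    """Cheap heuristic: code is symbol-dense and has indented lines."""
    if not text:
        return False
    symbols = sum(text.count(c) for c in "{}()[];=<>")
    lines = text.splitlines()
    indented = sum(1 for line in lines if line.startswith((" ", "\t")))
    return symbols / len(text) > 0.02 or indented > len(lines) // 2

def estimate_tokens(text):
    # Assumed ratios: ~3 chars/token for code, ~4 chars/token for prose.
    chars_per_token = 3.0 if looks_like_code(text) else 4.0
    return int(len(text) / chars_per_token) + 1

def fits_limit(text, limit=64_000):
    """Check an estimated request against a provider context limit."""
    return estimate_tokens(text) <= limit
```

For the same character count, the code ratio produces a higher (more conservative) token estimate than the prose ratio.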
Next Steps
- Workspace Rules: customize AI behavior
- Supported Models: model context windows