## Major Features Added
### 🎯 Dynamic Configuration System
- **Environment-aware model selection**: DEFAULT_MODEL with 'pro'/'flash' shortcuts (see the sketch after this list)
- **Configurable thinking modes**: DEFAULT_THINKING_MODE_THINKDEEP for extended reasoning
- **All tool schemas now dynamic**: Show actual current defaults instead of hardcoded values
- **Enhanced setup workflow**: Copy from .env.example with smart customization
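A minimal sketch of how these environment-driven defaults might be read, assuming a Python config module; the variable names come from the notes above, while the fallback values and module layout are assumptions:

```python
import os

# Read the environment-driven defaults described above.
# Fallback values here are illustrative, not the project's actual defaults.
DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "pro")  # 'pro'/'flash' shortcut or a full model name
DEFAULT_THINKING_MODE_THINKDEEP = os.getenv("DEFAULT_THINKING_MODE_THINKDEEP", "high")
```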
### 🔧 Model & Thinking Configuration
- **Smart model resolution**: Support both shortcuts ('pro', 'flash') and full model names
- **Thinking mode optimization**: Only apply thinking budget to models that support it
- **Flash model compatibility**: Works without a thinking configuration and still benefits from system prompts (see the sketch after this list)
- **Dynamic schema descriptions**: Tool parameters show current environment values
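A rough sketch of how shortcut resolution and thinking-budget gating could look; the 'pro'/'flash' shortcuts and the 16,384/32,768 token budgets appear elsewhere in this log, while the full model names and helper names are placeholders:

```python
# Placeholder mapping: substitute the real Gemini model ids used by the server.
MODEL_SHORTCUTS = {
    "pro": "gemini-pro-model-id",
    "flash": "gemini-flash-model-id",
}

# Token budgets per thinking mode (values taken from this log; other modes omitted).
THINKING_BUDGETS = {"high": 16_384, "max": 32_768}


def resolve_model(name: str) -> str:
    """Accept a shortcut ('pro', 'flash') or pass a full model name through unchanged."""
    return MODEL_SHORTCUTS.get(name.lower(), name)


def thinking_config(model: str, mode: str) -> dict:
    """Apply a thinking budget only to models that support it (simplified capability check)."""
    if "flash" in model:
        return {}  # Flash runs without a thinking config; guidance still comes via system prompts
    return {"thinking_budget": THINKING_BUDGETS.get(mode, THINKING_BUDGETS["high"])}
```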
### 🚀 Enhanced Developer Experience
- **Fail-fast Docker setup**: GEMINI_API_KEY required upfront in docker-compose
- **Comprehensive startup logging**: Shows current model and thinking mode defaults (see the sketch after this list)
- **Enhanced get_version tool**: Reports all dynamic configuration values
- **Better .env documentation**: Clear token consumption details and model options
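The compose file enforces the API key at container startup; a hypothetical equivalent expressed in Python, together with the default-reporting log lines (the environment variable names are from this log, the messages and structure are illustrative):

```python
import logging
import os

logger = logging.getLogger("server")

# Fail fast if the required API key is missing.
if not os.getenv("GEMINI_API_KEY"):
    raise RuntimeError("GEMINI_API_KEY must be set before starting the server")

# Report the active configuration defaults at startup.
logger.info("Default model: %s", os.getenv("DEFAULT_MODEL", "pro"))
logger.info("ThinkDeep thinking mode: %s", os.getenv("DEFAULT_THINKING_MODE_THINKDEEP", "high"))
```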
### 🧪 Comprehensive Testing
- **Live model validation**: New simulator test validates Pro vs Flash thinking behavior
- **Dynamic configuration tests**: Verify environment variable overrides work correctly
- **Complete test coverage**: All 139 unit tests pass, including new model config tests
### 📋 Configuration Files Updated
- **docker-compose.yml**: Fail-fast API key validation, thinking mode support
- **setup-docker.sh**: Copy from .env.example instead of manual creation
- **.env.example**: Detailed documentation with token consumption per thinking mode
- **.gitignore**: Added test-setup/ for cleanup
### 🛠 Technical Improvements
- **Removed setup.py**: Fully Docker-based deployment (no longer needed)
- **REDIS_URL smart defaults**: Auto-configured for Docker, still configurable for local development (see the sketch after this list)
- **All tools updated**: Consistent dynamic model parameter descriptions
- **Enhanced error handling**: Better model resolution and validation
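A one-line sketch of the REDIS_URL smart default, assuming the Redis container is reachable under the hostname `redis` inside Docker (the hostname, port, and database number are assumptions):

```python
import os

# Inside Docker the compose-provided hostname is used by default;
# local development can still override this via .env.
REDIS_URL = os.getenv("REDIS_URL", "redis://redis:6379/0")
```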
## Breaking Changes
- Removed setup.py (Docker-only deployment)
- Model parameter descriptions now show actual defaults (dynamic)
## Migration Guide
- Update .env files using new .env.example format
- Use 'pro'/'flash' shortcuts or full model names
- Set DEFAULT_THINKING_MODE_THINKDEEP for custom thinking depth
This major refactoring addresses critical bugs in conversation history management
and significantly improves token efficiency through intelligent file embedding:
**Key Improvements:**
• Fixed conversation history duplication bug by centralizing reconstruction in server.py
• Added intelligent file filtering to prevent re-embedding files already in conversation history
• Centralized file processing logic in BaseTool._prepare_file_content_for_prompt()
• Enhanced log monitoring with better categorization and file embedding visibility
• Updated comprehensive test suite to verify new architecture and edge cases
**Architecture Changes:**
• Removed duplicate conversation history reconstruction from tools/base.py
• Conversation history now handled exclusively by server.py:reconstruct_thread_context
• All tools now use centralized file processing with automatic deduplication (see the sketch after this list)
• Improved token efficiency by embedding unique files only once per conversation
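A simplified sketch of the deduplication idea, assuming the set of already-embedded files is known from the reconstructed conversation history; the function and parameter names are placeholders, not the actual BaseTool API:

```python
import logging

logger = logging.getLogger("file_embedding")


def filter_new_files(requested: list[str], already_embedded: set[str]) -> list[str]:
    """Return only the files not yet embedded earlier in the conversation."""
    new_files = [path for path in requested if path not in already_embedded]
    skipped = set(requested) - set(new_files)
    if skipped:
        # Debug visibility into which files were deduplicated away.
        logger.debug("Skipping already-embedded files: %s", sorted(skipped))
    return new_files
```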
**Performance Benefits:**
• Reduced token usage through smart file filtering
• Eliminated redundant file embeddings in continued conversations
• Better observability with detailed debug logging for file operations
This commit addresses several critical issues and delivers a number of improvements:
🔧 Critical Fixes:
- Fixed conversation history not being included when using continuation_id in AI-to-AI conversations
- Fixed test mock targeting issues preventing proper conversation memory validation
- Fixed Docker debug logging functionality with Gemini tools
🐛 Bug Fixes:
- Fixed Docker Compose configuration for proper container command execution
- Corrected test mock import targets from utils.conversation_memory.* to tools.base.*
- Version bump to 3.1.0 reflecting significant improvements
🚀 Improvements:
- Enhanced Docker environment configuration with comprehensive logging setup
- Added cross-tool continuation documentation and examples in README
- Improved error handling and validation across all tools
- Better logging configuration with LOG_LEVEL environment variable support (see the sketch after this list)
- Enhanced conversation memory system documentation
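A minimal sketch of LOG_LEVEL support; the variable name is from this log, while the default level and format are assumptions:

```python
import logging
import os

# Honour LOG_LEVEL from the environment, defaulting to INFO.
logging.basicConfig(
    level=os.getenv("LOG_LEVEL", "INFO").upper(),
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
```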
🧪 Testing:
- Added comprehensive conversation history bug fix tests
- Added cross-tool continuation functionality tests
- All 132 tests now pass with proper conversation history validation
- Improved test coverage for AI-to-AI conversation threading
✨ Code Quality:
- Applied black, isort, and ruff formatting across entire codebase
- Enhanced inline documentation for conversation memory system
- Cleaned up temporary files and improved repository hygiene
- Better test descriptions and coverage for critical functionality
• Moved AI-to-AI Conversation Threading section after "Start Using It!"
• Now appears before Available Tools for logical progression
• Users learn basics first, then discover advanced collaboration features
• Improves documentation flow and user experience
Major release introducing true AI collaboration capabilities:
• AI-to-AI conversation threading with Redis persistence (see the sketch after this list)
• Asynchronous workflow support - Claude can work between exchanges
• Incremental updates automatically bypass MCP's 25K token limit
• Cross-questioning and coordinated problem-solving between AIs
• Enhanced documentation with SwiftUI vs UIKit debate example
• Comprehensive test coverage and linting compliance
• Docker-first architecture with simplified setup
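A rough sketch of Redis-persisted threading, assuming each conversation is keyed by a continuation id and every exchange is appended so either AI can resume it later; the key schema, TTL, and client setup are assumptions, not the project's actual implementation:

```python
import json
import uuid

import redis

client = redis.Redis.from_url("redis://localhost:6379/0")
TTL_SECONDS = 3600  # expire stale threads after an hour (illustrative)


def start_thread() -> str:
    """Create a new conversation thread and return its continuation id."""
    thread_id = str(uuid.uuid4())
    client.set(f"thread:{thread_id}", json.dumps([]), ex=TTL_SECONDS)
    return thread_id


def add_turn(thread_id: str, role: str, content: str) -> None:
    """Append one exchange to an existing thread."""
    key = f"thread:{thread_id}"
    turns = json.loads(client.get(key) or "[]")
    turns.append({"role": role, "content": content})
    client.set(key, json.dumps(turns), ex=TTL_SECONDS)
```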
Remove Windows batch scripts and native setup instructions since Claude Code CLI
requires WSL on Windows. Standardize on Docker as the primary recommendation across all platforms.
Changes:
- Remove setup.bat, run_gemini.bat, setup-docker-env.bat
- Remove examples/claude_config_windows.json
- Update README to clarify WSL requirement for Windows users
- Promote Docker as the recommended setup for all platforms
- Update troubleshooting section for WSL-only support
- Apply code formatting fixes from ruff/black
- Update DEBUG_ISSUE_PROMPT to request files when diagnostics are irrelevant/incomplete
- Clarify the question field to specify what information is needed from Claude or the user
- Update chat tool format_response to ask Claude to continue with user's request
- Update debug tool format_response to encourage implementing fixes when root cause identified
- Fix line length formatting to comply with 120 char limit
- Add file request capability using JSON format like THINKDEEP_PROMPT
- Include technology stack awareness and grounding instructions
- Ensure collaboration stays within project scope and constraints
- Fix trailing whitespace issues identified by ruff linter
- Add technology-specific considerations to all tool prompts (THINKDEEP, CODEREVIEW, DEBUG, ANALYZE, PRECOMMIT)
- Replace specific technology lists with generic key areas to prevent misdirection
- Add comprehensive grounding statements to prevent overstepping scope
- Enhance DEBUG_ISSUE_PROMPT with regression prevention guidance
- Add scope discipline and focus requirements to all prompts
- Fix trailing whitespace issues
Add comprehensive technology-specific review categories and instructions to analyze the codebase's technology stack before reviewing. This provides more targeted and relevant code reviews by adapting to different frameworks and patterns.
- Fixed code formatting in server.py using black
- Only formatting changes, no functional modifications
- Ensures consistent code style across the project
All tests pass (96/96) and linting is clean:
- ✅ pytest: 96 tests passed
- ✅ ruff: all checks passed
- ✅ black: formatting applied
- ✅ isort: import sorting correct
- Replaced plain dict capabilities with proper ServerCapabilities object
- Added proper imports for ServerCapabilities and ToolsCapability from mcp.types
- Now correctly declares tools capability using ToolsCapability() instead of empty dict
- This follows the MCP protocol specification for server capability declaration
The previous implementation used {"tools": {}}, which was not the correct way
to declare tool capabilities according to the MCP Python SDK. This fix ensures
proper capability negotiation during MCP client connection initialization.
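A before/after sketch of the capability declaration; the imports and the ServerCapabilities/ToolsCapability usage come from the commit above, while how the object is wired into the server's initialization options is omitted and depends on the MCP SDK version in use:

```python
from mcp.types import ServerCapabilities, ToolsCapability

# Before: a plain dict, which the SDK does not negotiate correctly.
# capabilities = {"tools": {}}

# After: an explicit ServerCapabilities object declaring the tools capability.
capabilities = ServerCapabilities(tools=ToolsCapability())
```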
- Corrected thinking modes table to show `high` as default for thinkdeep (not `max`)
- Moved the default notation from `max` row to `high` row where it belongs
- This aligns with the actual tool implementation which uses "high" as default
The thinkdeep tool uses "high" (16,384 tokens) as its default thinking mode,
not "max" (32,768 tokens) as previously documented.
- Removed all "Triggers:" lines from tool descriptions in README
- The trigger words were redundant since prompt examples already show usage
- Makes documentation cleaner and more focused on actual usage patterns
The prompt examples throughout the documentation already demonstrate
how to invoke each tool, making the trigger word lists unnecessary.