Encourage Claude to automatically pick the best model for the job when auto mode is enabled
Add extensive new tests to ensure automatic model selection works reliably, whether driven by user preference or when a matching model is missing or ambiguous
Improve error reporting when a requested model is not configured or available
## Major Features Added
### 🎯 Dynamic Configuration System
- **Environment-aware model selection**: DEFAULT_MODEL with 'pro'/'flash' shortcuts
- **Configurable thinking modes**: DEFAULT_THINKING_MODE_THINKDEEP for extended reasoning
- **All tool schemas now dynamic**: Show actual current defaults instead of hardcoded values
- **Enhanced setup workflow**: Copy from .env.example with smart customization
### 🔧 Model & Thinking Configuration
- **Smart model resolution**: Supports both shortcuts ('pro', 'flash') and full model names (see the sketch after this list)
- **Thinking mode optimization**: Only apply thinking budget to models that support it
- **Flash model compatibility**: Works without thinking config, still beneficial via system prompts
- **Dynamic schema descriptions**: Tool parameters show current environment values
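
For illustration, a minimal sketch of how shortcut resolution and the thinking-support check could fit together. DEFAULT_MODEL and the 'pro'/'flash' shortcuts come from this changelog; the concrete model IDs and the mapping itself are assumptions:

```python
import os

# Hypothetical shortcut table; the real mapping and model IDs may differ.
MODEL_SHORTCUTS = {
    "pro": "gemini-2.5-pro",
    "flash": "gemini-2.5-flash",
}

# Models assumed to accept an explicit thinking budget; Flash is covered via system prompts only.
THINKING_CAPABLE_MODELS = {"gemini-2.5-pro"}


def resolve_model(requested: str | None = None) -> str:
    """Resolve a 'pro'/'flash' shortcut or full model name, falling back to DEFAULT_MODEL."""
    name = (requested or os.getenv("DEFAULT_MODEL", "pro")).strip().lower()
    return MODEL_SHORTCUTS.get(name, name)


def supports_thinking(model_name: str) -> bool:
    """Only models in the capability set should receive a thinking budget."""
    return model_name in THINKING_CAPABLE_MODELS
```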
### 🚀 Enhanced Developer Experience
- **Fail-fast Docker setup**: GEMINI_API_KEY is required upfront in docker-compose (see the sketch after this list)
- **Comprehensive startup logging**: Shows current model and thinking mode defaults
- **Enhanced get_version tool**: Reports all dynamic configuration values
- **Better .env documentation**: Clear token consumption details and model options
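
The fail-fast check and startup logging could look roughly like this in Python; the log messages and fallback defaults shown here are assumptions:

```python
import logging
import os
import sys

logger = logging.getLogger("gemini-mcp")


def validate_startup_config() -> None:
    """Fail fast when the API key is missing, then log the effective defaults."""
    if not os.getenv("GEMINI_API_KEY"):
        logger.error("GEMINI_API_KEY is not set; aborting startup")
        sys.exit(1)

    # Values logged here mirror the dynamic configuration described above.
    logger.info("Default model: %s", os.getenv("DEFAULT_MODEL", "pro"))
    logger.info(
        "Default thinkdeep thinking mode: %s",
        os.getenv("DEFAULT_THINKING_MODE_THINKDEEP", "high"),
    )
```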
### 🧪 Comprehensive Testing
- **Live model validation**: New simulator test validates Pro vs Flash thinking behavior
- **Dynamic configuration tests**: Verify environment variable overrides work correctly
- **Complete test coverage**: All 139 unit tests pass, including new model config tests
### 📋 Configuration Files Updated
- **docker-compose.yml**: Fail-fast API key validation, thinking mode support
- **setup-docker.sh**: Copy from .env.example instead of manual creation
- **.env.example**: Detailed documentation with token consumption per thinking mode
- **.gitignore**: Added test-setup/ for cleanup
### 🛠 Technical Improvements
- **Removed setup.py**: No longer needed now that deployment is fully Docker-based
- **REDIS_URL smart defaults**: Auto-configured for Docker, still configurable for dev
- **All tools updated**: Consistent dynamic model parameter descriptions
- **Enhanced error handling**: Better model resolution and validation
## Breaking Changes
- Removed setup.py (Docker-only deployment)
- Model parameter descriptions now show actual defaults (dynamic)
## Migration Guide
- Update .env files using new .env.example format
- Use 'pro'/'flash' shortcuts or full model names
- Set DEFAULT_THINKING_MODE_THINKDEEP for custom thinking depth
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
This major refactoring addresses critical bugs in conversation history management
and significantly improves token efficiency through intelligent file embedding:
**Key Improvements:**
• Fixed conversation history duplication bug by centralizing reconstruction in server.py
• Added intelligent file filtering to prevent re-embedding files already in conversation history
• Centralized file processing logic in BaseTool._prepare_file_content_for_prompt()
• Enhanced log monitoring with better categorization and file embedding visibility
• Updated comprehensive test suite to verify new architecture and edge cases
**Architecture Changes:**
• Removed duplicate conversation history reconstruction from tools/base.py
• Conversation history now handled exclusively by server.py:reconstruct_thread_context
• All tools now use centralized file processing with automatic deduplication (see the sketch after this list)
• Improved token efficiency by embedding unique files only once per conversation
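
As a rough sketch of the deduplication step (the real logic lives in BaseTool._prepare_file_content_for_prompt(); names and signature here are illustrative):

```python
import logging
from typing import Iterable

logger = logging.getLogger(__name__)


def filter_new_files(requested: Iterable[str], already_embedded: set[str]) -> list[str]:
    """Return only the files not yet embedded in this conversation.

    `already_embedded` is assumed to be collected while server.py reconstructs
    the thread context; the production helper may differ in shape.
    """
    new_files = [path for path in requested if path not in already_embedded]
    skipped = [path for path in requested if path in already_embedded]
    if skipped:
        logger.debug("Skipping %d already-embedded file(s): %s", len(skipped), skipped)
    return new_files
```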
**Performance Benefits:**
• Reduced token usage through smart file filtering
• Eliminated redundant file embeddings in continued conversations
• Better observability with detailed debug logging for file operations
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit addresses several critical issues and improvements:
🔧 Critical Fixes:
- Fixed conversation history not being included when using continuation_id in AI-to-AI conversations
- Fixed test mock targeting issues preventing proper conversation memory validation
- Fixed Docker debug logging functionality with Gemini tools
🐛 Bug Fixes:
- Fixed docker-compose configuration so the container command executes correctly
- Corrected test mock patch targets from utils.conversation_memory.* to tools.base.* (illustrated below)
- Bumped version to 3.1.0 to reflect these improvements
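
The mock-targeting fix follows the standard unittest.mock rule of patching a name where it is looked up, not where it is defined. A hypothetical example, assuming tools/base.py imports a helper such as get_thread from utils.conversation_memory (the names are illustrative):

```python
from unittest.mock import patch

# Wrong: patches the definition site, so code in tools/base.py still calls the real function.
#   patch("utils.conversation_memory.get_thread")

# Right: patch the name where tools/base.py looks it up.
with patch("tools.base.get_thread") as fake_get_thread:
    fake_get_thread.return_value = None  # e.g. simulate a missing conversation thread
    # ... run the tool under test here ...
```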
🚀 Improvements:
- Enhanced Docker environment configuration with comprehensive logging setup
- Added cross-tool continuation documentation and examples in README
- Improved error handling and validation across all tools
- Better logging configuration with LOG_LEVEL environment variable support
- Enhanced conversation memory system documentation
🧪 Testing:
- Added comprehensive conversation history bug fix tests
- Added cross-tool continuation functionality tests
- All 132 tests now pass with proper conversation history validation
- Improved test coverage for AI-to-AI conversation threading
✨ Code Quality:
- Applied black, isort, and ruff formatting across entire codebase
- Enhanced inline documentation for conversation memory system
- Cleaned up temporary files and improved repository hygiene
- Better test descriptions and coverage for critical functionality
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Changed use_websearch default from False to True for all tools
- Redesigned websearch behavior to focus on reasoning rather than direct search
- Tools now analyze when web searches would be helpful and recommend specific searches
- Updated all tool-specific websearch instructions to be more thoughtful
- Gemini will now suggest what Claude should search for and explain why
Key improvements:
- Web search is now enabled by default for better documentation awareness
- Instead of performing searches, Gemini reasons about what searches would help
- Provides specific search recommendations for Claude to execute
- More collaborative approach between Gemini and Claude
This change makes the tools more intelligent about when external information
would be valuable while keeping Claude in control of actual web searches.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Renamed `review_code` tool to `codereview` for better naming convention
- Renamed `review_changes` tool to `precommit` to better reflect its purpose
- Updated all tool descriptions to remove "Triggers:" sections and improve clarity
- Updated all imports and references throughout the codebase
- Renamed test files to match new tool names
- Updated server.py tool registrations
- All existing functionality preserved with improved naming
This refactoring improves code organization and makes tool purposes clearer.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add use_websearch parameter to base ToolRequest class
- All tools now inherit web search capability automatically
- Enhanced description emphasizes brainstorming and architectural design
- Add get_websearch_instruction() helper method to base class (sketched below)
- Update all tools to include web search in their prompts when enabled
- Update README documentation with web search examples
- Fix linting issues (ruff, black formatting)
Web search is particularly useful for:
- Brainstorming sessions and architectural design discussions
- Exploring industry best practices
- Working with specific frameworks/technologies
- Researching solutions to complex problems
- Getting current documentation and community insights
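
A simplified sketch of how the shared request model and helper could look. The field and method names come from this changelog, but the signature and instruction wording are assumptions (a later change flips the default to True):

```python
from pydantic import BaseModel


class ToolRequest(BaseModel):
    """Base request model inherited by every tool (simplified)."""

    use_websearch: bool = False  # later changed to default to True


class BaseTool:
    def get_websearch_instruction(self, use_websearch: bool) -> str:
        """Return prompt text enabling web-search-aware reasoning when requested."""
        if not use_websearch:
            return ""
        return (
            "If current documentation or community insight would help, "
            "recommend specific web searches and explain why they are useful."
        )
```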
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Fixed review_changes tool to properly translate host paths to container paths in Docker (sketched below)
- Prevents "No such file or directory" errors when running in Docker containers
- Added proper error handling with clear messages when paths are inaccessible
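
A minimal sketch of the path translation idea, assuming the host project root is mounted at a fixed point inside the container; both paths below are hypothetical:

```python
from pathlib import Path

HOST_ROOT = Path("/Users/me/project")   # hypothetical host project root
CONTAINER_ROOT = Path("/workspace")     # hypothetical container mount point


def translate_path(host_path: str) -> Path:
    """Map a host path to its container equivalent, with a clear error if it is outside the mount."""
    path = Path(host_path)
    try:
        relative = path.relative_to(HOST_ROOT)
    except ValueError as exc:
        raise ValueError(
            f"{host_path} is not under {HOST_ROOT}; it is not accessible inside the container"
        ) from exc
    return CONTAINER_ROOT / relative
```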
refactor: Centralized token limit validation across all tools
- Added a _validate_token_limit method to BaseTool to eliminate code duplication (sketched below)
- Reduced ~25 lines of duplicated code across 5 tools (analyze, chat, debug_issue, review_code, think_deeper)
- Maintains exact same error messages and behavior
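
A sketch of what the shared validation could look like; the token estimate and the limit below are placeholders, not the project's real values:

```python
class BaseTool:
    MAX_CONTEXT_TOKENS = 1_000_000  # placeholder limit for this sketch

    def _validate_token_limit(self, content: str, content_type: str = "Context") -> None:
        """Raise a consistent error when content would exceed the context window."""
        estimated_tokens = len(content) // 4  # crude chars-per-token heuristic
        if estimated_tokens > self.MAX_CONTEXT_TOKENS:
            raise ValueError(
                f"{content_type} too large: ~{estimated_tokens} tokens exceeds "
                f"the {self.MAX_CONTEXT_TOKENS}-token limit"
            )
```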
feat: Enhanced large prompt handling
- Added support for prompts >50K chars by requesting file-based input (sketched below)
- Preserves MCP's ~25K token capacity for responses
- All tools now check prompt size before processing
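
The size check might look roughly like this; the 50K threshold comes from the text above, while the response shape is an assumption:

```python
MCP_PROMPT_SIZE_LIMIT = 50_000  # characters


def check_prompt_size(prompt: str) -> dict | None:
    """Ask the caller to resend oversized prompts as a file, preserving the MCP response budget."""
    if len(prompt) > MCP_PROMPT_SIZE_LIMIT:
        return {
            "status": "requires_file_prompt",
            "reason": (
                f"Prompt is {len(prompt)} characters (limit {MCP_PROMPT_SIZE_LIMIT}); "
                "please save it to a file and pass the file path instead."
            ),
        }
    return None
```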
test: Added comprehensive Docker path integration tests
- Tests for path translation, security validation, and error handling
- Tests for review_changes tool specifically with Docker paths
- Fixed failing think_deeper test (updated default from "max" to "high")
chore: Code quality improvements
- Applied black formatting across all files
- Fixed import sorting with isort
- All tests passing (96 tests)
- Standardized error handling follows MCP TextContent format
The changes ensure consistent behavior across all environments while reducing code duplication and improving maintainability.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add new review_pending_changes tool for comprehensive pre-commit reviews
- Implement filesystem sandboxing with MCP_PROJECT_ROOT
- Enforce absolute paths for all file/directory operations (validation sketched after this list)
- Add comprehensive git utilities for repository management
- Update all tools to use centralized path validation
- Add extensive test coverage for new features and security model
- Update documentation with new tool and path requirements
- Remove obsolete demo and guide files
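
A sketch of the sandbox check, assuming MCP_PROJECT_ROOT is set in the environment; the error messages are illustrative:

```python
import os
from pathlib import Path


def validate_path(user_path: str) -> Path:
    """Require absolute paths and keep them inside the MCP_PROJECT_ROOT sandbox."""
    project_root = Path(os.environ["MCP_PROJECT_ROOT"]).resolve()
    path = Path(user_path)
    if not path.is_absolute():
        raise ValueError(f"Relative paths are not allowed: {user_path}")
    resolved = path.resolve()
    if not resolved.is_relative_to(project_root):  # Python 3.9+
        raise ValueError(f"{user_path} is outside MCP_PROJECT_ROOT ({project_root})")
    return resolved
```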
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major improvements to thinking capabilities and API integration:
- Remove all output token limits for future-proof responses
- Add 5-level thinking mode system: minimal, low, medium, high, max
- Migrate from google-generativeai to google-genai library
- Implement native thinkingBudget support for Gemini 2.5 Pro
- Set medium thinking as default for all tools, max for think_deeper
🧠 Thinking Modes (mapping sketched after this list):
- minimal (128 tokens) - simple tasks
- low (2048 tokens) - basic reasoning
- medium (8192 tokens) - default for most tools
- high (16384 tokens) - complex analysis
- max (32768 tokens) - default for think_deeper
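
A sketch of how the mode names could map onto a google-genai thinking budget; the budgets come from the list above, while the helper itself is illustrative:

```python
from google.genai import types

THINKING_BUDGETS = {
    "minimal": 128,
    "low": 2_048,
    "medium": 8_192,
    "high": 16_384,
    "max": 32_768,
}


def build_generation_config(thinking_mode: str = "medium") -> types.GenerateContentConfig:
    """Translate a thinking mode name into a generation config with a thinking budget."""
    return types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=THINKING_BUDGETS[thinking_mode]
        )
    )
```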
🔧 Technical Changes:
- Complete migration to google-genai>=1.19.0
- Remove google-generativeai dependency
- Add ThinkingConfig with thinking_budget parameter
- Update all tools to support thinking_mode parameter
- Comprehensive test suite with 37 passing unit tests
- CI-friendly testing (no API key required for unit tests)
- Live integration tests for API verification
🧪 Testing & CI:
- Add GitHub Actions workflow with multi-Python support
- Unit tests use mocks, no API key required
- Live integration tests optional with API key
- Contributing guide with development setup
- All tests pass without external dependencies
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major improvements to file handling capabilities:
- Add directory traversal support to all file-processing tools
- Tools now accept both individual files and entire directories
- Automatically expand directories and discover code files recursively
- Smart filtering: skip hidden files, __pycache__, and non-code files
- Progressive token loading: read as many files as possible within limits
- Clear file separation markers with full paths for Gemini
Key changes:
- Rewrite file_utils.py with expand_paths() and improved read_files() (expand_paths() sketched below)
- Update all tool descriptions to indicate directory support
- Add comprehensive tests for directory handling and token limits
- Document tool parameters and examples in README
- Bump version to 2.4.2
All tools (analyze, review_code, debug_issue, think_deeper) now support:
- Single files: "analyze main.py"
- Directories: "review src/"
- Mixed paths: "analyze config.py, src/, tests/"
This enables analyzing entire projects or specific subsystems efficiently
while respecting token limits and providing clear file boundaries.
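
A condensed sketch of the traversal and filtering described above; the extension allow-list here is an illustrative subset:

```python
from pathlib import Path

CODE_EXTENSIONS = {".py", ".js", ".ts", ".java", ".go"}  # illustrative subset


def expand_paths(paths: list[str]) -> list[Path]:
    """Expand files and directories into a flat, de-duplicated list of code files."""
    collected: set[Path] = set()
    for raw in paths:
        path = Path(raw)
        candidates = path.rglob("*") if path.is_dir() else [path]
        for candidate in candidates:
            if not candidate.is_file():
                continue
            # Skip hidden files/directories and __pycache__, per the filtering above.
            if any(part.startswith(".") or part == "__pycache__" for part in candidate.parts):
                continue
            if candidate.suffix not in CODE_EXTENSIONS:
                continue
            collected.add(candidate)
    return sorted(collected)
```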
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Applied isort to sort all imports according to PEP 8 (example below):
- Standard library imports first
- Third-party imports second
- Local imports last
- Alphabetical ordering within each group
All tests still passing after import reordering.
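
For reference, the resulting ordering looks like this (the module names are just examples from this project):

```python
# Standard library
import os
from pathlib import Path

# Third-party
from pydantic import BaseModel

# Local
from tools.base import BaseTool
```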
Major redesign of Gemini MCP Server with modular architecture:
- Removed all emoji characters from tool outputs for clean terminal display
- Kept review category emojis (🔴🟠🟡🟢) per user preference
- Added 4 specialized tools:
- think_deeper: Extended reasoning and problem-solving (temp 0.7)
- review_code: Professional code review with severity levels (temp 0.2)
- debug_issue: Root cause analysis and debugging (temp 0.2)
- analyze: General-purpose file analysis (temp 0.2)
- Modular architecture with base tool class and Pydantic models (sketched below)
- Verbose tool descriptions with natural language triggers
- Updated README with comprehensive examples and real-world use cases
- All 25 tests passing, type checking clean, critical linting clean
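
A simplified sketch of that shape; the class and field names are illustrative, with only the tool name and temperature taken from the list above:

```python
from abc import ABC, abstractmethod

from pydantic import BaseModel


class ThinkDeeperRequest(BaseModel):
    """Request model for the think_deeper tool (simplified)."""

    prompt: str
    files: list[str] = []


class BaseTool(ABC):
    """Shared behaviour for the specialized tools; each subclass picks its own temperature."""

    name: str
    default_temperature: float

    @abstractmethod
    def get_system_prompt(self) -> str: ...


class ThinkDeeperTool(BaseTool):
    name = "think_deeper"
    default_temperature = 0.7  # extended reasoning, per the list above

    def get_system_prompt(self) -> str:
        # Illustrative prompt text only.
        return "Act as a senior engineer extending and stress-testing the given analysis."
```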
BREAKING CHANGE: Removed analyze_code tool in favor of specialized tools
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>