- Apply black formatting for consistent code style
- Fix line length issues for linting compliance
- All 26 tests passing with 85% coverage
- No unused functions or variables detected
- Code is clean and ready for production
Final validation complete - implementation is robust and follows best practices.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
BREAKING CHANGES:
- Remove verbose_output from the tool schema so Claude Code can't use it accidentally
- Always show minimal terminal output with file previews
- Improved file content formatting for Gemini with clear delimiters
Key improvements:
- Files formatted as "--- BEGIN FILE: path --- content --- END FILE: path ---"
- Direct code formatted as "--- BEGIN DIRECT CODE --- code --- END DIRECT CODE ---"
- Terminal shows file paths, sizes, and small previews (not full content)
- Clear prompt structure for Gemini: USER REQUEST | CODE TO ANALYZE sections
- Prevents terminal hangs/glitches with large files in Claude Code
- All tests updated and passing
This ensures Claude Code stays responsive while Gemini gets properly formatted content; a sketch of the formatting follows below.
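A minimal sketch of that delimiter wrapping and terminal preview; the helper names (`format_files_for_gemini`, `preview_for_terminal`) and the preview length are assumptions, not the exact implementation.

```python
from typing import Dict, List


def format_files_for_gemini(files: Dict[str, str]) -> str:
    """Wrap each file in explicit BEGIN/END delimiters so Gemini can tell
    where one file ends and the next begins."""
    sections: List[str] = []
    for path, content in files.items():
        sections.append(
            f"--- BEGIN FILE: {path} ---\n{content}\n--- END FILE: {path} ---"
        )
    return "\n\n".join(sections)


def preview_for_terminal(path: str, content: str, preview_chars: int = 200) -> str:
    """Show only the path, size, and a short preview, never the full content,
    so large files cannot hang the Claude Code terminal."""
    size_kb = len(content.encode("utf-8")) / 1024
    preview = content[:preview_chars].replace("\n", " ")
    return f"{path} ({size_kb:.1f} KB): {preview}..."
```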
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Replace tuple[str, str] with Tuple[str, str] for Python 3.8 compatibility
- Remove unused imports (Union, NotificationOptions)
- Fix line length issues by breaking long lines
- Add verbose_output field to analyze_code tool schema
- Apply black and isort formatting
- All tests pass and linting issues resolved
This should fix the GitHub Actions failures on Python 3.8 and 3.9.
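For context, `tuple[str, str]` as a built-in generic only works from Python 3.9 (PEP 585); on 3.8 the annotation is evaluated at definition time and raises a TypeError, so `typing.Tuple` is required. A minimal illustration (the function itself is hypothetical):

```python
from typing import Tuple


# Works on Python 3.8+; the bare `tuple[str, str]` form fails at import
# time on 3.8 with "TypeError: 'type' object is not subscriptable".
def split_request(raw: str) -> Tuple[str, str]:
    """Split raw input into (user_request, code_to_analyze)."""
    head, _, tail = raw.partition("\n---\n")
    return head, tail
```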
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add verbose_output parameter (default: False) to CodeAnalysisRequest
- Modify prepare_code_context to return both full context and summary
- Show only file paths and sizes in terminal by default, not full content
- Full file content is still sent to Gemini for analysis
- Add comprehensive tests for verbose output functionality
This prevents terminal hangs when analyzing large files while still providing
Gemini with complete file contents for analysis.
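A hedged sketch of how the request model and the context/summary split could fit together, assuming a Pydantic model; field names other than `verbose_output` and the exact `prepare_code_context` signature are assumptions.

```python
from typing import List, Tuple

from pydantic import BaseModel, Field


class CodeAnalysisRequest(BaseModel):
    question: str
    files: List[str] = Field(default_factory=list)
    # Off by default: the terminal shows only paths and sizes unless
    # the caller explicitly opts in to full content.
    verbose_output: bool = False


def prepare_code_context(files: List[str]) -> Tuple[str, str]:
    """Return (full_context, summary): the full context goes to Gemini,
    the summary is what gets printed in the terminal."""
    full_parts, summary_parts = [], []
    for path in files:
        with open(path, "r", encoding="utf-8") as handle:
            content = handle.read()
        full_parts.append(content)
        summary_parts.append(f"{path} ({len(content)} chars)")
    return "\n\n".join(full_parts), "\n".join(summary_parts)
```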
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major enhancements for Claude Code integration:
Temperature Optimization:
- Chat: 0.5 (balanced accuracy/creativity for development discussions)
- Code Analysis: 0.2 (high precision for code reviews and debugging)
Enhanced Developer Context:
- Rewritten system prompt focusing on Claude Code augmentation
- Emphasizes precision, best practices, and actionable solutions
- Positions Gemini as an extension for large context tasks
Claude Code-Centric Documentation:
- README completely rewritten for Claude Code users
- Clear configuration instructions with file paths
- Practical examples for common development scenarios
- Quick start guide with natural language usage
Key improvements:
- Lower temperatures for more accurate, deterministic responses
- Developer-first approach in all interactions
- Clear positioning as Claude's extended context handler
- Comprehensive setup guide for Claude Desktop integration
The server now acts as a specialized developer assistant that extends
Claude Code's capabilities for large-context tasks.
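A minimal sketch of the per-tool temperature wiring; only the 0.5 and 0.2 values come from this commit, while the constant names, the tool-name check, and the returned dict shape are assumptions.

```python
TEMPERATURE_CHAT = 0.5           # balanced accuracy/creativity for discussions
TEMPERATURE_CODE_ANALYSIS = 0.2  # high precision for reviews and debugging


def build_generation_config(tool_name: str, max_tokens: int = 8192) -> dict:
    """Pick the temperature for a request based on which tool is being served."""
    temperature = (
        TEMPERATURE_CODE_ANALYSIS if tool_name == "analyze_code" else TEMPERATURE_CHAT
    )
    return {"temperature": temperature, "max_output_tokens": max_tokens}
```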
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
When used from Claude Code, the Gemini MCP server now automatically
injects a developer-focused system prompt that mirrors Claude Code's own
behavior. This ensures Gemini responds with the same developer mindset:
- Expert software development knowledge
- Clean code practices
- Debugging and problem-solving focus
- Clear technical explanations
- Architecture and design understanding
- Performance optimization expertise
The system prompt is automatically applied when no custom system prompt
is provided, making the integration seamless for Claude Code users.
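A sketch of the fallback described above, assuming the chat handler receives an optional `system_prompt`; the default prompt text here is illustrative and only summarizes the themes listed in this commit.

```python
from typing import Optional

# Illustrative default; the real prompt covers the same developer themes
# listed above (clean code, debugging, architecture, performance).
DEVELOPER_SYSTEM_PROMPT = (
    "You are an expert software developer assisting Claude Code. "
    "Favor clean code, clear technical explanations, sound architecture, "
    "and practical debugging and performance advice."
)


def resolve_system_prompt(system_prompt: Optional[str]) -> str:
    """Use the caller's prompt if provided; otherwise inject the developer default."""
    return system_prompt if system_prompt else DEVELOPER_SYSTEM_PROMPT
```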
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Changes:
- Restored Gemini 2.5 Pro Preview as the default model
- Removed hardcoded paths from claude_config_example.json
- Added MCP_DISCOVERY.md explaining how Claude discovers MCP servers
- Updated README with natural language usage examples
The server now defaults to the most capable Gemini 2.5 Pro Preview model
as requested, and all paths are now relative for better portability.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Major improvements:
- Default model set to Gemini 1.5 Pro (more reliable than 2.5 Preview)
- Added analyze_code tool for processing large files and codebases
- Support for 1M token context window
- File reading capabilities for automatic code ingestion
- Enhanced documentation with usage examples
- Added USAGE.md guide for Claude Code users
Changes:
- Updated default model configuration with fallback note
- Increased default max_tokens to 8192 for better responses
- Added CodeAnalysisRequest model for structured code analysis
- Implemented file reading with proper error handling
- Added token estimation (~4 chars per token)
- Created comprehensive test suite for new features
This update makes the server ideal for handling large files that exceed
Claude's token limits, enabling seamless handoff to Gemini for extended
analysis and thinking.
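A hedged sketch of the ~4-characters-per-token estimate and the guarded file reading mentioned above; both function names are assumptions.

```python
from pathlib import Path


def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters per token heuristic."""
    return max(1, len(text) // 4)


def read_file_for_analysis(path: str) -> str:
    """Read a file for code analysis, returning an error marker instead of raising."""
    try:
        return Path(path).read_text(encoding="utf-8")
    except FileNotFoundError:
        return f"ERROR: file not found: {path}"
    except UnicodeDecodeError:
        return f"ERROR: could not decode {path} as UTF-8"
    except PermissionError:
        return f"ERROR: permission denied reading {path}"
```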
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- MCP server implementation for Google Gemini models
- Support for multiple Gemini models including 1.5 Pro and 2.5 Pro preview
- Chat tool with configurable parameters (temperature, max_tokens, model)
- List models tool to view available Gemini models
- System prompt support
- Comprehensive error handling for blocked responses
- Test suite included
- Documentation and examples
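A sketch of what the chat tool's input schema could look like given the configurable parameters listed above; property names and defaults are assumptions rather than the server's exact schema.

```python
# Illustrative JSON schema for the chat tool; the real schema may differ.
CHAT_TOOL_SCHEMA = {
    "type": "object",
    "properties": {
        "prompt": {"type": "string", "description": "Message to send to Gemini"},
        "model": {"type": "string", "description": "Gemini model to use"},
        "temperature": {"type": "number", "minimum": 0.0, "maximum": 1.0},
        "max_tokens": {"type": "integer", "description": "Maximum response tokens"},
        "system_prompt": {"type": "string", "description": "Optional system prompt"},
    },
    "required": ["prompt"],
}
```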
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>