From fb1d843950a502916b05cf45274b3161faa2964f Mon Sep 17 00:00:00 2001
From: Fahad
Date: Sun, 8 Jun 2025 21:25:19 +0400
Subject: [PATCH] fix: resolve mypy type errors and linting issues
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

- Add type annotation for server variable
- Fix handling of optional parameters in chat and analyze_code handlers
- Rename request variable to request_analysis to avoid type confusion
- Fix model listing to handle missing attributes safely
- Remove emoji icons from README section headers
- Fix flake8 formatting issues (whitespace, line length)

All tests pass; mypy and flake8 checks now pass in CI.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude
---
 README.md        | 34 +++++++++++++--------------
 gemini_server.py | 61 +++++++++++++++++++++++++++---------------------
 2 files changed, 52 insertions(+), 43 deletions(-)

diff --git a/README.md b/README.md
index 73fe81f..aff5345 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 A specialized Model Context Protocol (MCP) server that extends Claude Code's capabilities with Google's Gemini 2.5 Pro Preview, featuring a massive 1M token context window for handling large codebases and complex analysis tasks.
 
-## 🎯 Purpose
+## Purpose
 
 This server acts as a developer assistant that augments Claude Code when you need:
 - Analysis of files too large for Claude's context window
@@ -11,7 +11,7 @@ This server acts as a developer assistant that augments Claude Code when you nee
 - Performance analysis of large codebases
 - Security audits requiring full codebase context
 
-## 📋 Prerequisites
+## Prerequisites
 
 Before you begin, ensure you have the following:
 
@@ -21,7 +21,7 @@ Before you begin, ensure you have the following:
    - Ensure your key is enabled for the `gemini-2.5-pro-preview` model
 4. **Git:** The `git` command-line tool for cloning the repository
 
-## 🚀 Quick Start for Claude Code
+## Quick Start for Claude Code
 
 ### 1. Clone the Repository
 
@@ -114,7 +114,7 @@ Just talk to Claude naturally:
 - "Ask Gemini to review the architecture of these files..."
 - "Have Gemini check this codebase for security issues..."
 
-## 🔍 How It Works
+## How It Works
 
 This server acts as a local proxy between Claude Code and the Google Gemini API, following the Model Context Protocol (MCP):
 
@@ -127,7 +127,7 @@ This server acts as a local proxy between Claude Code and the Google Gemini API,
 
 All processing and API communication happens locally from your machine. Your API key is never exposed to Anthropic.
 
-## 💻 Developer-Optimized Features
+## Developer-Optimized Features
 
 ### Automatic Developer Context
 When no custom system prompt is provided, Gemini automatically operates with deep developer expertise, focusing on:
@@ -147,7 +147,7 @@ When no custom system prompt is provided, Gemini automatically operates with dee
 - Perfect for analyzing entire codebases
 - Maintains context across multiple large files
 
-## 🛠️ Available Tools
+## Available Tools
 
 ### `chat`
 General-purpose developer conversations with Gemini.
@@ -170,7 +170,7 @@ Specialized tool for analyzing large files or multiple files that exceed Claude'
 ### `list_models`
 Lists available Gemini models (defaults to 2.5 Pro Preview).
 
-## 📋 Installation
+## Installation
 
 1. Clone the repository:
 ```bash
@@ -194,7 +194,7 @@ Lists available Gemini models (defaults to 2.5 Pro Preview).
 export GEMINI_API_KEY="your-api-key-here"
 ```
 
-## 🔧 Advanced Configuration
+## Advanced Configuration
 
 ### Custom System Prompts
 Override the default developer prompt when needed:
@@ -217,7 +217,7 @@ While defaulting to `gemini-2.5-pro-preview-06-05`, you can specify other models
 - `gemini-1.5-flash`: Faster responses
 - Use `list_models` to see all available options
 
-## 🎯 Claude Code Integration Examples
+## Claude Code Integration Examples
 
 ### When Claude hits token limits:
 ```
@@ -235,12 +235,12 @@ You: "Use Gemini to analyze all files in /src/core/ and create an architecture d
 You: "Have Gemini profile this codebase and suggest the top 5 performance improvements"
 ```
 
-## 💡 Practical Usage Tips
+## Practical Usage Tips
 
 ### Effective Commands
 Be specific about what you want from Gemini:
-✅ "Ask Gemini to identify memory leaks in this code"
-❌ "Ask Gemini about this"
+- Good: "Ask Gemini to identify memory leaks in this code"
+- Bad: "Ask Gemini about this"
 
 ### Common Workflows
 
@@ -308,14 +308,14 @@ You: "Have Gemini review my approach and check these 10 files for compatibility
 6. Claude: [Refines design addressing all concerns]
 ```
 
-## 📝 Notes
+## Notes
 
 - Gemini 2.5 Pro Preview may occasionally block certain prompts due to safety filters
 - If a prompt is blocked by Google's safety filters, the server will return a clear error message to Claude explaining why the request could not be completed
 - Token estimation: ~4 characters per token
 - All file paths should be absolute paths
 
-## 🔧 Troubleshooting
+## Troubleshooting
 
 ### Server Not Appearing in Claude
 
@@ -337,7 +337,7 @@
 - **`chmod: command not found` (Windows):** The `chmod +x` command is for macOS/Linux only. Windows users can skip this step
 - **Path not found errors:** Use absolute paths in all configurations, not relative paths like `./run_gemini.sh`
 
-## 🧪 Testing
+## Testing
 
 ### Running Tests Locally
 
@@ -368,7 +368,7 @@ This project uses GitHub Actions for automated testing:
 - Includes linting with flake8, black, isort, and mypy
 - Maintains 80%+ code coverage
 
-## 🤝 Contributing
+## Contributing
 
 This server is designed specifically for Claude Code users. Contributions that enhance the developer experience are welcome!
 
@@ -380,6 +380,6 @@ This server is designed specifically for Claude Code users. Contributions that e
 6. Push to the branch (`git push origin feature/amazing-feature`)
 7. Open a Pull Request
 
-## 📄 License
+## License
 
 MIT License - feel free to customize for your development workflow.
\ No newline at end of file
diff --git a/gemini_server.py b/gemini_server.py
index 19d8227..0a76a97 100755
--- a/gemini_server.py
+++ b/gemini_server.py
@@ -103,7 +103,7 @@ class CodeAnalysisRequest(BaseModel):
 
 
 # Create the MCP server instance
-server = Server("gemini-server")
+server: Server = Server("gemini-server")
 
 
 # Configure Gemini API
@@ -150,7 +150,7 @@ def prepare_code_context(
 
     # Add file contents
     if files:
-        summary_parts.append(f"📁 Analyzing {len(files)} file(s):")
+        summary_parts.append(f"Analyzing {len(files)} file(s):")
         for file_path in files:
             # Get file content for Gemini
             file_content = read_file_content_for_gemini(file_path)
@@ -171,7 +171,7 @@
                 preview = "\n".join(preview_lines)
                 if len(preview) > 100:
                     preview = preview[:100] + "..."
-                summary_parts.append(f" 📄 {file_path} ({size:,} bytes)")
+                summary_parts.append(f" {file_path} ({size:,} bytes)")
                 if preview.strip():
                     summary_parts.append(f"   Preview: {preview[:50]}...")
             except Exception:
@@ -186,7 +186,7 @@
         )
         context_parts.append(formatted_code)
         preview = code[:100] + "..." if len(code) > 100 else code
-        summary_parts.append(f"💻 Direct code provided ({len(code):,} characters)")
+        summary_parts.append(f"Direct code provided ({len(code):,} characters)")
         summary_parts.append(f"   Preview: {preview}")
 
     full_context = "\n\n".join(context_parts)
@@ -302,11 +302,15 @@ async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextCon
 
         try:
             # Use the specified model with optimized settings
+            model_name = request.model or DEFAULT_MODEL
+            temperature = request.temperature if request.temperature is not None else 0.5
+            max_tokens = request.max_tokens if request.max_tokens is not None else 8192
+
             model = genai.GenerativeModel(
-                model_name=request.model,
+                model_name=model_name,
                 generation_config={
-                    "temperature": request.temperature,
-                    "max_output_tokens": request.max_tokens,
+                    "temperature": temperature,
+                    "max_output_tokens": max_tokens,
                     "candidate_count": 1,
                 },
             )
@@ -342,10 +346,10 @@ async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextCon
 
     elif name == "analyze_code":
         # Validate request
-        request = CodeAnalysisRequest(**arguments)
+        request_analysis = CodeAnalysisRequest(**arguments)
 
         # Check that we have either files or code
-        if not request.files and not request.code:
+        if not request_analysis.files and not request_analysis.code:
             return [
                 TextContent(
                     type="text",
@@ -355,7 +359,7 @@
 
         try:
             # Prepare code context - always use non-verbose mode for Claude Code compatibility
-            code_context, summary = prepare_code_context(request.files, request.code)
+            code_context, summary = prepare_code_context(request_analysis.files, request_analysis.code)
 
             # Count approximate tokens (rough estimate: 1 token ≈ 4 characters)
             estimated_tokens = len(code_context) // 4
@@ -369,21 +373,25 @@
                 ]
 
             # Use the specified model with optimized settings for code analysis
+            model_name = request_analysis.model or DEFAULT_MODEL
+            temperature = request_analysis.temperature if request_analysis.temperature is not None else 0.2
+            max_tokens = request_analysis.max_tokens if request_analysis.max_tokens is not None else 8192
+
             model = genai.GenerativeModel(
-                model_name=request.model,
+                model_name=model_name,
                 generation_config={
-                    "temperature": request.temperature,
-                    "max_output_tokens": request.max_tokens,
+                    "temperature": temperature,
+                    "max_output_tokens": max_tokens,
                     "candidate_count": 1,
                 },
             )
 
             # Prepare the full prompt with enhanced developer context and clear structure
-            system_prompt = request.system_prompt or DEVELOPER_SYSTEM_PROMPT
+            system_prompt = request_analysis.system_prompt or DEVELOPER_SYSTEM_PROMPT
             full_prompt = f"""{system_prompt}
 
 === USER REQUEST ===
-{request.question}
+{request_analysis.question}
 === END USER REQUEST ===
 
 === CODE TO ANALYZE ===
@@ -408,7 +416,7 @@ marked with their paths and content boundaries."""
                     text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
 
             # Always return response with summary for Claude Code compatibility
-            if request.files or request.code:
+            if request_analysis.files or request_analysis.code:
                 response_text = f"{summary}\n\n🤖 Gemini's Analysis:\n{text}"
             else:
                 response_text = text
@@ -422,14 +430,15 @@
 
         try:
             # List available models
             models = []
-            for model in genai.list_models():
-                if "generateContent" in model.supported_generation_methods:
+            for model_info in genai.list_models():
+                if (hasattr(model_info, 'supported_generation_methods') and
+                        "generateContent" in model_info.supported_generation_methods):
                     models.append(
                         {
-                            "name": model.name,
-                            "display_name": model.display_name,
-                            "description": model.description,
-                            "is_default": model.name == DEFAULT_MODEL,
+                            "name": model_info.name,
+                            "display_name": getattr(model_info, 'display_name', 'Unknown'),
+                            "description": getattr(model_info, 'description', 'No description'),
+                            "is_default": model_info.name.endswith(DEFAULT_MODEL),
                         }
                     )
@@ -458,10 +467,10 @@ Updated: {__updated__}
 Author: {__author__}
 
 Configuration:
-• Default Model: {DEFAULT_MODEL}
-• Max Context: {MAX_CONTEXT_TOKENS:,} tokens
-• Python: {version_info['python_version']}
-• Started: {version_info['server_started']}
+- Default Model: {DEFAULT_MODEL}
+- Max Context: {MAX_CONTEXT_TOKENS:,} tokens
+- Python: {version_info['python_version']}
+- Started: {version_info['server_started']}
 
 For updates, visit: https://github.com/BeehiveInnovations/gemini-mcp-server""",
             )
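
---

Reviewer note: the heart of the mypy fix in both handlers is the Optional-narrowing pattern, pulling each possibly-None request field into a local variable with a concrete default before building the generation config. A minimal standalone sketch of that pattern follows; a plain dataclass stands in for the server's Pydantic request model, and while the field names and defaults mirror the diff, this is illustrative, not the server's actual code:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

# Mirrors the server's default model name as stated in the README.
DEFAULT_MODEL = "gemini-2.5-pro-preview-06-05"


@dataclass
class ChatRequest:
    # Optional fields arrive as None when the caller omits them, so
    # their declared types are Optional[...] rather than the bare types.
    model: Optional[str] = None
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None


def resolve_generation_config(req: ChatRequest) -> Dict[str, Any]:
    """Narrow Optional fields to concrete values, as the patch does."""
    # `or` is acceptable for the model string, but the numeric fields
    # need an explicit `is not None` check: a plain `or` would silently
    # replace a legitimate temperature of 0.0 with the default.
    model_name = req.model or DEFAULT_MODEL
    temperature = req.temperature if req.temperature is not None else 0.5
    max_tokens = req.max_tokens if req.max_tokens is not None else 8192
    return {
        "model_name": model_name,
        "temperature": temperature,
        "max_output_tokens": max_tokens,
    }


# A caller-supplied temperature of 0.0 survives the defaulting logic.
cfg = resolve_generation_config(ChatRequest(temperature=0.0))
```

This also explains why the bare `model_name=request.model` failed type checking: the keyword argument expects a concrete `str`, while the field's type is `Optional[str]`, and the narrowing locals resolve that mismatch without changing runtime behavior for callers who supply all fields.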