fix: resolve mypy type errors and linting issues

- Add type annotation for server variable
- Fix handling of optional parameters in chat and analyze_code handlers
- Rename request variable to request_analysis to avoid type confusion
- Fix model listing to handle missing attributes safely
- Remove emoji icons from README section headers
- Fix flake8 formatting issues (whitespace, line length)

All tests pass; mypy and flake8 checks now pass in CI.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
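A note on the optional-parameter fix before the diff: `or`-style defaulting is only safe for the model name, because `0` and `0.0` are falsy but perfectly valid values for `temperature` and `max_tokens`. A minimal sketch of the pattern the handlers now use (the `resolve_options` helper is illustrative, not actual server code; the default model string comes from the README):

```python
from typing import Optional, Tuple

DEFAULT_MODEL = "gemini-2.5-pro-preview-06-05"


def resolve_options(
    model: Optional[str],
    temperature: Optional[float],
    max_tokens: Optional[int],
) -> Tuple[str, float, int]:
    # `or` is fine for the model name: None and "" both mean "use the default".
    model_name = model or DEFAULT_MODEL
    # Numeric options must be checked against None explicitly, since an
    # intentional temperature of 0.0 is falsy and `or` would discard it.
    temp = temperature if temperature is not None else 0.5
    tokens = max_tokens if max_tokens is not None else 8192
    return model_name, temp, tokens


# An explicit 0.0 survives; `temperature or 0.5` would have replaced it.
assert resolve_options(None, 0.0, None) == (DEFAULT_MODEL, 0.0, 8192)
```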
diff --git a/README.md b/README.md
@@ -2,7 +2,7 @@

 A specialized Model Context Protocol (MCP) server that extends Claude Code's capabilities with Google's Gemini 2.5 Pro Preview, featuring a massive 1M token context window for handling large codebases and complex analysis tasks.

-## 🎯 Purpose
+## Purpose

 This server acts as a developer assistant that augments Claude Code when you need:
 - Analysis of files too large for Claude's context window
@@ -11,7 +11,7 @@ This server acts as a developer assistant that augments Claude Code when you nee
 - Performance analysis of large codebases
 - Security audits requiring full codebase context

-## 📋 Prerequisites
+## Prerequisites

 Before you begin, ensure you have the following:

@@ -21,7 +21,7 @@ Before you begin, ensure you have the following:
    - Ensure your key is enabled for the `gemini-2.5-pro-preview` model
 4. **Git:** The `git` command-line tool for cloning the repository

-## 🚀 Quick Start for Claude Code
+## Quick Start for Claude Code

 ### 1. Clone the Repository

@@ -114,7 +114,7 @@ Just talk to Claude naturally:
 - "Ask Gemini to review the architecture of these files..."
 - "Have Gemini check this codebase for security issues..."

-## 🔍 How It Works
+## How It Works

 This server acts as a local proxy between Claude Code and the Google Gemini API, following the Model Context Protocol (MCP):

@@ -127,7 +127,7 @@

 All processing and API communication happens locally from your machine. Your API key is never exposed to Anthropic.

-## 💻 Developer-Optimized Features
+## Developer-Optimized Features

 ### Automatic Developer Context
 When no custom system prompt is provided, Gemini automatically operates with deep developer expertise, focusing on:
@@ -147,7 +147,7 @@ When no custom system prompt is provided, Gemini automatically operates with dee
 - Perfect for analyzing entire codebases
 - Maintains context across multiple large files

-## 🛠️ Available Tools
+## Available Tools

 ### `chat`
 General-purpose developer conversations with Gemini.
@@ -170,7 +170,7 @@ Specialized tool for analyzing large files or multiple files that exceed Claude'
 ### `list_models`
 Lists available Gemini models (defaults to 2.5 Pro Preview).

-## 📋 Installation
+## Installation

 1. Clone the repository:
 ```bash
@@ -194,7 +194,7 @@
 export GEMINI_API_KEY="your-api-key-here"
 ```

-## 🔧 Advanced Configuration
+## Advanced Configuration

 ### Custom System Prompts
 Override the default developer prompt when needed:
@@ -217,7 +217,7 @@ While defaulting to `gemini-2.5-pro-preview-06-05`, you can specify other models
 - `gemini-1.5-flash`: Faster responses
 - Use `list_models` to see all available options

-## 🎯 Claude Code Integration Examples
+## Claude Code Integration Examples

 ### When Claude hits token limits:
 ```
@@ -235,12 +235,12 @@ You: "Use Gemini to analyze all files in /src/core/ and create an architecture d
 You: "Have Gemini profile this codebase and suggest the top 5 performance improvements"
 ```

-## 💡 Practical Usage Tips
+## Practical Usage Tips

 ### Effective Commands
 Be specific about what you want from Gemini:
-- ✅ "Ask Gemini to identify memory leaks in this code"
-- ❌ "Ask Gemini about this"
+- Good: "Ask Gemini to identify memory leaks in this code"
+- Bad: "Ask Gemini about this"

 ### Common Workflows

@@ -308,14 +308,14 @@ You: "Have Gemini review my approach and check these 10 files for compatibility
 6. Claude: [Refines design addressing all concerns]
 ```

-## 📝 Notes
+## Notes

 - Gemini 2.5 Pro Preview may occasionally block certain prompts due to safety filters
 - If a prompt is blocked by Google's safety filters, the server will return a clear error message to Claude explaining why the request could not be completed
 - Token estimation: ~4 characters per token
 - All file paths should be absolute paths

-## 🔧 Troubleshooting
+## Troubleshooting

 ### Server Not Appearing in Claude

@@ -337,7 +337,7 @@ You: "Have Gemini review my approach and check these 10 files for compatibility
 - **`chmod: command not found` (Windows):** The `chmod +x` command is for macOS/Linux only. Windows users can skip this step
 - **Path not found errors:** Use absolute paths in all configurations, not relative paths like `./run_gemini.sh`

-## 🧪 Testing
+## Testing

 ### Running Tests Locally

@@ -368,7 +368,7 @@ This project uses GitHub Actions for automated testing:
 - Includes linting with flake8, black, isort, and mypy
 - Maintains 80%+ code coverage

-## 🤝 Contributing
+## Contributing

 This server is designed specifically for Claude Code users. Contributions that enhance the developer experience are welcome!

@@ -380,6 +380,6 @@ This server is designed specifically for Claude Code users. Contributions that e
 6. Push to the branch (`git push origin feature/amazing-feature`)
 7. Open a Pull Request

-## 📄 License
+## License

 MIT License - feel free to customize for your development workflow.
@@ -103,7 +103,7 @@ class CodeAnalysisRequest(BaseModel):


 # Create the MCP server instance
-server = Server("gemini-server")
+server: Server = Server("gemini-server")


 # Configure Gemini API
@@ -150,7 +150,7 @@ def prepare_code_context(

     # Add file contents
     if files:
-        summary_parts.append(f"📁 Analyzing {len(files)} file(s):")
+        summary_parts.append(f"Analyzing {len(files)} file(s):")
         for file_path in files:
             # Get file content for Gemini
             file_content = read_file_content_for_gemini(file_path)
@@ -171,7 +171,7 @@ def prepare_code_context(
                 preview = "\n".join(preview_lines)
                 if len(preview) > 100:
                     preview = preview[:100] + "..."
-                summary_parts.append(f"  📄 {file_path} ({size:,} bytes)")
+                summary_parts.append(f"  {file_path} ({size:,} bytes)")
                 if preview.strip():
                     summary_parts.append(f"    Preview: {preview[:50]}...")
             except Exception:
@@ -186,7 +186,7 @@ def prepare_code_context(
         )
         context_parts.append(formatted_code)
         preview = code[:100] + "..." if len(code) > 100 else code
-        summary_parts.append(f"💻 Direct code provided ({len(code):,} characters)")
+        summary_parts.append(f"Direct code provided ({len(code):,} characters)")
         summary_parts.append(f"  Preview: {preview}")

     full_context = "\n\n".join(context_parts)
@@ -302,11 +302,15 @@ async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextCon

         try:
             # Use the specified model with optimized settings
+            model_name = request.model or DEFAULT_MODEL
+            temperature = request.temperature if request.temperature is not None else 0.5
+            max_tokens = request.max_tokens if request.max_tokens is not None else 8192
+
             model = genai.GenerativeModel(
-                model_name=request.model,
+                model_name=model_name,
                 generation_config={
-                    "temperature": request.temperature,
-                    "max_output_tokens": request.max_tokens,
+                    "temperature": temperature,
+                    "max_output_tokens": max_tokens,
                     "candidate_count": 1,
                 },
             )
@@ -342,10 +346,10 @@ async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextCon

     elif name == "analyze_code":
         # Validate request
-        request = CodeAnalysisRequest(**arguments)
+        request_analysis = CodeAnalysisRequest(**arguments)

         # Check that we have either files or code
-        if not request.files and not request.code:
+        if not request_analysis.files and not request_analysis.code:
             return [
                 TextContent(
                     type="text",
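The rename in the hunk above is what clears the mypy error: rebinding `request` (already bound to a `ChatRequest`-style model earlier in `handle_call_tool`) to a `CodeAnalysisRequest` is an incompatible redefinition. A reduced, hypothetical reproduction, assuming the pydantic models the diff suggests (`ChatRequest` and its `prompt` field are stand-ins, not actual server code):

```python
from pydantic import BaseModel


class ChatRequest(BaseModel):
    prompt: str


class CodeAnalysisRequest(BaseModel):
    question: str


def handle(name: str, arguments: dict) -> None:
    if name == "chat":
        request = ChatRequest(**arguments)
        print(request.prompt)
    elif name == "analyze_code":
        # Reusing `request` here would make mypy report:
        #   Incompatible types in assignment (expression has type
        #   "CodeAnalysisRequest", variable has type "ChatRequest")
        request_analysis = CodeAnalysisRequest(**arguments)
        print(request_analysis.question)
```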
@@ -355,7 +359,7 @@ async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextCon

         try:
             # Prepare code context - always use non-verbose mode for Claude Code compatibility
-            code_context, summary = prepare_code_context(request.files, request.code)
+            code_context, summary = prepare_code_context(request_analysis.files, request_analysis.code)

             # Count approximate tokens (rough estimate: 1 token ≈ 4 characters)
             estimated_tokens = len(code_context) // 4
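A side note on the estimate in this hunk: the server approximates tokens as `len(text) // 4`, matching the README's "~4 characters per token". A rough sketch of how the budget check plays out; the 1,000,000 value for `MAX_CONTEXT_TOKENS` is assumed from the README's 1M-token figure, not read from the source:

```python
MAX_CONTEXT_TOKENS = 1_000_000  # assumed: the README advertises a 1M-token window


def estimated_tokens(code_context: str) -> int:
    # Rough heuristic: 1 token ≈ 4 characters.
    return len(code_context) // 4


# A 2 MB code dump estimates to ~500k tokens, comfortably under the budget.
assert estimated_tokens("x" * 2_000_000) == 500_000
assert estimated_tokens("x" * 2_000_000) < MAX_CONTEXT_TOKENS
```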
@@ -369,21 +373,25 @@ async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextCon
                 ]

             # Use the specified model with optimized settings for code analysis
+            model_name = request_analysis.model or DEFAULT_MODEL
+            temperature = request_analysis.temperature if request_analysis.temperature is not None else 0.2
+            max_tokens = request_analysis.max_tokens if request_analysis.max_tokens is not None else 8192
+
             model = genai.GenerativeModel(
-                model_name=request.model,
+                model_name=model_name,
                 generation_config={
-                    "temperature": request.temperature,
-                    "max_output_tokens": request.max_tokens,
+                    "temperature": temperature,
+                    "max_output_tokens": max_tokens,
                     "candidate_count": 1,
                 },
             )

             # Prepare the full prompt with enhanced developer context and clear structure
-            system_prompt = request.system_prompt or DEVELOPER_SYSTEM_PROMPT
+            system_prompt = request_analysis.system_prompt or DEVELOPER_SYSTEM_PROMPT
             full_prompt = f"""{system_prompt}

 === USER REQUEST ===
-{request.question}
+{request_analysis.question}
 === END USER REQUEST ===

 === CODE TO ANALYZE ===
@@ -408,7 +416,7 @@ marked with their paths and content boundaries."""
                 text = f"Response blocked or incomplete. Finish reason: {finish_reason}"

             # Always return response with summary for Claude Code compatibility
-            if request.files or request.code:
+            if request_analysis.files or request_analysis.code:
                 response_text = f"{summary}\n\n🤖 Gemini's Analysis:\n{text}"
             else:
                 response_text = text
@@ -422,14 +430,15 @@ marked with their paths and content boundaries."""
         try:
             # List available models
             models = []
-            for model in genai.list_models():
-                if "generateContent" in model.supported_generation_methods:
+            for model_info in genai.list_models():
+                if (hasattr(model_info, 'supported_generation_methods') and
+                        "generateContent" in model_info.supported_generation_methods):
                     models.append(
                         {
-                            "name": model.name,
-                            "display_name": model.display_name,
-                            "description": model.description,
-                            "is_default": model.name == DEFAULT_MODEL,
+                            "name": model_info.name,
+                            "display_name": getattr(model_info, 'display_name', 'Unknown'),
+                            "description": getattr(model_info, 'description', 'No description'),
+                            "is_default": model_info.name.endswith(DEFAULT_MODEL),
                         }
                     )

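The `getattr`/`hasattr` guards above generalize to any SDK object whose attribute surface varies across versions: missing fields fall back to placeholders instead of raising `AttributeError`. A minimal sketch with a stand-in object (`FakeModel` and `describe` are purely illustrative; the fallback strings mirror the handler's):

```python
from typing import Any, Dict


class FakeModel:
    """Stand-in for an SDK model object with a partial attribute surface."""

    name = "models/gemini-2.5-pro-preview-06-05"
    supported_generation_methods = ["generateContent"]
    # Note: no display_name or description attributes.


def describe(model_info: Any) -> Dict[str, Any]:
    return {
        "name": getattr(model_info, "name", "unknown"),
        "display_name": getattr(model_info, "display_name", "Unknown"),
        "description": getattr(model_info, "description", "No description"),
    }


print(describe(FakeModel()))
# {'name': 'models/gemini-2.5-pro-preview-06-05',
#  'display_name': 'Unknown', 'description': 'No description'}
```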
@@ -458,10 +467,10 @@ Updated: {__updated__}
 Author: {__author__}

 Configuration:
-• Default Model: {DEFAULT_MODEL}
-• Max Context: {MAX_CONTEXT_TOKENS:,} tokens
-• Python: {version_info['python_version']}
-• Started: {version_info['server_started']}
+- Default Model: {DEFAULT_MODEL}
+- Max Context: {MAX_CONTEXT_TOKENS:,} tokens
+- Python: {version_info['python_version']}
+- Started: {version_info['server_started']}

 For updates, visit: https://github.com/BeehiveInnovations/gemini-mcp-server""",
                )