feat: Optimize for Claude Code developer assistant role

Major enhancements for Claude Code integration:

Temperature Optimization:
- Chat: 0.5 (balanced accuracy/creativity for development discussions)
- Code Analysis: 0.2 (high precision for code reviews and debugging)

Enhanced Developer Context:
- Rewritten system prompt focusing on Claude Code augmentation
- Emphasizes precision, best practices, and actionable solutions
- Positions Gemini as an extension for large context tasks

Claude Code-Centric Documentation:
- README completely rewritten for Claude Code users
- Clear configuration instructions with file paths
- Practical examples for common development scenarios
- Quick start guide with natural language usage

Key improvements:
- Lower temperatures for more accurate, deterministic responses
- Developer-first approach in all interactions
- Clear positioning as Claude's extended context handler
- Comprehensive setup guide for Claude Desktop integration

The server is now fully optimized to act as a specialized developer
assistant that seamlessly extends Claude Code's capabilities.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Fahad
Date: 2025-06-08 20:00:29 +04:00
Commit: 50fec40f13 (parent: 4d2ad48638)
3 changed files with 241 additions and 135 deletions

README.md
# Gemini MCP Server for Claude Code

A specialized Model Context Protocol (MCP) server that extends Claude Code's capabilities with Google's Gemini 2.5 Pro Preview, featuring a 1M token context window for handling large codebases and complex analysis tasks.
## 🎯 Purpose

This server acts as a developer assistant that augments Claude Code when you need:

- Analysis of files too large for Claude's context window
- Deep architectural reviews across multiple files
- Extended thinking and complex problem solving
- Performance analysis of large codebases
- Security audits requiring full codebase context

Once configured, Claude automatically discovers the server's capabilities, so you can invoke Gemini in natural language. See [MCP_DISCOVERY.md](MCP_DISCOVERY.md) for details on how Claude discovers and uses MCP servers.
## 🚀 Quick Start for Claude Code

### 1. Configure in Claude Desktop

Add to your Claude Desktop configuration file:
**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "gemini": {
      "command": "/path/to/gemini-mcp-server/venv/bin/python",
      "args": ["/path/to/gemini-mcp-server/gemini_server.py"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}
```
### 2. Restart Claude Desktop

After adding the configuration, restart Claude Desktop. You'll see "gemini" in the MCP servers list.

### 3. Start Using Natural Language

Just talk to Claude naturally:

- "Use Gemini to analyze this large file..."
- "Ask Gemini to review the architecture of these files..."
- "Have Gemini check this codebase for security issues..."
## 💻 Developer-Optimized Features
### Automatic Developer Context
When no custom system prompt is provided, Gemini automatically operates with deep developer expertise, focusing on:
- Clean code principles
- Performance optimization
- Security best practices
- Architectural patterns
- Testing strategies
- Modern development practices
### Optimized Temperature Settings
- **General chat**: 0.5 (balanced accuracy with some creativity)
- **Code analysis**: 0.2 (high precision for code review)
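The wiring for these defaults might look roughly like this inside the server; the constant and function names below are hypothetical, shown only to illustrate the fallback behavior:

```python
# Hypothetical names -- illustrates the defaults described above,
# not the actual identifiers in gemini_server.py.
DEVELOPER_SYSTEM_PROMPT = (
    "You are a senior software engineer assisting Claude Code. "
    "Favor precision, security best practices, and actionable solutions."
)

CHAT_TEMPERATURE = 0.5  # balanced accuracy with some creativity
CODE_TEMPERATURE = 0.2  # high precision for code review

def resolve_system_prompt(custom_prompt=None):
    """Fall back to the developer persona when no custom prompt is given."""
    return custom_prompt or DEVELOPER_SYSTEM_PROMPT
```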
### Large Context Window
- Handles up to 1M tokens (~4M characters)
- Perfect for analyzing entire codebases
- Maintains context across multiple large files
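Since the server sizes requests with a simple characters-per-token heuristic (see Notes below), a back-of-the-envelope capacity check looks like this; the helper names are illustrative:

```python
# ~4 characters per token is the heuristic this README quotes; this mirrors
# that estimate rather than running a real tokenizer.
MAX_CONTEXT_TOKENS = 1_000_000

def estimate_tokens(text):
    return len(text) // 4

def fits_in_context(*contents):
    """True if the combined contents fit the ~1M token window."""
    return sum(estimate_tokens(c) for c in contents) <= MAX_CONTEXT_TOKENS
```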
## 🛠️ Available Tools
### `chat`
General-purpose developer conversations with Gemini.
**Example uses:**
```
"Ask Gemini about the best approach for implementing a distributed cache"
"Use Gemini to explain the tradeoffs between different authentication strategies"
```
Parameters:
- `prompt` (required): The prompt to send to Gemini
- `system_prompt` (optional): System prompt for context
- `max_tokens` (optional): Maximum tokens in response (default: 8192)
- `temperature` (optional): Temperature for randomness 0-1 (default: 0.5)
- `model` (optional): Model to use (default: gemini-2.5-pro-preview-06-05)
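For a concrete picture of how these parameters might map onto the Gemini API, here is a minimal sketch assuming the server wraps the google-generativeai SDK; the function is illustrative, not the actual gemini_server.py implementation:

```python
# Illustrative only -- assumes the server wraps the google-generativeai SDK.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

def chat(prompt, system_prompt=None, max_tokens=8192,
         temperature=0.5, model="gemini-2.5-pro-preview-06-05"):
    # system_instruction sets the persona; the server would substitute its
    # developer-focused default when system_prompt is None
    gm = genai.GenerativeModel(model, system_instruction=system_prompt)
    response = gm.generate_content(
        prompt,
        generation_config=genai.types.GenerationConfig(
            temperature=temperature,
            max_output_tokens=max_tokens,
        ),
    )
    return response.text
```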
### `analyze_code`

Specialized tool for analyzing large files or multiple files that exceed Claude's limits.

**Example uses:**

```
"Use Gemini to analyze /src/core/engine.py and identify performance bottlenecks"
"Have Gemini review these files together: auth.py, users.py, permissions.py"
```

Parameters:
- `files` (optional): List of file paths to analyze
- `code` (optional): Direct code content to analyze
- `question` (required): Question or analysis request about the code
- `system_prompt` (optional): System prompt for context
- `max_tokens` (optional): Maximum tokens in response (default: 8192)
- `temperature` (optional): Temperature for randomness 0-1 (default: 0.2 for code)
- `model` (optional): Model to use (default: gemini-2.5-pro-preview-06-05)

Note: You must provide either `files` or `code` (or both).
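Under the hood, a tool like this plausibly concatenates file contents and the question into one large prompt; here is a sketch under that assumption (the actual prompt layout in gemini_server.py may differ):

```python
# Hypothetical sketch -- the real prompt format may differ.
from pathlib import Path

def build_analysis_prompt(question, files=None, code=None):
    if not files and not code:
        raise ValueError("Provide either `files` or `code` (or both)")
    parts = []
    for path in files or []:
        # File paths should be absolute (see Notes)
        parts.append(f"--- {path} ---\n{Path(path).read_text()}")
    if code:
        parts.append(f"--- inline code ---\n{code}")
    return "\n\n".join(parts) + f"\n\nQuestion: {question}"
```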
### `list_models`

Lists all available Gemini models that support content generation (defaults to 2.5 Pro Preview).
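If you want to check the same thing outside the MCP server, the equivalent standalone query with the google-generativeai SDK is:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Only models that support generateContent are usable for chat/analysis
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)
```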
## 📋 Installation
Requirements: Python 3.8+ and a valid Google Gemini API key.
1. Clone the repository:
```bash
git clone https://github.com/BeehiveInnovations/gemini-mcp-server.git
cd gemini-mcp-server
```
2. Create virtual environment:
```bash
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
```
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Set your Gemini API key:
```bash
export GEMINI_API_KEY="your-api-key-here"
```
5. (Optional) Run the server directly to verify your setup: activate the virtual environment, export `GEMINI_API_KEY`, and run `python gemini_server.py`.
## 🔧 Advanced Configuration
### Custom System Prompts
Override the default developer prompt when needed:
```python
{
    "prompt": "Review this code",
    "system_prompt": "You are a security expert. Focus only on vulnerabilities."
}
```
### Temperature Control
Adjust for your use case:
- `0.1-0.3`: Maximum precision (debugging, security analysis)
- `0.4-0.6`: Balanced (general development tasks)
- `0.7-0.9`: Creative solutions (architecture design, brainstorming)
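For example, a security pass would pin the temperature low; here is a hypothetical `analyze_code` call:

```python
{
    "files": ["/absolute/path/to/auth.py"],
    "question": "Audit this module for injection and session-handling flaws",
    "temperature": 0.2
}
```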
### Model Selection

The default, `gemini-2.5-pro-preview-06-05`, is the most capable option, supporting a 1M token context window, advanced reasoning, and code understanding. You can specify other models:

- `gemini-1.5-pro-latest`: Stable alternative
- `gemini-1.5-flash`: Faster responses
- `gemini-2.0-flash`: Gemini 2.0 Flash
- Use `list_models` to see all available options
## 🎯 Claude Code Integration Examples
### When Claude hits token limits:
```
Claude: "This file is too large for me to analyze fully..."
You: "Use Gemini to analyze the entire file and identify the main components"
```
### For architecture reviews:
```
You: "Use Gemini to analyze all files in /src/core/ and create an architecture diagram"
```
### For performance optimization:
```
You: "Have Gemini profile this codebase and suggest the top 5 performance improvements"
```
## 📝 Notes

- Gemini 2.5 Pro Preview may occasionally block certain prompts due to safety filters; when this happens, the server reports the finish reason and falls back gracefully
- Token estimation: ~4 characters per token (the 1M token window is roughly 4M characters)
- All file paths should be absolute paths
## 🤝 Contributing
This server is designed specifically for Claude Code users. Contributions that enhance the developer experience are welcome!
## 📄 License
MIT License - feel free to customize for your development workflow.