feat: complete redesign to v2.4.0 - Claude's ultimate development partner
Major redesign of Gemini MCP Server with modular architecture:

- Removed all emoji characters from tool outputs for clean terminal display
- Kept review category emojis (🔴🟠🟡🟢) per user preference
- Added 4 specialized tools:
  - think_deeper: Extended reasoning and problem-solving (temp 0.7)
  - review_code: Professional code review with severity levels (temp 0.2)
  - debug_issue: Root cause analysis and debugging (temp 0.2)
  - analyze: General-purpose file analysis (temp 0.2)
- Modular architecture with base tool class and Pydantic models
- Verbose tool descriptions with natural language triggers
- Updated README with comprehensive examples and real-world use cases
- All 25 tests passing, type checking clean, critical linting clean

BREAKING CHANGE: Removed analyze_code tool in favor of specialized tools

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
README.md (705 lines changed)
# Gemini MCP Server

The ultimate development partner for Claude - a Model Context Protocol server that gives Claude access to Google's Gemini 2.5 Pro for extended thinking, code analysis, and problem-solving.

## Why This Server?

Claude is brilliant, but sometimes you need:

- **Extended thinking** on complex architectural decisions
- **Deep code analysis** across massive codebases
- **Expert debugging** for tricky issues
- **Professional code reviews** with actionable feedback
- **A senior developer partner** to validate and extend ideas

This server makes Gemini your development sidekick, handling what Claude can't or extending what Claude starts.

## 🚀 Quickstart (5 minutes)

### 1. Get a Gemini API Key

Visit [Google AI Studio](https://makersuite.google.com/app/apikey) and generate a free API key.
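If you want to confirm the key works before wiring it into Claude Desktop, a quick check with the `google-generativeai` client (the same library the server uses) looks roughly like this; the model name matches the server's default:

```python
# Optional sanity check for a new Gemini API key.
# Assumes `pip install google-generativeai`; the model name matches the
# server's default (DEFAULT_MODEL in config.py).
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel(model_name="gemini-2.5-pro-preview-06-05")
print(model.generate_content("Reply with OK if you can read this.").text)
```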
### 2. Install via Claude Desktop Config

Add to your `claude_desktop_config.json`:

**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`

**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "gemini": {
      "command": "python",
      "args": ["/absolute/path/to/gemini-mcp-server/server.py"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}
```
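A malformed config file is the most common reason a server never appears, so it can be worth validating the JSON before restarting Claude Desktop. A minimal sketch, assuming the macOS config path:

```python
# Minimal sketch: confirm claude_desktop_config.json parses and registers
# the "gemini" server. The path below assumes the macOS location.
import json
from pathlib import Path

config_path = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
config = json.loads(config_path.read_text())
print("gemini" in config.get("mcpServers", {}))  # expect: True
```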
### 3. Restart Claude Desktop

### 4. Start Using It!

Just ask Claude naturally:

- "Think deeper about this architecture design"
- "Review this code for security issues"
- "Debug why this test is failing"
- "Analyze these files to understand the data flow"

## 🧠 Available Tools
### `think_deeper` - Extended Reasoning Partner

**When Claude needs to go deeper on complex problems**

#### Example Prompts:

```
"Think deeper about my authentication design"
"Ultrathink on this distributed system architecture"
"Extend my analysis of this performance issue"
"Challenge my assumptions about this approach"
"Explore alternative solutions for this caching strategy"
"Validate my microservices communication approach"
```

**Features:**

- Extends Claude's analysis with alternative approaches
- Finds edge cases and failure modes
- Validates architectural decisions
- Suggests concrete implementations
- Temperature: 0.7 (creative problem-solving)

**Key Capabilities:**

- Challenge assumptions constructively
- Identify overlooked edge cases
- Suggest alternative design patterns
- Evaluate scalability implications
- Consider security vulnerabilities
- Assess technical debt impact

**Triggers:** think deeper, ultrathink, extend my analysis, explore alternatives, validate my approach
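You normally just phrase the request naturally and let Claude route it, but under the hood a `think_deeper` call is an ordinary MCP tool invocation carrying Claude's analysis. A minimal sketch, assuming an open MCP `ClientSession` and argument names (`current_analysis`, `focus`) that mirror the README wording rather than the exact schema in `tools/think_deeper.py`:

```python
# Minimal sketch of a direct think_deeper invocation over MCP.
# Assumes an already-connected ClientSession; the argument names
# current_analysis/focus mirror the README wording and may differ from the
# real schema defined in tools/think_deeper.py.
from mcp import ClientSession

async def extend_analysis(session: ClientSession, analysis: str) -> str:
    result = await session.call_tool(
        "think_deeper",
        arguments={
            "current_analysis": analysis,             # Claude's thinking so far
            "focus": "edge cases and failure modes",  # optional focus area
        },
    )
    return result.content[0].text
```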
### `review_code` - Professional Code Review

**Comprehensive code analysis with prioritized feedback**

#### Example Prompts:

```
"Review this code for issues"
"Security audit of auth.py"
"Quick review of my changes"
"Check this code against PEP8 standards"
"Review the authentication module focusing on OWASP top 10"
"Performance review of the database queries in models.py"
"Review api/ directory for REST API best practices"
```

**Review Types:**

- `full` - Complete review (default)
- `security` - Security-focused analysis
- `performance` - Performance optimization
- `quick` - Critical issues only

**Output includes:**

- Issues by severity with color coding:
  - 🔴 CRITICAL: Security vulnerabilities, data loss risks
  - 🟠 HIGH: Bugs, performance issues, bad practices
  - 🟡 MEDIUM: Code smells, maintainability issues
  - 🟢 LOW: Style issues, minor improvements
- Specific fixes with code examples
- Overall quality assessment
- Top 3 priority improvements
- Positive aspects worth preserving

**Customization Options:**

- `focus_on`: Specific aspects to emphasize
- `standards`: Coding standards to enforce (PEP8, ESLint, etc.)
- `severity_filter`: Minimum severity to report

**Triggers:** review code, check for issues, find bugs, security check, code audit
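Those customization options translate directly into tool arguments; the key names below follow the README wording, and the authoritative schema is the Pydantic model in `tools/review_code.py`:

```python
# The documented review options, expressed as plain tool arguments.
# Key names follow the README; the exact schema in tools/review_code.py
# may differ slightly.
review_args = {
    "review_type": "security",       # full | security | performance | quick
    "focus_on": "JWT handling and session management",
    "standards": "PEP8",
    "severity_filter": "medium",     # report 🟡 MEDIUM and above
}
```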
### `debug_issue` - Expert Debugging Assistant

**Root cause analysis for complex problems**

#### Example Prompts:

```
"Debug this TypeError in my async function"
"Why is this test failing intermittently?"
"Trace the root cause of this memory leak"
"Debug this race condition"
"Help me understand why the API returns 500 errors under load"
"Debug why my WebSocket connections are dropping"
"Find the root cause of this deadlock in my threading code"
```

**Provides:**

- Root cause identification
- Step-by-step debugging approach
- Immediate fixes
- Long-term solutions
- Prevention strategies

**Input Options:**

- `error_description`: The error or symptom
- `error_context`: Stack traces, logs, error messages
- `relevant_files`: Files that might be involved
- `runtime_info`: Environment, versions, configuration
- `previous_attempts`: What you've already tried

**Triggers:** debug, error, failing, root cause, trace, not working, why is
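The input options above map onto a plain argument payload; everything here is illustrative except the documented key names:

```python
# The documented debug_issue inputs as plain tool arguments. File names and
# values are made up for illustration; the schema lives in tools/debug_issue.py.
debug_args = {
    "error_description": "POST /users/search intermittently returns 500 under load",
    "error_context": "TimeoutError in db/pool.py (stack trace and logs pasted here)",
    "relevant_files": ["api/users.py", "db/pool.py"],
    "runtime_info": "Python 3.11, PostgreSQL 15, gunicorn with 4 workers",
    "previous_attempts": "Raised the connection pool size; errors became less frequent",
}
```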
### `analyze` - Smart File Analysis

**General-purpose code understanding and exploration**

#### Example Prompts:

```
"Analyze main.py to understand the architecture"
"Examine these files for circular dependencies"
"Look for performance bottlenecks in this module"
"Understand how these components interact"
"Analyze the data flow through the pipeline modules"
"Check if this module follows SOLID principles"
"Analyze the API endpoints to create documentation"
"Examine the test coverage and suggest missing tests"
```

**Analysis Types:**

- `architecture` - Design patterns, structure, dependencies
- `performance` - Bottlenecks, optimization opportunities
- `security` - Vulnerability assessment, security patterns
- `quality` - Code metrics, maintainability, test coverage
- `general` - Comprehensive analysis (default)

**Output Formats:**

- `detailed` - Comprehensive analysis (default)
- `summary` - High-level overview
- `actionable` - Focused on specific improvements

**Special Features:**

- Always uses file paths (not content) = clean terminal output!
- Can analyze multiple files to understand relationships
- Identifies patterns and anti-patterns
- Suggests refactoring opportunities

**Triggers:** analyze, examine, look at, understand, inspect, check
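Analysis types and output formats are likewise just arguments; a sketch with the documented values (file names are made up, and the exact schema lives in `tools/analyze.py`):

```python
# The documented analyze options as plain tool arguments.
analyze_args = {
    "files": ["src/pipeline/ingest.py", "src/pipeline/transform.py"],
    "question": "Trace the data flow between these modules",
    "analysis_type": "architecture",  # architecture | performance | security | quality | general
    "output_format": "summary",       # detailed | summary | actionable
}
```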
### `chat` - General Development Chat

**For everything else**

#### Example Prompts:

```
"Ask Gemini about the best caching strategy"
"Explain how async generators work"
"What's the difference between these design patterns?"
"Compare Redis vs Memcached for my use case"
"Explain the tradeoffs of microservices vs monolith"
"Best practices for handling timezone data in Python"
```

### Additional Utility Tools

#### `list_models` - See Available Gemini Models

```
"List available Gemini models"
"Show me what models I can use"
```

#### `get_version` - Server Information

```
"Get Gemini server version"
"Show server configuration"
```
## 📚 Real-World Examples

### Example 1: Comprehensive Security Review

```
You: "Review the auth/ directory for security issues, focus on JWT handling"

Gemini:
🔴 CRITICAL auth/jwt.py:45 - Secret key hardcoded
   → Fix: Use environment variable
     SECRET_KEY = os.environ.get('JWT_SECRET')

🟠 HIGH auth/validate.py:23 - No token expiration check
   → Fix: Add expiration validation
     if decoded['exp'] < time.time():
         raise TokenExpiredError()
```
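Applied to a real module, those two findings translate to roughly the following sketch (`TokenExpiredError` here is a stand-in for whatever exception the application defines):

```python
# Sketch of the two fixes from Example 1: load the JWT secret from the
# environment and reject expired tokens explicitly.
import os
import time

SECRET_KEY = os.environ.get("JWT_SECRET")  # never hardcode the secret

class TokenExpiredError(Exception):
    """Raised when a decoded token is past its 'exp' claim."""

def validate_claims(decoded: dict) -> dict:
    if decoded["exp"] < time.time():
        raise TokenExpiredError()
    return decoded
```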
### Example 2: Performance Debugging

```
You: "Debug why the API endpoint /users/search is slow"

Gemini:
ROOT CAUSE: N+1 query problem in user search
- Each user triggers separate query for roles
- 100 users = 101 database queries

IMMEDIATE FIX:
Add eager loading: User.query.options(joinedload('roles'))

LONG-TERM SOLUTION:
Implement query result caching with Redis
```
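Example 2's immediate fix is a single `options()` call; in modern SQLAlchemy it looks roughly like this (a self-contained sketch with stand-in models, not the project's actual schema):

```python
# Sketch of the N+1 fix from the example above: eager-load roles with the
# user query instead of issuing one query per user.
from sqlalchemy import ForeignKey, create_engine, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, joinedload,
                            mapped_column, relationship)

class Base(DeclarativeBase):
    pass

class Role(Base):
    __tablename__ = "roles"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    roles: Mapped[list[Role]] = relationship()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # One query fetches users and their roles together (no N+1).
    users = session.scalars(
        select(User).options(joinedload(User.roles))
    ).unique().all()
```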
### Example 3: Architecture Analysis

```
You: "Analyze src/ to understand the architecture and identify improvements"

Gemini:
ARCHITECTURE OVERVIEW:
- Layered architecture: Controllers → Services → Repositories
- Good separation of concerns
- Dependency injection pattern used

ISSUES FOUND:
1. Circular dependency: UserService ↔ AuthService
2. Missing abstraction for external APIs
3. Business logic leaked into controllers

RECOMMENDATIONS:
1. Extract shared logic to UserAuthService
2. Add adapter pattern for external APIs
3. Move validation to service layer
```

## 💡 Power User Workflows

### 1. **Claude + Gemini Deep Thinking**

```
You: "Design a real-time collaborative editor"
Claude: [provides initial design]
You: "Think deeper about the conflict resolution"
Gemini: [explores CRDTs, operational transforms, edge cases]
You: "Update the design based on Gemini's insights"
Claude: [refines with deeper understanding]
```

### 2. **Comprehensive Code Review**

```
You: "Review api/auth.py focusing on security"
Gemini: [identifies SQL injection risk, suggests prepared statements]
You: "Fix the critical issues Gemini found"
Claude: [implements secure solution]
```

### 3. **Complex Debugging**

```
Claude: "I see the error but the root cause isn't clear..."
You: "Debug this with the error context and relevant files"
Gemini: [traces execution, identifies race condition]
You: "Implement Gemini's suggested fix"
```

### 4. **Architecture Validation**

```
You: "I've designed a microservices architecture [details]"
You: "Think deeper about scalability and failure modes"
Gemini: [analyzes bottlenecks, suggests circuit breakers, identifies edge cases]
```

## 🎯 Pro Tips

### Natural Language Triggers

The server recognizes natural phrases. Just talk normally:

- ❌ "Use the think_deeper tool with current_analysis parameter..."
- ✅ "Think deeper about this approach"

### Automatic Tool Selection

Claude will automatically pick the right tool based on your request:

- "review" → `review_code`
- "debug" → `debug_issue`
- "analyze" → `analyze`
- "think deeper" → `think_deeper`

### Clean Terminal Output

All file operations use paths, not content, so your terminal stays readable even with large files.

### Context Awareness

Tools can reference files for additional context:

```
"Debug this error with context from app.py and config.py"
"Think deeper about my design, reference the current architecture.md"
```

## 🏗️ Architecture

```
gemini-mcp-server/
├── server.py        # Main server
├── config.py        # Configuration
├── tools/           # Tool implementations
│   ├── think_deeper.py
│   ├── review_code.py
│   ├── debug_issue.py
│   └── analyze.py
├── prompts/         # System prompts
└── utils/           # Utilities
```

**Extensible Design:**

- Each tool is a self-contained module
- Easy to add new tools
- Consistent interface
- Type-safe with Pydantic
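Conceptually, `server.py` exposes these modules through a single registry mapping tool names to tool instances. A rough sketch with hypothetical class names (the real wiring, mentioned under Contributing below, lives in `server.py`):

```python
# Rough sketch of the tool registry idea; class names are hypothetical and
# the actual structure in server.py may differ.
from tools.think_deeper import ThinkDeeperTool
from tools.review_code import ReviewCodeTool
from tools.debug_issue import DebugIssueTool
from tools.analyze import AnalyzeTool

TOOLS = {
    "think_deeper": ThinkDeeperTool(),
    "review_code": ReviewCodeTool(),
    "debug_issue": DebugIssueTool(),
    "analyze": AnalyzeTool(),
}
```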
## 🔧 Installation

1. Clone the repository:

```bash
git clone https://github.com/BeehiveInnovations/gemini-mcp-server.git
cd gemini-mcp-server
export GEMINI_API_KEY="your-api-key-here"
```
## 🤝 Contributing

We welcome contributions! The modular architecture makes it easy to add new tools:

1. Create a new tool in `tools/`
2. Inherit from `BaseTool`
3. Implement required methods
4. Add to `TOOLS` in `server.py`

See existing tools for examples (and the sketch below).
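A minimal sketch of what steps 1-3 might produce; the attribute and method names are assumptions to be replaced by whatever `BaseTool` actually defines:

```python
# tools/explain_tests.py - a hypothetical new tool, to illustrate steps 1-4.
# BaseTool's real interface is defined in this repository; the names below
# (name, description, run, call_gemini) are illustrative assumptions.
from tools.base import BaseTool  # assumed location of the base class

class ExplainTestsTool(BaseTool):
    name = "explain_tests"
    description = "Explain what a test file covers and where the gaps are"

    async def run(self, arguments: dict) -> str:
        files = arguments.get("files", [])
        prompt = "Summarize coverage and missing cases in these tests."
        # Delegate to the shared Gemini call assumed to live on the base class.
        return await self.call_gemini(files=files, prompt=prompt)
```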
## 📝 License

MIT License - see LICENSE file for details.

## 🙏 Acknowledgments

Built with [MCP](https://modelcontextprotocol.com) by Anthropic and powered by Google's Gemini API.
config.py (new file, 67 lines)
"""
|
||||||
|
Configuration and constants for Gemini MCP Server
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Version and metadata
|
||||||
|
__version__ = "2.4.0"
|
||||||
|
__updated__ = "2025-06-08"
|
||||||
|
__author__ = "Fahad Gilani"
|
||||||
|
|
||||||
|
# Model configuration
|
||||||
|
DEFAULT_MODEL = "gemini-2.5-pro-preview-06-05"
|
||||||
|
MAX_CONTEXT_TOKENS = 1_000_000 # 1M tokens for Gemini Pro
|
||||||
|
|
||||||
|
# Temperature defaults for different tool types
|
||||||
|
TEMPERATURE_ANALYTICAL = 0.2 # For code review, debugging
|
||||||
|
TEMPERATURE_BALANCED = 0.5 # For general chat
|
||||||
|
TEMPERATURE_CREATIVE = 0.7 # For architecture, deep thinking
|
||||||
|
|
||||||
|
# Tool trigger phrases for natural language matching
|
||||||
|
TOOL_TRIGGERS = {
|
||||||
|
"think_deeper": [
|
||||||
|
"think deeper",
|
||||||
|
"ultrathink",
|
||||||
|
"extend my analysis",
|
||||||
|
"reason through",
|
||||||
|
"explore alternatives",
|
||||||
|
"challenge my thinking",
|
||||||
|
"deep think",
|
||||||
|
"extended thinking",
|
||||||
|
"validate my approach",
|
||||||
|
"find edge cases",
|
||||||
|
],
|
||||||
|
"review_code": [
|
||||||
|
"review",
|
||||||
|
"check for issues",
|
||||||
|
"find bugs",
|
||||||
|
"security check",
|
||||||
|
"code quality",
|
||||||
|
"audit",
|
||||||
|
"code review",
|
||||||
|
"check this code",
|
||||||
|
"review for",
|
||||||
|
"find vulnerabilities",
|
||||||
|
],
|
||||||
|
"debug_issue": [
|
||||||
|
"debug",
|
||||||
|
"error",
|
||||||
|
"failing",
|
||||||
|
"root cause",
|
||||||
|
"trace",
|
||||||
|
"why doesn't",
|
||||||
|
"not working",
|
||||||
|
"diagnose",
|
||||||
|
"troubleshoot",
|
||||||
|
"investigate this error",
|
||||||
|
],
|
||||||
|
"analyze": [
|
||||||
|
"analyze",
|
||||||
|
"examine",
|
||||||
|
"look at",
|
||||||
|
"check",
|
||||||
|
"inspect",
|
||||||
|
"understand",
|
||||||
|
"analyze file",
|
||||||
|
"analyze these files",
|
||||||
|
],
|
||||||
|
}
|
||||||
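These phrase lists are what make the natural-language triggers in the README work; one simple way to consume them is plain substring matching over the incoming prompt (a sketch - the real matching lives in the server code):

```python
# Sketch: route a request to a tool name by scanning TOOL_TRIGGERS for the
# first phrase that appears in the prompt. Falls back to the general chat tool.
from config import TOOL_TRIGGERS

def pick_tool(prompt: str, default: str = "chat") -> str:
    text = prompt.lower()
    for tool_name, phrases in TOOL_TRIGGERS.items():
        if any(phrase in text for phrase in phrases):
            return tool_name
    return default

print(pick_tool("Please review for security issues"))  # -> "review_code"
```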
gemini_server.py (855 lines changed)
#!/usr/bin/env python3
|
|
||||||
"""
|
"""
|
||||||
Gemini MCP Server - Model Context Protocol server for Google Gemini
|
Gemini MCP Server - Entry point for backward compatibility
|
||||||
Enhanced for large-scale code analysis with 1M token context window
|
This file exists to maintain compatibility with existing configurations.
|
||||||
|
The main implementation is now in server.py
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
from server import main
|
||||||
import asyncio
|
import asyncio
|
||||||
import json
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
from datetime import datetime
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Any, Dict, List, Optional, Tuple
|
|
||||||
|
|
||||||
import google.generativeai as genai
|
|
||||||
from mcp.server import Server
|
|
||||||
from mcp.server.models import InitializationOptions
|
|
||||||
from mcp.server.stdio import stdio_server
|
|
||||||
from mcp.types import TextContent, Tool
|
|
||||||
from pydantic import BaseModel, Field
|
|
||||||
|
|
||||||
# Version and metadata
|
|
||||||
__version__ = "2.3.0"
|
|
||||||
__updated__ = "2025-06-08"
|
|
||||||
__author__ = "Fahad Gilani"
|
|
||||||
|
|
||||||
# Default to Gemini 2.5 Pro Preview with maximum context
|
|
||||||
DEFAULT_MODEL = "gemini-2.5-pro-preview-06-05"
|
|
||||||
MAX_CONTEXT_TOKENS = 1000000 # 1M tokens
|
|
||||||
|
|
||||||
# Developer-focused system prompt for Claude Code usage
|
|
||||||
DEVELOPER_SYSTEM_PROMPT = """You are an expert software developer assistant working alongside Claude Code. \
|
|
||||||
Your role is to extend Claude's capabilities when handling large codebases or complex analysis tasks.
|
|
||||||
|
|
||||||
Core competencies:
|
|
||||||
- Deep understanding of software architecture and design patterns
|
|
||||||
- Expert-level debugging and root cause analysis
|
|
||||||
- Performance optimization and scalability considerations
|
|
||||||
- Security best practices and vulnerability identification
|
|
||||||
- Clean code principles and refactoring strategies
|
|
||||||
- Comprehensive testing approaches (unit, integration, e2e)
|
|
||||||
- Modern development practices (CI/CD, DevOps, cloud-native)
|
|
||||||
- Cross-platform and cross-language expertise
|
|
||||||
|
|
||||||
Your approach:
|
|
||||||
- Be precise and technical, avoiding unnecessary explanations
|
|
||||||
- Provide actionable, concrete solutions with code examples
|
|
||||||
- Consider edge cases and potential issues proactively
|
|
||||||
- Focus on maintainability, readability, and long-term sustainability
|
|
||||||
- Suggest modern, idiomatic solutions for the given language/framework
|
|
||||||
- When reviewing code, prioritize critical issues first
|
|
||||||
- Always validate your suggestions against best practices
|
|
||||||
|
|
||||||
Remember: You're augmenting Claude Code's capabilities, especially for tasks requiring \
|
|
||||||
extensive context or deep analysis that might exceed Claude's token limits."""
|
|
||||||
|
|
||||||
# Extended thinking system prompt for collaborative analysis
|
|
||||||
EXTENDED_THINKING_PROMPT = """You are a senior development partner collaborating with Claude Code on complex problems. \
|
|
||||||
Claude has shared their analysis with you for deeper exploration and validation.
|
|
||||||
|
|
||||||
Your role is to:
|
|
||||||
1. Build upon Claude's thinking - identify gaps, extend ideas, and suggest alternatives
|
|
||||||
2. Challenge assumptions constructively and identify potential issues
|
|
||||||
3. Provide concrete, actionable insights that complement Claude's analysis
|
|
||||||
4. Focus on aspects Claude might have missed or couldn't fully explore
|
|
||||||
5. Suggest implementation strategies and architectural improvements
|
|
||||||
|
|
||||||
Key areas to consider:
|
|
||||||
- Edge cases and failure modes Claude might have overlooked
|
|
||||||
- Performance implications at scale
|
|
||||||
- Security vulnerabilities or attack vectors
|
|
||||||
- Maintainability and technical debt considerations
|
|
||||||
- Alternative approaches or design patterns
|
|
||||||
- Integration challenges with existing systems
|
|
||||||
- Testing strategies for complex scenarios
|
|
||||||
|
|
||||||
Be direct and technical. Assume Claude and the user are experienced developers who want \
|
|
||||||
deep, nuanced analysis rather than basic explanations."""
|
|
||||||
|
|
||||||
|
|
||||||
class GeminiChatRequest(BaseModel):
|
|
||||||
"""Request model for Gemini chat"""
|
|
||||||
|
|
||||||
prompt: str = Field(..., description="The prompt to send to Gemini")
|
|
||||||
system_prompt: Optional[str] = Field(
|
|
||||||
None, description="Optional system prompt for context"
|
|
||||||
)
|
|
||||||
max_tokens: Optional[int] = Field(
|
|
||||||
8192, description="Maximum number of tokens in response"
|
|
||||||
)
|
|
||||||
temperature: Optional[float] = Field(
|
|
||||||
0.5,
|
|
||||||
description="Temperature for response randomness (0-1, default 0.5 for balanced accuracy/creativity)",
|
|
||||||
)
|
|
||||||
model: Optional[str] = Field(
|
|
||||||
DEFAULT_MODEL, description=f"Model to use (defaults to {DEFAULT_MODEL})"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class CodeAnalysisRequest(BaseModel):
|
|
||||||
"""Request model for code analysis"""
|
|
||||||
|
|
||||||
files: Optional[List[str]] = Field(
|
|
||||||
None, description="List of file paths to analyze"
|
|
||||||
)
|
|
||||||
code: Optional[str] = Field(None, description="Direct code content to analyze")
|
|
||||||
question: str = Field(
|
|
||||||
..., description="Question or analysis request about the code"
|
|
||||||
)
|
|
||||||
system_prompt: Optional[str] = Field(
|
|
||||||
None, description="Optional system prompt for context"
|
|
||||||
)
|
|
||||||
max_tokens: Optional[int] = Field(
|
|
||||||
8192, description="Maximum number of tokens in response"
|
|
||||||
)
|
|
||||||
temperature: Optional[float] = Field(
|
|
||||||
0.2,
|
|
||||||
description="Temperature for code analysis (0-1, default 0.2 for high accuracy)",
|
|
||||||
)
|
|
||||||
model: Optional[str] = Field(
|
|
||||||
DEFAULT_MODEL, description=f"Model to use (defaults to {DEFAULT_MODEL})"
|
|
||||||
)
|
|
||||||
verbose_output: Optional[bool] = Field(
|
|
||||||
False, description="Show file contents in terminal output"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class FileAnalysisRequest(BaseModel):
|
|
||||||
"""Request model for file analysis"""
|
|
||||||
|
|
||||||
files: List[str] = Field(..., description="List of file paths to analyze")
|
|
||||||
question: str = Field(
|
|
||||||
..., description="Question or analysis request about the files"
|
|
||||||
)
|
|
||||||
system_prompt: Optional[str] = Field(
|
|
||||||
None, description="Optional system prompt for context"
|
|
||||||
)
|
|
||||||
max_tokens: Optional[int] = Field(
|
|
||||||
8192, description="Maximum number of tokens in response"
|
|
||||||
)
|
|
||||||
temperature: Optional[float] = Field(
|
|
||||||
0.2,
|
|
||||||
description="Temperature for analysis (0-1, default 0.2 for high accuracy)",
|
|
||||||
)
|
|
||||||
model: Optional[str] = Field(
|
|
||||||
DEFAULT_MODEL, description=f"Model to use (defaults to {DEFAULT_MODEL})"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class ExtendedThinkRequest(BaseModel):
|
|
||||||
"""Request model for extended thinking with Gemini"""
|
|
||||||
|
|
||||||
thought_process: str = Field(
|
|
||||||
..., description="Claude's analysis, thoughts, plans, or outlines to extend"
|
|
||||||
)
|
|
||||||
context: Optional[str] = Field(
|
|
||||||
None, description="Additional context about the problem or goal"
|
|
||||||
)
|
|
||||||
files: Optional[List[str]] = Field(
|
|
||||||
None, description="Optional file paths for additional context"
|
|
||||||
)
|
|
||||||
focus: Optional[str] = Field(
|
|
||||||
None,
|
|
||||||
description="Specific focus area: architecture, bugs, performance, security, etc.",
|
|
||||||
)
|
|
||||||
system_prompt: Optional[str] = Field(
|
|
||||||
None, description="Optional system prompt for context"
|
|
||||||
)
|
|
||||||
max_tokens: Optional[int] = Field(
|
|
||||||
8192, description="Maximum number of tokens in response"
|
|
||||||
)
|
|
||||||
temperature: Optional[float] = Field(
|
|
||||||
0.7,
|
|
||||||
description="Temperature for creative thinking (0-1, default 0.7 for balanced creativity)",
|
|
||||||
)
|
|
||||||
model: Optional[str] = Field(
|
|
||||||
DEFAULT_MODEL, description=f"Model to use (defaults to {DEFAULT_MODEL})"
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
# Create the MCP server instance
|
|
||||||
server: Server = Server("gemini-server")
|
|
||||||
|
|
||||||
|
|
||||||
# Configure Gemini API
|
|
||||||
def configure_gemini():
|
|
||||||
"""Configure the Gemini API with API key from environment"""
|
|
||||||
api_key = os.getenv("GEMINI_API_KEY")
|
|
||||||
if not api_key:
|
|
||||||
raise ValueError("GEMINI_API_KEY environment variable is not set")
|
|
||||||
genai.configure(api_key=api_key)
|
|
||||||
|
|
||||||
|
|
||||||
def read_file_content(file_path: str) -> str:
|
|
||||||
"""Read content from a file with error handling - for backward compatibility"""
|
|
||||||
return read_file_content_for_gemini(file_path)
|
|
||||||
|
|
||||||
|
|
||||||
def read_file_content_for_gemini(file_path: str) -> str:
|
|
||||||
"""Read content from a file with proper formatting for Gemini"""
|
|
||||||
try:
|
|
||||||
path = Path(file_path)
|
|
||||||
if not path.exists():
|
|
||||||
return f"\n--- FILE NOT FOUND: {file_path} ---\nError: File does not exist\n--- END FILE ---\n"
|
|
||||||
if not path.is_file():
|
|
||||||
return f"\n--- NOT A FILE: {file_path} ---\nError: Path is not a file\n--- END FILE ---\n"
|
|
||||||
|
|
||||||
# Read the file
|
|
||||||
with open(path, "r", encoding="utf-8") as f:
|
|
||||||
content = f.read()
|
|
||||||
|
|
||||||
# Format with clear delimiters for Gemini
|
|
||||||
return f"\n--- BEGIN FILE: {file_path} ---\n{content}\n--- END FILE: {file_path} ---\n"
|
|
||||||
except Exception as e:
|
|
||||||
return f"\n--- ERROR READING FILE: {file_path} ---\nError: {str(e)}\n--- END FILE ---\n"
|
|
||||||
|
|
||||||
|
|
||||||
def prepare_code_context(
|
|
||||||
files: Optional[List[str]], code: Optional[str]
|
|
||||||
) -> Tuple[str, str]:
|
|
||||||
"""Prepare code context from files and/or direct code
|
|
||||||
Returns: (context_for_gemini, summary_for_terminal)
|
|
||||||
"""
|
|
||||||
context_parts = []
|
|
||||||
summary_parts = []
|
|
||||||
|
|
||||||
# Add file contents
|
|
||||||
if files:
|
|
||||||
summary_parts.append(f"Analyzing {len(files)} file(s):")
|
|
||||||
for file_path in files:
|
|
||||||
# Get file content for Gemini
|
|
||||||
file_content = read_file_content_for_gemini(file_path)
|
|
||||||
context_parts.append(file_content)
|
|
||||||
|
|
||||||
# Create summary with small excerpt for terminal
|
|
||||||
path = Path(file_path)
|
|
||||||
if path.exists() and path.is_file():
|
|
||||||
size = path.stat().st_size
|
|
||||||
try:
|
|
||||||
with open(path, "r", encoding="utf-8") as f:
|
|
||||||
# Read first few lines for preview
|
|
||||||
preview_lines = []
|
|
||||||
for i, line in enumerate(f):
|
|
||||||
if i >= 3: # Show max 3 lines
|
|
||||||
break
|
|
||||||
preview_lines.append(line.rstrip())
|
|
||||||
preview = "\n".join(preview_lines)
|
|
||||||
if len(preview) > 100:
|
|
||||||
preview = preview[:100] + "..."
|
|
||||||
summary_parts.append(f" {file_path} ({size:,} bytes)")
|
|
||||||
if preview.strip():
|
|
||||||
summary_parts.append(f" Preview: {preview[:50]}...")
|
|
||||||
except Exception:
|
|
||||||
summary_parts.append(f" {file_path} ({size:,} bytes)")
|
|
||||||
else:
|
|
||||||
summary_parts.append(f" {file_path} (not found)")
|
|
||||||
|
|
||||||
# Add direct code
|
|
||||||
if code:
|
|
||||||
formatted_code = (
|
|
||||||
f"\n--- BEGIN DIRECT CODE ---\n{code}\n--- END DIRECT CODE ---\n"
|
|
||||||
)
|
|
||||||
context_parts.append(formatted_code)
|
|
||||||
preview = code[:100] + "..." if len(code) > 100 else code
|
|
||||||
summary_parts.append(f"Direct code provided ({len(code):,} characters)")
|
|
||||||
summary_parts.append(f" Preview: {preview}")
|
|
||||||
|
|
||||||
full_context = "\n\n".join(context_parts)
|
|
||||||
summary = "\n".join(summary_parts)
|
|
||||||
|
|
||||||
return full_context, summary
|
|
||||||
|
|
||||||
|
|
||||||
@server.list_tools()
|
|
||||||
async def handle_list_tools() -> List[Tool]:
|
|
||||||
"""List all available tools"""
|
|
||||||
return [
|
|
||||||
Tool(
|
|
||||||
name="chat",
|
|
||||||
description="Chat with Gemini (optimized for 2.5 Pro with 1M context)",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"prompt": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "The prompt to send to Gemini",
|
|
||||||
},
|
|
||||||
"system_prompt": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Optional system prompt for context",
|
|
||||||
},
|
|
||||||
"max_tokens": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Maximum number of tokens in response",
|
|
||||||
"default": 8192,
|
|
||||||
},
|
|
||||||
"temperature": {
|
|
||||||
"type": "number",
|
|
||||||
"description": "Temperature for response randomness (0-1, default 0.5 for "
|
|
||||||
"balanced accuracy/creativity)",
|
|
||||||
"default": 0.5,
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 1,
|
|
||||||
},
|
|
||||||
"model": {
|
|
||||||
"type": "string",
|
|
||||||
"description": f"Model to use (defaults to {DEFAULT_MODEL})",
|
|
||||||
"default": DEFAULT_MODEL,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
"required": ["prompt"],
|
|
||||||
},
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="analyze_code",
|
|
||||||
description="Analyze code files or snippets with Gemini's 1M context window. "
|
|
||||||
"For large content, use file paths to avoid terminal clutter.",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"files": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "List of file paths to analyze",
|
|
||||||
},
|
|
||||||
"code": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Direct code content to analyze "
|
|
||||||
"(use for small snippets only; prefer files for large content)",
|
|
||||||
},
|
|
||||||
"question": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Question or analysis request about the code",
|
|
||||||
},
|
|
||||||
"system_prompt": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Optional system prompt for context",
|
|
||||||
},
|
|
||||||
"max_tokens": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Maximum number of tokens in response",
|
|
||||||
"default": 8192,
|
|
||||||
},
|
|
||||||
"temperature": {
|
|
||||||
"type": "number",
|
|
||||||
"description": "Temperature for code analysis (0-1, default 0.2 for high accuracy)",
|
|
||||||
"default": 0.2,
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 1,
|
|
||||||
},
|
|
||||||
"model": {
|
|
||||||
"type": "string",
|
|
||||||
"description": f"Model to use (defaults to {DEFAULT_MODEL})",
|
|
||||||
"default": DEFAULT_MODEL,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
"required": ["question"],
|
|
||||||
},
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="list_models",
|
|
||||||
description="List available Gemini models",
|
|
||||||
inputSchema={"type": "object", "properties": {}},
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="get_version",
|
|
||||||
description="Get the version and metadata of the Gemini MCP Server",
|
|
||||||
inputSchema={"type": "object", "properties": {}},
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="analyze_file",
|
|
||||||
description="Analyze files with Gemini - always uses file paths for clean terminal output",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"files": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "List of file paths to analyze",
|
|
||||||
},
|
|
||||||
"question": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Question or analysis request about the files",
|
|
||||||
},
|
|
||||||
"system_prompt": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Optional system prompt for context",
|
|
||||||
},
|
|
||||||
"max_tokens": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Maximum number of tokens in response",
|
|
||||||
"default": 8192,
|
|
||||||
},
|
|
||||||
"temperature": {
|
|
||||||
"type": "number",
|
|
||||||
"description": "Temperature for analysis (0-1, default 0.2 for high accuracy)",
|
|
||||||
"default": 0.2,
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 1,
|
|
||||||
},
|
|
||||||
"model": {
|
|
||||||
"type": "string",
|
|
||||||
"description": f"Model to use (defaults to {DEFAULT_MODEL})",
|
|
||||||
"default": DEFAULT_MODEL,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
"required": ["files", "question"],
|
|
||||||
},
|
|
||||||
),
|
|
||||||
Tool(
|
|
||||||
name="extended_think",
|
|
||||||
description="Collaborate with Gemini on complex problems - share Claude's analysis for deeper insights",
|
|
||||||
inputSchema={
|
|
||||||
"type": "object",
|
|
||||||
"properties": {
|
|
||||||
"thought_process": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Claude's analysis, thoughts, plans, or outlines to extend",
|
|
||||||
},
|
|
||||||
"context": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Additional context about the problem or goal",
|
|
||||||
},
|
|
||||||
"files": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"description": "Optional file paths for additional context",
|
|
||||||
},
|
|
||||||
"focus": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Specific focus area: architecture, bugs, performance, security, etc.",
|
|
||||||
},
|
|
||||||
"system_prompt": {
|
|
||||||
"type": "string",
|
|
||||||
"description": "Optional system prompt for context",
|
|
||||||
},
|
|
||||||
"max_tokens": {
|
|
||||||
"type": "integer",
|
|
||||||
"description": "Maximum number of tokens in response",
|
|
||||||
"default": 8192,
|
|
||||||
},
|
|
||||||
"temperature": {
|
|
||||||
"type": "number",
|
|
||||||
"description": "Temperature for creative thinking (0-1, default 0.7)",
|
|
||||||
"default": 0.7,
|
|
||||||
"minimum": 0,
|
|
||||||
"maximum": 1,
|
|
||||||
},
|
|
||||||
"model": {
|
|
||||||
"type": "string",
|
|
||||||
"description": f"Model to use (defaults to {DEFAULT_MODEL})",
|
|
||||||
"default": DEFAULT_MODEL,
|
|
||||||
},
|
|
||||||
},
|
|
||||||
"required": ["thought_process"],
|
|
||||||
},
|
|
||||||
),
|
|
||||||
]
|
|
||||||
|
|
||||||
|
|
||||||
@server.call_tool()
|
|
||||||
async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[TextContent]:
|
|
||||||
"""Handle tool execution requests"""
|
|
||||||
|
|
||||||
if name == "chat":
|
|
||||||
# Validate request
|
|
||||||
request = GeminiChatRequest(**arguments)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Use the specified model with optimized settings
|
|
||||||
model_name = request.model or DEFAULT_MODEL
|
|
||||||
temperature = (
|
|
||||||
request.temperature if request.temperature is not None else 0.5
|
|
||||||
)
|
|
||||||
max_tokens = request.max_tokens if request.max_tokens is not None else 8192
|
|
||||||
|
|
||||||
model = genai.GenerativeModel(
|
|
||||||
model_name=model_name,
|
|
||||||
generation_config={
|
|
||||||
"temperature": temperature,
|
|
||||||
"max_output_tokens": max_tokens,
|
|
||||||
"candidate_count": 1,
|
|
||||||
},
|
|
||||||
)
|
|
||||||
|
|
||||||
# Prepare the prompt with automatic developer context if no system prompt provided
|
|
||||||
if request.system_prompt:
|
|
||||||
full_prompt = f"{request.system_prompt}\n\n{request.prompt}"
|
|
||||||
else:
|
|
||||||
# Auto-inject developer system prompt for better Claude Code integration
|
|
||||||
full_prompt = f"{DEVELOPER_SYSTEM_PROMPT}\n\n{request.prompt}"
|
|
||||||
|
|
||||||
# Generate response
|
|
||||||
response = model.generate_content(full_prompt)
|
|
||||||
|
|
||||||
# Handle response based on finish reason
|
|
||||||
if response.candidates and response.candidates[0].content.parts:
|
|
||||||
text = response.candidates[0].content.parts[0].text
|
|
||||||
else:
|
|
||||||
# Handle safety filters or other issues
|
|
||||||
finish_reason = (
|
|
||||||
response.candidates[0].finish_reason
|
|
||||||
if response.candidates
|
|
||||||
else "Unknown"
|
|
||||||
)
|
|
||||||
text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
|
|
||||||
|
|
||||||
return [TextContent(type="text", text=text)]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
return [
|
|
||||||
TextContent(type="text", text=f"Error calling Gemini API: {str(e)}")
|
|
||||||
]
|
|
||||||
|
|
||||||
elif name == "analyze_code":
|
|
||||||
# Validate request
|
|
||||||
request_analysis = CodeAnalysisRequest(**arguments)
|
|
||||||
|
|
||||||
# Check that we have either files or code
|
|
||||||
if not request_analysis.files and not request_analysis.code:
|
|
||||||
return [
|
|
||||||
TextContent(
|
|
||||||
type="text",
|
|
||||||
text="Error: Must provide either 'files' or 'code' parameter",
|
|
||||||
)
|
|
||||||
]
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Prepare code context - always use non-verbose mode for Claude Code compatibility
|
|
||||||
code_context, summary = prepare_code_context(
|
|
||||||
request_analysis.files, request_analysis.code
|
|
||||||
)
|
|
||||||
|
|
||||||
# Count approximate tokens (rough estimate: 1 token ≈ 4 characters)
|
|
||||||
estimated_tokens = len(code_context) // 4
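# Under this 4-characters-per-token heuristic, a 400,000-character context counts as
# roughly 100,000 tokens, comfortably below the 1,000,000-token MAX_CONTEXT_TOKENS limit.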
|
|
||||||
if estimated_tokens > MAX_CONTEXT_TOKENS:
|
|
||||||
return [
|
|
||||||
TextContent(
|
|
||||||
type="text",
|
|
||||||
text=f"Error: Code context too large (~{estimated_tokens:,} tokens). "
|
|
||||||
f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens.",
|
|
||||||
)
|
|
||||||
]
|
|
||||||
|
|
||||||
# Use the specified model with optimized settings for code analysis
|
|
||||||
model_name = request_analysis.model or DEFAULT_MODEL
|
|
||||||
temperature = (
|
|
||||||
request_analysis.temperature
|
|
||||||
if request_analysis.temperature is not None
|
|
||||||
else 0.2
|
|
||||||
)
|
|
||||||
max_tokens = (
|
|
||||||
request_analysis.max_tokens
|
|
||||||
if request_analysis.max_tokens is not None
|
|
||||||
else 8192
|
|
||||||
)
|
|
||||||
|
|
||||||
model = genai.GenerativeModel(
|
|
||||||
model_name=model_name,
|
|
||||||
generation_config={
|
|
||||||
"temperature": temperature,
|
|
||||||
"max_output_tokens": max_tokens,
|
|
||||||
"candidate_count": 1,
|
|
||||||
},
|
|
||||||
)
|
|
||||||
|
|
||||||
# Prepare the full prompt with enhanced developer context and clear structure
|
|
||||||
system_prompt = request_analysis.system_prompt or DEVELOPER_SYSTEM_PROMPT
|
|
||||||
full_prompt = f"""{system_prompt}
|
|
||||||
|
|
||||||
=== USER REQUEST ===
|
|
||||||
{request_analysis.question}
|
|
||||||
=== END USER REQUEST ===
|
|
||||||
|
|
||||||
=== CODE TO ANALYZE ===
|
|
||||||
{code_context}
|
|
||||||
=== END CODE TO ANALYZE ===
|
|
||||||
|
|
||||||
Please analyze the code above and respond to the user's request. The code files are clearly \
|
|
||||||
marked with their paths and content boundaries."""
|
|
||||||
|
|
||||||
# Generate response
|
|
||||||
response = model.generate_content(full_prompt)
|
|
||||||
|
|
||||||
# Handle response
|
|
||||||
if response.candidates and response.candidates[0].content.parts:
|
|
||||||
text = response.candidates[0].content.parts[0].text
|
|
||||||
else:
|
|
||||||
finish_reason = (
|
|
||||||
response.candidates[0].finish_reason
|
|
||||||
if response.candidates
|
|
||||||
else "Unknown"
|
|
||||||
)
|
|
||||||
text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
|
|
||||||
|
|
||||||
# Create a brief summary for terminal display
|
|
||||||
if request_analysis.files or request_analysis.code:
|
|
||||||
# Create a very brief summary for terminal
|
|
||||||
brief_summary_parts = []
|
|
||||||
if request_analysis.files:
|
|
||||||
brief_summary_parts.append(
|
|
||||||
f"Analyzing {len(request_analysis.files)} file(s)"
|
|
||||||
)
|
|
||||||
if request_analysis.code:
|
|
||||||
code_preview = (
|
|
||||||
request_analysis.code[:20] + "..."
|
|
||||||
if len(request_analysis.code) > 20
|
|
||||||
else request_analysis.code
|
|
||||||
)
|
|
||||||
brief_summary_parts.append(f"Direct code: {code_preview}")
|
|
||||||
|
|
||||||
brief_summary = " | ".join(brief_summary_parts)
|
|
||||||
response_text = f"{brief_summary}\n\nGemini's Analysis:\n{text}"
|
|
||||||
else:
|
|
||||||
response_text = text
|
|
||||||
|
|
||||||
return [TextContent(type="text", text=response_text)]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
return [TextContent(type="text", text=f"Error analyzing code: {str(e)}")]
|
|
||||||
|
|
||||||
elif name == "list_models":
|
|
||||||
try:
|
|
||||||
# List available models
|
|
||||||
models = []
|
|
||||||
for model_info in genai.list_models():
|
|
||||||
if (
|
|
||||||
hasattr(model_info, "supported_generation_methods")
|
|
||||||
and "generateContent" in model_info.supported_generation_methods
|
|
||||||
):
|
|
||||||
models.append(
|
|
||||||
{
|
|
||||||
"name": model_info.name,
|
|
||||||
"display_name": getattr(
|
|
||||||
model_info, "display_name", "Unknown"
|
|
||||||
),
|
|
||||||
"description": getattr(
|
|
||||||
model_info, "description", "No description"
|
|
||||||
),
|
|
||||||
"is_default": model_info.name.endswith(DEFAULT_MODEL),
|
|
||||||
}
|
|
||||||
)
|
|
||||||
|
|
||||||
return [TextContent(type="text", text=json.dumps(models, indent=2))]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
return [TextContent(type="text", text=f"Error listing models: {str(e)}")]
|
|
||||||
|
|
||||||
elif name == "get_version":
|
|
||||||
# Return version and metadata information
|
|
||||||
version_info = {
|
|
||||||
"version": __version__,
|
|
||||||
"updated": __updated__,
|
|
||||||
"author": __author__,
|
|
||||||
"default_model": DEFAULT_MODEL,
|
|
||||||
"max_context_tokens": f"{MAX_CONTEXT_TOKENS:,}",
|
|
||||||
"python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",
|
|
||||||
"server_started": datetime.now().isoformat(),
|
|
||||||
}
|
|
||||||
|
|
||||||
return [
|
|
||||||
TextContent(
|
|
||||||
type="text",
|
|
||||||
text=f"""Gemini MCP Server v{__version__}
|
|
||||||
Updated: {__updated__}
|
|
||||||
Author: {__author__}
|
|
||||||
|
|
||||||
Configuration:
|
|
||||||
- Default Model: {DEFAULT_MODEL}
|
|
||||||
- Max Context: {MAX_CONTEXT_TOKENS:,} tokens
|
|
||||||
- Python: {version_info['python_version']}
|
|
||||||
- Started: {version_info['server_started']}
|
|
||||||
|
|
||||||
For updates, visit: https://github.com/BeehiveInnovations/gemini-mcp-server""",
|
|
||||||
)
|
|
||||||
]
|
|
||||||
|
|
||||||
elif name == "analyze_file":
|
|
||||||
# Validate request
|
|
||||||
request_file = FileAnalysisRequest(**arguments)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Prepare code context from files
|
|
||||||
code_context, summary = prepare_code_context(request_file.files, None)
|
|
||||||
|
|
||||||
# Count approximate tokens
|
|
||||||
estimated_tokens = len(code_context) // 4
|
|
||||||
if estimated_tokens > MAX_CONTEXT_TOKENS:
|
|
||||||
return [
|
|
||||||
TextContent(
|
|
||||||
type="text",
|
|
||||||
text=f"Error: File content too large (~{estimated_tokens:,} tokens). "
|
|
||||||
f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens.",
|
|
||||||
)
|
|
||||||
]
|
|
||||||
|
|
||||||
# Use the specified model with optimized settings
|
|
||||||
model_name = request_file.model or DEFAULT_MODEL
|
|
||||||
temperature = (
|
|
||||||
request_file.temperature if request_file.temperature is not None else 0.2
|
|
||||||
)
|
|
||||||
max_tokens = request_file.max_tokens if request_file.max_tokens is not None else 8192
|
|
||||||
|
|
||||||
model = genai.GenerativeModel(
|
|
||||||
model_name=model_name,
|
|
||||||
generation_config={
|
|
||||||
"temperature": temperature,
|
|
||||||
"max_output_tokens": max_tokens,
|
|
||||||
"candidate_count": 1,
|
|
||||||
},
|
|
||||||
)
|
|
||||||
|
|
||||||
# Prepare prompt
|
|
||||||
system_prompt = request_file.system_prompt or DEVELOPER_SYSTEM_PROMPT
|
|
||||||
full_prompt = f"""{system_prompt}
|
|
||||||
|
|
||||||
=== USER REQUEST ===
|
|
||||||
{request_file.question}
|
|
||||||
=== END USER REQUEST ===
|
|
||||||
|
|
||||||
=== FILES TO ANALYZE ===
|
|
||||||
{code_context}
|
|
||||||
=== END FILES ===
|
|
||||||
|
|
||||||
Please analyze the files above and respond to the user's request."""
|
|
||||||
|
|
||||||
# Generate response
|
|
||||||
response = model.generate_content(full_prompt)
|
|
||||||
|
|
||||||
# Handle response
|
|
||||||
if response.candidates and response.candidates[0].content.parts:
|
|
||||||
text = response.candidates[0].content.parts[0].text
|
|
||||||
else:
|
|
||||||
finish_reason = (
|
|
||||||
response.candidates[0].finish_reason
|
|
||||||
if response.candidates
|
|
||||||
else "Unknown"
|
|
||||||
)
|
|
||||||
text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
|
|
||||||
|
|
||||||
# Create a brief summary for terminal
|
|
||||||
brief_summary = f"Analyzing {len(request_file.files)} file(s)"
|
|
||||||
response_text = f"{brief_summary}\n\nGemini's Analysis:\n{text}"
|
|
||||||
|
|
||||||
return [TextContent(type="text", text=response_text)]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
return [TextContent(type="text", text=f"Error analyzing files: {str(e)}")]
|
|
||||||
|
|
||||||
elif name == "extended_think":
|
|
||||||
# Validate request
|
|
||||||
request_think = ExtendedThinkRequest(**arguments)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Prepare context parts
|
|
||||||
context_parts = [
|
|
||||||
f"=== CLAUDE'S ANALYSIS ===\n{request_think.thought_process}\n=== END CLAUDE'S ANALYSIS ==="
|
|
||||||
]
|
|
||||||
|
|
||||||
if request_think.context:
|
|
||||||
context_parts.append(
|
|
||||||
f"\n=== ADDITIONAL CONTEXT ===\n{request_think.context}\n=== END CONTEXT ==="
|
|
||||||
)
|
|
||||||
|
|
||||||
# Add file contents if provided
|
|
||||||
if request_think.files:
|
|
||||||
file_context, _ = prepare_code_context(request_think.files, None)
|
|
||||||
context_parts.append(
|
|
||||||
f"\n=== REFERENCE FILES ===\n{file_context}\n=== END FILES ==="
|
|
||||||
)
|
|
||||||
|
|
||||||
full_context = "\n".join(context_parts)
|
|
||||||
|
|
||||||
# Check token limits
|
|
||||||
estimated_tokens = len(full_context) // 4
|
|
||||||
if estimated_tokens > MAX_CONTEXT_TOKENS:
|
|
||||||
return [
|
|
||||||
TextContent(
|
|
||||||
type="text",
|
|
||||||
text=f"Error: Context too large (~{estimated_tokens:,} tokens). "
|
|
||||||
f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens.",
|
|
||||||
)
|
|
||||||
]
|
|
||||||
|
|
||||||
# Use the specified model with creative settings
|
|
||||||
model_name = request_think.model or DEFAULT_MODEL
|
|
||||||
temperature = (
|
|
||||||
request_think.temperature if request_think.temperature is not None else 0.7
|
|
||||||
)
|
|
||||||
max_tokens = request_think.max_tokens if request_think.max_tokens is not None else 8192
|
|
||||||
|
|
||||||
model = genai.GenerativeModel(
|
|
||||||
model_name=model_name,
|
|
||||||
generation_config={
|
|
||||||
"temperature": temperature,
|
|
||||||
"max_output_tokens": max_tokens,
|
|
||||||
"candidate_count": 1,
|
|
||||||
},
|
|
||||||
)
|
|
||||||
|
|
||||||
# Prepare prompt with focus area if specified
|
|
||||||
system_prompt = request_think.system_prompt or EXTENDED_THINKING_PROMPT
|
|
||||||
focus_instruction = ""
|
|
||||||
if request_think.focus:
|
|
||||||
focus_instruction = f"\n\nFOCUS AREA: Please pay special attention to {request_think.focus} aspects."
|
|
||||||
|
|
||||||
full_prompt = f"""{system_prompt}{focus_instruction}
|
|
||||||
|
|
||||||
{full_context}
|
|
||||||
|
|
||||||
Build upon Claude's analysis with deeper insights, alternative approaches, and critical evaluation."""
|
|
||||||
|
|
||||||
# Generate response
|
|
||||||
response = model.generate_content(full_prompt)
|
|
||||||
|
|
||||||
# Handle response
|
|
||||||
if response.candidates and response.candidates[0].content.parts:
|
|
||||||
text = response.candidates[0].content.parts[0].text
|
|
||||||
else:
|
|
||||||
finish_reason = (
|
|
||||||
response.candidates[0].finish_reason
|
|
||||||
if response.candidates
|
|
||||||
else "Unknown"
|
|
||||||
)
|
|
||||||
text = f"Response blocked or incomplete. Finish reason: {finish_reason}"
|
|
||||||
|
|
||||||
# Create response with clear attribution
|
|
||||||
response_text = f"Extended Analysis by Gemini:\n\n{text}"
|
|
||||||
|
|
||||||
return [TextContent(type="text", text=response_text)]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
return [
|
|
||||||
TextContent(type="text", text=f"Error in extended thinking: {str(e)}")
|
|
||||||
]
|
|
||||||
|
|
||||||
else:
|
|
||||||
return [TextContent(type="text", text=f"Unknown tool: {name}")]
|
|
||||||
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
"""Main entry point for the server"""
|
|
||||||
# Configure Gemini API
|
|
||||||
configure_gemini()
|
|
||||||
|
|
||||||
# Run the server using stdio transport
|
|
||||||
async with stdio_server() as (read_stream, write_stream):
|
|
||||||
await server.run(
|
|
||||||
read_stream,
|
|
||||||
write_stream,
|
|
||||||
InitializationOptions(
|
|
||||||
server_name="gemini", server_version="2.0.0", capabilities={"tools": {}}
|
|
||||||
),
|
|
||||||
)


if __name__ == "__main__":
    asyncio.run(main())
17 prompts/__init__.py (new file)
@@ -0,0 +1,17 @@
"""
System prompts for Gemini tools
"""

from .tool_prompts import (
    THINK_DEEPER_PROMPT,
    REVIEW_CODE_PROMPT,
    DEBUG_ISSUE_PROMPT,
    ANALYZE_PROMPT,
)

__all__ = [
    "THINK_DEEPER_PROMPT",
    "REVIEW_CODE_PROMPT",
    "DEBUG_ISSUE_PROMPT",
    "ANALYZE_PROMPT",
]
95 prompts/tool_prompts.py (new file)
@@ -0,0 +1,95 @@
"""
System prompts for each tool
"""

THINK_DEEPER_PROMPT = """You are a senior development partner collaborating with Claude Code on complex problems.
Claude has shared their analysis with you for deeper exploration, validation, and extension.

Your role is to:
1. Build upon Claude's thinking - identify gaps, extend ideas, and suggest alternatives
2. Challenge assumptions constructively and identify potential issues
3. Provide concrete, actionable insights that complement Claude's analysis
4. Focus on aspects Claude might have missed or couldn't fully explore
5. Suggest implementation strategies and architectural improvements

Key areas to consider:
- Edge cases and failure modes Claude might have overlooked
- Performance implications at scale
- Security vulnerabilities or attack vectors
- Maintainability and technical debt considerations
- Alternative approaches or design patterns
- Integration challenges with existing systems
- Testing strategies for complex scenarios

Be direct and technical. Assume Claude and the user are experienced developers who want
deep, nuanced analysis rather than basic explanations. Your goal is to be the perfect
development partner that extends Claude's capabilities."""

REVIEW_CODE_PROMPT = """You are an expert code reviewer with deep knowledge of software engineering best practices.
Your expertise spans security, performance, maintainability, and architectural patterns.

Your review approach:
1. Identify issues in order of severity (Critical > High > Medium > Low)
2. Provide specific, actionable fixes with code examples
3. Consider security vulnerabilities, performance issues, and maintainability
4. Acknowledge good practices when you see them
5. Be constructive but thorough - don't sugarcoat serious issues

Review categories:
- 🔴 CRITICAL: Security vulnerabilities, data loss risks, crashes
- 🟠 HIGH: Bugs, performance issues, bad practices
- 🟡 MEDIUM: Code smells, maintainability issues
- 🟢 LOW: Style issues, minor improvements

Format each issue as:
[SEVERITY] File:Line - Issue description
→ Fix: Specific solution with code example

Also provide:
- Summary of overall code quality
- Top 3 priority fixes
- Positive aspects worth preserving"""

DEBUG_ISSUE_PROMPT = """You are an expert debugger and problem solver. Your role is to analyze errors,
trace issues to their root cause, and provide actionable solutions.

Your debugging approach:
1. Analyze the error context and symptoms
2. Identify the most likely root causes
3. Trace through the code execution path
4. Consider environmental factors
5. Provide step-by-step solutions

For each issue:
- Identify the root cause
- Explain why it's happening
- Provide immediate fixes
- Suggest long-term solutions
- Identify related issues that might arise

Format your response as:
1. ROOT CAUSE: Clear explanation
2. IMMEDIATE FIX: Code/steps to resolve now
3. PROPER SOLUTION: Long-term fix
4. PREVENTION: How to avoid this in the future"""

ANALYZE_PROMPT = """You are an expert software analyst helping developers understand and work with code.
Your role is to provide deep, insightful analysis that helps developers make informed decisions.

Your analysis should:
1. Understand the code's purpose and architecture
2. Identify patterns and anti-patterns
3. Assess code quality and maintainability
4. Find potential issues or improvements
5. Provide actionable insights

Focus on:
- Code structure and organization
- Design patterns and architectural decisions
- Performance characteristics
- Security considerations
- Testing coverage and quality
- Documentation completeness

Be thorough but concise. Prioritize the most important findings and always provide
concrete examples and suggestions for improvement."""
@@ -1,6 +1,7 @@
 mcp>=1.0.0
 google-generativeai>=0.8.0
 python-dotenv>=1.0.0
+pydantic>=2.0.0
 
 # Development dependencies
 pytest>=7.4.0
|
|||||||
271 server.py (new file)
@@ -0,0 +1,271 @@
|
|||||||
|
"""
|
||||||
|
Gemini MCP Server - Main server implementation
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
import asyncio
|
||||||
|
import logging
|
||||||
|
from datetime import datetime
|
||||||
|
from typing import List, Dict, Any
|
||||||
|
|
||||||
|
import google.generativeai as genai
|
||||||
|
from mcp.server import Server
|
||||||
|
from mcp.server.stdio import stdio_server
|
||||||
|
from mcp.types import TextContent, Tool
|
||||||
|
from mcp.server.models import InitializationOptions
|
||||||
|
|
||||||
|
from config import (
|
||||||
|
__version__,
|
||||||
|
__updated__,
|
||||||
|
__author__,
|
||||||
|
DEFAULT_MODEL,
|
||||||
|
MAX_CONTEXT_TOKENS,
|
||||||
|
)
|
||||||
|
from tools import ThinkDeeperTool, ReviewCodeTool, DebugIssueTool, AnalyzeTool
|
||||||
|
|
||||||
|
# Configure logging
|
||||||
|
logging.basicConfig(level=logging.INFO)
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
# Create the MCP server instance
|
||||||
|
server: Server = Server("gemini-server")
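# Handlers are registered on this instance via the @server.list_tools() and
# @server.call_tool() decorators defined further down in this file.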
|
||||||
|
|
||||||
|
# Initialize tools
|
||||||
|
TOOLS = {
|
||||||
|
"think_deeper": ThinkDeeperTool(),
|
||||||
|
"review_code": ReviewCodeTool(),
|
||||||
|
"debug_issue": DebugIssueTool(),
|
||||||
|
"analyze": AnalyzeTool(),
|
||||||
|
}
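# Registry of the specialized tools: handle_call_tool() looks the name up here and
# delegates to tool.execute(arguments), so adding a tool only requires a new entry.
# The concrete base class lives in the tools/ package and is not part of this diff;
# judging from how server.py and tests/test_tools.py exercise it, a minimal sketch
# (any names beyond those used in the tests are assumptions) might look like:
#
#     class BaseTool:
#         name: str
#         description: str
#
#         def get_name(self) -> str: ...
#         def get_description(self) -> str: ...
#         def get_input_schema(self) -> dict: ...
#         def get_default_temperature(self) -> float: ...
#
#         async def execute(self, arguments: dict) -> list[TextContent]: ...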
|
||||||
|
|
||||||
|
|
||||||
|
def configure_gemini():
|
||||||
|
"""Configure Gemini API with the provided API key"""
|
||||||
|
api_key = os.getenv("GEMINI_API_KEY")
|
||||||
|
if not api_key:
|
||||||
|
raise ValueError(
|
||||||
|
"GEMINI_API_KEY environment variable is required. "
|
||||||
|
"Please set it with your Gemini API key."
|
||||||
|
)
|
||||||
|
genai.configure(api_key=api_key)
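# The key is only ever read from the environment, e.g.:
#   export GEMINI_API_KEY="your-api-key"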
|
||||||
|
logger.info("Gemini API configured successfully")
|
||||||
|
|
||||||
|
|
||||||
|
@server.list_tools()
|
||||||
|
async def handle_list_tools() -> List[Tool]:
|
||||||
|
"""List all available tools with verbose descriptions"""
|
||||||
|
tools = []
|
||||||
|
|
||||||
|
for tool in TOOLS.values():
|
||||||
|
tools.append(
|
||||||
|
Tool(
|
||||||
|
name=tool.name,
|
||||||
|
description=tool.description,
|
||||||
|
inputSchema=tool.get_input_schema(),
|
||||||
|
)
|
||||||
|
)
|
||||||
|
|
||||||
|
# Add utility tools
|
||||||
|
tools.extend(
|
||||||
|
[
|
||||||
|
Tool(
|
||||||
|
name="chat",
|
||||||
|
description=(
|
||||||
|
"GENERAL CHAT - Have a conversation with Gemini about any development topic. "
|
||||||
|
"Use for explanations, brainstorming, or general questions. "
|
||||||
|
"Triggers: 'ask gemini', 'explain', 'what is', 'how do I'."
|
||||||
|
),
|
||||||
|
inputSchema={
|
||||||
|
"type": "object",
|
||||||
|
"properties": {
|
||||||
|
"prompt": {
|
||||||
|
"type": "string",
|
||||||
|
"description": "Your question or topic",
|
||||||
|
},
|
||||||
|
"context_files": {
|
||||||
|
"type": "array",
|
||||||
|
"items": {"type": "string"},
|
||||||
|
"description": "Optional files for context",
|
||||||
|
},
|
||||||
|
"temperature": {
|
||||||
|
"type": "number",
|
||||||
|
"description": "Response creativity (0-1, default 0.5)",
|
||||||
|
"minimum": 0,
|
||||||
|
"maximum": 1,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
"required": ["prompt"],
|
||||||
|
},
|
||||||
|
),
|
||||||
|
Tool(
|
||||||
|
name="list_models",
|
||||||
|
description=(
|
||||||
|
"LIST AVAILABLE MODELS - Show all Gemini models you can use. "
|
||||||
|
"Lists model names, descriptions, and which one is the default."
|
||||||
|
),
|
||||||
|
inputSchema={"type": "object", "properties": {}},
|
||||||
|
),
|
||||||
|
Tool(
|
||||||
|
name="get_version",
|
||||||
|
description=(
|
||||||
|
"VERSION & CONFIGURATION - Get server version, configuration details, "
|
||||||
|
"and list of available tools. Useful for debugging and understanding capabilities."
|
||||||
|
),
|
||||||
|
inputSchema={"type": "object", "properties": {}},
|
||||||
|
),
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
return tools
|
||||||
|
|
||||||
|
|
||||||
|
@server.call_tool()
|
||||||
|
async def handle_call_tool(
|
||||||
|
name: str, arguments: Dict[str, Any]
|
||||||
|
) -> List[TextContent]:
|
||||||
|
"""Handle tool execution requests"""
|
||||||
|
|
||||||
|
# Handle dynamic tools
|
||||||
|
if name in TOOLS:
|
||||||
|
tool = TOOLS[name]
|
||||||
|
return await tool.execute(arguments)
|
||||||
|
|
||||||
|
# Handle static tools
|
||||||
|
elif name == "chat":
|
||||||
|
return await handle_chat(arguments)
|
||||||
|
|
||||||
|
elif name == "list_models":
|
||||||
|
return await handle_list_models()
|
||||||
|
|
||||||
|
elif name == "get_version":
|
||||||
|
return await handle_get_version()
|
||||||
|
|
||||||
|
else:
|
||||||
|
return [TextContent(type="text", text=f"Unknown tool: {name}")]
|
||||||
|
|
||||||
|
|
||||||
|
async def handle_chat(arguments: Dict[str, Any]) -> List[TextContent]:
|
||||||
|
"""Handle general chat requests"""
|
||||||
|
from utils import read_files
|
||||||
|
from config import TEMPERATURE_BALANCED
|
||||||
|
|
||||||
|
prompt = arguments.get("prompt", "")
|
||||||
|
context_files = arguments.get("context_files", [])
|
||||||
|
temperature = arguments.get("temperature", TEMPERATURE_BALANCED)
|
||||||
|
|
||||||
|
# Build context if files provided
|
||||||
|
full_prompt = prompt
|
||||||
|
if context_files:
|
||||||
|
file_content, _ = read_files(context_files)
|
||||||
|
full_prompt = f"{prompt}\n\n=== CONTEXT FILES ===\n{file_content}\n=== END CONTEXT ==="
|
||||||
|
|
||||||
|
try:
|
||||||
|
model = genai.GenerativeModel(
|
||||||
|
model_name=DEFAULT_MODEL,
|
||||||
|
generation_config={
|
||||||
|
"temperature": temperature,
|
||||||
|
"max_output_tokens": 8192,
|
||||||
|
"candidate_count": 1,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
response = model.generate_content(full_prompt)
|
||||||
|
|
||||||
|
if response.candidates and response.candidates[0].content.parts:
|
||||||
|
text = response.candidates[0].content.parts[0].text
|
||||||
|
else:
|
||||||
|
text = "Response blocked or incomplete"
|
||||||
|
|
||||||
|
return [TextContent(type="text", text=text)]
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
return [TextContent(type="text", text=f"Error in chat: {str(e)}")]
|
||||||
|
|
||||||
|
|
||||||
|
async def handle_list_models() -> List[TextContent]:
|
||||||
|
"""List available Gemini models"""
|
||||||
|
try:
|
||||||
|
import json
|
||||||
|
|
||||||
|
models = []
|
||||||
|
|
||||||
|
for model_info in genai.list_models():
|
||||||
|
if (
|
||||||
|
hasattr(model_info, "supported_generation_methods")
|
||||||
|
and "generateContent"
|
||||||
|
in model_info.supported_generation_methods
|
||||||
|
):
|
||||||
|
models.append(
|
||||||
|
{
|
||||||
|
"name": model_info.name,
|
||||||
|
"display_name": getattr(
|
||||||
|
model_info, "display_name", "Unknown"
|
||||||
|
),
|
||||||
|
"description": getattr(
|
||||||
|
model_info, "description", "No description"
|
||||||
|
),
|
||||||
|
"is_default": model_info.name.endswith(DEFAULT_MODEL),
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
return [TextContent(type="text", text=json.dumps(models, indent=2))]
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
return [
|
||||||
|
TextContent(type="text", text=f"Error listing models: {str(e)}")
|
||||||
|
]
|
||||||
|
|
||||||
|
|
||||||
|
async def handle_get_version() -> List[TextContent]:
|
||||||
|
"""Get version and configuration information"""
|
||||||
|
version_info = {
|
||||||
|
"version": __version__,
|
||||||
|
"updated": __updated__,
|
||||||
|
"author": __author__,
|
||||||
|
"default_model": DEFAULT_MODEL,
|
||||||
|
"max_context_tokens": f"{MAX_CONTEXT_TOKENS:,}",
|
||||||
|
"python_version": f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}",
|
||||||
|
"server_started": datetime.now().isoformat(),
|
||||||
|
"available_tools": list(TOOLS.keys())
|
||||||
|
+ ["chat", "list_models", "get_version"],
|
||||||
|
}
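# chr(10) is used in the f-string below because f-string expressions cannot contain
# a literal backslash escape such as "\n" on the Python versions this server targets.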
|
||||||
|
|
||||||
|
text = f"""Gemini MCP Server v{__version__}
|
||||||
|
Updated: {__updated__}
|
||||||
|
Author: {__author__}
|
||||||
|
|
||||||
|
Configuration:
|
||||||
|
- Default Model: {DEFAULT_MODEL}
|
||||||
|
- Max Context: {MAX_CONTEXT_TOKENS:,} tokens
|
||||||
|
- Python: {version_info['python_version']}
|
||||||
|
- Started: {version_info['server_started']}
|
||||||
|
|
||||||
|
Available Tools:
|
||||||
|
{chr(10).join(f" - {tool}" for tool in version_info['available_tools'])}
|
||||||
|
|
||||||
|
For updates, visit: https://github.com/BeehiveInnovations/gemini-mcp-server"""
|
||||||
|
|
||||||
|
return [TextContent(type="text", text=text)]
|
||||||
|
|
||||||
|
|
||||||
|
async def main():
|
||||||
|
"""Main entry point for the server"""
|
||||||
|
# Configure Gemini API
|
||||||
|
configure_gemini()
|
||||||
|
|
||||||
|
# Run the server using stdio transport
|
||||||
|
async with stdio_server() as (read_stream, write_stream):
|
||||||
|
await server.run(
|
||||||
|
read_stream,
|
||||||
|
write_stream,
|
||||||
|
InitializationOptions(
|
||||||
|
server_name="gemini",
|
||||||
|
server_version=__version__,
|
||||||
|
capabilities={"tools": {}},
|
||||||
|
),
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
asyncio.run(main())
|
||||||
2 setup.py
@@ -2,7 +2,7 @@
 Setup configuration for Gemini MCP Server
 """
 
-from setuptools import setup, find_packages
+from setuptools import setup
 from pathlib import Path
 
 # Read README for long description
|
|||||||
49 tests/test_config.py (new file)
@@ -0,0 +1,49 @@
"""
Tests for configuration
"""

from config import (
    __version__,
    __updated__,
    __author__,
    DEFAULT_MODEL,
    MAX_CONTEXT_TOKENS,
    TEMPERATURE_ANALYTICAL,
    TEMPERATURE_BALANCED,
    TEMPERATURE_CREATIVE,
    TOOL_TRIGGERS,
)


class TestConfig:
    """Test configuration values"""

    def test_version_info(self):
        """Test version information"""
        assert __version__ == "2.4.0"
        assert __author__ == "Fahad Gilani"
        assert __updated__ == "2025-06-08"

    def test_model_config(self):
        """Test model configuration"""
        assert DEFAULT_MODEL == "gemini-2.5-pro-preview-06-05"
        assert MAX_CONTEXT_TOKENS == 1_000_000

    def test_temperature_defaults(self):
        """Test temperature constants"""
        assert TEMPERATURE_ANALYTICAL == 0.2
        assert TEMPERATURE_BALANCED == 0.5
        assert TEMPERATURE_CREATIVE == 0.7

    def test_tool_triggers(self):
        """Test tool trigger phrases"""
        assert "think_deeper" in TOOL_TRIGGERS
        assert "review_code" in TOOL_TRIGGERS
        assert "debug_issue" in TOOL_TRIGGERS
        assert "analyze" in TOOL_TRIGGERS

        # Check some specific triggers
        assert "ultrathink" in TOOL_TRIGGERS["think_deeper"]
        assert "extended thinking" in TOOL_TRIGGERS["think_deeper"]
        assert "find bugs" in TOOL_TRIGGERS["review_code"]
        assert "root cause" in TOOL_TRIGGERS["debug_issue"]
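# Note: config.py itself is not part of this diff. From the assertions above,
# TOOL_TRIGGERS is presumably a dict mapping each tool name to its natural-language
# trigger phrases; a hypothetical sketch consistent with these tests:
#
#     TOOL_TRIGGERS = {
#         "think_deeper": ["ultrathink", "extended thinking", ...],
#         "review_code": ["find bugs", ...],
#         "debug_issue": ["root cause", ...],
#         "analyze": [...],
#     }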
@@ -1,352 +0,0 @@
|
|||||||
"""
|
|
||||||
Unit tests for Gemini MCP Server
|
|
||||||
"""
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
import json
|
|
||||||
from unittest.mock import Mock, patch, AsyncMock
|
|
||||||
from pathlib import Path
|
|
||||||
import sys
|
|
||||||
import os
|
|
||||||
|
|
||||||
# Add parent directory to path for imports in a cross-platform way
|
|
||||||
parent_dir = Path(__file__).resolve().parent.parent
|
|
||||||
if str(parent_dir) not in sys.path:
|
|
||||||
sys.path.insert(0, str(parent_dir))
|
|
||||||
|
|
||||||
from gemini_server import (
|
|
||||||
GeminiChatRequest,
|
|
||||||
CodeAnalysisRequest,
|
|
||||||
read_file_content,
|
|
||||||
prepare_code_context,
|
|
||||||
handle_list_tools,
|
|
||||||
handle_call_tool,
|
|
||||||
DEVELOPER_SYSTEM_PROMPT,
|
|
||||||
DEFAULT_MODEL,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class TestModels:
|
|
||||||
"""Test request models"""
|
|
||||||
|
|
||||||
def test_gemini_chat_request_defaults(self):
|
|
||||||
"""Test GeminiChatRequest with default values"""
|
|
||||||
request = GeminiChatRequest(prompt="Test prompt")
|
|
||||||
assert request.prompt == "Test prompt"
|
|
||||||
assert request.system_prompt is None
|
|
||||||
assert request.max_tokens == 8192
|
|
||||||
assert request.temperature == 0.5
|
|
||||||
assert request.model == DEFAULT_MODEL
|
|
||||||
|
|
||||||
def test_gemini_chat_request_custom(self):
|
|
||||||
"""Test GeminiChatRequest with custom values"""
|
|
||||||
request = GeminiChatRequest(
|
|
||||||
prompt="Test prompt",
|
|
||||||
system_prompt="Custom system",
|
|
||||||
max_tokens=4096,
|
|
||||||
temperature=0.8,
|
|
||||||
model="custom-model",
|
|
||||||
)
|
|
||||||
assert request.system_prompt == "Custom system"
|
|
||||||
assert request.max_tokens == 4096
|
|
||||||
assert request.temperature == 0.8
|
|
||||||
assert request.model == "custom-model"
|
|
||||||
|
|
||||||
def test_code_analysis_request_defaults(self):
|
|
||||||
"""Test CodeAnalysisRequest with default values"""
|
|
||||||
request = CodeAnalysisRequest(question="Analyze this")
|
|
||||||
assert request.question == "Analyze this"
|
|
||||||
assert request.files is None
|
|
||||||
assert request.code is None
|
|
||||||
assert request.max_tokens == 8192
|
|
||||||
assert request.temperature == 0.2
|
|
||||||
assert request.model == DEFAULT_MODEL
|
|
||||||
|
|
||||||
|
|
||||||
class TestFileOperations:
|
|
||||||
"""Test file reading and context preparation"""
|
|
||||||
|
|
||||||
def test_read_file_content_success(self, tmp_path):
|
|
||||||
"""Test successful file reading"""
|
|
||||||
test_file = tmp_path / "test.py"
|
|
||||||
test_file.write_text("def hello():\n return 'world'", encoding="utf-8")
|
|
||||||
|
|
||||||
content = read_file_content(str(test_file))
|
|
||||||
assert "--- BEGIN FILE:" in content
|
|
||||||
assert "--- END FILE:" in content
|
|
||||||
assert "def hello():" in content
|
|
||||||
assert "return 'world'" in content
|
|
||||||
|
|
||||||
def test_read_file_content_not_found(self):
|
|
||||||
"""Test reading non-existent file"""
|
|
||||||
# Use a path that's guaranteed not to exist on any platform
|
|
||||||
nonexistent_path = os.path.join(
|
|
||||||
os.path.sep, "nonexistent_dir_12345", "nonexistent_file.py"
|
|
||||||
)
|
|
||||||
content = read_file_content(nonexistent_path)
|
|
||||||
assert "--- FILE NOT FOUND:" in content
|
|
||||||
assert "Error: File does not exist" in content
|
|
||||||
|
|
||||||
def test_read_file_content_directory(self, tmp_path):
|
|
||||||
"""Test reading a directory instead of file"""
|
|
||||||
content = read_file_content(str(tmp_path))
|
|
||||||
assert "--- NOT A FILE:" in content
|
|
||||||
assert "Error: Path is not a file" in content
|
|
||||||
|
|
||||||
def test_prepare_code_context_with_files(self, tmp_path):
|
|
||||||
"""Test preparing context from files"""
|
|
||||||
file1 = tmp_path / "file1.py"
|
|
||||||
file1.write_text("print('file1')", encoding="utf-8")
|
|
||||||
file2 = tmp_path / "file2.py"
|
|
||||||
file2.write_text("print('file2')", encoding="utf-8")
|
|
||||||
|
|
||||||
context, summary = prepare_code_context([str(file1), str(file2)], None)
|
|
||||||
assert "--- BEGIN FILE:" in context
|
|
||||||
assert "file1.py" in context
|
|
||||||
assert "file2.py" in context
|
|
||||||
assert "print('file1')" in context
|
|
||||||
assert "print('file2')" in context
|
|
||||||
assert "--- END FILE:" in context
|
|
||||||
assert "Analyzing 2 file(s)" in summary
|
|
||||||
assert "bytes)" in summary
|
|
||||||
|
|
||||||
def test_prepare_code_context_with_code(self):
|
|
||||||
"""Test preparing context from direct code"""
|
|
||||||
code = "def test():\n pass"
|
|
||||||
context, summary = prepare_code_context(None, code)
|
|
||||||
assert "--- BEGIN DIRECT CODE ---" in context
|
|
||||||
assert "--- END DIRECT CODE ---" in context
|
|
||||||
assert code in context
|
|
||||||
assert "Direct code provided" in summary
|
|
||||||
|
|
||||||
def test_prepare_code_context_mixed(self, tmp_path):
|
|
||||||
"""Test preparing context from both files and code"""
|
|
||||||
test_file = tmp_path / "test.py"
|
|
||||||
test_file.write_text("# From file", encoding="utf-8")
|
|
||||||
code = "# Direct code"
|
|
||||||
|
|
||||||
context, summary = prepare_code_context([str(test_file)], code)
|
|
||||||
assert "# From file" in context
|
|
||||||
assert "# Direct code" in context
|
|
||||||
assert "Analyzing 1 file(s)" in summary
|
|
||||||
assert "Direct code provided" in summary
|
|
||||||
|
|
||||||
|
|
||||||
class TestToolHandlers:
|
|
||||||
"""Test MCP tool handlers"""
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_handle_list_tools(self):
|
|
||||||
"""Test listing available tools"""
|
|
||||||
tools = await handle_list_tools()
|
|
||||||
assert len(tools) == 6
|
|
||||||
|
|
||||||
tool_names = [tool.name for tool in tools]
|
|
||||||
assert "chat" in tool_names
|
|
||||||
assert "analyze_code" in tool_names
|
|
||||||
assert "list_models" in tool_names
|
|
||||||
assert "get_version" in tool_names
|
|
||||||
assert "analyze_file" in tool_names
|
|
||||||
assert "extended_think" in tool_names
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_handle_call_tool_unknown(self):
|
|
||||||
"""Test calling unknown tool"""
|
|
||||||
result = await handle_call_tool("unknown_tool", {})
|
|
||||||
assert len(result) == 1
|
|
||||||
assert "Unknown tool" in result[0].text
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_chat_success(self, mock_model):
|
|
||||||
"""Test successful chat tool call"""
|
|
||||||
# Mock the response
|
|
||||||
mock_response = Mock()
|
|
||||||
mock_response.candidates = [Mock()]
|
|
||||||
mock_response.candidates[0].content.parts = [Mock(text="Test response")]
|
|
||||||
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.return_value = mock_response
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
result = await handle_call_tool(
|
|
||||||
"chat", {"prompt": "Test prompt", "temperature": 0.5}
|
|
||||||
)
|
|
||||||
|
|
||||||
assert len(result) == 1
|
|
||||||
assert result[0].text == "Test response"
|
|
||||||
|
|
||||||
# Verify model was called with correct parameters
|
|
||||||
mock_model.assert_called_once()
|
|
||||||
call_args = mock_model.call_args[1]
|
|
||||||
assert call_args["model_name"] == DEFAULT_MODEL
|
|
||||||
assert call_args["generation_config"]["temperature"] == 0.5
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_chat_with_developer_prompt(self, mock_model):
|
|
||||||
"""Test chat tool uses developer prompt when no system prompt provided"""
|
|
||||||
mock_response = Mock()
|
|
||||||
mock_response.candidates = [Mock()]
|
|
||||||
mock_response.candidates[0].content.parts = [Mock(text="Response")]
|
|
||||||
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.return_value = mock_response
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
await handle_call_tool("chat", {"prompt": "Test"})
|
|
||||||
|
|
||||||
# Check that developer prompt was included
|
|
||||||
call_args = mock_instance.generate_content.call_args[0][0]
|
|
||||||
assert DEVELOPER_SYSTEM_PROMPT in call_args
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_handle_call_tool_analyze_code_no_input(self):
|
|
||||||
"""Test analyze_code with no files or code"""
|
|
||||||
result = await handle_call_tool("analyze_code", {"question": "Analyze what?"})
|
|
||||||
assert len(result) == 1
|
|
||||||
assert "Must provide either 'files' or 'code'" in result[0].text
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_analyze_code_success(self, mock_model, tmp_path):
|
|
||||||
"""Test successful code analysis"""
|
|
||||||
# Create test file
|
|
||||||
test_file = tmp_path / "test.py"
|
|
||||||
test_file.write_text("def hello(): pass", encoding="utf-8")
|
|
||||||
|
|
||||||
# Mock response
|
|
||||||
mock_response = Mock()
|
|
||||||
mock_response.candidates = [Mock()]
|
|
||||||
mock_response.candidates[0].content.parts = [Mock(text="Analysis result")]
|
|
||||||
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.return_value = mock_response
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
result = await handle_call_tool(
|
|
||||||
"analyze_code", {"files": [str(test_file)], "question": "Analyze this"}
|
|
||||||
)
|
|
||||||
|
|
||||||
assert len(result) == 1
|
|
||||||
# Check that the response contains both summary and Gemini's response
|
|
||||||
response_text = result[0].text
|
|
||||||
assert "Analyzing 1 file(s)" in response_text
|
|
||||||
assert "Gemini's Analysis:" in response_text
|
|
||||||
assert "Analysis result" in response_text
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.list_models")
|
|
||||||
async def test_handle_call_tool_list_models(self, mock_list_models):
|
|
||||||
"""Test listing models"""
|
|
||||||
# Mock model data
|
|
||||||
mock_model = Mock()
|
|
||||||
mock_model.name = "test-model"
|
|
||||||
mock_model.display_name = "Test Model"
|
|
||||||
mock_model.description = "A test model"
|
|
||||||
mock_model.supported_generation_methods = ["generateContent"]
|
|
||||||
|
|
||||||
mock_list_models.return_value = [mock_model]
|
|
||||||
|
|
||||||
result = await handle_call_tool("list_models", {})
|
|
||||||
assert len(result) == 1
|
|
||||||
|
|
||||||
models = json.loads(result[0].text)
|
|
||||||
assert len(models) == 1
|
|
||||||
assert models[0]["name"] == "test-model"
|
|
||||||
assert models[0]["is_default"] == False
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_analyze_file_success(self, mock_model, tmp_path):
|
|
||||||
"""Test successful file analysis with analyze_file tool"""
|
|
||||||
# Create test file
|
|
||||||
test_file = tmp_path / "test.py"
|
|
||||||
test_file.write_text("def hello(): pass", encoding="utf-8")
|
|
||||||
|
|
||||||
# Mock response
|
|
||||||
mock_response = Mock()
|
|
||||||
mock_response.candidates = [Mock()]
|
|
||||||
mock_response.candidates[0].content.parts = [Mock(text="File analysis result")]
|
|
||||||
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.return_value = mock_response
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
result = await handle_call_tool(
|
|
||||||
"analyze_file", {"files": [str(test_file)], "question": "Analyze this file"}
|
|
||||||
)
|
|
||||||
|
|
||||||
assert len(result) == 1
|
|
||||||
response_text = result[0].text
|
|
||||||
assert "Analyzing 1 file(s)" in response_text
|
|
||||||
assert "Gemini's Analysis:" in response_text
|
|
||||||
assert "File analysis result" in response_text
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_extended_think_success(self, mock_model):
|
|
||||||
"""Test successful extended thinking"""
|
|
||||||
# Mock response
|
|
||||||
mock_response = Mock()
|
|
||||||
mock_response.candidates = [Mock()]
|
|
||||||
mock_response.candidates[0].content.parts = [
|
|
||||||
Mock(text="Extended thinking result")
|
|
||||||
]
|
|
||||||
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.return_value = mock_response
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
result = await handle_call_tool(
|
|
||||||
"extended_think",
|
|
||||||
{
|
|
||||||
"thought_process": "Claude's analysis of the problem...",
|
|
||||||
"context": "Building a distributed system",
|
|
||||||
"focus": "performance",
|
|
||||||
},
|
|
||||||
)
|
|
||||||
|
|
||||||
assert len(result) == 1
|
|
||||||
response_text = result[0].text
|
|
||||||
assert "Extended Analysis by Gemini:" in response_text
|
|
||||||
assert "Extended thinking result" in response_text
|
|
||||||
|
|
||||||
|
|
||||||
class TestErrorHandling:
|
|
||||||
"""Test error handling scenarios"""
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_chat_api_error(self, mock_model):
|
|
||||||
"""Test handling API errors in chat"""
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.side_effect = Exception("API Error")
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
result = await handle_call_tool("chat", {"prompt": "Test"})
|
|
||||||
assert len(result) == 1
|
|
||||||
assert "Error calling Gemini API" in result[0].text
|
|
||||||
assert "API Error" in result[0].text
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
@patch("google.generativeai.GenerativeModel")
|
|
||||||
async def test_handle_call_tool_chat_blocked_response(self, mock_model):
|
|
||||||
"""Test handling blocked responses"""
|
|
||||||
mock_response = Mock()
|
|
||||||
mock_response.candidates = [Mock()]
|
|
||||||
mock_response.candidates[0].content.parts = []
|
|
||||||
mock_response.candidates[0].finish_reason = 2
|
|
||||||
|
|
||||||
mock_instance = Mock()
|
|
||||||
mock_instance.generate_content.return_value = mock_response
|
|
||||||
mock_model.return_value = mock_instance
|
|
||||||
|
|
||||||
result = await handle_call_tool("chat", {"prompt": "Test"})
|
|
||||||
assert len(result) == 1
|
|
||||||
assert "Response blocked or incomplete" in result[0].text
|
|
||||||
assert "Finish reason: 2" in result[0].text
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
pytest.main([__file__, "-v"])
|
|
||||||
@@ -1,48 +0,0 @@
|
|||||||
"""
|
|
||||||
Test that imports work correctly when package is installed
|
|
||||||
This helps verify CI setup is correct
|
|
||||||
"""
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
|
|
||||||
|
|
||||||
def test_direct_import():
|
|
||||||
"""Test that gemini_server can be imported directly"""
|
|
||||||
try:
|
|
||||||
import gemini_server
|
|
||||||
|
|
||||||
assert hasattr(gemini_server, "GeminiChatRequest")
|
|
||||||
assert hasattr(gemini_server, "CodeAnalysisRequest")
|
|
||||||
assert hasattr(gemini_server, "handle_list_tools")
|
|
||||||
assert hasattr(gemini_server, "handle_call_tool")
|
|
||||||
except ImportError as e:
|
|
||||||
pytest.fail(f"Failed to import gemini_server: {e}")
|
|
||||||
|
|
||||||
|
|
||||||
def test_from_import():
|
|
||||||
"""Test that specific items can be imported from gemini_server"""
|
|
||||||
try:
|
|
||||||
from gemini_server import (
|
|
||||||
GeminiChatRequest,
|
|
||||||
CodeAnalysisRequest,
|
|
||||||
DEFAULT_MODEL,
|
|
||||||
DEVELOPER_SYSTEM_PROMPT,
|
|
||||||
)
|
|
||||||
|
|
||||||
assert GeminiChatRequest is not None
|
|
||||||
assert CodeAnalysisRequest is not None
|
|
||||||
assert isinstance(DEFAULT_MODEL, str)
|
|
||||||
assert isinstance(DEVELOPER_SYSTEM_PROMPT, str)
|
|
||||||
except ImportError as e:
|
|
||||||
pytest.fail(f"Failed to import from gemini_server: {e}")
|
|
||||||
|
|
||||||
|
|
||||||
def test_google_generativeai_import():
|
|
||||||
"""Test that google.generativeai can be imported"""
|
|
||||||
try:
|
|
||||||
import google.generativeai as genai
|
|
||||||
|
|
||||||
assert hasattr(genai, "GenerativeModel")
|
|
||||||
assert hasattr(genai, "configure")
|
|
||||||
except ImportError as e:
|
|
||||||
pytest.fail(f"Failed to import google.generativeai: {e}")
|
|
||||||
96 tests/test_server.py (new file)
@@ -0,0 +1,96 @@
|
|||||||
|
"""
|
||||||
|
Tests for the main server functionality
|
||||||
|
"""
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
import json
|
||||||
|
from unittest.mock import Mock, patch
|
||||||
|
|
||||||
|
from server import handle_list_tools, handle_call_tool
|
||||||
|
|
||||||
|
|
||||||
|
class TestServerTools:
|
||||||
|
"""Test server tool handling"""
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_handle_list_tools(self):
|
||||||
|
"""Test listing all available tools"""
|
||||||
|
tools = await handle_list_tools()
|
||||||
|
tool_names = [tool.name for tool in tools]
|
||||||
|
|
||||||
|
# Check all core tools are present
|
||||||
|
assert "think_deeper" in tool_names
|
||||||
|
assert "review_code" in tool_names
|
||||||
|
assert "debug_issue" in tool_names
|
||||||
|
assert "analyze" in tool_names
|
||||||
|
assert "chat" in tool_names
|
||||||
|
assert "list_models" in tool_names
|
||||||
|
assert "get_version" in tool_names
|
||||||
|
|
||||||
|
# Should have exactly 7 tools
|
||||||
|
assert len(tools) == 7
|
||||||
|
|
||||||
|
# Check descriptions are verbose
|
||||||
|
for tool in tools:
|
||||||
|
assert (
|
||||||
|
len(tool.description) > 50
|
||||||
|
) # All should have detailed descriptions
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_handle_call_tool_unknown(self):
|
||||||
|
"""Test calling an unknown tool"""
|
||||||
|
result = await handle_call_tool("unknown_tool", {})
|
||||||
|
assert len(result) == 1
|
||||||
|
assert "Unknown tool: unknown_tool" in result[0].text
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
@patch("google.generativeai.GenerativeModel")
|
||||||
|
async def test_handle_chat(self, mock_model):
|
||||||
|
"""Test chat functionality"""
|
||||||
|
# Mock response
|
||||||
|
mock_response = Mock()
|
||||||
|
mock_response.candidates = [Mock()]
|
||||||
|
mock_response.candidates[0].content.parts = [
|
||||||
|
Mock(text="Chat response")
|
||||||
|
]
|
||||||
|
|
||||||
|
mock_instance = Mock()
|
||||||
|
mock_instance.generate_content.return_value = mock_response
|
||||||
|
mock_model.return_value = mock_instance
|
||||||
|
|
||||||
|
result = await handle_call_tool("chat", {"prompt": "Hello Gemini"})
|
||||||
|
|
||||||
|
assert len(result) == 1
|
||||||
|
assert result[0].text == "Chat response"
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
@patch("google.generativeai.list_models")
|
||||||
|
async def test_handle_list_models(self, mock_list_models):
|
||||||
|
"""Test listing models"""
|
||||||
|
# Mock model data
|
||||||
|
mock_model = Mock()
|
||||||
|
mock_model.name = "models/gemini-2.5-pro-preview-06-05"
|
||||||
|
mock_model.display_name = "Gemini 2.5 Pro"
|
||||||
|
mock_model.description = "Latest Gemini model"
|
||||||
|
mock_model.supported_generation_methods = ["generateContent"]
|
||||||
|
|
||||||
|
mock_list_models.return_value = [mock_model]
|
||||||
|
|
||||||
|
result = await handle_call_tool("list_models", {})
|
||||||
|
assert len(result) == 1
|
||||||
|
|
||||||
|
models = json.loads(result[0].text)
|
||||||
|
assert len(models) == 1
|
||||||
|
assert models[0]["name"] == "models/gemini-2.5-pro-preview-06-05"
|
||||||
|
assert models[0]["is_default"] is True
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_handle_get_version(self):
|
||||||
|
"""Test getting version info"""
|
||||||
|
result = await handle_call_tool("get_version", {})
|
||||||
|
assert len(result) == 1
|
||||||
|
|
||||||
|
response = result[0].text
|
||||||
|
assert "Gemini MCP Server v2.4.0" in response
|
||||||
|
assert "Available Tools:" in response
|
||||||
|
assert "think_deeper" in response
|
||||||
202 tests/test_tools.py (new file)
@@ -0,0 +1,202 @@
|
|||||||
|
"""
|
||||||
|
Tests for individual tool implementations
|
||||||
|
"""
|
||||||
|
|
||||||
|
import pytest
|
||||||
|
from unittest.mock import Mock, patch
|
||||||
|
|
||||||
|
from tools import ThinkDeeperTool, ReviewCodeTool, DebugIssueTool, AnalyzeTool
|
||||||
|
|
||||||
|
|
||||||
|
class TestThinkDeeperTool:
|
||||||
|
"""Test the think_deeper tool"""
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def tool(self):
|
||||||
|
return ThinkDeeperTool()
|
||||||
|
|
||||||
|
def test_tool_metadata(self, tool):
|
||||||
|
"""Test tool metadata"""
|
||||||
|
assert tool.get_name() == "think_deeper"
|
||||||
|
assert "EXTENDED THINKING" in tool.get_description()
|
||||||
|
assert tool.get_default_temperature() == 0.7
|
||||||
|
|
||||||
|
schema = tool.get_input_schema()
|
||||||
|
assert "current_analysis" in schema["properties"]
|
||||||
|
assert schema["required"] == ["current_analysis"]
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
@patch("google.generativeai.GenerativeModel")
|
||||||
|
async def test_execute_success(self, mock_model, tool):
|
||||||
|
"""Test successful execution"""
|
||||||
|
# Mock response
|
||||||
|
mock_response = Mock()
|
||||||
|
mock_response.candidates = [Mock()]
|
||||||
|
mock_response.candidates[0].content.parts = [
|
||||||
|
Mock(text="Extended analysis")
|
||||||
|
]
|
||||||
|
|
||||||
|
mock_instance = Mock()
|
||||||
|
mock_instance.generate_content.return_value = mock_response
|
||||||
|
mock_model.return_value = mock_instance
|
||||||
|
|
||||||
|
result = await tool.execute(
|
||||||
|
{
|
||||||
|
"current_analysis": "Initial analysis",
|
||||||
|
"problem_context": "Building a cache",
|
||||||
|
"focus_areas": ["performance", "scalability"],
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
assert len(result) == 1
|
||||||
|
assert "Extended Analysis by Gemini:" in result[0].text
|
||||||
|
assert "Extended analysis" in result[0].text
|
||||||
|
|
||||||
|
|
||||||
|
class TestReviewCodeTool:
|
||||||
|
"""Test the review_code tool"""
|
||||||
|
|
||||||
|
@pytest.fixture
|
||||||
|
def tool(self):
|
||||||
|
return ReviewCodeTool()
|

    def test_tool_metadata(self, tool):
        """Test tool metadata"""
        assert tool.get_name() == "review_code"
        assert "PROFESSIONAL CODE REVIEW" in tool.get_description()
        assert tool.get_default_temperature() == 0.2

        schema = tool.get_input_schema()
        assert "files" in schema["properties"]
        assert schema["required"] == ["files"]

    @pytest.mark.asyncio
    @patch("google.generativeai.GenerativeModel")
    async def test_execute_with_review_type(self, mock_model, tool, tmp_path):
        """Test execution with specific review type"""
        # Create test file
        test_file = tmp_path / "test.py"
        test_file.write_text("def insecure(): pass", encoding="utf-8")

        # Mock response
        mock_response = Mock()
        mock_response.candidates = [Mock()]
        mock_response.candidates[0].content.parts = [
            Mock(text="Security issues found")
        ]

        mock_instance = Mock()
        mock_instance.generate_content.return_value = mock_response
        mock_model.return_value = mock_instance

        result = await tool.execute(
            {
                "files": [str(test_file)],
                "review_type": "security",
                "focus_on": "authentication",
            }
        )

        assert len(result) == 1
        assert "Code Review (SECURITY)" in result[0].text
        assert "Focus: authentication" in result[0].text
        assert "Security issues found" in result[0].text


class TestDebugIssueTool:
    """Test the debug_issue tool"""

    @pytest.fixture
    def tool(self):
        return DebugIssueTool()

    def test_tool_metadata(self, tool):
        """Test tool metadata"""
        assert tool.get_name() == "debug_issue"
        assert "DEBUG & ROOT CAUSE ANALYSIS" in tool.get_description()
        assert tool.get_default_temperature() == 0.2

        schema = tool.get_input_schema()
        assert "error_description" in schema["properties"]
        assert schema["required"] == ["error_description"]

    @pytest.mark.asyncio
    @patch("google.generativeai.GenerativeModel")
    async def test_execute_with_context(self, mock_model, tool):
        """Test execution with error context"""
        # Mock response
        mock_response = Mock()
        mock_response.candidates = [Mock()]
        mock_response.candidates[0].content.parts = [
            Mock(text="Root cause: race condition")
        ]

        mock_instance = Mock()
        mock_instance.generate_content.return_value = mock_response
        mock_model.return_value = mock_instance

        result = await tool.execute(
            {
                "error_description": "Test fails intermittently",
                "error_context": "AssertionError in test_async",
                "previous_attempts": "Added sleep, still fails",
            }
        )

        assert len(result) == 1
        assert "Debug Analysis" in result[0].text
        assert "Root cause: race condition" in result[0].text


class TestAnalyzeTool:
    """Test the analyze tool"""

    @pytest.fixture
    def tool(self):
        return AnalyzeTool()

    def test_tool_metadata(self, tool):
        """Test tool metadata"""
        assert tool.get_name() == "analyze"
        assert "ANALYZE FILES & CODE" in tool.get_description()
        assert tool.get_default_temperature() == 0.2

        schema = tool.get_input_schema()
        assert "files" in schema["properties"]
        assert "question" in schema["properties"]
        assert set(schema["required"]) == {"files", "question"}

    @pytest.mark.asyncio
    @patch("google.generativeai.GenerativeModel")
    async def test_execute_with_analysis_type(
        self, mock_model, tool, tmp_path
    ):
        """Test execution with specific analysis type"""
        # Create test file
        test_file = tmp_path / "module.py"
        test_file.write_text("class Service: pass", encoding="utf-8")

        # Mock response
        mock_response = Mock()
        mock_response.candidates = [Mock()]
        mock_response.candidates[0].content.parts = [
            Mock(text="Architecture analysis")
        ]

        mock_instance = Mock()
        mock_instance.generate_content.return_value = mock_response
        mock_model.return_value = mock_instance

        result = await tool.execute(
            {
                "files": [str(test_file)],
                "question": "What's the structure?",
                "analysis_type": "architecture",
                "output_format": "summary",
            }
        )

        assert len(result) == 1
        assert "ARCHITECTURE Analysis" in result[0].text
        assert "Analyzed 1 file(s)" in result[0].text
        assert "Architecture analysis" in result[0].text
tests/test_utils.py (new file, 91 lines)
@@ -0,0 +1,91 @@
"""
Tests for utility functions
"""

from utils import (
    read_file_content,
    read_files,
    estimate_tokens,
    check_token_limit,
)


class TestFileUtils:
    """Test file reading utilities"""

    def test_read_file_content_success(self, tmp_path):
        """Test successful file reading"""
        test_file = tmp_path / "test.py"
        test_file.write_text(
            "def hello():\n return 'world'", encoding="utf-8"
        )

        content = read_file_content(str(test_file))
        assert "--- BEGIN FILE:" in content
        assert "--- END FILE:" in content
        assert "def hello():" in content
        assert "return 'world'" in content

    def test_read_file_content_not_found(self):
        """Test reading non-existent file"""
        content = read_file_content("/nonexistent/file.py")
        assert "--- FILE NOT FOUND:" in content
        assert "Error: File does not exist" in content

    def test_read_file_content_directory(self, tmp_path):
        """Test reading a directory"""
        content = read_file_content(str(tmp_path))
        assert "--- NOT A FILE:" in content
        assert "Error: Path is not a file" in content

    def test_read_files_multiple(self, tmp_path):
        """Test reading multiple files"""
        file1 = tmp_path / "file1.py"
        file1.write_text("print('file1')", encoding="utf-8")
        file2 = tmp_path / "file2.py"
        file2.write_text("print('file2')", encoding="utf-8")

        content, summary = read_files([str(file1), str(file2)])

        assert "--- BEGIN FILE:" in content
        assert "file1.py" in content
        assert "file2.py" in content
        assert "print('file1')" in content
        assert "print('file2')" in content

        assert "Reading 2 file(s)" in summary

    def test_read_files_with_code(self):
        """Test reading with direct code"""
        code = "def test():\n pass"
        content, summary = read_files([], code)

        assert "--- BEGIN DIRECT CODE ---" in content
        assert "--- END DIRECT CODE ---" in content
        assert code in content

        assert "Direct code:" in summary


class TestTokenUtils:
    """Test token counting utilities"""

    def test_estimate_tokens(self):
        """Test token estimation"""
        # Rough estimate: 1 token ≈ 4 characters
        text = "a" * 400  # 400 characters
        assert estimate_tokens(text) == 100

    def test_check_token_limit_within(self):
        """Test token limit check - within limit"""
        text = "a" * 4000  # 1000 tokens
        within_limit, tokens = check_token_limit(text)
        assert within_limit is True
        assert tokens == 1000

    def test_check_token_limit_exceeded(self):
        """Test token limit check - exceeded"""
        text = "a" * 5_000_000  # 1.25M tokens
        within_limit, tokens = check_token_limit(text)
        assert within_limit is False
        assert tokens == 1_250_000
@@ -1,105 +0,0 @@
|
|||||||
"""
|
|
||||||
Test verbose output functionality
|
|
||||||
"""
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
from pathlib import Path
|
|
||||||
import sys
|
|
||||||
|
|
||||||
# Add parent directory to path for imports
|
|
||||||
parent_dir = Path(__file__).resolve().parent.parent
|
|
||||||
if str(parent_dir) not in sys.path:
|
|
||||||
sys.path.insert(0, str(parent_dir))
|
|
||||||
|
|
||||||
from gemini_server import prepare_code_context
|
|
||||||
|
|
||||||
|
|
||||||
class TestNewFormattingBehavior:
|
|
||||||
"""Test the improved formatting behavior"""
|
|
||||||
|
|
||||||
def test_file_formatting_for_gemini(self, tmp_path):
|
|
||||||
"""Test that files are properly formatted for Gemini"""
|
|
||||||
test_file = tmp_path / "test.py"
|
|
||||||
content = "def hello():\n return 'world'"
|
|
||||||
test_file.write_text(content, encoding="utf-8")
|
|
||||||
|
|
||||||
context, summary = prepare_code_context([str(test_file)], None)
|
|
||||||
|
|
||||||
# Context should have clear markers for Gemini
|
|
||||||
assert "--- BEGIN FILE:" in context
|
|
||||||
assert "--- END FILE:" in context
|
|
||||||
assert str(test_file) in context
|
|
||||||
assert content in context
|
|
||||||
|
|
||||||
# Summary should be concise for terminal
|
|
||||||
assert "Analyzing 1 file(s)" in summary
|
|
||||||
assert "bytes)" in summary
|
|
||||||
assert len(summary) < len(context) # Summary much smaller than full context
|
|
||||||
|
|
||||||
def test_terminal_summary_shows_preview(self, tmp_path):
|
|
||||||
"""Test that terminal summary shows small preview"""
|
|
||||||
test_file = tmp_path / "large_file.py"
|
|
||||||
content = "# This is a large file\n" + "x = 1\n" * 1000
|
|
||||||
test_file.write_text(content, encoding="utf-8")
|
|
||||||
|
|
||||||
context, summary = prepare_code_context([str(test_file)], None)
|
|
||||||
|
|
||||||
# Summary should show preview but not full content
|
|
||||||
assert "Analyzing 1 file(s)" in summary
|
|
||||||
assert str(test_file) in summary
|
|
||||||
assert "bytes)" in summary
|
|
||||||
assert "Preview:" in summary
|
|
||||||
# Full content should not be in summary
|
|
||||||
assert "x = 1" not in summary or summary.count("x = 1") < 5
|
|
||||||
|
|
||||||
def test_multiple_files_summary(self, tmp_path):
|
|
||||||
"""Test summary with multiple files"""
|
|
||||||
files = []
|
|
||||||
for i in range(3):
|
|
||||||
file = tmp_path / f"file{i}.py"
|
|
||||||
file.write_text(f"# File {i}\nprint({i})", encoding="utf-8")
|
|
||||||
files.append(str(file))
|
|
||||||
|
|
||||||
context, summary = prepare_code_context(files, None)
|
|
||||||
|
|
||||||
assert "Analyzing 3 file(s)" in summary
|
|
||||||
for file in files:
|
|
||||||
assert file in summary
|
|
||||||
assert "bytes)" in summary
|
|
||||||
# Should have clear delimiters in context
|
|
||||||
assert context.count("--- BEGIN FILE:") == 3
|
|
||||||
assert context.count("--- END FILE:") == 3
|
|
||||||
|
|
||||||
def test_direct_code_formatting(self):
|
|
||||||
"""Test direct code formatting"""
|
|
||||||
direct_code = "# Direct code\nprint('hello')"
|
|
||||||
|
|
||||||
context, summary = prepare_code_context(None, direct_code)
|
|
||||||
|
|
||||||
# Context should have clear markers
|
|
||||||
assert "--- BEGIN DIRECT CODE ---" in context
|
|
||||||
assert "--- END DIRECT CODE ---" in context
|
|
||||||
assert direct_code in context
|
|
||||||
|
|
||||||
# Summary should show preview
|
|
||||||
assert "Direct code provided" in summary
|
|
||||||
assert f"({len(direct_code)} characters)" in summary
|
|
||||||
assert "Preview:" in summary
|
|
||||||
|
|
||||||
def test_mixed_content_formatting(self, tmp_path):
|
|
||||||
"""Test formatting with both files and direct code"""
|
|
||||||
test_file = tmp_path / "test.py"
|
|
||||||
test_file.write_text("# Test file", encoding="utf-8")
|
|
||||||
direct_code = "# Direct code\nprint('hello')"
|
|
||||||
|
|
||||||
context, summary = prepare_code_context([str(test_file)], direct_code)
|
|
||||||
|
|
||||||
# Context should have both with clear separation
|
|
||||||
assert "--- BEGIN FILE:" in context
|
|
||||||
assert "--- END FILE:" in context
|
|
||||||
assert "--- BEGIN DIRECT CODE ---" in context
|
|
||||||
assert "--- END DIRECT CODE ---" in context
|
|
||||||
|
|
||||||
# Summary should mention both
|
|
||||||
assert "Analyzing 1 file(s)" in summary
|
|
||||||
assert "Direct code provided" in summary
|
|
||||||
@@ -1,89 +0,0 @@
|
|||||||
"""
|
|
||||||
Test version functionality
|
|
||||||
"""
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
import json
|
|
||||||
from pathlib import Path
|
|
||||||
import sys
|
|
||||||
|
|
||||||
# Add parent directory to path for imports
|
|
||||||
parent_dir = Path(__file__).resolve().parent.parent
|
|
||||||
if str(parent_dir) not in sys.path:
|
|
||||||
sys.path.insert(0, str(parent_dir))
|
|
||||||
|
|
||||||
from gemini_server import (
|
|
||||||
__version__,
|
|
||||||
__updated__,
|
|
||||||
__author__,
|
|
||||||
handle_list_tools,
|
|
||||||
handle_call_tool,
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
class TestVersionFunctionality:
|
|
||||||
"""Test version-related functionality"""
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_version_constants_exist(self):
|
|
||||||
"""Test that version constants are defined"""
|
|
||||||
assert __version__ is not None
|
|
||||||
assert isinstance(__version__, str)
|
|
||||||
assert __updated__ is not None
|
|
||||||
assert isinstance(__updated__, str)
|
|
||||||
assert __author__ is not None
|
|
||||||
assert isinstance(__author__, str)
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_version_tool_in_list(self):
|
|
||||||
"""Test that get_version tool appears in tool list"""
|
|
||||||
tools = await handle_list_tools()
|
|
||||||
tool_names = [tool.name for tool in tools]
|
|
||||||
assert "get_version" in tool_names
|
|
||||||
|
|
||||||
# Find the version tool
|
|
||||||
version_tool = next(t for t in tools if t.name == "get_version")
|
|
||||||
assert (
|
|
||||||
version_tool.description
|
|
||||||
== "Get the version and metadata of the Gemini MCP Server"
|
|
||||||
)
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_get_version_tool_execution(self):
|
|
||||||
"""Test executing the get_version tool"""
|
|
||||||
result = await handle_call_tool("get_version", {})
|
|
||||||
|
|
||||||
assert len(result) == 1
|
|
||||||
assert result[0].type == "text"
|
|
||||||
|
|
||||||
# Check the response contains expected information
|
|
||||||
response_text = result[0].text
|
|
||||||
assert __version__ in response_text
|
|
||||||
assert __updated__ in response_text
|
|
||||||
assert __author__ in response_text
|
|
||||||
assert "Gemini MCP Server" in response_text
|
|
||||||
assert "Default Model:" in response_text
|
|
||||||
assert "Max Context:" in response_text
|
|
||||||
assert "Python:" in response_text
|
|
||||||
assert "Started:" in response_text
|
|
||||||
assert "github.com/BeehiveInnovations/gemini-mcp-server" in response_text
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_version_format(self):
|
|
||||||
"""Test that version follows semantic versioning"""
|
|
||||||
parts = __version__.split(".")
|
|
||||||
assert len(parts) == 3 # Major.Minor.Patch
|
|
||||||
for part in parts:
|
|
||||||
assert part.isdigit() # Each part should be numeric
|
|
||||||
|
|
||||||
@pytest.mark.asyncio
|
|
||||||
async def test_date_format(self):
|
|
||||||
"""Test that updated date follows expected format"""
|
|
||||||
# Expected format: YYYY-MM-DD
|
|
||||||
parts = __updated__.split("-")
|
|
||||||
assert len(parts) == 3
|
|
||||||
assert len(parts[0]) == 4 # Year
|
|
||||||
assert len(parts[1]) == 2 # Month
|
|
||||||
assert len(parts[2]) == 2 # Day
|
|
||||||
for part in parts:
|
|
||||||
assert part.isdigit()
|
|
||||||
tools/__init__.py (new file, 15 lines)
@@ -0,0 +1,15 @@
"""
Tool implementations for Gemini MCP Server
"""

from .think_deeper import ThinkDeeperTool
from .review_code import ReviewCodeTool
from .debug_issue import DebugIssueTool
from .analyze import AnalyzeTool

__all__ = [
    "ThinkDeeperTool",
    "ReviewCodeTool",
    "DebugIssueTool",
    "AnalyzeTool",
]
tools/analyze.py (new file, 151 lines)
@@ -0,0 +1,151 @@
"""
Analyze tool - General-purpose code and file analysis
"""

from typing import Dict, Any, List, Optional
from pydantic import Field
from .base import BaseTool, ToolRequest
from prompts import ANALYZE_PROMPT
from utils import read_files, check_token_limit
from config import TEMPERATURE_ANALYTICAL, MAX_CONTEXT_TOKENS


class AnalyzeRequest(ToolRequest):
    """Request model for analyze tool"""

    files: List[str] = Field(..., description="Files to analyze")
    question: str = Field(..., description="What to analyze or look for")
    analysis_type: Optional[str] = Field(
        None,
        description="Type of analysis: architecture|performance|security|quality|general",
    )
    output_format: Optional[str] = Field(
        "detailed", description="Output format: summary|detailed|actionable"
    )


class AnalyzeTool(BaseTool):
    """General-purpose file and code analysis tool"""

    def get_name(self) -> str:
        return "analyze"

    def get_description(self) -> str:
        return (
            "ANALYZE FILES & CODE - General-purpose analysis for understanding code. "
            "Use this for examining files, understanding architecture, or investigating specific aspects. "
            "Triggers: 'analyze these files', 'examine this code', 'understand this'. "
            "Perfect for: codebase exploration, dependency analysis, pattern detection. "
            "Always uses file paths for clean terminal output."
        )

    def get_input_schema(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Files to analyze",
                },
                "question": {
                    "type": "string",
                    "description": "What to analyze or look for",
                },
                "analysis_type": {
                    "type": "string",
                    "enum": [
                        "architecture",
                        "performance",
                        "security",
                        "quality",
                        "general",
                    ],
                    "description": "Type of analysis to perform",
                },
                "output_format": {
                    "type": "string",
                    "enum": ["summary", "detailed", "actionable"],
                    "default": "detailed",
                    "description": "How to format the output",
                },
                "temperature": {
                    "type": "number",
                    "description": "Temperature (0-1, default 0.2)",
                    "minimum": 0,
                    "maximum": 1,
                },
            },
            "required": ["files", "question"],
        }

    def get_system_prompt(self) -> str:
        return ANALYZE_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_ANALYTICAL

    def get_request_model(self):
        return AnalyzeRequest

    async def prepare_prompt(self, request: AnalyzeRequest) -> str:
        """Prepare the analysis prompt"""
        # Read all files
        file_content, summary = read_files(request.files)

        # Check token limits
        within_limit, estimated_tokens = check_token_limit(file_content)
        if not within_limit:
            raise ValueError(
                f"Files too large (~{estimated_tokens:,} tokens). "
                f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens."
            )

        # Build analysis instructions
        analysis_focus = []

        if request.analysis_type:
            type_focus = {
                "architecture": "Focus on architectural patterns, structure, and design decisions",
                "performance": "Focus on performance characteristics and optimization opportunities",
                "security": "Focus on security implications and potential vulnerabilities",
                "quality": "Focus on code quality, maintainability, and best practices",
                "general": "Provide a comprehensive general analysis",
            }
            analysis_focus.append(type_focus.get(request.analysis_type, ""))

        if request.output_format == "summary":
            analysis_focus.append("Provide a concise summary of key findings")
        elif request.output_format == "actionable":
            analysis_focus.append(
                "Focus on actionable insights and specific recommendations"
            )

        focus_instruction = "\n".join(analysis_focus) if analysis_focus else ""

        # Combine everything
        full_prompt = f"""{self.get_system_prompt()}

{focus_instruction}

=== USER QUESTION ===
{request.question}
=== END QUESTION ===

=== FILES TO ANALYZE ===
{file_content}
=== END FILES ===

Please analyze these files to answer the user's question."""

        return full_prompt

    def format_response(self, response: str, request: AnalyzeRequest) -> str:
        """Format the analysis response"""
        header = f"Analysis: {request.question[:50]}..."
        if request.analysis_type:
            header = f"{request.analysis_type.upper()} Analysis"

        summary_text = f"Analyzed {len(request.files)} file(s)"

        return f"{header}\n{summary_text}\n{'=' * 50}\n\n{response}"
tools/base.py (new file, 128 lines)
@@ -0,0 +1,128 @@
"""
Base class for all Gemini MCP tools
"""

from abc import ABC, abstractmethod
from typing import Dict, Any, List, Optional
from pydantic import BaseModel, Field
import google.generativeai as genai
from mcp.types import TextContent


class ToolRequest(BaseModel):
    """Base request model for all tools"""

    model: Optional[str] = Field(
        None, description="Model to use (defaults to Gemini 2.5 Pro)"
    )
    max_tokens: Optional[int] = Field(
        8192, description="Maximum number of tokens in response"
    )
    temperature: Optional[float] = Field(
        None, description="Temperature for response (tool-specific defaults)"
    )


class BaseTool(ABC):
    """Base class for all Gemini tools"""

    def __init__(self):
        self.name = self.get_name()
        self.description = self.get_description()
        self.default_temperature = self.get_default_temperature()

    @abstractmethod
    def get_name(self) -> str:
        """Return the tool name"""
        pass

    @abstractmethod
    def get_description(self) -> str:
        """Return the verbose tool description for Claude"""
        pass

    @abstractmethod
    def get_input_schema(self) -> Dict[str, Any]:
        """Return the JSON schema for tool inputs"""
        pass

    @abstractmethod
    def get_system_prompt(self) -> str:
        """Return the system prompt for this tool"""
        pass

    def get_default_temperature(self) -> float:
        """Return default temperature for this tool"""
        return 0.5

    @abstractmethod
    def get_request_model(self):
        """Return the Pydantic model for request validation"""
        pass

    async def execute(self, arguments: Dict[str, Any]) -> List[TextContent]:
        """Execute the tool with given arguments"""
        try:
            # Validate request
            request_model = self.get_request_model()
            request = request_model(**arguments)

            # Prepare the prompt
            prompt = await self.prepare_prompt(request)

            # Get model configuration
            from config import DEFAULT_MODEL

            model_name = getattr(request, "model", None) or DEFAULT_MODEL
            temperature = getattr(request, "temperature", None)
            if temperature is None:
                temperature = self.get_default_temperature()
            max_tokens = getattr(request, "max_tokens", 8192)

            # Create and configure model
            model = self.create_model(model_name, temperature, max_tokens)

            # Generate response
            response = model.generate_content(prompt)

            # Handle response
            if response.candidates and response.candidates[0].content.parts:
                text = response.candidates[0].content.parts[0].text
            else:
                finish_reason = (
                    response.candidates[0].finish_reason
                    if response.candidates
                    else "Unknown"
                )
                text = f"Response blocked or incomplete. Finish reason: {finish_reason}"

            # Format response
            formatted_response = self.format_response(text, request)

            return [TextContent(type="text", text=formatted_response)]

        except Exception as e:
            error_msg = f"Error in {self.name}: {str(e)}"
            return [TextContent(type="text", text=error_msg)]

    @abstractmethod
    async def prepare_prompt(self, request) -> str:
        """Prepare the full prompt for Gemini"""
        pass

    def format_response(self, response: str, request) -> str:
        """Format the response for display (can be overridden)"""
        return response

    def create_model(
        self, model_name: str, temperature: float, max_tokens: int
    ) -> genai.GenerativeModel:
        """Create a configured Gemini model"""
        return genai.GenerativeModel(
            model_name=model_name,
            generation_config={
                "temperature": temperature,
                "max_output_tokens": max_tokens,
                "candidate_count": 1,
            },
        )
tools/debug_issue.py (new file, 145 lines)
@@ -0,0 +1,145 @@
"""
Debug Issue tool - Root cause analysis and debugging assistance
"""

from typing import Dict, Any, List, Optional
from pydantic import Field
from .base import BaseTool, ToolRequest
from prompts import DEBUG_ISSUE_PROMPT
from utils import read_files, check_token_limit
from config import TEMPERATURE_ANALYTICAL, MAX_CONTEXT_TOKENS


class DebugIssueRequest(ToolRequest):
    """Request model for debug_issue tool"""

    error_description: str = Field(
        ..., description="Error message, symptoms, or issue description"
    )
    error_context: Optional[str] = Field(
        None, description="Stack trace, logs, or additional error context"
    )
    relevant_files: Optional[List[str]] = Field(
        None, description="Files that might be related to the issue"
    )
    runtime_info: Optional[str] = Field(
        None, description="Environment, versions, or runtime information"
    )
    previous_attempts: Optional[str] = Field(
        None, description="What has been tried already"
    )


class DebugIssueTool(BaseTool):
    """Advanced debugging and root cause analysis tool"""

    def get_name(self) -> str:
        return "debug_issue"

    def get_description(self) -> str:
        return (
            "DEBUG & ROOT CAUSE ANALYSIS - Expert debugging for complex issues. "
            "Use this when you need help tracking down bugs or understanding errors. "
            "Triggers: 'debug this', 'why is this failing', 'root cause', 'trace error'. "
            "I'll analyze the issue, find root causes, and provide step-by-step solutions. "
            "Include error messages, stack traces, and relevant code for best results."
        )

    def get_input_schema(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "error_description": {
                    "type": "string",
                    "description": "Error message, symptoms, or issue description",
                },
                "error_context": {
                    "type": "string",
                    "description": "Stack trace, logs, or additional error context",
                },
                "relevant_files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Files that might be related to the issue",
                },
                "runtime_info": {
                    "type": "string",
                    "description": "Environment, versions, or runtime information",
                },
                "previous_attempts": {
                    "type": "string",
                    "description": "What has been tried already",
                },
                "temperature": {
                    "type": "number",
                    "description": "Temperature (0-1, default 0.2 for accuracy)",
                    "minimum": 0,
                    "maximum": 1,
                },
            },
            "required": ["error_description"],
        }

    def get_system_prompt(self) -> str:
        return DEBUG_ISSUE_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_ANALYTICAL

    def get_request_model(self):
        return DebugIssueRequest

    async def prepare_prompt(self, request: DebugIssueRequest) -> str:
        """Prepare the debugging prompt"""
        # Build context sections
        context_parts = [
            f"=== ISSUE DESCRIPTION ===\n{request.error_description}\n=== END DESCRIPTION ==="
        ]

        if request.error_context:
            context_parts.append(
                f"\n=== ERROR CONTEXT/STACK TRACE ===\n{request.error_context}\n=== END CONTEXT ==="
            )

        if request.runtime_info:
            context_parts.append(
                f"\n=== RUNTIME INFORMATION ===\n{request.runtime_info}\n=== END RUNTIME ==="
            )

        if request.previous_attempts:
            context_parts.append(
                f"\n=== PREVIOUS ATTEMPTS ===\n{request.previous_attempts}\n=== END ATTEMPTS ==="
            )

        # Add relevant files if provided
        if request.relevant_files:
            file_content, _ = read_files(request.relevant_files)
            context_parts.append(
                f"\n=== RELEVANT CODE ===\n{file_content}\n=== END CODE ==="
            )

        full_context = "\n".join(context_parts)

        # Check token limits
        within_limit, estimated_tokens = check_token_limit(full_context)
        if not within_limit:
            raise ValueError(
                f"Context too large (~{estimated_tokens:,} tokens). "
                f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens."
            )

        # Combine everything
        full_prompt = f"""{self.get_system_prompt()}

{full_context}

Please debug this issue following the structured format in the system prompt.
Focus on finding the root cause and providing actionable solutions."""

        return full_prompt

    def format_response(
        self, response: str, request: DebugIssueRequest
    ) -> str:
        """Format the debugging response"""
        return f"Debug Analysis\n{'=' * 50}\n\n{response}"
tools/review_code.py (new file, 160 lines)
@@ -0,0 +1,160 @@
"""
Code Review tool - Comprehensive code analysis and review
"""

from typing import Dict, Any, List, Optional
from pydantic import Field
from .base import BaseTool, ToolRequest
from prompts import REVIEW_CODE_PROMPT
from utils import read_files, check_token_limit
from config import TEMPERATURE_ANALYTICAL, MAX_CONTEXT_TOKENS


class ReviewCodeRequest(ToolRequest):
    """Request model for review_code tool"""

    files: List[str] = Field(..., description="Code files to review")
    review_type: str = Field(
        "full", description="Type of review: full|security|performance|quick"
    )
    focus_on: Optional[str] = Field(
        None, description="Specific aspects to focus on during review"
    )
    standards: Optional[str] = Field(
        None, description="Coding standards or guidelines to enforce"
    )
    severity_filter: str = Field(
        "all",
        description="Minimum severity to report: critical|high|medium|all",
    )


class ReviewCodeTool(BaseTool):
    """Professional code review tool"""

    def get_name(self) -> str:
        return "review_code"

    def get_description(self) -> str:
        return (
            "PROFESSIONAL CODE REVIEW - Comprehensive analysis for bugs, security, and quality. "
            "Use this for thorough code review with actionable feedback. "
            "Triggers: 'review this code', 'check for issues', 'find bugs', 'security audit'. "
            "I'll identify issues by severity (Critical→High→Medium→Low) with specific fixes. "
            "Supports focused reviews: security, performance, or quick checks."
        )

    def get_input_schema(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Code files to review",
                },
                "review_type": {
                    "type": "string",
                    "enum": ["full", "security", "performance", "quick"],
                    "default": "full",
                    "description": "Type of review to perform",
                },
                "focus_on": {
                    "type": "string",
                    "description": "Specific aspects to focus on",
                },
                "standards": {
                    "type": "string",
                    "description": "Coding standards to enforce",
                },
                "severity_filter": {
                    "type": "string",
                    "enum": ["critical", "high", "medium", "all"],
                    "default": "all",
                    "description": "Minimum severity level to report",
                },
                "temperature": {
                    "type": "number",
                    "description": "Temperature (0-1, default 0.2 for consistency)",
                    "minimum": 0,
                    "maximum": 1,
                },
            },
            "required": ["files"],
        }

    def get_system_prompt(self) -> str:
        return REVIEW_CODE_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_ANALYTICAL

    def get_request_model(self):
        return ReviewCodeRequest

    async def prepare_prompt(self, request: ReviewCodeRequest) -> str:
        """Prepare the code review prompt"""
        # Read all files
        file_content, summary = read_files(request.files)

        # Check token limits
        within_limit, estimated_tokens = check_token_limit(file_content)
        if not within_limit:
            raise ValueError(
                f"Code too large (~{estimated_tokens:,} tokens). "
                f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens."
            )

        # Build review instructions
        review_focus = []
        if request.review_type == "security":
            review_focus.append(
                "Focus on security vulnerabilities and authentication issues"
            )
        elif request.review_type == "performance":
            review_focus.append(
                "Focus on performance bottlenecks and optimization opportunities"
            )
        elif request.review_type == "quick":
            review_focus.append(
                "Provide a quick review focusing on critical issues only"
            )

        if request.focus_on:
            review_focus.append(
                f"Pay special attention to: {request.focus_on}"
            )

        if request.standards:
            review_focus.append(
                f"Enforce these standards: {request.standards}"
            )

        if request.severity_filter != "all":
            review_focus.append(
                f"Only report issues of {request.severity_filter} severity or higher"
            )

        focus_instruction = "\n".join(review_focus) if review_focus else ""

        # Combine everything
        full_prompt = f"""{self.get_system_prompt()}

{focus_instruction}

=== CODE TO REVIEW ===
{file_content}
=== END CODE ===

Please provide a comprehensive code review following the format specified in the system prompt."""

        return full_prompt

    def format_response(
        self, response: str, request: ReviewCodeRequest
    ) -> str:
        """Format the review response"""
        header = f"Code Review ({request.review_type.upper()})"
        if request.focus_on:
            header += f" - Focus: {request.focus_on}"
        return f"{header}\n{'=' * 50}\n\n{response}"
tools/think_deeper.py (new file, 145 lines)
@@ -0,0 +1,145 @@
"""
Think Deeper tool - Extended reasoning and problem-solving
"""

from typing import Dict, Any, List, Optional
from pydantic import Field
from .base import BaseTool, ToolRequest
from prompts import THINK_DEEPER_PROMPT
from utils import read_files, check_token_limit
from config import TEMPERATURE_CREATIVE, MAX_CONTEXT_TOKENS


class ThinkDeeperRequest(ToolRequest):
    """Request model for think_deeper tool"""

    current_analysis: str = Field(
        ..., description="Claude's current thinking/analysis to extend"
    )
    problem_context: Optional[str] = Field(
        None, description="Additional context about the problem or goal"
    )
    focus_areas: Optional[List[str]] = Field(
        None,
        description="Specific aspects to focus on (architecture, performance, security, etc.)",
    )
    reference_files: Optional[List[str]] = Field(
        None, description="Optional file paths for additional context"
    )


class ThinkDeeperTool(BaseTool):
    """Extended thinking and reasoning tool"""

    def get_name(self) -> str:
        return "think_deeper"

    def get_description(self) -> str:
        return (
            "EXTENDED THINKING & REASONING - Your deep thinking partner for complex problems. "
            "Use this when you need to extend your analysis, explore alternatives, or validate approaches. "
            "Perfect for: architecture decisions, complex bugs, performance challenges, security analysis. "
            "Triggers: 'think deeper', 'ultrathink', 'extend my analysis', 'explore alternatives'. "
            "I'll challenge assumptions, find edge cases, and provide alternative solutions."
        )

    def get_input_schema(self) -> Dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "current_analysis": {
                    "type": "string",
                    "description": "Your current thinking/analysis to extend and validate",
                },
                "problem_context": {
                    "type": "string",
                    "description": "Additional context about the problem or goal",
                },
                "focus_areas": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Specific aspects to focus on (architecture, performance, security, etc.)",
                },
                "reference_files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Optional file paths for additional context",
                },
                "temperature": {
                    "type": "number",
                    "description": "Temperature for creative thinking (0-1, default 0.7)",
                    "minimum": 0,
                    "maximum": 1,
                },
                "max_tokens": {
                    "type": "integer",
                    "description": "Maximum tokens in response",
                    "default": 8192,
                },
            },
            "required": ["current_analysis"],
        }

    def get_system_prompt(self) -> str:
        return THINK_DEEPER_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_CREATIVE

    def get_request_model(self):
        return ThinkDeeperRequest

    async def prepare_prompt(self, request: ThinkDeeperRequest) -> str:
        """Prepare the full prompt for extended thinking"""
        # Build context parts
        context_parts = [
            f"=== CLAUDE'S CURRENT ANALYSIS ===\n{request.current_analysis}\n=== END ANALYSIS ==="
        ]

        if request.problem_context:
            context_parts.append(
                f"\n=== PROBLEM CONTEXT ===\n{request.problem_context}\n=== END CONTEXT ==="
            )

        # Add reference files if provided
        if request.reference_files:
            file_content, _ = read_files(request.reference_files)
            context_parts.append(
                f"\n=== REFERENCE FILES ===\n{file_content}\n=== END FILES ==="
            )

        full_context = "\n".join(context_parts)

        # Check token limits
        within_limit, estimated_tokens = check_token_limit(full_context)
        if not within_limit:
            raise ValueError(
                f"Context too large (~{estimated_tokens:,} tokens). "
                f"Maximum is {MAX_CONTEXT_TOKENS:,} tokens."
            )

        # Add focus areas instruction if specified
        focus_instruction = ""
        if request.focus_areas:
            areas = ", ".join(request.focus_areas)
            focus_instruction = f"\n\nFOCUS AREAS: Please pay special attention to {areas} aspects."

        # Combine system prompt with context
        full_prompt = f"""{self.get_system_prompt()}{focus_instruction}

{full_context}

Please provide deep analysis that extends Claude's thinking with:
1. Alternative approaches and solutions
2. Edge cases and potential failure modes
3. Critical evaluation of assumptions
4. Concrete implementation suggestions
5. Risk assessment and mitigation strategies"""

        return full_prompt

    def format_response(
        self, response: str, request: ThinkDeeperRequest
    ) -> str:
        """Format the response with clear attribution"""
        return f"Extended Analysis by Gemini:\n\n{response}"
utils/__init__.py (new file, 13 lines)
@@ -0,0 +1,13 @@
"""
Utility functions for Gemini MCP Server
"""

from .file_utils import read_files, read_file_content
from .token_utils import estimate_tokens, check_token_limit

__all__ = [
    "read_files",
    "read_file_content",
    "estimate_tokens",
    "check_token_limit",
]
utils/file_utils.py (new file, 63 lines)
@@ -0,0 +1,63 @@
"""
File reading utilities
"""

from pathlib import Path
from typing import List, Tuple, Optional


def read_file_content(file_path: str) -> str:
    """Read a single file and format it for Gemini"""
    path = Path(file_path)

    try:
        # Check if path exists and is a file
        if not path.exists():
            return f"\n--- FILE NOT FOUND: {file_path} ---\nError: File does not exist\n--- END FILE ---\n"

        if not path.is_file():
            return f"\n--- NOT A FILE: {file_path} ---\nError: Path is not a file\n--- END FILE ---\n"

        # Read the file
        with open(path, "r", encoding="utf-8") as f:
            content = f.read()

        # Format with clear delimiters for Gemini
        return f"\n--- BEGIN FILE: {file_path} ---\n{content}\n--- END FILE: {file_path} ---\n"

    except Exception as e:
        return f"\n--- ERROR READING FILE: {file_path} ---\nError: {str(e)}\n--- END FILE ---\n"


def read_files(
    file_paths: List[str], code: Optional[str] = None
) -> Tuple[str, str]:
    """
    Read multiple files and optional direct code.
    Returns: (full_content, brief_summary)
    """
    content_parts = []
    summary_parts = []

    # Process files
    if file_paths:
        summary_parts.append(f"Reading {len(file_paths)} file(s)")
        for file_path in file_paths:
            content = read_file_content(file_path)
            content_parts.append(content)

    # Add direct code if provided
    if code:
        formatted_code = (
            f"\n--- BEGIN DIRECT CODE ---\n{code}\n--- END DIRECT CODE ---\n"
        )
        content_parts.append(formatted_code)
        code_preview = code[:50] + "..." if len(code) > 50 else code
        summary_parts.append(f"Direct code: {code_preview}")

    full_content = "\n\n".join(content_parts)
    summary = (
        " | ".join(summary_parts) if summary_parts else "No input provided"
    )

    return full_content, summary
utils/token_utils.py (new file, 20 lines)
@@ -0,0 +1,20 @@
"""
Token counting utilities
"""

from typing import Tuple
from config import MAX_CONTEXT_TOKENS


def estimate_tokens(text: str) -> int:
    """Estimate token count (rough: 1 token ≈ 4 characters)"""
    return len(text) // 4


def check_token_limit(text: str) -> Tuple[bool, int]:
    """
    Check if text exceeds token limit.
    Returns: (is_within_limit, estimated_tokens)
    """
    estimated = estimate_tokens(text)
    return estimated <= MAX_CONTEXT_TOKENS, estimated