This commit is contained in:
Patryk Ciechanski
2025-06-11 13:12:19 +02:00
parent af9a6d7202
commit c904b6c4e8
7 changed files with 1591 additions and 3 deletions

View File

@@ -11,8 +11,22 @@
"Bash(cat:*)",
"Bash(grep:*)",
"Bash(source:*)",
"Bash(rm:*)"
"Bash(rm:*)",
"mcp__gemini__thinkdeep",
"mcp__memory__create_entities",
"mcp__memory__create_relations",
"mcp__memory__add_observations",
"Bash(mkdir:*)",
"Bash(mv:*)"
],
"deny": []
}
},
"enableAllProjectMcpServers": true,
"enabledMcpjsonServers": [
"github",
"context7",
"memory",
"sequential-thinking",
"gemini"
]
}

1
.gitignore vendored
View File

@@ -165,7 +165,6 @@ run-gemini-mcp.sh
gemini-repo.md
.mcp.json
.claude
CLAUDE.md
# Memory Bank (optional - can be committed for shared context)
memory-bank

550
CLAUDE.md Normal file
View File

@@ -0,0 +1,550 @@
# Collaborating with Claude & Gemini on the Gemini MCP Server
This document establishes the framework for effective collaboration between Claude, Gemini, and human developers on this repository. It defines tool usage patterns, best practices, and documentation standards to ensure high-quality, comprehensive work.
## 🎯 Project Overview
The **Gemini MCP Server** is a Model Context Protocol (MCP) server that provides Claude with access to Google's Gemini AI models through specialized tools. This enables sophisticated AI-assisted development workflows combining Claude's general capabilities with Gemini's deep analytical and creative thinking abilities.
### Core Philosophy
- **Collaborative Intelligence**: Claude and Gemini work together, with Claude handling immediate tasks and coordination while Gemini provides deep analysis, creative solutions, and comprehensive code review
- **Task-Appropriate Tools**: Different tools for different purposes - quick chat for simple questions, deep thinking for architecture, specialized review for code quality
- **Documentation-Driven Development**: All code changes must be accompanied by comprehensive, accessible documentation
## 🛠️ The Collaboration Toolbox
### Tool Selection Matrix
| Tool | Primary Use Cases | When to Use | Collaboration Level |
|------|------------------|-------------|-------------------|
| **`chat`** | Quick questions, brainstorming, simple code snippets | Immediate answers, exploring ideas, general discussion | Low - Claude leads |
| **`thinkdeep`** | Complex architecture, system design, strategic planning | Major features, refactoring strategies, design decisions | High - Gemini leads |
| **`analyze`** | Code exploration, understanding existing systems | Onboarding, dependency analysis, codebase comprehension | Medium - Both collaborate |
| **`codereview`** | Code quality, security, bug detection | PR reviews, pre-commit validation, security audits | High - Gemini leads |
| **`debug`** | Root cause analysis, error investigation | Bug fixes, stack trace analysis, performance issues | Medium - Gemini leads |
| **`precommit`** | Automated quality gates | Before every commit (automated) | Medium - Gemini validates |
### Mandatory Collaboration Rules
1. **Complex Tasks (>3 steps)**: Always use TodoWrite to plan and track progress
2. **Architecture Decisions**: Must involve `thinkdeep` for exploration before implementation
3. **Code Reviews**: All significant changes require `codereview` analysis before committing
4. **Documentation Updates**: Any code change must include corresponding documentation updates
## 📋 Task Categories & Workflows
### 🏗️ New Feature Development
```
1. Planning (thinkdeep) → Architecture and approach
2. Analysis (analyze) → Understanding existing codebase
3. Implementation (human + Claude) → Writing the code
4. Review (codereview) → Quality validation
5. Documentation (both) → Comprehensive docs
6. Testing (precommit) → Automated validation
```
### 🐛 Bug Investigation & Fixing
```
1. Diagnosis (debug) → Root cause analysis
2. Analysis (analyze) → Understanding affected code
3. Implementation (human + Claude) → Fix development
4. Review (codereview) → Security and quality check
5. Testing (precommit) → Validation before commit
```
### 📖 Documentation & Analysis
```
1. Exploration (analyze) → Understanding current state
2. Planning (chat/thinkdeep) → Structure and approach
3. Documentation (both) → Writing comprehensive docs
4. Review (human) → Accuracy validation
```
## 📚 Documentation Standards & Best Practices
### Documentation Directory Structure
```
docs/
├── architecture/ # System design and technical architecture
│ ├── overview.md # High-level system architecture
│ ├── components.md # Component descriptions and interactions
│ ├── data-flow.md # Data flow diagrams and explanations
│ └── decisions/ # Architecture Decision Records (ADRs)
├── contributing/ # Development and contribution guidelines
│ ├── setup.md # Development environment setup
│ ├── workflows.md # Development workflows and processes
│ ├── code-style.md # Coding standards and style guide
│ ├── testing.md # Testing strategies and requirements
│ └── file-overview.md # Guide to repository structure
├── api/ # API documentation
│ ├── mcp-protocol.md # MCP protocol implementation details
│ └── tools/ # Individual tool documentation
└── user-guides/ # End-user documentation
├── installation.md # Installation and setup
├── configuration.md # Configuration options
└── troubleshooting.md # Common issues and solutions
```
### Documentation Quality Standards
#### For Technical Audiences
- **Code Context**: All explanations must reference specific files and line numbers using `file_path:line_number` format
- **Architecture Focus**: Explain *why* decisions were made, not just *what* was implemented
- **Data Flow**: Trace data through the system with concrete examples
- **Error Scenarios**: Document failure modes and recovery strategies
#### For Non-Technical Audiences
- **Plain Language**: Avoid jargon, explain technical terms when necessary
- **Purpose-Driven**: Start with "what problem does this solve?"
- **Visual Aids**: Use diagrams and flowcharts where helpful
- **Practical Examples**: Show real usage scenarios
### File Overview Requirements (Contributing Guide)
Each file must be documented with:
- **Purpose**: What problem does this file solve?
- **Key Components**: Main classes/functions and their roles
- **Dependencies**: What other files/modules does it interact with?
- **Data Flow**: How data moves through this component
- **Extension Points**: Where/how can this be extended?
## 🔄 Mandatory Collaboration Patterns
### Double Validation Protocol
**Critical Code Reviews**: For security-sensitive or architecture-critical changes:
1. **Primary Analysis** (Gemini): Deep analysis using `codereview` or `thinkdeep`
2. **Adversarial Review** (Claude): Challenge findings, look for edge cases, validate assumptions
3. **Synthesis**: Combine insights, resolve disagreements, document final approach
4. **Memory Update**: Record key decisions and validation results
### Memory-Driven Context Management
**Active Memory Usage**: Always maintain project context via memory MCP:
```bash
# Store key insights
mcp_memory_create_entities: Project decisions, validation findings, user preferences
# Track progress
mcp_memory_add_observations: Task status, approach changes, learning insights
# Retrieve context
mcp_memory_search_nodes: Before starting tasks, query relevant past decisions
```
### Pre-Implementation Analysis
Before any significant code change:
1. **Query Memory**: Search for related past decisions and constraints
2. Use `analyze` to understand current implementation
3. Use `thinkdeep` for architectural planning if complex
4. **Store Plan**: Document approach in memory and todos
5. Get consensus on direction before coding
### Pre-Commit Validation
Before every commit:
1. **Memory Check**: Verify alignment with past architectural decisions
2. Run `precommit` tool for automated validation
3. Use `codereview` for manual quality check (with adversarial validation if critical)
4. **Update Progress**: Record completion status in memory
5. Ensure documentation is updated
### Cross-Tool Continuation & Memory Persistence
- Use `continuation_id` to maintain context across tool calls
- **Mandatory Memory Updates**: Record all significant findings and decisions
- Document decision rationale when switching between tools
- Always summarize findings when moving between analysis phases
- **Context Retrieval**: Start complex tasks by querying memory for relevant background
### CLAUDE.md Auto-Refresh Protocol
**Mandatory context updates for consistent collaboration:**
1. **Session Start**: Always read CLAUDE.md to understand current collaboration rules
2. **Every 10 interactions**: Re-read CLAUDE.md to ensure rule compliance
3. **Before complex tasks**: Check CLAUDE.md for appropriate tool selection and collaboration patterns
4. **After rule changes**: Immediately inform Gemini of any CLAUDE.md updates
5. **Memory synchronization**: Store CLAUDE.md key principles in Memory MCP for quick reference
**Implementation Pattern:**
```bash
# At session start and every 10 interactions
Read: /path/to/CLAUDE.md
# Store key rules in memory
mcp_memory_create_entities: "CLAUDE Collaboration Rules" (entityType: "guidelines")
# Inform Gemini of rule updates
mcp_gemini_chat: "CLAUDE.md has been updated with new collaboration rules: [summary]"
```
**Rule Propagation**: When CLAUDE.md is updated, both Claude and Gemini must acknowledge and adapt to new collaboration patterns within the same session.
## 📋 Quality Gates & Standards
### Code Quality Requirements
- **Security**: No exposed secrets, proper input validation
- **Performance**: Consider token usage, avoid unnecessary API calls
- **Maintainability**: Clear variable names, logical structure
- **Documentation**: Inline comments for complex logic only when requested
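The input-validation requirement above can be illustrated with a small sketch; the request class, field names, and limits here are hypothetical, and plain dataclass validation stands in for the pydantic models the tool templates use:

```python
from dataclasses import dataclass


@dataclass
class ReviewRequest:
    """Hypothetical request model showing defensive input validation."""

    file_path: str
    max_tokens: int = 8192

    def __post_init__(self) -> None:
        # Reject empty paths and obvious traversal attempts before any file access
        if not self.file_path or ".." in self.file_path:
            raise ValueError("invalid file_path")
        # Keep the token budget positive and within the model's context window
        if not (0 < self.max_tokens <= 1_000_000):
            raise ValueError("max_tokens out of range")
```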
### Documentation Quality Gates
- **Accuracy**: Documentation must reflect actual code behavior
- **Completeness**: Cover all user-facing functionality
- **Accessibility**: Understandable by intended audience
- **Currency**: Updated with every related code change
### Collaboration Quality Gates
- **Task Planning**: Use TodoWrite for complex tasks
- **Tool Appropriateness**: Use the right tool for each job
- **Context Preservation**: Maintain conversation threads
- **Validation**: Always validate assumptions with appropriate tools
## 🖥️ MCP Server Integration Rules
### Memory MCP Server (`mcp__memory__*`)
**Primary Usage**: Long-term context preservation and project knowledge management
#### Entity Management Strategy
```bash
# Project Structure Entities
- "Repository Architecture" (entityType: "codebase_structure")
- "User Preferences" (entityType: "configuration")
- "Active Tasks" (entityType: "work_items")
- "Validation History" (entityType: "quality_records")
# Relationship Patterns
- "depends_on", "conflicts_with", "validates", "implements"
```
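A hedged sketch of how the entities and relations above might be shaped as payloads. The `name`/`entityType`/`observations` and `from`/`to`/`relationType` fields follow the common memory MCP server schema, but verify them against the server you actually run:

```python
# Payloads for mcp__memory__create_entities / create_relations (sketch).
# Field names are assumptions based on the common memory MCP server schema.
entities = [
    {
        "name": "Repository Architecture",
        "entityType": "codebase_structure",
        "observations": ["tools/ holds one module per MCP tool"],
    },
    {
        "name": "Validation History",
        "entityType": "quality_records",
        "observations": ["2025-06-11: codereview passed, no critical findings"],
    },
]

relations = [
    {
        "from": "Validation History",
        "to": "Repository Architecture",
        "relationType": "validates",
    },
]
```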
#### Mandatory Memory Operations
1. **Task Start**: Query memory for related context
2. **Key Decisions**: Create entities for architectural choices
3. **Progress Updates**: Add observations to track status
4. **Task Completion**: Record final outcomes and learnings
5. **Validation Results**: Store both positive and negative findings
### Context7 MCP Server (`mcp__context7__*`)
**Primary Usage**: External documentation and library reference
#### Usage Guidelines
1. **Library Research**: Always resolve library IDs before requesting docs
2. **Architecture Decisions**: Fetch relevant framework documentation
3. **Best Practices**: Query for current industry standards
4. **Token Management**: Use focused topics to optimize context usage
```bash
# Workflow Example
mcp__context7__resolve-library-id libraryName="fastapi"
mcp__context7__get-library-docs context7CompatibleLibraryID="/tiangolo/fastapi" topic="security middleware"
```
### IDE MCP Server (`mcp__ide__*`)
**Primary Usage**: Real-time code validation and execution
#### Integration Pattern
1. **Live Validation**: Check diagnostics before final review
2. **Testing**: Execute code snippets for validation
3. **Error Verification**: Confirm fixes resolve actual issues
### Memory Bank Strategy
#### Initialization Protocol
**ALWAYS start every session by checking for `memory-bank/` directory:**
**Initial Check:**
```bash
# First action in any session
<thinking>
- **CHECK FOR MEMORY BANK:**
* First, check if the memory-bank/ directory exists.
* If memory-bank DOES exist, skip immediately to `if_memory_bank_exists`.
</thinking>
LS tool: Check for memory-bank/ directory existence
```
**If No Memory Bank Exists:**
1. **Inform User**: "No Memory Bank was found. I recommend creating one to maintain project context."
2. **Offer Initialization**: Ask user if they would like to initialize the Memory Bank.
3. **Conditional Actions**:
- **If user declines**:
```bash
<thinking>
I need to proceed with the task without Memory Bank functionality.
</thinking>
```
a. Inform user that Memory Bank will not be created
b. Set status to `[MEMORY BANK: INACTIVE]`
c. Proceed with task using current context or ask followup question if no task provided
- **If user agrees**:
```bash
<thinking>
I need to create the `memory-bank/` directory and core files. I should use Write tool for this, and I should do it one file at a time, waiting for confirmation after each. The initial content for each file is defined below. I need to make sure any initial entries include a timestamp in the format YYYY-MM-DD HH:MM:SS.
</thinking>
```
4. **Check for `projectBrief.md`**:
- Use LS tool to check for `projectBrief.md` *before* offering to create memory bank
- If `projectBrief.md` exists: Read its contents *before* offering to create memory bank
- If no `projectBrief.md`: Skip this step (handle prompting for project info *after* user agrees to initialize)
5. **Memory Bank Creation Process**:
```bash
<thinking>
I need to add default content for the Memory Bank files.
</thinking>
```
a. Create the `memory-bank/` directory
b. Create `memory-bank/productContext.md` with initial content template
c. Create `memory-bank/activeContext.md` with initial content template
d. Create `memory-bank/progress.md` with initial content template
e. Create `memory-bank/decisionLog.md` with initial content template
f. Create `memory-bank/systemPatterns.md` with initial content template
g. Set status to `[MEMORY BANK: ACTIVE]` and inform user
h. Proceed with task using Memory Bank context or ask followup question if no task provided
**If Memory Bank Exists:**
```bash
**READ *ALL* MEMORY BANK FILES**
<thinking>
I will read all memory bank files, one at a time.
</thinking>
Plan: Read all mandatory files sequentially.
1. Read `productContext.md`
2. Read `activeContext.md`
3. Read `systemPatterns.md`
4. Read `decisionLog.md`
5. Read `progress.md`
6. Set status to [MEMORY BANK: ACTIVE] and inform user
7. Proceed with task using Memory Bank context or ask followup question if no task provided
```
**Status Requirement:**
- Begin EVERY response with either `[MEMORY BANK: ACTIVE]` or `[MEMORY BANK: INACTIVE]` according to current state
#### Memory Bank File Structure & Templates
```
memory-bank/
├── productContext.md # High-level project overview and goals
├── activeContext.md # Current status, recent changes, open issues
├── progress.md # Task tracking (completed, current, next)
├── decisionLog.md # Architectural decisions with rationale
└── systemPatterns.md # Recurring patterns and standards
```
**Initial Content Templates**:
**productContext.md**:
```markdown
# Product Context
This file provides a high-level overview of the project and the expected product that will be created. Initially it is based upon projectBrief.md (if provided) and all other available project-related information in the working directory. This file is intended to be updated as the project evolves, and should be used to inform all other modes of the project's goals and context.
YYYY-MM-DD HH:MM:SS - Log of updates made will be appended as footnotes to the end of this file.
*
## Project Goal
*
## Key Features
*
## Overall Architecture
*
```
**activeContext.md**:
```markdown
# Active Context
This file tracks the project's current status, including recent changes, current goals, and open questions.
YYYY-MM-DD HH:MM:SS - Log of updates made.
*
## Current Focus
*
## Recent Changes
*
## Open Questions/Issues
*
```
**progress.md**:
```markdown
# Progress
This file tracks the project's progress using a task list format.
YYYY-MM-DD HH:MM:SS - Log of updates made.
*
## Completed Tasks
*
## Current Tasks
*
## Next Steps
*
```
**decisionLog.md**:
```markdown
# Decision Log
This file records architectural and implementation decisions using a list format.
YYYY-MM-DD HH:MM:SS - Log of updates made.
*
## Decision
*
## Rationale
*
## Implementation Details
*
```
**systemPatterns.md**:
```markdown
# System Patterns *Optional*
This file documents recurring patterns and standards used in the project.
It is optional but recommended, and should be updated as the project evolves.
YYYY-MM-DD HH:MM:SS - Log of updates made.
*
## Coding Patterns
*
## Architectural Patterns
*
## Testing Patterns
*
```
#### Update Triggers & Patterns
**Real-time updates throughout session when:**
- **Product Context**: High-level goals/features/architecture changes
- **Active Context**: Focus shifts, significant progress, new issues arise
- **Progress**: Tasks begin, complete, or change status
- **Decision Log**: Architectural decisions, technology choices, design patterns
- **System Patterns**: New patterns introduced or existing ones modified
#### UMB Command (`Update Memory Bank`)
**Manual synchronization command for comprehensive updates:**
```bash
User: "UMB" or "Update Memory Bank"
Response: "[MEMORY BANK: UPDATING]"
```
**UMB Process**:
1. Review complete chat history
2. Extract cross-mode information and context
3. Update all affected memory-bank files
4. Sync with Memory MCP entities
5. Ensure consistency across all systems
#### Memory Bank ↔ Memory MCP Integration
**Dual-system approach for maximum context preservation:**
```bash
# On Memory Bank creation/update
1. Update memory-bank/*.md files
2. Create/update corresponding Memory MCP entities:
- "Project Context" (entityType: "memory_bank_sync")
- "Active Tasks" (entityType: "memory_bank_sync")
- "Decision History" (entityType: "memory_bank_sync")
# Cross-reference pattern
mcp__memory__create_relations:
- "Memory Bank" -> "validates" -> "Memory MCP Context"
- "Decision Log Entry" -> "implements" -> "Architecture Decision"
```
### MCP Server Orchestration Rules
#### Priority Order for Context
1. **Memory Bank**: Local file-based project context (primary)
2. **Memory MCP**: Entity-based context and relationships (secondary)
3. **Context7**: External documentation when needed
4. **IDE**: Live validation as final check
#### Resource Management
- **Token Budgeting**: Reserve 40% of context (30% Memory Bank + 10% Memory MCP)
- **Update Frequency**: Memory Bank updates real-time, Memory MCP after significant decisions
- **Cleanup**: Archive completed entities monthly, rotate old memory-bank entries
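The 40% reservation above is simple arithmetic over the context ceiling; a sketch assuming the 1M-token window that `MAX_CONTEXT_TOKENS` defines:

```python
# Token budget arithmetic for the 40% context reservation (sketch; the
# ceiling mirrors MAX_CONTEXT_TOKENS at config.py:30).
MAX_CONTEXT_TOKENS = 1_000_000

MEMORY_BANK_BUDGET = int(MAX_CONTEXT_TOKENS * 0.30)  # 300_000 tokens for Memory Bank
MEMORY_MCP_BUDGET = int(MAX_CONTEXT_TOKENS * 0.10)   # 100_000 tokens for Memory MCP
WORKING_BUDGET = MAX_CONTEXT_TOKENS - MEMORY_BANK_BUDGET - MEMORY_MCP_BUDGET  # 600_000 left for the task
```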
#### Error Handling
- **Memory Bank Unavailable**: Fall back to Memory MCP only
- **Memory MCP Unavailable**: Use Memory Bank files only
- **Both Unavailable**: Fall back to TodoWrite for basic tracking
- **Context7 Timeout**: Use web search as backup
- **IDE Issues**: Continue with static analysis only
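The fallback chain above can be sketched as a loop over backends in priority order; the callables here are hypothetical stand-ins for the real Memory Bank writes, Memory MCP calls, and TodoWrite:

```python
def record_progress(note, memory_bank=None, memory_mcp=None, todo_write=None):
    """Try each tracking backend in the priority order listed above."""
    backends = [
        ("memory_bank", memory_bank),  # primary: local memory-bank/ files
        ("memory_mcp", memory_mcp),    # secondary: entity-based Memory MCP
        ("todo_write", todo_write),    # last resort: basic TodoWrite tracking
    ]
    for name, backend in backends:
        if backend is None:
            continue  # unavailable - fall through to the next option
        try:
            backend(note)
            return name
        except Exception:
            continue  # backend errored - treat as unavailable
    raise RuntimeError("no tracking backend available")
```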
## 🚀 Repository-Specific Guidelines
### File Structure Understanding
- `tools/`: Individual MCP tool implementations
- `utils/`: Shared utilities (file handling, git operations, token management)
- `prompts/`: System prompts for different tool types
- `tests/`: Comprehensive test suite
- `config.py`: Centralized configuration
### Key Integration Points
- `config.py:24`: Model configuration (`GEMINI_MODEL`)
- `config.py:30`: Token limits (`MAX_CONTEXT_TOKENS`)
- `utils/git_utils.py`: Git operations for code analysis
- `utils/file_utils.py`: File reading and processing
- `utils/conversation_memory.py`: Cross-session context
### Development Workflows
1. **Feature Branches**: Always work on feature branches
2. **Testing**: Run full test suite before PR
3. **Documentation**: Update docs with every change
4. **Review Process**: Use `codereview` tool, then human review
## 🎯 Success Metrics
### For Claude & Gemini Collaboration
- All complex tasks tracked with TodoWrite
- Appropriate tool selection for each phase
- Comprehensive pre-commit validation
- Documentation updated with every code change
### For Code Quality
- No critical security issues in `codereview`
- All tests passing
- Documentation accuracy verified
- Performance considerations addressed
### For User Experience
- Technical users can contribute using contributing docs
- Non-technical users can understand system purpose
- Clear troubleshooting guidance available
- Setup instructions are complete and tested
---
This framework ensures that every contribution to the repository maintains high standards while leveraging the full collaborative potential of Claude and Gemini working together.

380
docs/contributing/setup.md Normal file
View File

@@ -0,0 +1,380 @@
# Development Environment Setup
This guide helps you set up a development environment for contributing to the Gemini MCP Server.
## Prerequisites
### Required Software
- **Python 3.11+** - [Download](https://www.python.org/downloads/)
- **Docker Desktop** - [Download](https://www.docker.com/products/docker-desktop/)
- **Git** - [Download](https://git-scm.com/downloads)
- **Claude Desktop** - [Download](https://claude.ai/download) (for testing)
### Recommended Tools
- **VS Code** with Python extension
- **PyCharm** or your preferred Python IDE
- **pytest** for running tests
- **black** and **ruff** for code formatting
## Quick Setup
### 1. Clone Repository
```bash
git clone https://github.com/BeehiveInnovations/gemini-mcp-server.git
cd gemini-mcp-server
```
### 2. Choose Development Method
#### Option A: Docker Development (Recommended)
Best for consistency and avoiding local Python environment issues:
```bash
# One-command setup
./setup-docker.sh
# Development with auto-reload
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
```
#### Option B: Local Python Development
For direct Python development and debugging:
```bash
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Install development dependencies
pip install -r requirements-dev.txt
```
### 3. Configuration
```bash
# Copy example environment file
cp .env.example .env
# Edit with your API key
nano .env
# Add: GEMINI_API_KEY=your-gemini-api-key-here
```
### 4. Verify Setup
```bash
# Run unit tests
python -m pytest tests/ --ignore=tests/test_live_integration.py -v
# Test with live API (requires API key)
python tests/test_live_integration.py
# Run linting
black --check .
ruff check .
```
## Development Workflows
### Code Quality Tools
```bash
# Format code
black .
# Lint code
ruff check .
ruff check . --fix # Auto-fix issues
# Type checking
mypy .
# Run all quality checks
./scripts/quality-check.sh # If available
```
### Testing Strategy
#### Unit Tests (No API Key Required)
```bash
# Run all unit tests
python -m pytest tests/ --ignore=tests/test_live_integration.py -v
# Run with coverage
python -m pytest tests/ --ignore=tests/test_live_integration.py --cov=. --cov-report=html
# Run specific test file
python -m pytest tests/test_tools.py -v
```
#### Live Integration Tests (API Key Required)
```bash
# Set API key
export GEMINI_API_KEY=your-api-key-here
# Run live tests
python tests/test_live_integration.py
# Or specific live test
python -m pytest tests/test_live_integration.py::test_chat_tool -v
```
### Adding New Tools
1. **Create tool file**: `tools/your_tool.py`
2. **Inherit from BaseTool**: Implement required methods
3. **Add system prompt**: Include in `prompts/tool_prompts.py`
4. **Register tool**: Add to `TOOLS` dict in `server.py`
5. **Write tests**: Add unit tests with mocks
6. **Test live**: Verify with actual API calls
#### Tool Template
```python
# tools/your_tool.py
from typing import Any, Optional

from mcp.types import TextContent
from pydantic import Field

from .base import BaseTool, ToolRequest
from prompts import YOUR_TOOL_PROMPT


class YourToolRequest(ToolRequest):
    """Request model for your tool"""

    param1: str = Field(..., description="Required parameter")
    param2: Optional[str] = Field(None, description="Optional parameter")


class YourTool(BaseTool):
    """Your tool description"""

    def get_name(self) -> str:
        return "your_tool"

    def get_description(self) -> str:
        return "Your tool description for Claude"

    def get_system_prompt(self) -> str:
        return YOUR_TOOL_PROMPT

    def get_request_model(self):
        return YourToolRequest

    async def prepare_prompt(self, request: YourToolRequest) -> str:
        # Build your prompt here
        return f"Your prompt with {request.param1}"
```
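Step 4 above (registering in the `TOOLS` dict) can then be sketched as follows. The registry-by-name pattern is an assumption about `server.py`'s shape, and `ChatTool` here is a minimal stand-in for an existing tool class:

```python
# Sketch of step 4: server.py keeps a registry mapping tool names to instances.
# ChatTool and YourTool are stand-ins for the real tool classes.


class ChatTool:
    def get_name(self) -> str:
        return "chat"


class YourTool:
    def get_name(self) -> str:
        return "your_tool"


def build_registry(*tools) -> dict:
    """Register each tool under the name it reports, mirroring the TOOLS dict."""
    return {tool.get_name(): tool for tool in tools}


TOOLS = build_registry(ChatTool(), YourTool())
```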
### Docker Development
#### Development Compose File
Create `docker-compose.dev.yml`:
```yaml
services:
  gemini-mcp:
    build:
      context: .
      dockerfile: Dockerfile.dev  # If you have a dev Dockerfile
    volumes:
      - .:/app  # Mount source code for hot reload
    environment:
      - LOG_LEVEL=DEBUG
    command: ["python", "-m", "server", "--reload"]  # If you add reload support
```
#### Development Commands
```bash
# Start development environment
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
# Run tests in container
docker compose exec gemini-mcp python -m pytest tests/ -v
# Access container shell
docker compose exec gemini-mcp bash
# View logs
docker compose logs -f gemini-mcp
```
## IDE Configuration
### VS Code
**Recommended extensions:**
- Python
- Pylance
- Black Formatter
- Ruff
- Docker
**Settings** (`.vscode/settings.json`):
```json
{
  "python.defaultInterpreterPath": "./venv/bin/python",
  "python.formatting.provider": "black",
  "python.linting.enabled": true,
  "python.linting.ruffEnabled": true,
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": [
    "tests/",
    "--ignore=tests/test_live_integration.py"
  ]
}
```
### PyCharm
1. **Configure interpreter**: Settings → Project → Python Interpreter
2. **Set up test runner**: Settings → Tools → Python Integrated Tools → Testing
3. **Configure code style**: Settings → Editor → Code Style → Python (use Black)
## Debugging
### Local Debugging
```python
# Add to your code for debugging
import pdb; pdb.set_trace()
# Or use your IDE's debugger
```
### Container Debugging
```bash
# Run container in debug mode
docker compose exec gemini-mcp python -m pdb server.py
# Or add debug prints
LOG_LEVEL=DEBUG docker compose up
```
### Testing with Claude Desktop
1. **Configure Claude Desktop** to use your development server
2. **Use development container**:
```json
{
  "mcpServers": {
    "gemini-dev": {
      "command": "docker",
      "args": [
        "exec", "-i", "gemini-mcp-server",
        "python", "server.py"
      ]
    }
  }
}
```
## Contributing Workflow
### 1. Create Feature Branch
```bash
git checkout -b feature/your-feature-name
```
### 2. Make Changes
Follow the coding standards and add tests for your changes.
### 3. Run Quality Checks
```bash
# Format code
black .
# Check linting
ruff check .
# Run tests
python -m pytest tests/ --ignore=tests/test_live_integration.py -v
# Test with live API
export GEMINI_API_KEY=your-key
python tests/test_live_integration.py
```
### 4. Commit Changes
```bash
git add .
git commit -m "feat: add new feature description"
```
### 5. Push and Create PR
```bash
git push origin feature/your-feature-name
# Create PR on GitHub
```
## Performance Considerations
### Profiling
```python
# Add profiling to your code
import cProfile
import pstats


def profile_function():
    profiler = cProfile.Profile()
    profiler.enable()
    # Your code here
    profiler.disable()
    stats = pstats.Stats(profiler)
    stats.sort_stats('cumulative')
    stats.print_stats()
```
### Memory Usage
```bash
# Monitor memory usage
docker stats gemini-mcp-server
# Profile memory in Python
pip install memory-profiler
python -m memory_profiler your_script.py
```
## Troubleshooting Development Issues
### Common Issues
1. **Import errors**: Check your Python path and virtual environment
2. **API rate limits**: Use mocks in tests to avoid hitting limits
3. **Docker issues**: Check Docker Desktop is running and has enough resources
4. **Test failures**: Ensure you're using the correct Python version and dependencies
### Clean Environment
```bash
# Reset Python environment
rm -rf venv/
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Reset Docker environment
docker compose down -v
docker system prune -f
./setup-docker.sh
```
---
**Next Steps:**
- Read [Development Workflows](workflows.md)
- Review [Code Style Guide](code-style.md)
- Understand [Testing Strategy](testing.md)

View File

@@ -0,0 +1,233 @@
# Configuration Guide
This guide covers all configuration options for the Gemini MCP Server.
## Environment Variables
### Required Configuration
| Variable | Description | Example |
|----------|-------------|---------|
| `GEMINI_API_KEY` | Your Gemini API key from Google AI Studio | `AIzaSyC...` |
### Optional Configuration
| Variable | Default | Description |
|----------|---------|-------------|
| `REDIS_URL` | `redis://localhost:6379/0` | Redis connection URL for conversation threading |
| `WORKSPACE_ROOT` | `$HOME` | Root directory mounted as `/workspace` in container |
| `LOG_LEVEL` | `INFO` | Logging verbosity: `DEBUG`, `INFO`, `WARNING`, `ERROR` |
| `GEMINI_MODEL` | `gemini-2.5-pro-preview-06-05` | Gemini model to use |
| `MAX_CONTEXT_TOKENS` | `1000000` | Maximum context window (1M tokens for Gemini Pro) |
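On the Python side, the variables in the tables above might be consumed like this; a minimal sketch applying the documented defaults, where the function shape (rather than module-level globals) is an assumption made for illustration:

```python
import os


def load_config(env: dict) -> dict:
    """Read the variables from the tables above, applying the documented defaults."""
    # Required - fail fast with a clear message if the key is missing
    if not env.get("GEMINI_API_KEY"):
        raise RuntimeError("GEMINI_API_KEY is not set")
    # Optional - defaults mirror the configuration table
    return {
        "GEMINI_API_KEY": env["GEMINI_API_KEY"],
        "REDIS_URL": env.get("REDIS_URL", "redis://localhost:6379/0"),
        "WORKSPACE_ROOT": env.get("WORKSPACE_ROOT", os.path.expanduser("~")),
        "LOG_LEVEL": env.get("LOG_LEVEL", "INFO"),
        "GEMINI_MODEL": env.get("GEMINI_MODEL", "gemini-2.5-pro-preview-06-05"),
        "MAX_CONTEXT_TOKENS": int(env.get("MAX_CONTEXT_TOKENS", "1000000")),
    }
```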
## Claude Desktop Configuration
### MCP Server Configuration
Add to your Claude Desktop config file:
**Location:**
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows (WSL)**: `/mnt/c/Users/USERNAME/AppData/Roaming/Claude/claude_desktop_config.json`
**Configuration:**
```json
{
"mcpServers": {
"gemini": {
"command": "docker",
"args": [
"exec",
"-i",
"gemini-mcp-server",
"python",
"server.py"
]
}
}
}
```
### Alternative: Claude Code CLI
```bash
# Add MCP server via CLI
claude mcp add gemini -s user -- docker exec -i gemini-mcp-server python server.py
# List servers
claude mcp list
# Remove server
claude mcp remove gemini -s user
```
## Docker Configuration
### Environment File (.env)
```bash
# Required
GEMINI_API_KEY=your-gemini-api-key-here
# Optional - Docker Compose defaults
REDIS_URL=redis://redis:6379/0
WORKSPACE_ROOT=/Users/yourname
LOG_LEVEL=INFO
```
### Docker Compose Overrides
Create `docker-compose.override.yml` for custom settings:
```yaml
services:
  gemini-mcp:
    environment:
      - LOG_LEVEL=DEBUG
    volumes:
      - /custom/path:/workspace:ro
```
## Logging Configuration
### Log Levels
- **DEBUG**: Detailed operational messages, conversation threading, tool execution flow
- **INFO**: General operational messages (default)
- **WARNING**: Warnings and errors only
- **ERROR**: Errors only
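`LOG_LEVEL` typically feeds straight into Python's standard `logging` module; a minimal sketch, where the function name and logger name are illustrative:

```python
import logging
import os


def configure_logging(level_name=None) -> logging.Logger:
    """Map the LOG_LEVEL variable onto the standard logging levels listed above."""
    name = (level_name or os.getenv("LOG_LEVEL", "INFO")).upper()
    level = getattr(logging, name, logging.INFO)  # unknown names fall back to INFO
    # force=True reconfigures the root logger even if it was set up earlier
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
        force=True,
    )
    return logging.getLogger("gemini-mcp")
```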
### Viewing Logs
```bash
# Real-time logs
docker compose logs -f gemini-mcp
# Specific service logs
docker compose logs redis
docker compose logs log-monitor
```
## Security Configuration
### API Key Security
1. **Never commit API keys** to version control
2. **Use environment variables** or `.env` files
3. **Restrict key permissions** in Google AI Studio
4. **Rotate keys periodically**
### File Access Security
The container mounts your home directory as read-only. To restrict access:
```yaml
# In docker-compose.override.yml
services:
  gemini-mcp:
    environment:
      - WORKSPACE_ROOT=/path/to/specific/project
    volumes:
      - /path/to/specific/project:/workspace:ro
```
## Performance Configuration
### Memory Limits
```yaml
# In docker-compose.override.yml
services:
  gemini-mcp:
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
```
### Redis Configuration
Redis is pre-configured with optimal settings:
- 512MB memory limit
- LRU eviction policy
- Persistence enabled (saves every 60 seconds if data changed)
To customize Redis:
```yaml
# In docker-compose.override.yml
services:
  redis:
    command: redis-server --maxmemory 1g --maxmemory-policy allkeys-lru
```
## Troubleshooting Configuration
### Common Issues
1. **API Key Not Set**
```bash
# Check .env file
cat .env | grep GEMINI_API_KEY
```
2. **File Access Issues**
```bash
# Check mounted directory
docker exec -it gemini-mcp-server ls -la /workspace
```
3. **Redis Connection Issues**
```bash
# Test Redis connectivity
docker exec -it gemini-mcp-redis redis-cli ping
```
### Debug Mode
Enable debug logging for troubleshooting:
```bash
# In .env file
LOG_LEVEL=DEBUG
# Restart services
docker compose restart
```
## Advanced Configuration
### Custom Model Configuration
To use a different Gemini model, override in `.env`:
```bash
GEMINI_MODEL=gemini-2.5-pro-latest
```
### Network Configuration
For custom networking (advanced users):
```yaml
# In docker-compose.override.yml
networks:
custom_network:
driver: bridge
services:
gemini-mcp:
networks:
- custom_network
redis:
networks:
- custom_network
```
---
**See Also:**
- [Installation Guide](installation.md)
- [Troubleshooting Guide](troubleshooting.md)

# Troubleshooting Guide
This guide helps you resolve common issues with the Gemini MCP Server.
## Quick Diagnostics
### Check System Status
```bash
# Verify containers are running
docker compose ps
# Check logs for errors
docker compose logs -f
# Test API connectivity
docker exec -it gemini-mcp-server python -c "import os; print('API Key set:', bool(os.getenv('GEMINI_API_KEY')))"
```
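The three checks above can be rolled into one health-check sketch (it assumes the default container names from Docker Compose):

```shell
# Run each check, printing OK/FAIL instead of stopping at the first failure.
check() {
  label="$1"; shift
  if "$@" >/dev/null 2>&1; then echo "OK   $label"; else echo "FAIL $label"; fi
}
check "docker daemon"     docker info
check "API key in .env"   grep -q GEMINI_API_KEY .env
check "workspace mounted" docker exec gemini-mcp-server ls /workspace
check "redis reachable"   docker exec gemini-mcp-redis redis-cli ping
```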
## Common Issues
### 1. "Connection failed" in Claude Desktop
**Symptoms:**
- Claude Desktop shows "Connection failed" when trying to use Gemini tools
- MCP server appears disconnected
**Diagnosis:**
```bash
# Check if containers are running
docker compose ps
# Should show both containers as 'Up'
```
**Solutions:**
1. **Containers not running:**
```bash
docker compose up -d
```
2. **Container name mismatch:**
```bash
# Check actual container name
docker ps --format "{{.Names}}"
# Update Claude Desktop config if needed
```
3. **Docker Desktop not running:**
- Ensure Docker Desktop is started
- Check Docker daemon status: `docker info`
### 2. "GEMINI_API_KEY environment variable is required"
**Symptoms:**
- Server logs show API key error
- Tools respond with authentication errors
**Solutions:**
1. **Check .env file:**
```bash
cat .env | grep GEMINI_API_KEY
```
2. **Update API key:**
```bash
nano .env
# Change: GEMINI_API_KEY=your_actual_api_key
# Restart services
docker compose restart
```
3. **Verify key is valid:**
- Check [Google AI Studio](https://makersuite.google.com/app/apikey)
- Ensure key has proper permissions
### 3. Redis Connection Issues
**Symptoms:**
- Conversation threading not working
- Error logs mention Redis connection failures
**Diagnosis:**
```bash
# Check Redis container
docker compose ps redis
# Test Redis connectivity
docker exec -it gemini-mcp-redis redis-cli ping
# Should return: PONG
```
**Solutions:**
1. **Start Redis container:**
```bash
docker compose up -d redis
```
2. **Reset Redis data:**
```bash
docker compose down
docker volume rm gemini-mcp-server_redis_data
docker compose up -d
```
3. **Check Redis logs:**
```bash
docker compose logs redis
```
### 4. Tools Not Responding / Hanging
**Symptoms:**
- Gemini tools start but never complete
- Long response times
- Timeout errors
**Diagnosis:**
```bash
# Check resource usage
docker stats
# Check for memory/CPU constraints
```
**Solutions:**
1. **Restart services:**
```bash
docker compose restart
```
2. **Increase memory limits:**
```yaml
# In docker-compose.override.yml
services:
gemini-mcp:
deploy:
resources:
limits:
memory: 4G
```
3. **Check API rate limits:**
- Verify your Gemini API quota
- Consider using a paid API key for higher limits
### 5. File Access Issues
**Symptoms:**
- "File not found" errors when using file paths
- Permission denied errors
**Diagnosis:**
```bash
# Check mounted directory
docker exec -it gemini-mcp-server ls -la /workspace
# Verify file permissions
ls -la /path/to/your/file
```
**Solutions:**
1. **Use absolute paths:**
```
✅ /Users/yourname/project/file.py
❌ ./file.py
```
2. **Check file exists in mounted directory:**
```bash
# Files must be within WORKSPACE_ROOT (default: $HOME)
echo $WORKSPACE_ROOT
```
3. **Fix permissions (Linux):**
```bash
sudo chown -R $USER:$USER /path/to/your/files
```
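You can check a path against `WORKSPACE_ROOT` before handing it to a tool — a sketch (`FILE` below is a hypothetical example path):

```shell
# A path is readable by the server only if it sits under WORKSPACE_ROOT.
WORKSPACE_ROOT="${WORKSPACE_ROOT:-$HOME}"
FILE="/Users/yourname/project/file.py"   # hypothetical path to test
case "$FILE" in
  "$WORKSPACE_ROOT"/*) echo "inside workspace - the server can read it" ;;
  *)                   echo "OUTSIDE workspace - use a path under $WORKSPACE_ROOT" ;;
esac
```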
### 6. Port Conflicts
**Symptoms:**
- "Port already in use" errors
- Services fail to start
**Diagnosis:**
```bash
# Check what's using port 6379
lsof -i :6379
netstat -tulpn | grep 6379
```
**Solutions:**
1. **Stop conflicting services:**
```bash
# If you have local Redis running
sudo systemctl stop redis
# or
brew services stop redis
```
2. **Use different ports:**
```yaml
# In docker-compose.override.yml
services:
redis:
ports:
- "6380:6379"
```
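Before writing the override, you can scan for a free host port — a sketch (it assumes `lsof` is installed; on minimal Linux hosts substitute `ss -ltn`):

```shell
# Try candidate ports in order and report the first one not in use.
for PORT in 6379 6380 6381; do
  if ! lsof -i ":$PORT" >/dev/null 2>&1; then
    echo "Port $PORT looks free - map it as \"$PORT:6379\" in the override"
    break
  fi
done
```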
## Platform-Specific Issues
### Windows (WSL2)
**Common Issues:**
- Docker Desktop WSL2 integration not enabled
- File path format issues
- Permission problems
**Solutions:**
1. **Enable WSL2 integration:**
- Docker Desktop → Settings → Resources → WSL Integration
- Enable integration for your WSL distribution
2. **Use WSL2 paths:**
```bash
# Run commands from within WSL2
cd /mnt/c/Users/yourname/project
./setup-docker.sh
```
3. **File permissions:**
```bash
# In WSL2
chmod +x setup-docker.sh
```
### macOS
**Common Issues:**
- Docker Desktop not allocated enough resources
- File sharing permissions
**Solutions:**
1. **Increase Docker resources:**
- Docker Desktop → Settings → Resources
- Increase memory to at least 4GB
2. **File sharing:**
- Docker Desktop → Settings → Resources → File Sharing
- Ensure your project directory is included
### Linux
**Common Issues:**
- Docker permission issues
- systemd conflicts
**Solutions:**
1. **Docker permissions:**
```bash
sudo usermod -aG docker $USER
# Log out and back in
```
2. **Start Docker daemon:**
```bash
sudo systemctl start docker
sudo systemctl enable docker
```
## Advanced Troubleshooting
### Debug Mode
Enable detailed logging:
```bash
# In .env file
LOG_LEVEL=DEBUG
# Restart with verbose output
docker compose down && docker compose up
```
### Container Debugging
Access container for inspection:
```bash
# Enter MCP server container
docker exec -it gemini-mcp-server bash
# Check Python environment
python --version
pip list
# Test Gemini API directly
python -c "
import google.generativeai as genai
import os
genai.configure(api_key=os.getenv('GEMINI_API_KEY'))
model = genai.GenerativeModel('gemini-pro')
print('API connection test:', model.generate_content('Say OK').text)
"
```
### Network Debugging
Check container networking:
```bash
# Inspect Docker network
docker network ls
docker network inspect gemini-mcp-server_default
# Test container communication
docker exec -it gemini-mcp-server ping redis
```
### Clean Reset
Complete environment reset:
```bash
# Stop everything
docker compose down -v
# Remove images
docker rmi $(docker images "gemini-mcp-server*" -q)
# Clean setup
./setup-docker.sh
```
## Performance Optimization
### Resource Monitoring
```bash
# Monitor container resources
docker stats
# Check system resources
htop # or top
df -h # disk space
```
### Optimization Tips
1. **Allocate adequate memory:**
- Minimum: 2GB for Docker Desktop
- Recommended: 4GB+ for large projects
2. **Use SSD storage:**
- Docker volumes perform better on SSDs
3. **Limit context size:**
- Use specific file paths instead of entire directories
- Utilize thinking modes to control token usage
## Getting Help
### Collect Debug Information
Before seeking help, collect:
```bash
# System information
docker --version
docker compose --version
uname -a
# Container status
docker compose ps
docker compose logs --tail=100
# Configuration
cat .env | grep -v "GEMINI_API_KEY"
```
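A sketch that bundles all of the above into one shareable file, with the API key line stripped:

```shell
# Collect system, container, and config details into debug-report.txt.
# Errors from missing tools are captured in the report rather than aborting.
{
  docker --version
  docker compose --version
  uname -a
  docker compose ps
  docker compose logs --tail=100
  grep -v GEMINI_API_KEY .env
} > debug-report.txt 2>&1 || true
echo "Wrote debug-report.txt ($(wc -l < debug-report.txt) lines) - attach it to your bug report"
```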
### Support Channels
- 📖 [Documentation](../README.md)
- 💬 [GitHub Discussions](https://github.com/BeehiveInnovations/gemini-mcp-server/discussions)
- 🐛 [Issue Tracker](https://github.com/BeehiveInnovations/gemini-mcp-server/issues)
### Creating Bug Reports
Include in your bug report:
1. System information (OS, Docker version)
2. Steps to reproduce
3. Expected vs actual behavior
4. Relevant log output
5. Configuration (without API keys)
---
**See Also:**
- [Installation Guide](installation.md)
- [Configuration Guide](configuration.md)