
MCP Protocol Implementation

Overview

The Gemini MCP Server implements the Model Context Protocol (MCP) specification, providing Claude with standardized access to Google's Gemini AI models through a secure, tool-based interface.

Protocol Specification

MCP Version

  • Implemented Version: MCP v1.0
  • Transport: stdio (standard input/output)
  • Serialization: JSON-RPC 2.0
  • Authentication: Environment-based API key management

Core Protocol Flow

Request:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}

Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "chat",
        "description": "Quick questions and general collaboration",
        "inputSchema": {
          "type": "object",
          "properties": {
            "prompt": {"type": "string"},
            "continuation_id": {"type": "string"}
          },
          "required": ["prompt"]
        }
      }
    ]
  }
}

Tool Registration System

Tool Discovery (server.py:67)

# assumes the MCP Python SDK: from mcp.server import Server; import mcp.types as types
@server.list_tools()
async def list_tools() -> list[types.Tool]:
    """Dynamic tool discovery and registration"""
    tools = []
    
    # Scan tools directory for available tools
    for tool_module in REGISTERED_TOOLS:
        tool_instance = tool_module()
        schema = tool_instance.get_schema()
        tools.append(schema)
    
    return tools

Tool Schema Definition

Each tool must implement a standardized schema:

def get_schema(self) -> types.Tool:
    return types.Tool(
        name="analyze",
        description="Code exploration and understanding",
        inputSchema={
            "type": "object",
            "properties": {
                "files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Files or directories to analyze"
                },
                "question": {
                    "type": "string", 
                    "description": "What to analyze or look for"
                },
                "analysis_type": {
                    "type": "string",
                    "enum": ["architecture", "performance", "security", "quality", "general"],
                    "default": "general"
                },
                "thinking_mode": {
                    "type": "string",
                    "enum": ["minimal", "low", "medium", "high", "max"],
                    "default": "medium"
                },
                "continuation_id": {
                    "type": "string",
                    "description": "Thread continuation ID for multi-turn conversations"
                }
            },
            "required": ["files", "question"]
        }
    )
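
Because each tool publishes a JSON Schema, incoming arguments can be checked mechanically before execution. A minimal hand-rolled sketch of that idea (a real server would use a full JSON Schema validator such as the `jsonschema` package, or Pydantic):

```python
def validate_args(arguments: dict, input_schema: dict) -> list[str]:
    """Minimal required/type check against a tool's inputSchema.
    Returns a list of human-readable validation errors (empty = valid)."""
    errors = []
    props = input_schema.get("properties", {})
    type_map = {"string": str, "array": list, "object": dict,
                "number": (int, float), "boolean": bool}
    # Check that every required parameter is present
    for name in input_schema.get("required", []):
        if name not in arguments:
            errors.append(f"missing required parameter: {name}")
    # Check the JSON type of each supplied parameter
    for name, value in arguments.items():
        expected = props.get(name, {}).get("type")
        if expected and not isinstance(value, type_map.get(expected, object)):
            errors.append(f"{name}: expected {expected}")
    return errors

schema = {"type": "object",
          "properties": {"files": {"type": "array"}, "question": {"type": "string"}},
          "required": ["files", "question"]}
print(validate_args({"files": ["src/"]}, schema))  # ['missing required parameter: question']
```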

Tool Execution Protocol

Request Processing (server.py:89)

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    """Tool execution with comprehensive error handling"""
    
    try:
        # 1. Tool validation
        tool_class = TOOL_REGISTRY.get(name)
        if not tool_class:
            raise ToolNotFoundError(f"Tool '{name}' not found")
        
        # 2. Parameter validation
        tool_instance = tool_class()
        validated_args = tool_instance.validate_parameters(arguments)
        
        # 3. Security validation
        if 'files' in validated_args:
            validated_args['files'] = validate_file_paths(validated_args['files'])
        
        # 4. Tool execution
        result = await tool_instance.execute(validated_args)
        
        # 5. Response formatting
        return [types.TextContent(
            type="text",
            text=result.content
        )]
        
    except Exception as e:
        # Error response with context
        error_response = format_error_response(e, name, arguments)
        return [types.TextContent(
            type="text", 
            text=error_response
        )]

Response Standardization

All tools return standardized ToolOutput objects:

from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class ToolOutput:
    content: str
    metadata: Dict[str, Any]
    continuation_id: Optional[str] = None
    files_processed: List[str] = field(default_factory=list)
    thinking_tokens_used: int = 0
    status: str = "success"  # success, partial, error
    
    def to_mcp_response(self) -> str:
        """Convert to MCP-compatible response format"""
        response_parts = [self.content]
        
        if self.metadata:
            response_parts.append("\n## Metadata")
            for key, value in self.metadata.items():
                response_parts.append(f"- {key}: {value}")
        
        if self.files_processed:
            response_parts.append("\n## Files Processed")
            for file_path in self.files_processed:
                response_parts.append(f"- {file_path}")
        
        if self.continuation_id:
            response_parts.append(f"\n## Continuation ID: {self.continuation_id}")
        
        return '\n'.join(response_parts)
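
For illustration, here is the conversion in action, exercising a condensed restatement of the dataclass above (trimmed so the snippet runs standalone):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class ToolOutput:  # condensed restatement of the class above
    content: str
    metadata: Dict[str, Any]
    continuation_id: Optional[str] = None

    def to_mcp_response(self) -> str:
        parts = [self.content]
        if self.metadata:
            parts.append("\n## Metadata")
            parts.extend(f"- {k}: {v}" for k, v in self.metadata.items())
        if self.continuation_id:
            parts.append(f"\n## Continuation ID: {self.continuation_id}")
        return "\n".join(parts)

out = ToolOutput(content="Analysis complete.",
                 metadata={"thinking_mode": "medium"},
                 continuation_id="abc-123")
print(out.to_mcp_response())
```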

Individual Tool APIs

1. Chat Tool

Purpose: Quick questions, brainstorming, general discussion

API Specification:

{
  "name": "chat",
  "parameters": {
    "prompt": "string (required)",
    "continuation_id": "string (optional)",
    "temperature": "number (optional, 0.0-1.0, default: 0.5)",
    "thinking_mode": "string (optional, default: 'medium')"
  }
}

Example Request:

{
  "method": "tools/call",
  "params": {
    "name": "chat",
    "arguments": {
      "prompt": "Explain the benefits of using MCP protocol",
      "thinking_mode": "low"
    }
  }
}

Response Format:

{
  "result": [{
    "type": "text",
    "text": "The Model Context Protocol (MCP) provides several key benefits:\n\n1. **Standardization**: Unified interface across different AI tools...\n\n## Metadata\n- thinking_mode: low\n- tokens_used: 156\n- response_time: 1.2s"
  }]
}

2. ThinkDeep Tool

Purpose: Complex architecture, system design, strategic planning

API Specification:

{
  "name": "thinkdeep", 
  "parameters": {
    "current_analysis": "string (required)",
    "problem_context": "string (optional)",
    "focus_areas": "array of strings (optional)",
    "thinking_mode": "string (optional, default: 'high')",
    "files": "array of strings (optional)",
    "continuation_id": "string (optional)"
  }
}

Example Request:

{
  "method": "tools/call",
  "params": {
    "name": "thinkdeep",
    "arguments": {
      "current_analysis": "We have an MCP server with 6 specialized tools",
      "problem_context": "Need to scale to handle 100+ concurrent Claude sessions",
      "focus_areas": ["performance", "architecture", "resource_management"],
      "thinking_mode": "max"
    }
  }
}

3. Analyze Tool

Purpose: Code exploration, understanding existing systems

API Specification:

{
  "name": "analyze",
  "parameters": {
    "files": "array of strings (required)",
    "question": "string (required)", 
    "analysis_type": "enum: architecture|performance|security|quality|general",
    "thinking_mode": "string (optional, default: 'medium')",
    "continuation_id": "string (optional)"
  }
}

File Processing Behavior:

  • Directories: Recursively scanned for relevant files
  • Token Budget: Allocated based on file priority (source code > docs > logs)
  • Security: All paths validated and sandboxed to PROJECT_ROOT
  • Formatting: Line numbers added for precise code references
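
The sandboxing behavior above amounts to a resolve-then-prefix check. A simplified illustration (the server's actual path validation may do more, e.g. symlink and blocklist checks; the root path here is hypothetical):

```python
from pathlib import Path

PROJECT_ROOT = Path("/workspace/project")  # hypothetical sandbox root

def is_within_sandbox(path: str, root: Path = PROJECT_ROOT) -> bool:
    """Resolve the path and verify it stays under the sandbox root."""
    p = Path(path)
    resolved = p.resolve() if p.is_absolute() else (root / p).resolve()
    try:
        resolved.relative_to(root.resolve())  # raises ValueError if outside
        return True
    except ValueError:
        return False

print(is_within_sandbox("src/main.py"))        # True
print(is_within_sandbox("../../etc/passwd"))   # False (escapes via ..)
print(is_within_sandbox("/etc/passwd"))        # False (outside the root)
```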

4. CodeReview Tool

Purpose: Code quality, security, bug detection

API Specification:

{
  "name": "codereview",
  "parameters": {
    "files": "array of strings (required)",
    "context": "string (required)",
    "review_type": "enum: full|security|performance|quick (default: full)",
    "severity_filter": "enum: critical|high|medium|all (default: all)", 
    "standards": "string (optional)",
    "thinking_mode": "string (optional, default: 'medium')"
  }
}

Response Includes:

  • Issue Categorization: Critical → High → Medium → Low
  • Specific Fixes: Concrete code suggestions with line numbers
  • Security Assessment: Vulnerability detection and mitigation
  • Performance Analysis: Optimization opportunities

5. Debug Tool

Purpose: Root cause analysis, error investigation

API Specification:

{
  "name": "debug",
  "parameters": {
    "error_description": "string (required)",
    "error_context": "string (optional)", 
    "files": "array of strings (optional)",
    "previous_attempts": "string (optional)",
    "runtime_info": "string (optional)",
    "thinking_mode": "string (optional, default: 'medium')"
  }
}

Diagnostic Capabilities:

  • Stack Trace Analysis: Multi-language error parsing
  • Root Cause Identification: Systematic error investigation
  • Reproduction Steps: Detailed debugging procedures
  • Fix Recommendations: Prioritized solution approaches

6. Precommit Tool

Purpose: Automated quality gates, validation before commits

API Specification:

{
  "name": "precommit",
  "parameters": {
    "path": "string (required, git repository root)",
    "include_staged": "boolean (default: true)",
    "include_unstaged": "boolean (default: true)",
    "review_type": "enum: full|security|performance|quick (default: full)",
    "original_request": "string (optional, user's intent)",
    "thinking_mode": "string (optional, default: 'medium')"
  }
}

Validation Process:

  1. Git Analysis: Staged/unstaged changes detection
  2. Quality Review: Comprehensive code analysis
  3. Security Scan: Vulnerability and secret detection
  4. Documentation Check: Ensures docs match code changes
  5. Test Validation: Recommends testing strategies
  6. Commit Readiness: Go/no-go recommendation
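
Step 1 (git analysis) can be approximated by parsing `git status --porcelain` output, where column 1 is the staged (index) state and column 2 the unstaged (worktree) state — a sketch of the idea, not the tool's actual implementation:

```python
def classify_changes(porcelain: str) -> dict[str, list[str]]:
    """Split `git status --porcelain` lines into staged/unstaged/untracked paths."""
    staged, unstaged, untracked = [], [], []
    for line in porcelain.splitlines():
        if len(line) < 4:
            continue
        x, y, path = line[0], line[1], line[3:]
        if x == "?" and y == "?":        # "??" marks untracked files
            untracked.append(path)
            continue
        if x != " ":                      # index column set -> staged change
            staged.append(path)
        if y != " ":                      # worktree column set -> unstaged change
            unstaged.append(path)
    return {"staged": staged, "unstaged": unstaged, "untracked": untracked}

sample = "M  docs/api.md\n M server.py\nMM tools/chat.py\n?? notes.txt\n"
print(classify_changes(sample))
```

In practice the porcelain text would come from something like `subprocess.run(["git", "status", "--porcelain"], capture_output=True, text=True)`.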

Error Handling & Status Codes

Standard Error Responses

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params",
    "data": {
      "validation_errors": [
        {
          "field": "files",
          "error": "Path outside sandbox: /etc/passwd"
        }
      ]
    }
  }
}

Error Categories

Security Errors (Code: -32001):

  • Path traversal attempts
  • Unauthorized file access
  • Sandbox boundary violations

Validation Errors (Code: -32602):

  • Missing required parameters
  • Invalid parameter types
  • Schema validation failures

Tool Errors (Code: -32603):

  • Tool execution failures
  • Gemini API errors
  • Resource exhaustion

System Errors (Code: -32000):

  • Redis connection failures
  • File system errors
  • Configuration issues
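
Mapping internal exceptions onto these JSON-RPC error codes could look like the following sketch (the exception class names are illustrative, not necessarily the server's):

```python
import json

# Hypothetical exception classes, for illustration only
class SecurityError(Exception): ...
class ToolExecutionError(Exception): ...

ERROR_CODES = {
    SecurityError: -32001,       # sandbox / path violations
    ValueError: -32602,          # parameter validation failures
    ToolExecutionError: -32603,  # tool / Gemini API failures
}

def to_jsonrpc_error(exc: Exception, req_id: int) -> dict:
    """Build a JSON-RPC 2.0 error object, defaulting to the system code -32000."""
    code = next((c for t, c in ERROR_CODES.items() if isinstance(exc, t)), -32000)
    return {"jsonrpc": "2.0", "id": req_id,
            "error": {"code": code, "message": str(exc)}}

err = to_jsonrpc_error(SecurityError("Path outside sandbox: /etc/passwd"), 1)
print(json.dumps(err))
```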

Performance & Limits

Request Limits

  • Maximum File Size: 10MB per file
  • Maximum Files: 50 files per request
  • Token Budget: 1M tokens total context
  • Thinking Tokens: 32K maximum per tool
  • Request Timeout: 300 seconds
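
Enforcing the file-count and file-size limits before execution might be sketched as follows (constants taken from the table above; the actual server's enforcement logic may differ):

```python
import os

MAX_FILES = 50
MAX_FILE_SIZE = 10 * 1024 * 1024  # 10 MB per file

def check_request_limits(files: list[str]) -> list[str]:
    """Return a list of limit violations for a request's file set (empty = ok)."""
    problems = []
    if len(files) > MAX_FILES:
        problems.append(f"too many files: {len(files)} > {MAX_FILES}")
    for path in files:
        # Only size-check files that actually exist; path validation is separate
        if os.path.exists(path) and os.path.getsize(path) > MAX_FILE_SIZE:
            problems.append(f"file too large: {path}")
    return problems

print(check_request_limits([f"f{i}.py" for i in range(60)]))
```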

Rate Limiting

# Per-client rate limiting (future implementation)
RATE_LIMITS = {
    'chat': '10/minute',
    'analyze': '5/minute', 
    'thinkdeep': '3/minute',
    'codereview': '5/minute',
    'debug': '5/minute',
    'precommit': '3/minute'
}

Optimization Features

  • File Deduplication: Avoid reprocessing same files across conversation
  • Context Caching: Redis-based conversation persistence
  • Priority Processing: Source code files processed first
  • Concurrent Execution: AsyncIO-based parallel processing
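
File deduplication, for instance, can be implemented by hashing file contents and skipping digests already seen earlier in the conversation — a sketch of the idea, not the server's code:

```python
import hashlib

class FileDeduplicator:
    """Tracks content hashes so identical file contents are processed only once."""
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def should_process(self, content: bytes) -> bool:
        digest = hashlib.sha256(content).hexdigest()
        if digest in self._seen:
            return False          # same bytes were already processed
        self._seen.add(digest)
        return True

dedup = FileDeduplicator()
print(dedup.should_process(b"def main(): ..."))  # True
print(dedup.should_process(b"def main(): ..."))  # False (duplicate content)
```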

Security Considerations

Authentication

  • API Key: Gemini API key via environment variable
  • No User Auth: Runs in trusted Claude Desktop environment
  • Local Only: No network exposure beyond Gemini API

Data Protection

  • Sandbox Enforcement: PROJECT_ROOT boundary enforcement
  • Path Validation: Multi-layer dangerous path detection
  • Response Sanitization: Automatic sensitive data removal
  • Temporary Storage: Redis with TTL-based cleanup

Access Controls

  • Read-Only Default: Most operations are read-only
  • Explicit Write Gates: Write operations require explicit confirmation
  • Docker Isolation: Container-based runtime isolation

This MCP protocol implementation provides a secure, performant, and extensible foundation for AI-assisted development workflows while maintaining compatibility with Claude's expectations and requirements.