## 🚀 Major Improvements

### Docker Environment Simplification
- **BREAKING**: Simplified Docker configuration by auto-detecting the sandbox from WORKSPACE_ROOT
- Removed redundant MCP_PROJECT_ROOT requirement for Docker setups
- Updated all Docker config examples and setup scripts
- Added security validation for dangerous WORKSPACE_ROOT paths

### Security Enhancements
- **CRITICAL**: Fixed insecure PROJECT_ROOT fallback to use the current directory instead of home
- Enhanced path validation with proper Docker environment detection
- Removed information disclosure in error messages
- Strengthened symlink and path traversal protection

### File Handling Optimization
- **PERFORMANCE**: Optimized read_files() to return content only (removed summary)
- Unified file reading across all tools using standardized file_utils routines
- Fixed review_changes tool to use consistent file-loading patterns
- Improved token management and reduced unnecessary processing

### Tool Improvements
- **UX**: Enhanced ReviewCodeTool to require user context for targeted reviews
- Removed deprecated `_get_secure_container_path` and `_sanitize_filename` functions
- Standardized file access patterns across analyze, review_changes, and other tools
- Added contextual prompting to align reviews with user expectations

### Code Quality & Testing
- Updated all tests for new function signatures and requirements
- Added comprehensive Docker path integration tests
- Achieved 100% test coverage (95 tests passing)
- Full compliance with ruff, black, and isort linting standards

### Configuration & Deployment
- Added pyproject.toml for modern Python packaging
- Streamlined Docker setup by removing redundant environment variables
- Updated setup scripts across all platforms (Windows, macOS, Linux)
- Improved error handling and validation throughout

## 🔧 Technical Changes

- **Removed**: `_get_secure_container_path()`, `_sanitize_filename()`, unused SANDBOX_MODE
- **Enhanced**: Path translation, security validation, token management
- **Standardized**: File reading patterns, error handling, Docker detection
- **Updated**: All tool prompts for better context alignment

## 🛡️ Security Notes

This release significantly improves the security posture by:
- Eliminating broad filesystem access defaults
- Adding validation for Docker environment variables (see the sketch below)
- Removing information disclosure in error paths
- Strengthening path traversal and symlink protections

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
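The security items above describe intent but not mechanics. Below is a minimal sketch of what WORKSPACE_ROOT validation with symlink and traversal hardening might look like; `validate_workspace_root`, the `_DANGEROUS_ROOTS` blocklist, and the error message are illustrative assumptions, not the project's actual implementation.

```python
from pathlib import Path

# Illustrative blocklist; the real set of "dangerous" roots is an assumption.
_DANGEROUS_ROOTS = {Path("/"), Path("/etc"), Path("/usr"), Path("/bin"), Path("/var")}


def validate_workspace_root(raw: str) -> Path:
    """Reject WORKSPACE_ROOT values that would grant broad filesystem access."""
    resolved = Path(raw).resolve()  # collapses ".." segments and follows symlinks
    if resolved in _DANGEROUS_ROOTS or resolved == Path.home():
        # Keep the message generic so the error path does not echo the
        # offending value back to the caller (no information disclosure).
        raise ValueError("WORKSPACE_ROOT points at a disallowed directory")
    return resolved
```

Resolving the path before comparing is what defeats `..` segments and symlinked aliases of the blocked roots; rejecting `Path.home()` reflects the release's move away from home-directory defaults.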
"""
|
|
Configuration and constants for Gemini MCP Server
|
|
|
|
This module centralizes all configuration settings for the Gemini MCP Server.
|
|
It defines model configurations, token limits, temperature defaults, and other
|
|
constants used throughout the application.
|
|
|
|
Configuration values can be overridden by environment variables where appropriate.
|
|
"""
|
|
|
|
# Version and metadata
|
|
# These values are used in server responses and for tracking releases
|
|
__version__ = "2.11.0" # Semantic versioning: MAJOR.MINOR.PATCH
|
|
__updated__ = "2025-06-10" # Last update date in ISO format
|
|
__author__ = "Fahad Gilani" # Primary maintainer
|
|
|
|
# Model configuration
# GEMINI_MODEL: The Gemini model used for all AI operations
# This should be a stable, high-performance model suitable for code analysis
GEMINI_MODEL = "gemini-2.5-pro-preview-06-05"

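# The module docstring says configuration values can be overridden by
# environment variables where appropriate. A hypothetical override pattern
# (an assumption for illustration; this module may simply hard-code the
# default as above):
#
#   import os
#   GEMINI_MODEL = os.getenv("GEMINI_MODEL", "gemini-2.5-pro-preview-06-05")
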
# MAX_CONTEXT_TOKENS: Maximum number of tokens that can be included in a single request
# This limit includes both the prompt and the expected response
# Gemini Pro models support up to 1M tokens, but practical usage should reserve
# space for the model's response (typically 50K-100K tokens reserved)
MAX_CONTEXT_TOKENS = 1_000_000  # 1M tokens for Gemini Pro

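# Illustrative only: how a caller might budget a request against this limit.
# RESPONSE_RESERVE and fits_in_context() are hypothetical names, not part of
# this module's real API; the 100K reserve follows the comment above.
RESPONSE_RESERVE = 100_000


def fits_in_context(prompt_tokens: int) -> bool:
    """True if a prompt of the given size leaves room for the model's response."""
    return prompt_tokens <= MAX_CONTEXT_TOKENS - RESPONSE_RESERVE
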
# Temperature defaults for different tool types
# Temperature controls the randomness/creativity of model responses
# Lower values (0.0-0.3) produce more deterministic, focused responses
# Higher values (0.7-1.0) produce more creative, varied responses

# TEMPERATURE_ANALYTICAL: Used for tasks requiring precision and consistency
# Ideal for code review, debugging, and error analysis where accuracy is critical
TEMPERATURE_ANALYTICAL = 0.2  # For code review, debugging

# TEMPERATURE_BALANCED: Middle ground for general conversations
# Provides a good balance between consistency and helpful variety
TEMPERATURE_BALANCED = 0.5  # For general chat

# TEMPERATURE_CREATIVE: Higher temperature for exploratory tasks
# Used when brainstorming, exploring alternatives, or architectural discussions
TEMPERATURE_CREATIVE = 0.7  # For architecture, deep thinking

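# Illustrative mapping only: the tool names below are assumptions, and the
# actual wiring of temperatures to tools lives in each tool's own module.
TOOL_TEMPERATURES = {
    "review_code": TEMPERATURE_ANALYTICAL,  # precision-critical review/debugging
    "chat": TEMPERATURE_BALANCED,  # general conversation
    "think_deeper": TEMPERATURE_CREATIVE,  # exploratory/architectural work
}
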
# MCP Protocol Limits
# MCP_PROMPT_SIZE_LIMIT: Maximum character size for prompts sent directly through MCP
# The MCP protocol has a combined request+response limit of ~25K tokens.
# To ensure we have enough space for responses, we limit direct prompt input
# to 50K characters (roughly 10-12K tokens). Larger prompts must be sent
# as files to bypass MCP's token constraints.
MCP_PROMPT_SIZE_LIMIT = 50_000  # 50K characters