my-pal-mcp-server/tools/review_code.py
Fahad · 27add4d05d · 2025-06-10 09:50:05 +04:00
feat: Major refactoring and improvements v2.11.0
## 🚀 Major Improvements

### Docker Environment Simplification
- **BREAKING**: Simplified Docker configuration by auto-detecting sandbox from WORKSPACE_ROOT
- Removed redundant MCP_PROJECT_ROOT requirement for Docker setups
- Updated all Docker config examples and setup scripts
- Added security validation for dangerous WORKSPACE_ROOT paths (sketched below)
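
A minimal sketch of the auto-detection and validation above; the function name and deny-list here are illustrative assumptions, not the actual implementation:

```python
import os
from pathlib import Path

# Illustrative deny-list; the real validation list may differ
DANGEROUS_WORKSPACE_ROOTS = {Path("/"), Path("/etc"), Path("/usr"), Path("/var"), Path.home()}


def detect_sandbox_root() -> Path | None:
    """Derive the sandbox root from WORKSPACE_ROOT, rejecting dangerous paths."""
    raw = os.environ.get("WORKSPACE_ROOT")
    if raw is None:
        return None  # no sandbox: not running under the Docker setup
    root = Path(raw).resolve()
    if root in DANGEROUS_WORKSPACE_ROOTS:
        raise ValueError("WORKSPACE_ROOT must not point at a system or home directory")
    return root
```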

### Security Enhancements
- **CRITICAL**: Fixed insecure PROJECT_ROOT fallback so it defaults to the current directory instead of the user's home (sketched below)
- Enhanced path validation with proper Docker environment detection
- Removed information disclosure in error messages
- Strengthened symlink and path traversal protection
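
A rough sketch of the fallback fix; the helper name is hypothetical and the real code may differ:

```python
import os
from pathlib import Path


def resolve_project_root() -> Path:
    # Fall back to the current working directory, not the home directory:
    # an unset PROJECT_ROOT no longer grants access to everything under ~.
    return Path(os.environ.get("PROJECT_ROOT") or Path.cwd()).resolve()
```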

### File Handling Optimization
- **PERFORMANCE**: Optimized read_files() to return content only (removed summary; see the example below)
- Unified file reading across all tools using standardized file_utils routines
- Fixed review_changes tool to use consistent file loading patterns
- Improved token management and reduced unnecessary processing
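
The simplified call shape, with the previous behavior hedged as an assumption:

```python
from utils import read_files

# Now: a single concatenated string of file contents, ready to embed in a
# prompt (directories are expanded, as the tool code below relies on)
file_content = read_files(["/abs/path/to/module.py", "/abs/path/to/pkg/"])

# Previously (an assumption based on "removed summary"): callers unpacked an
# extra summary value, e.g. content, summary = read_files(...)
```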

### Tool Improvements
- **UX**: Enhanced ReviewCodeTool to require user context for targeted reviews (see the example below)
- Removed deprecated _get_secure_container_path function and _sanitize_filename
- Standardized file access patterns across analyze, review_changes, and other tools
- Added contextual prompting to align reviews with user expectations
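
Since `context` is now a required field, a request without it fails Pydantic validation; an illustrative example (the import path is assumed):

```python
from pydantic import ValidationError

from tools.review_code import ReviewCodeRequest  # import path assumed

try:
    ReviewCodeRequest(files=["/abs/path/app.py"])  # missing "context"
except ValidationError as exc:
    print(exc)  # reports that the "context" field is required

# A complete request that aligns the review with user expectations
request = ReviewCodeRequest(
    files=["/abs/path/app.py"],
    context="CLI entry point; review argument parsing and error handling",
    review_type="quick",
)
```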

### Code Quality & Testing
- Updated all tests for new function signatures and requirements
- Added comprehensive Docker path integration tests
- Achieved 100% test coverage (95 tests passing)
- Full compliance with ruff, black, and isort linting standards

### Configuration & Deployment
- Added pyproject.toml for modern Python packaging
- Streamlined Docker setup by removing redundant environment variables
- Updated setup scripts across all platforms (Windows, macOS, Linux)
- Improved error handling and validation throughout

## 🔧 Technical Changes

- **Removed**: `_get_secure_container_path()`, `_sanitize_filename()`, unused SANDBOX_MODE
- **Enhanced**: Path translation, security validation, token management
- **Standardized**: File reading patterns, error handling, Docker detection
- **Updated**: All tool prompts for better context alignment

## 🛡️ Security Notes

This release significantly improves the security posture by:
- Eliminating broad filesystem access defaults
- Adding validation for Docker environment variables
- Removing information disclosure in error paths
- Strengthening path traversal and symlink protections

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

"""
Code Review tool - Comprehensive code analysis and review
This tool provides professional-grade code review capabilities using
Gemini's understanding of code patterns, best practices, and common issues.
It can analyze individual files or entire codebases, providing actionable
feedback categorized by severity.
Key Features:
- Multi-file and directory support
- Configurable review types (full, security, performance, quick)
- Severity-based issue filtering
- Custom focus areas and coding standards
- Structured output with specific remediation steps
"""
from typing import Any, Optional
from mcp.types import TextContent
from pydantic import Field
from config import TEMPERATURE_ANALYTICAL
from prompts import REVIEW_CODE_PROMPT
from utils import read_files
from .base import BaseTool, ToolRequest
from .models import ToolOutput

class ReviewCodeRequest(ToolRequest):
    """
    Request model for the code review tool.

    This model defines all parameters that can be used to customize
    the code review process, from selecting files to specifying
    review focus and standards.
    """

    files: list[str] = Field(
        ...,
        description="Code files or directories to review (must be absolute paths)",
    )
    context: str = Field(
        ...,
        description="User's summary of what the code does, expected behavior, constraints, and review objectives",
    )
    review_type: str = Field("full", description="Type of review: full|security|performance|quick")
    focus_on: Optional[str] = Field(None, description="Specific aspects to focus on during review")
    standards: Optional[str] = Field(None, description="Coding standards or guidelines to enforce")
    severity_filter: str = Field(
        "all",
        description="Minimum severity to report: critical|high|medium|all",
    )
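
# Example construction (illustrative values; file paths must be absolute):
#   ReviewCodeRequest(
#       files=["/abs/path/to/service.py"],
#       context="Async job worker; verify retry logic and error handling",
#   )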


class ReviewCodeTool(BaseTool):
    """
    Professional code review tool implementation.

    This tool analyzes code for bugs, security vulnerabilities, performance
    issues, and code quality problems. It provides detailed feedback with
    severity ratings and specific remediation steps.
    """

    def get_name(self) -> str:
        return "review_code"

    def get_description(self) -> str:
        return (
            "PROFESSIONAL CODE REVIEW - Comprehensive analysis for bugs, security, and quality. "
            "Supports both individual files and entire directories/projects. "
            "Use this for thorough code review with actionable feedback. "
            "Triggers: 'review this code', 'check for issues', 'find bugs', 'security audit'. "
            "I'll identify issues by severity (Critical→High→Medium→Low) with specific fixes. "
            "Supports focused reviews: security, performance, or quick checks. "
            "Choose thinking_mode based on review scope: 'low' for small code snippets, "
            "'medium' for standard files/modules (default), 'high' for complex systems/architectures, "
            "'max' for critical security audits or large codebases requiring deepest analysis."
        )

    def get_input_schema(self) -> dict[str, Any]:
        return {
            "type": "object",
            "properties": {
                "files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Code files or directories to review (must be absolute paths)",
                },
                "context": {
                    "type": "string",
                    "description": "User's summary of what the code does, expected behavior, constraints, and review objectives",
                },
                "review_type": {
                    "type": "string",
                    "enum": ["full", "security", "performance", "quick"],
                    "default": "full",
                    "description": "Type of review to perform",
                },
                "focus_on": {
                    "type": "string",
                    "description": "Specific aspects to focus on",
                },
                "standards": {
                    "type": "string",
                    "description": "Coding standards to enforce",
                },
                "severity_filter": {
                    "type": "string",
                    "enum": ["critical", "high", "medium", "all"],
                    "default": "all",
                    "description": "Minimum severity level to report",
                },
                "temperature": {
                    "type": "number",
                    "description": "Temperature (0-1, default 0.2 for consistency)",
                    "minimum": 0,
                    "maximum": 1,
                },
                "thinking_mode": {
                    "type": "string",
                    "enum": ["minimal", "low", "medium", "high", "max"],
                    "description": "Thinking depth: minimal (128), low (2048), medium (8192), high (16384), max (32768)",
                },
            },
            "required": ["files", "context"],
        }

    def get_system_prompt(self) -> str:
        return REVIEW_CODE_PROMPT

    def get_default_temperature(self) -> float:
        return TEMPERATURE_ANALYTICAL

    def get_request_model(self):
        return ReviewCodeRequest

    async def execute(self, arguments: dict[str, Any]) -> list[TextContent]:
        """Override execute to check focus_on size before processing"""
        # First validate request
        request_model = self.get_request_model()
        request = request_model(**arguments)

        # Check focus_on size if provided
        if request.focus_on:
            size_check = self.check_prompt_size(request.focus_on)
            if size_check:
                return [TextContent(type="text", text=ToolOutput(**size_check).model_dump_json())]

        # Continue with normal execution
        return await super().execute(arguments)

    async def prepare_prompt(self, request: ReviewCodeRequest) -> str:
        """
        Prepare the code review prompt with customized instructions.

        This method reads the requested files, validates token limits,
        and constructs a detailed prompt based on the review parameters.

        Args:
            request: The validated review request

        Returns:
            str: Complete prompt for the Gemini model

        Raises:
            ValueError: If the code exceeds token limits
        """
        # Check for prompt.txt in files
        prompt_content, updated_files = self.handle_prompt_file(request.files)

        # If prompt.txt was found, use it as focus_on
        if prompt_content:
            request.focus_on = prompt_content

        # Update request files list
        if updated_files is not None:
            request.files = updated_files

        # Read all requested files, expanding directories as needed
        file_content = read_files(request.files)

        # Validate that the code fits within model context limits
        self._validate_token_limit(file_content, "Code")

        # Build customized review instructions based on review type
        review_focus = []
        if request.review_type == "security":
            review_focus.append("Focus on security vulnerabilities and authentication issues")
        elif request.review_type == "performance":
            review_focus.append("Focus on performance bottlenecks and optimization opportunities")
        elif request.review_type == "quick":
            review_focus.append("Provide a quick review focusing on critical issues only")

        # Add any additional focus areas specified by the user
        if request.focus_on:
            review_focus.append(f"Pay special attention to: {request.focus_on}")

        # Include custom coding standards if provided
        if request.standards:
            review_focus.append(f"Enforce these standards: {request.standards}")

        # Apply severity filtering to reduce noise if requested
        if request.severity_filter != "all":
            review_focus.append(f"Only report issues of {request.severity_filter} severity or higher")

        focus_instruction = "\n".join(review_focus) if review_focus else ""

        # Construct the complete prompt with system instructions and code
        full_prompt = f"""{self.get_system_prompt()}

=== USER CONTEXT ===
{request.context}
=== END CONTEXT ===

{focus_instruction}

=== CODE TO REVIEW ===
{file_content}
=== END CODE ===

Please provide a code review aligned with the user's context and expectations, following the format specified in the system prompt."""

        return full_prompt

    def format_response(self, response: str, request: ReviewCodeRequest) -> str:
        """
        Format the review response with appropriate headers.

        Adds context about the review type and focus area to help
        users understand the scope of the review.

        Args:
            response: The raw review from the model
            request: The original request for context

        Returns:
            str: Formatted response with headers
        """
        header = f"Code Review ({request.review_type.upper()})"
        if request.focus_on:
            header += f" - Focus: {request.focus_on}"

        return f"{header}\n{'=' * 50}\n\n{response}\n\n---\n\n**Follow-up Actions:** Address critical issues first, then high priority ones. Consider running tests after fixes and re-reviewing if substantial changes were made."