my-pal-mcp-server/tools/models.py
Fahad 7ea790ef88 fix: Docker path translation for review_changes and code deduplication
- Fixed review_changes tool to properly translate host paths to container paths in Docker
- Prevents "No such file or directory" errors when running in Docker containers
- Added proper error handling with clear messages when paths are inaccessible
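A minimal sketch of the kind of host-to-container path translation described above. The helper name, mount point, and error wording are assumptions for illustration, not the project's actual code:

```python
import os


def translate_path(path: str, host_root: str, container_root: str = "/workspace") -> str:
    """Translate a host filesystem path to its in-container equivalent.

    Assumes host_root is bind-mounted at container_root. Raises a clear
    error up front instead of letting a tool fail later with
    "No such file or directory" inside the container.
    """
    norm = os.path.normpath(path)
    if norm == host_root or norm.startswith(host_root + os.sep):
        # Replace only the leading host prefix with the container mount point
        return norm.replace(host_root, container_root, 1)
    raise ValueError(
        f"Path {path!r} is outside the mounted workspace {host_root!r} "
        "and is not accessible inside the container"
    )
```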

refactor: Centralized token limit validation across all tools
- Added _validate_token_limit method to BaseTool to eliminate code duplication
- Removed ~25 lines of duplicated code across 5 tools (analyze, chat, debug_issue, review_code, think_deeper)
- Maintains exact same error messages and behavior
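A sketch of what a centralized validator like this might look like. The class body, token estimator, and limit are illustrative assumptions; only the method name `_validate_token_limit` and the class name `BaseTool` come from the commit message:

```python
class BaseTool:
    """Illustrative base class; the real BaseTool has far more behaviour."""

    # Rough heuristic: ~4 characters per token. The real limit and
    # estimation strategy are assumptions for this sketch.
    MAX_CONTEXT_TOKENS = 25_000

    def _estimate_tokens(self, text: str) -> int:
        return len(text) // 4

    def _validate_token_limit(self, text: str, field_name: str = "prompt") -> None:
        """Raise a consistent error if text would exceed the token budget."""
        tokens = self._estimate_tokens(text)
        if tokens > self.MAX_CONTEXT_TOKENS:
            raise ValueError(
                f"{field_name} is too large ({tokens} estimated tokens; "
                f"limit is {self.MAX_CONTEXT_TOKENS}). Provide file paths instead."
            )
```

Subclasses then call `self._validate_token_limit(...)` instead of repeating the check, which is how a single helper can absorb the same validation from five tools.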

feat: Enhanced large prompt handling
- Added support for prompts >50K chars by requesting file-based input
- Preserves MCP's ~25K token capacity for responses
- All tools now check prompt size before processing
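The size check above might look roughly like this. The function name and the returned dict shape are hypothetical; the 50K-character threshold and the `requires_file_prompt` status come from this commit (the status literal appears in `ToolOutput` below):

```python
MAX_PROMPT_CHARS = 50_000  # threshold stated in the commit message


def check_prompt_size(prompt: str) -> dict:
    """Ask for file-based input when an inline prompt is too large.

    Keeping huge prompts out of the request preserves MCP's ~25K-token
    capacity for the response.
    """
    if len(prompt) > MAX_PROMPT_CHARS:
        return {
            "status": "requires_file_prompt",
            "content": (
                "Prompt exceeds 50K characters. Save it to a file and "
                "pass the file path instead."
            ),
        }
    return {"status": "success", "content": prompt}
```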

test: Added comprehensive Docker path integration tests
- Tests for path translation, security validation, and error handling
- Tests for review_changes tool specifically with Docker paths
- Fixed failing think_deeper test (updated default from "max" to "high")

chore: Code quality improvements
- Applied black formatting across all files
- Fixed import sorting with isort
- All tests passing (96 tests)
- Standardized error handling to follow the MCP TextContent format

The changes ensure consistent behavior across all environments while reducing code duplication and improving maintainability.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-10 07:20:24 +04:00


"""
Data models for tool responses and interactions
"""
from typing import Any, Dict, List, Literal, Optional
from pydantic import BaseModel, Field
class ToolOutput(BaseModel):
"""Standardized output format for all tools"""
status: Literal[
"success", "error", "requires_clarification", "requires_file_prompt"
] = "success"
content: str = Field(..., description="The main content/response from the tool")
content_type: Literal["text", "markdown", "json"] = "text"
metadata: Optional[Dict[str, Any]] = Field(default_factory=dict)
class ClarificationRequest(BaseModel):
"""Request for additional context or clarification"""
question: str = Field(..., description="Question to ask Claude for more context")
files_needed: Optional[List[str]] = Field(
default_factory=list, description="Specific files that are needed for analysis"
)
suggested_next_action: Optional[Dict[str, Any]] = Field(
None,
description="Suggested tool call with parameters after getting clarification",
)
class DiagnosticHypothesis(BaseModel):
"""A debugging hypothesis with context and next steps"""
rank: int = Field(..., description="Ranking of this hypothesis (1 = most likely)")
confidence: Literal["high", "medium", "low"] = Field(
..., description="Confidence level"
)
hypothesis: str = Field(..., description="Description of the potential root cause")
reasoning: str = Field(..., description="Why this hypothesis is plausible")
next_step: str = Field(
..., description="Suggested action to test/validate this hypothesis"
)
class StructuredDebugResponse(BaseModel):
"""Enhanced debug response with multiple hypotheses"""
summary: str = Field(..., description="Brief summary of the issue")
hypotheses: List[DiagnosticHypothesis] = Field(
..., description="Ranked list of potential causes"
)
immediate_actions: List[str] = Field(
default_factory=list,
description="Immediate steps to take regardless of root cause",
)
additional_context_needed: Optional[List[str]] = Field(
default_factory=list,
description="Additional files or information that would help with analysis",
)