fix: Docker path translation for review_changes and code deduplication

- Fixed review_changes tool to translate host paths to their container equivalents when running in Docker (a minimal sketch follows this section)
- Prevents "No such file or directory" errors when running in Docker containers
- Added proper error handling with clear messages when paths are inaccessible
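
A minimal sketch of the prefix-based translation this implies, assuming a single workspace mount; the environment variable, mount points, and function name are illustrative rather than the project's actual API:

```python
import os
from pathlib import Path

# Hypothetical mount configuration -- the real values come from how the MCP
# server container is started, not from this sketch.
HOST_ROOT = Path(os.environ.get("WORKSPACE_ROOT", "/Users/me/project"))
CONTAINER_ROOT = Path("/workspace")


def translate_path(host_path: str) -> Path:
    """Map an absolute host path onto the container mount, or fail with a clear message."""
    try:
        relative = Path(host_path).relative_to(HOST_ROOT)
    except ValueError:
        # The path is outside the mounted workspace, so the container cannot see it.
        raise ValueError(
            f"{host_path} is not under the mounted workspace {HOST_ROOT} "
            "and is not accessible from inside the container"
        )
    return CONTAINER_ROOT / relative
```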

refactor: Centralized token limit validation across all tools
- Added a _validate_token_limit method to BaseTool to eliminate code duplication (sketched after this section)
- Removed ~25 lines of duplicated code across 5 tools (analyze, chat, debug_issue, review_code, think_deeper)
- Maintains the exact same error messages and behavior
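
As a rough illustration of what centralizing that check looks like; the method name comes from this change, but the constant and error wording below are placeholders, not the project's exact implementation:

```python
class BaseTool:
    """Shared base for tools; each tool previously carried its own copy of this check."""

    MAX_CONTENT_TOKENS = 800_000  # placeholder limit; the real value is model-specific

    def _validate_token_limit(self, content: str, content_label: str = "Content") -> None:
        """Raise if the estimated token count would exceed the model's context window."""
        estimated_tokens = len(content) // 4  # rough 4-characters-per-token heuristic
        if estimated_tokens > self.MAX_CONTENT_TOKENS:
            raise ValueError(
                f"{content_label} is too large (~{estimated_tokens:,} estimated tokens, "
                f"limit {self.MAX_CONTENT_TOKENS:,})"
            )
```

Each tool can then call self._validate_token_limit(...) instead of repeating the size math and error string inline.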

feat: Enhanced large prompt handling
- Added support for prompts >50K chars by requesting file-based input instead (see the sketch after this section)
- Preserves MCP's ~25K token capacity for responses
- All tools now check prompt size before processing
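
A sketch of the size gate, assuming the check runs before a tool builds its model request; the 50K-character threshold is from this change, while the function name and response fields are illustrative:

```python
MAX_PROMPT_CHARS = 50_000  # threshold introduced by this change


def check_prompt_size(prompt: str) -> dict | None:
    """Ask the client to resubmit an oversized prompt as a file instead of inline text."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return {
            "status": "requires_file_prompt",
            "reason": (
                f"Prompt is {len(prompt):,} characters (limit {MAX_PROMPT_CHARS:,}). "
                "Save it to a file and pass the file path so the ~25K-token MCP "
                "response capacity stays available for the answer."
            ),
        }
    return None  # small enough to process inline
```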

test: Added comprehensive Docker path integration tests
- Tests for path translation, security validation, and error handling (a minimal pytest sketch follows this section)
- Tests for review_changes tool specifically with Docker paths
- Fixed failing think_deeper test (updated default from "max" to "high")
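
A minimal pytest sketch of the path-translation cases, reusing the hypothetical translate_path helper from the earlier sketch; the real tests exercise the project's own utilities and fixtures:

```python
import pytest


def test_host_path_maps_onto_container_mount():
    # Assumes the default HOST_ROOT/CONTAINER_ROOT values from the sketch above.
    assert str(translate_path("/Users/me/project/src/app.py")) == "/workspace/src/app.py"


def test_path_outside_workspace_is_rejected():
    # Security check: paths outside the mount must fail with a clear error.
    with pytest.raises(ValueError):
        translate_path("/etc/passwd")
```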

chore: Code quality improvements
- Applied black formatting across all files
- Fixed import sorting with isort
- All tests passing (96 tests)
- Standardized error handling to follow the MCP TextContent format (sketched below)
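
For reference, a sketch of that response shape using the MCP Python SDK's TextContent type; wrapping error details in a JSON payload is an assumption about this project's convention, not part of the SDK:

```python
import json

from mcp.types import TextContent


def error_response(message: str) -> list[TextContent]:
    """Return a tool error as a single TextContent item, the shape MCP tool handlers return."""
    payload = {"status": "error", "error": message}  # payload fields are assumed
    return [TextContent(type="text", text=json.dumps(payload))]
```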

The changes ensure consistent behavior across all environments while reducing code duplication and improving maintainability.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Fahad
Date: 2025-06-10 07:20:24 +04:00
Parent: ab007fb4b2
Commit: 7ea790ef88
36 changed files with 1540 additions and 176 deletions


@@ -25,7 +25,7 @@ class TestThinkingModes:
     def test_default_thinking_modes(self):
         """Test that tools have correct default thinking modes"""
         tools = [
-            (ThinkDeeperTool(), "max"),
+            (ThinkDeeperTool(), "high"),
             (AnalyzeTool(), "medium"),
             (ReviewCodeTool(), "medium"),
             (DebugIssueTool(), "medium"),
@@ -156,14 +156,14 @@ class TestThinkingModes:
         result = await tool.execute(
             {
                 "current_analysis": "Initial analysis",
-                # Not specifying thinking_mode, should use default (max)
+                # Not specifying thinking_mode, should use default (high)
             }
         )
         # Verify create_model was called with default thinking_mode
         mock_create_model.assert_called_once()
         args = mock_create_model.call_args[0]
-        assert args[2] == "max"
+        assert args[2] == "high"
         assert "Extended Analysis by Gemini" in result[0].text