feat: add review_pending_changes tool and enforce absolute path security
- Add new review_pending_changes tool for comprehensive pre-commit reviews
- Implement filesystem sandboxing with MCP_PROJECT_ROOT
- Enforce absolute paths for all file/directory operations
- Add comprehensive git utilities for repository management
- Update all tools to use centralized path validation
- Add extensive test coverage for new features and security model
- Update documentation with new tool and path requirements
- Remove obsolete demo and guide files

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
README.md (84 changed lines)
@@ -115,6 +115,7 @@ Just ask Claude naturally:
**Quick Tool Selection Guide:**
- **Need deeper thinking?** → `think_deeper` (extends Claude's analysis, finds edge cases)
- **Code needs review?** → `review_code` (bugs, security, performance issues)
- **Pre-commit validation?** → `review_pending_changes` (validate git changes before committing)
- **Something's broken?** → `debug_issue` (root cause analysis, error tracing)
- **Want to understand code?** → `analyze` (architecture, patterns, dependencies)
- **Need a thinking partner?** → `chat` (brainstorm ideas, get second opinions, validate approaches)

@@ -124,11 +125,12 @@ Just ask Claude naturally:
**Tools Overview:**
1. [`think_deeper`](#1-think_deeper---extended-reasoning-partner) - Extended reasoning and problem-solving
2. [`review_code`](#2-review_code---professional-code-review) - Professional code review with severity levels
3. [`debug_issue`](#3-debug_issue---expert-debugging-assistant) - Root cause analysis and debugging
4. [`analyze`](#4-analyze---smart-file-analysis) - General-purpose file and code analysis
5. [`chat`](#5-chat---general-development-chat--collaborative-thinking) - Collaborative thinking and development conversations
6. [`list_models`](#6-list_models---see-available-gemini-models) - List available Gemini models
7. [`get_version`](#7-get_version---server-information) - Get server version and configuration
3. [`review_pending_changes`](#3-review_pending_changes---pre-commit-validation) - Validate git changes before committing
4. [`debug_issue`](#4-debug_issue---expert-debugging-assistant) - Root cause analysis and debugging
5. [`analyze`](#5-analyze---smart-file-analysis) - General-purpose file and code analysis
6. [`chat`](#6-chat---general-development-chat--collaborative-thinking) - Collaborative thinking and development conversations
7. [`list_models`](#7-list_models---see-available-gemini-models) - List available Gemini models
8. [`get_version`](#8-get_version---server-information) - Get server version and configuration

### 1. `think_deeper` - Extended Reasoning Partner

@@ -203,7 +205,48 @@ make any necessary adjustments and show me the final secure implementation."

**Triggers:** review code, check for issues, find bugs, security check

### 3. `debug_issue` - Expert Debugging Assistant
### 3. `review_pending_changes` - Pre-Commit Validation
**Comprehensive review of staged/unstaged git changes across multiple repositories**

#### Example Prompts:

**Basic Usage:**
```
"Use gemini to review my pending changes before I commit"
"Get gemini to validate all my git changes match the original requirements"
"Review pending changes in the frontend/ directory"
```

**Collaborative Workflow:**
```
"I've implemented the user authentication feature. Use gemini to review all pending changes
across the codebase to ensure they align with the security requirements. Fix any issues
gemini identifies before committing."

"Review all my changes for the API refactoring task. Get gemini to check for incomplete
implementations or missing test coverage. Update the code based on gemini's findings."
```

**Key Features:**
- **Recursive repository discovery** - finds all git repos including nested ones
- **Validates changes against requirements** - ensures implementation matches intent
- **Detects incomplete changes** - finds added functions never called, missing tests, etc.
- **Multi-repo support** - reviews changes across multiple repositories in one go
- **Configurable scope** - review staged, unstaged, or compare against branches
- **Security focused** - catches exposed secrets, vulnerabilities in new code
- **Smart truncation** - handles large diffs without exceeding context limits

**Parameters:**
- `path`: Starting directory to search for repos (default: current directory)
- `original_request`: The requirements/ticket for context
- `compare_to`: Compare against a branch/tag instead of local changes
- `review_type`: full|security|performance|quick
- `severity_filter`: Filter by issue severity
- `max_depth`: How deep to search for nested repos

**Triggers:** review pending changes, check my changes, validate changes, pre-commit review
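
For illustration, a hypothetical direct invocation with these parameters might look like the sketch below; the argument names mirror the parameter list above, and the values are examples only, not a documented call.

```python
# Hypothetical arguments for review_pending_changes; names follow the
# parameter list above, values are illustrative.
arguments = {
    "path": "/Users/you/project",   # absolute path (see File Path Requirements)
    "original_request": "Implement OAuth2 login for the API",
    "review_type": "security",
    "severity_filter": "high",
    "max_depth": 3,
}
# Driven the same way the test suite drives tools:
# result = await ReviewPendingChanges().execute(arguments)
```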

### 4. `debug_issue` - Expert Debugging Assistant
**Root cause analysis for complex problems**

#### Example Prompts:

@@ -235,7 +278,7 @@ suggest preventive measures."

**Triggers:** debug, error, failing, root cause, trace, not working

### 4. `analyze` - Smart File Analysis
### 5. `analyze` - Smart File Analysis
**General-purpose code understanding and exploration**

#### Example Prompts:

@@ -264,7 +307,7 @@ Combine your findings with gemini's to create a comprehensive security report."

**Triggers:** analyze, examine, look at, understand, inspect

### 5. `chat` - General Development Chat & Collaborative Thinking
### 6. `chat` - General Development Chat & Collaborative Thinking
**Your thinking partner - bounce ideas, get second opinions, brainstorm collaboratively**

#### Example Prompts:

@@ -296,16 +339,17 @@ Combine both perspectives to create a comprehensive caching implementation guide
- Technology comparisons and best practices
- Architecture and design discussions
- Can reference files for context: `"Use gemini to explain this algorithm with context from algorithm.py"`
- **Dynamic collaboration**: Gemini can request additional files or context during the conversation if needed for a more thorough response

**Triggers:** ask, explain, compare, suggest, what about, brainstorm, discuss, share my thinking, get opinion

### 6. `list_models` - See Available Gemini Models
### 7. `list_models` - See Available Gemini Models
```
"Use gemini to list available models"
"Get gemini to show me what models I can use"
```

### 7. `get_version` - Server Information
### 8. `get_version` - Server Information
```
"Use gemini for its version"
"Get gemini to show server configuration"
```

@@ -493,6 +537,26 @@ Different tools use optimized temperature settings:
- **`TEMPERATURE_CREATIVE`**: `0.7` - Used for deep thinking and architecture (more creative)

## File Path Requirements

**All file paths must be absolute paths.**

### Setup
1. **Use absolute paths** in all tool calls:
```
✅ "Use gemini to analyze /Users/you/project/src/main.py"
❌ "Use gemini to analyze ./src/main.py" (will be rejected)
```

2. **Set MCP_PROJECT_ROOT** to your project directory for security:
```json
"env": {
  "GEMINI_API_KEY": "your-key",
  "MCP_PROJECT_ROOT": "/Users/you/project"
}
```
The server only allows access to files within this directory.
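
A minimal sketch of the check this sandbox implies, assuming a helper that resolves and validates paths (the server's actual validation code may differ):

```python
import os

def validate_path(path: str) -> str:
    # Sketch only: absolute paths are required, and the resolved path must
    # stay inside MCP_PROJECT_ROOT after symlinks are followed.
    root = os.path.realpath(os.environ["MCP_PROJECT_ROOT"])
    if not os.path.isabs(path):
        raise ValueError(f"Relative paths are not supported: {path}")
    resolved = os.path.realpath(path)
    if os.path.commonpath([resolved, root]) != root:
        raise ValueError(f"Path outside project root: {path}")
    return resolved
```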

## Installation

1. Clone the repository:

@@ -1,90 +0,0 @@
#!/usr/bin/env python3
"""
Demo script showing how Claude-Gemini collaboration works
with dynamic context requests.

This demonstrates how tools can request additional context
and how Claude would handle these requests.
"""

import asyncio
import json
import os
from tools.debug_issue import DebugIssueTool


async def simulate_collaboration():
    """Simulate a Claude-Gemini collaboration workflow"""

    print("🤝 Claude-Gemini Collaboration Demo\n")
    print("Scenario: Claude asks Gemini to debug an import error")
    print("-" * 50)

    # Initialize the debug tool
    debug_tool = DebugIssueTool()

    # Step 1: Initial request without full context
    print("\n1️⃣ Claude's initial request:")
    print("   'Debug this ImportError - the app can't find the utils module'")

    initial_request = {
        "error_description": "ImportError: cannot import name 'config' from 'utils'",
        "error_context": "Error occurs on line 5 of main.py when starting the application"
    }

    print("\n   Sending to Gemini...")
    result = await debug_tool.execute(initial_request)

    # Parse the response
    response = json.loads(result[0].text)
    print(f"\n   Gemini's response status: {response['status']}")

    if response['status'] == 'requires_clarification':
        # Gemini needs more context
        clarification = json.loads(response['content'])
        print("\n2️⃣ Gemini requests additional context:")
        print(f"   Question: {clarification.get('question', 'N/A')}")
        if 'files_needed' in clarification:
            print(f"   Files needed: {clarification['files_needed']}")

        # Step 2: Claude provides additional context
        print("\n3️⃣ Claude provides the requested files:")
        enhanced_request = {
            **initial_request,
            "files": clarification.get('files_needed', []),
            "runtime_info": "Python 3.11, project structure includes utils/ directory"
        }

        print("   Re-sending with additional context...")
        result2 = await debug_tool.execute(enhanced_request)

        final_response = json.loads(result2[0].text)
        print(f"\n4️⃣ Gemini's final analysis (status: {final_response['status']}):")
        if final_response['status'] == 'success':
            print("\n" + final_response['content'][:500] + "...")

    else:
        # Gemini had enough context initially
        print("\n✅ Gemini provided analysis without needing additional context:")
        print("\n" + response['content'][:500] + "...")

    print("\n" + "=" * 50)
    print("🎯 Key Points:")
    print("- Tools return structured JSON with status field")
    print("- Status 'requires_clarification' triggers context request")
    print("- Claude can then provide additional files/info")
    print("- Enables true collaborative problem-solving!")


async def main():
    """Run the demo"""
    # Check for API key
    if not os.environ.get("GEMINI_API_KEY"):
        print("⚠️ Note: This is a simulated demo. Set GEMINI_API_KEY for live testing.")
        print("   The actual behavior depends on Gemini's response.\n")

    await simulate_collaboration()


if __name__ == "__main__":
    asyncio.run(main())
@@ -136,3 +136,69 @@ Always approach discussions as a peer - be direct, technical, and thorough. You're
the ideal thinking partner who helps explore ideas deeply, validates approaches, and uncovers
insights that might be missed in solo analysis. Think step by step through complex problems
and don't hesitate to explore tangential but relevant considerations."""

REVIEW_PENDING_CHANGES_PROMPT = """You are an expert code change analyst specializing in pre-commit review of git diffs.
Your role is to act as a seasoned senior developer performing a final review before code is committed.

IMPORTANT: If you need additional context (e.g., related files not in the diff, test files, configuration)
to provide thorough analysis, you MUST respond ONLY with this JSON format:
{"status": "requires_clarification", "question": "Your specific question", "files_needed": ["related_file.py", "tests/"]}

You will receive:
1. Git diffs showing staged/unstaged changes or branch comparisons
2. The original request/ticket describing what should be implemented
3. File paths and repository structure context

Your review MUST focus on:

## Core Analysis (Standard Review)
- **Bugs & Logic Errors:** Off-by-one errors, null references, race conditions, incorrect assumptions
- **Security Vulnerabilities:** Injection flaws, authentication issues, exposed secrets (CRITICAL for new additions)
- **Performance Issues:** N+1 queries, inefficient algorithms introduced in changes
- **Code Quality:** DRY violations, SOLID principle adherence, complexity of new code

## Change-Specific Analysis (Your Unique Value)
1. **Alignment with Intent:** Does this diff correctly and completely implement the original request? Flag any missed requirements.

2. **Incomplete Changes:**
   - New functions added but never called
   - API endpoints defined but no client code
   - Enums/constants added but switch/if statements not updated
   - Dependencies added but not properly used

3. **Test Coverage Gaps:** Flag new business logic lacking corresponding test changes

4. **Unintended Side Effects:** Could changes in file_A break module_B even if module_B wasn't changed?

5. **Documentation Mismatches:** Were docstrings/docs updated for changed function signatures?

6. **Configuration Risks:** What are downstream impacts of config changes?

7. **Scope Creep:** Flag changes unrelated to the original request

8. **Code Removal Risks:** Was removed code truly dead, or could removal break functionality?

## Output Format

### Repository Summary
For each repository with changes:

**Repository: /path/to/repo**
- Status: X files changed
- Overall: Brief assessment and critical issues count

### Issues by Severity
[CRITICAL] Descriptive title
- File: path/to/file.py:line
- Description: Clear explanation
- Fix: Specific solution with code

[HIGH] Descriptive title
...

### Recommendations
- Top priority fixes before commit
- Suggestions for improvement
- Good practices to preserve

Be thorough but actionable. Every issue must have a clear fix. Acknowledge good changes when you see them."""
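
The requires_clarification contract above is what the tests later in this commit exercise; a hedged sketch of how a caller can detect a clarification request in a tool's JSON response envelope (illustrative, not the server's actual dispatch code):

```python
import json

def handle_tool_response(raw_text: str) -> dict:
    # Sketch: tool responses are JSON envelopes with a "status" field. When
    # status is "requires_clarification", "content" carries the nested JSON
    # shown above; malformed nested JSON is treated as a normal response,
    # matching the behavior the tests assert.
    envelope = json.loads(raw_text)
    if envelope.get("status") == "requires_clarification":
        try:
            clarification = json.loads(envelope["content"])
        except (json.JSONDecodeError, TypeError, KeyError):
            return envelope
        print("Question:", clarification.get("question"))
        print("Files needed:", clarification.get("files_needed", []))
    return envelope
```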

server.py (10 changed lines)
@@ -22,7 +22,14 @@ from config import (
    __updated__,
    __version__,
)
from tools import AnalyzeTool, ChatTool, DebugIssueTool, ReviewCodeTool, ThinkDeeperTool
from tools import (
    AnalyzeTool,
    ChatTool,
    DebugIssueTool,
    ReviewCodeTool,
    ReviewPendingChanges,
    ThinkDeeperTool,
)

# Configure logging
logging.basicConfig(level=logging.INFO)

@@ -38,6 +45,7 @@ TOOLS = {
    "debug_issue": DebugIssueTool(),
    "analyze": AnalyzeTool(),
    "chat": ChatTool(),
    "review_pending_changes": ReviewPendingChanges(),
}
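
How this registry is consumed is not shown in the hunk; a hypothetical dispatch helper, assuming the real server routes calls through its MCP call_tool handler:

```python
# Hypothetical dispatch sketch; the actual server wires TOOLS through the
# MCP protocol handlers rather than a helper like this.
async def dispatch(name: str, arguments: dict):
    tool = TOOLS.get(name)
    if tool is None:
        raise ValueError(f"Unknown tool: {name}")
    # Each tool's execute() returns a list of content blocks whose .text is a
    # JSON envelope with "status" and "content" fields.
    return await tool.execute(arguments)
```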

@@ -15,6 +15,14 @@ if str(parent_dir) not in sys.path:
if "GEMINI_API_KEY" not in os.environ:
    os.environ["GEMINI_API_KEY"] = "dummy-key-for-tests"

# Set MCP_PROJECT_ROOT to a temporary directory for tests
# This provides a safe sandbox for file operations during testing
import tempfile

# Create a temporary directory that will be used as the project root for all tests
test_root = tempfile.mkdtemp(prefix="gemini_mcp_test_")
os.environ["MCP_PROJECT_ROOT"] = test_root

# Configure asyncio for Windows compatibility
if sys.platform == "win32":
    import asyncio

@@ -22,6 +30,26 @@ if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())


# Pytest fixtures
import pytest


@pytest.fixture
def project_path(tmp_path):
    """
    Provides a temporary directory within the PROJECT_ROOT sandbox for tests.
    This ensures all file operations during tests are within the allowed directory.
    """
    # Get the test project root
    test_root = Path(os.environ.get("MCP_PROJECT_ROOT", "/tmp"))

    # Create a subdirectory for this specific test
    test_dir = test_root / f"test_{tmp_path.name}"
    test_dir.mkdir(parents=True, exist_ok=True)

    return test_dir


# Pytest configuration
def pytest_configure(config):
    """Configure pytest with custom markers"""

@@ -28,11 +28,13 @@ class TestDynamicContextRequests:
    async def test_clarification_request_parsing(self, mock_create_model, analyze_tool):
        """Test that tools correctly parse clarification requests"""
        # Mock model to return a clarification request
        clarification_json = json.dumps({
            "status": "requires_clarification",
            "question": "I need to see the package.json file to understand dependencies",
            "files_needed": ["package.json", "package-lock.json"]
        })
        clarification_json = json.dumps(
            {
                "status": "requires_clarification",
                "question": "I need to see the package.json file to understand dependencies",
                "files_needed": ["package.json", "package-lock.json"],
            }
        )

        mock_model = Mock()
        mock_model.generate_content.return_value = Mock(
@@ -40,10 +42,12 @@ class TestDynamicContextRequests:
        )
        mock_create_model.return_value = mock_model

        result = await analyze_tool.execute({
            "files": ["src/index.js"],
            "question": "Analyze the dependencies used in this project"
        })
        result = await analyze_tool.execute(
            {
                "files": ["/absolute/path/src/index.js"],
                "question": "Analyze the dependencies used in this project",
            }
        )

        assert len(result) == 1

@@ -54,12 +58,17 @@ class TestDynamicContextRequests:

        # Parse the clarification request
        clarification = json.loads(response_data["content"])
        assert clarification["question"] == "I need to see the package.json file to understand dependencies"
        assert (
            clarification["question"]
            == "I need to see the package.json file to understand dependencies"
        )
        assert clarification["files_needed"] == ["package.json", "package-lock.json"]

    @pytest.mark.asyncio
    @patch("tools.base.BaseTool.create_model")
    async def test_normal_response_not_parsed_as_clarification(self, mock_create_model, debug_tool):
    async def test_normal_response_not_parsed_as_clarification(
        self, mock_create_model, debug_tool
    ):
        """Test that normal responses are not mistaken for clarification requests"""
        normal_response = """
## Summary
@@ -77,9 +86,9 @@ class TestDynamicContextRequests:
        )
        mock_create_model.return_value = mock_model

        result = await debug_tool.execute({
            "error_description": "NameError: name 'utils' is not defined"
        })
        result = await debug_tool.execute(
            {"error_description": "NameError: name 'utils' is not defined"}
        )

        assert len(result) == 1

@@ -91,9 +100,13 @@ class TestDynamicContextRequests:

    @pytest.mark.asyncio
    @patch("tools.base.BaseTool.create_model")
    async def test_malformed_clarification_request_treated_as_normal(self, mock_create_model, analyze_tool):
    async def test_malformed_clarification_request_treated_as_normal(
        self, mock_create_model, analyze_tool
    ):
        """Test that malformed JSON clarification requests are treated as normal responses"""
        malformed_json = '{"status": "requires_clarification", "question": "Missing closing brace"'
        malformed_json = (
            '{"status": "requires_clarification", "question": "Missing closing brace"'
        )

        mock_model = Mock()
        mock_model.generate_content.return_value = Mock(
@@ -101,10 +114,9 @@ class TestDynamicContextRequests:
        )
        mock_create_model.return_value = mock_model

        result = await analyze_tool.execute({
            "files": ["test.py"],
            "question": "What does this do?"
        })
        result = await analyze_tool.execute(
            {"files": ["/absolute/path/test.py"], "question": "What does this do?"}
        )

        assert len(result) == 1

@@ -115,20 +127,28 @@ class TestDynamicContextRequests:

    @pytest.mark.asyncio
    @patch("tools.base.BaseTool.create_model")
    async def test_clarification_with_suggested_action(self, mock_create_model, debug_tool):
    async def test_clarification_with_suggested_action(
        self, mock_create_model, debug_tool
    ):
        """Test clarification request with suggested next action"""
        clarification_json = json.dumps({
            "status": "requires_clarification",
            "question": "I need to see the database configuration to diagnose the connection error",
            "files_needed": ["config/database.yml", "src/db.py"],
            "suggested_next_action": {
                "tool": "debug_issue",
                "args": {
                    "error_description": "Connection timeout to database",
                    "files": ["config/database.yml", "src/db.py", "logs/error.log"]
                }
        clarification_json = json.dumps(
            {
                "status": "requires_clarification",
                "question": "I need to see the database configuration to diagnose the connection error",
                "files_needed": ["config/database.yml", "src/db.py"],
                "suggested_next_action": {
                    "tool": "debug_issue",
                    "args": {
                        "error_description": "Connection timeout to database",
                        "files": [
                            "/config/database.yml",
                            "/src/db.py",
                            "/logs/error.log",
                        ],
                    },
                },
            }
        })
        )

        mock_model = Mock()
        mock_model.generate_content.return_value = Mock(
@@ -136,10 +156,12 @@ class TestDynamicContextRequests:
        )
        mock_create_model.return_value = mock_model

        result = await debug_tool.execute({
            "error_description": "Connection timeout to database",
            "files": ["logs/error.log"]
        })
        result = await debug_tool.execute(
            {
                "error_description": "Connection timeout to database",
                "files": ["/absolute/logs/error.log"],
            }
        )

        assert len(result) == 1

@@ -156,7 +178,7 @@ class TestDynamicContextRequests:
            status="success",
            content="Test content",
            content_type="markdown",
            metadata={"tool_name": "test", "execution_time": 1.5}
            metadata={"tool_name": "test", "execution_time": 1.5},
        )

        json_str = output.model_dump_json()
@@ -172,7 +194,7 @@ class TestDynamicContextRequests:
        request = ClarificationRequest(
            question="Need more context",
            files_needed=["file1.py", "file2.py"],
            suggested_next_action={"tool": "analyze", "args": {}}
            suggested_next_action={"tool": "analyze", "args": {}},
        )

        assert request.question == "Need more context"
@@ -185,10 +207,9 @@ class TestDynamicContextRequests:
        """Test error response format"""
        mock_create_model.side_effect = Exception("API connection failed")

        result = await analyze_tool.execute({
            "files": ["test.py"],
            "question": "Analyze this"
        })
        result = await analyze_tool.execute(
            {"files": ["/absolute/path/test.py"], "question": "Analyze this"}
        )

        assert len(result) == 1

@@ -208,11 +229,13 @@ class TestCollaborationWorkflow:
        tool = AnalyzeTool()

        # Mock Gemini to request package.json when asked about dependencies
        clarification_json = json.dumps({
            "status": "requires_clarification",
            "question": "I need to see the package.json file to analyze npm dependencies",
            "files_needed": ["package.json", "package-lock.json"]
        })
        clarification_json = json.dumps(
            {
                "status": "requires_clarification",
                "question": "I need to see the package.json file to analyze npm dependencies",
                "files_needed": ["package.json", "package-lock.json"],
            }
        )

        mock_model = Mock()
        mock_model.generate_content.return_value = Mock(
@@ -221,18 +244,22 @@ class TestCollaborationWorkflow:
        mock_create_model.return_value = mock_model

        # Ask about dependencies with only source files
        result = await tool.execute({
            "files": ["src/index.js"],
            "question": "What npm packages and versions does this project use?"
        })
        result = await tool.execute(
            {
                "files": ["/absolute/path/src/index.js"],
                "question": "What npm packages and versions does this project use?",
            }
        )

        response = json.loads(result[0].text)
        assert response["status"] == "requires_clarification", \
            "Should request clarification when asked about dependencies without package files"
        assert (
            response["status"] == "requires_clarification"
        ), "Should request clarification when asked about dependencies without package files"

        clarification = json.loads(response["content"])
        assert "package.json" in str(clarification["files_needed"]), \
            "Should specifically request package.json"
        assert "package.json" in str(
            clarification["files_needed"]
        ), "Should specifically request package.json"

    @pytest.mark.asyncio
    @patch("tools.base.BaseTool.create_model")
@@ -241,11 +268,13 @@ class TestCollaborationWorkflow:
        tool = DebugIssueTool()

        # Step 1: Initial request returns clarification needed
        clarification_json = json.dumps({
            "status": "requires_clarification",
            "question": "I need to see the configuration file to understand the connection settings",
            "files_needed": ["config.py"]
        })
        clarification_json = json.dumps(
            {
                "status": "requires_clarification",
                "question": "I need to see the configuration file to understand the connection settings",
                "files_needed": ["config.py"],
            }
        )

        mock_model = Mock()
        mock_model.generate_content.return_value = Mock(
@@ -253,10 +282,12 @@ class TestCollaborationWorkflow:
        )
        mock_create_model.return_value = mock_model

        result1 = await tool.execute({
            "error_description": "Database connection timeout",
            "error_context": "Timeout after 30s"
        })
        result1 = await tool.execute(
            {
                "error_description": "Database connection timeout",
                "error_context": "Timeout after 30s",
            }
        )

        response1 = json.loads(result1[0].text)
        assert response1["status"] == "requires_clarification"
@@ -277,11 +308,13 @@ class TestCollaborationWorkflow:
            candidates=[Mock(content=Mock(parts=[Mock(text=final_response)]))]
        )

        result2 = await tool.execute({
            "error_description": "Database connection timeout",
            "error_context": "Timeout after 30s",
            "files": ["config.py"]  # Additional context provided
        })
        result2 = await tool.execute(
            {
                "error_description": "Database connection timeout",
                "error_context": "Timeout after 30s",
                "files": ["/absolute/path/config.py"],  # Additional context provided
            }
        )

        response2 = json.loads(result2[0].text)
        assert response2["status"] == "success"

@@ -83,33 +83,44 @@ async def run_manual_live_tests():
    analyze_tool = AnalyzeTool()

    # Ask about dependencies without providing package files
    result = await analyze_tool.execute({
        "files": [temp_path],  # Only Python file, no package.json
        "question": "What npm packages and their versions does this project depend on? List all dependencies.",
        "thinking_mode": "minimal"  # Fast test
    })
    result = await analyze_tool.execute(
        {
            "files": [temp_path],  # Only Python file, no package.json
            "question": "What npm packages and their versions does this project depend on? List all dependencies.",
            "thinking_mode": "minimal",  # Fast test
        }
    )

    if result and result[0].text:
        response_data = json.loads(result[0].text)
        print(f"   Response status: {response_data['status']}")

        if response_data['status'] == 'requires_clarification':
        if response_data["status"] == "requires_clarification":
            print("✅ Dynamic context request successfully triggered!")
            clarification = json.loads(response_data['content'])
            clarification = json.loads(response_data["content"])
            print(f"   Gemini asks: {clarification.get('question', 'N/A')}")
            if 'files_needed' in clarification:
            if "files_needed" in clarification:
                print(f"   Files requested: {clarification['files_needed']}")
                # Verify it's asking for package-related files
                expected_files = ['package.json', 'package-lock.json', 'yarn.lock']
                if any(f in str(clarification['files_needed']) for f in expected_files):
                expected_files = [
                    "package.json",
                    "package-lock.json",
                    "yarn.lock",
                ]
                if any(
                    f in str(clarification["files_needed"])
                    for f in expected_files
                ):
                    print("   ✅ Correctly identified missing package files!")
                else:
                    print("   ⚠️ Unexpected files requested")
        else:
            # This is a failure - we specifically designed this to need clarification
            print("❌ Expected clarification request but got direct response")
            print("   This suggests the dynamic context feature may not be working")
            print("   Response:", response_data.get('content', '')[:200])
            print(
                "   This suggests the dynamic context feature may not be working"
            )
            print("   Response:", response_data.get("content", "")[:200])
            return False
    else:
        print("❌ Collaboration test failed - no response")

tests/test_review_pending_changes.py (new file, 255 lines)
@@ -0,0 +1,255 @@
"""
Tests for the review_pending_changes tool
"""

import json
from unittest.mock import MagicMock, Mock, patch

import pytest

from tools.review_pending_changes import (
    ReviewPendingChanges,
    ReviewPendingChangesRequest,
)


class TestReviewPendingChangesTool:
    """Test the review_pending_changes tool"""

    @pytest.fixture
    def tool(self):
        """Create tool instance"""
        return ReviewPendingChanges()

    def test_tool_metadata(self, tool):
        """Test tool metadata"""
        assert tool.get_name() == "review_pending_changes"
        assert "REVIEW PENDING GIT CHANGES" in tool.get_description()
        assert "pre-commit review" in tool.get_description()

        # Check schema
        schema = tool.get_input_schema()
        assert schema["type"] == "object"
        assert "path" in schema["properties"]
        assert "original_request" in schema["properties"]
        assert "compare_to" in schema["properties"]
        assert "review_type" in schema["properties"]

    def test_request_model_defaults(self):
        """Test request model default values"""
        request = ReviewPendingChangesRequest(path="/some/absolute/path")
        assert request.path == "/some/absolute/path"
        assert request.original_request is None
        assert request.compare_to is None
        assert request.include_staged is True
        assert request.include_unstaged is True
        assert request.review_type == "full"
        assert request.severity_filter == "all"
        assert request.max_depth == 5

    def test_sanitize_filename(self, tool):
        """Test filename sanitization"""
        # Test path separators
        assert tool._sanitize_filename("src/main.py") == "src_main.py"
        assert tool._sanitize_filename("src\\main.py") == "src_main.py"

        # Test spaces
        assert tool._sanitize_filename("my file.py") == "my_file.py"

        # Test special characters
        assert tool._sanitize_filename("file@#$.py") == "file.py"

        # Test length limit
        long_name = "a" * 150
        sanitized = tool._sanitize_filename(long_name)
        assert len(sanitized) == 100
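
An implementation consistent with these assertions could look roughly like the sketch below (inferred from the tests, not the tool's actual `_sanitize_filename`):

```python
import re

def sanitize_filename(name: str, max_length: int = 100) -> str:
    # Path separators and spaces become underscores: "src/main.py" -> "src_main.py"
    name = re.sub(r"[/\\ ]", "_", name)
    # Other special characters are dropped: "file@#$.py" -> "file.py"
    name = re.sub(r"[^A-Za-z0-9._-]", "", name)
    # Long names are truncated to the length cap asserted above
    return name[:max_length]
```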

    @pytest.mark.asyncio
    async def test_relative_path_rejected(self, tool):
        """Test that relative paths are rejected"""
        result = await tool.execute(
            {"path": "./relative/path", "original_request": "Test"}
        )
        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "error"
        assert "must be absolute" in response["content"]
        assert "./relative/path" in response["content"]

    @pytest.mark.asyncio
    @patch("tools.review_pending_changes.find_git_repositories")
    async def test_no_repositories_found(self, mock_find_repos, tool):
        """Test when no git repositories are found"""
        mock_find_repos.return_value = []

        request = ReviewPendingChangesRequest(path="/absolute/path/no-git")
        result = await tool.prepare_prompt(request)

        assert result == "No git repositories found in the specified path."
        mock_find_repos.assert_called_once_with("/absolute/path/no-git", 5)
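
The mocked `find_git_repositories(path, max_depth)` could plausibly be implemented like this (a sketch matching the signature asserted above; the real utility in this commit may differ):

```python
import os

def find_git_repositories(root: str, max_depth: int = 5) -> list[str]:
    # Walk the tree collecting directories that contain .git, pruning the
    # descent once max_depth levels below root are reached. Traversal
    # continues past a found repo, so nested repositories are also reported.
    root = os.path.abspath(root)
    base_depth = root.rstrip(os.sep).count(os.sep)
    repos = []
    for dirpath, dirnames, _ in os.walk(root):
        if ".git" in dirnames:
            repos.append(dirpath)
        if dirpath.rstrip(os.sep).count(os.sep) - base_depth >= max_depth:
            dirnames.clear()  # stop descending below this level
    return repos
```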

    @pytest.mark.asyncio
    @patch("tools.review_pending_changes.find_git_repositories")
    @patch("tools.review_pending_changes.get_git_status")
    @patch("tools.review_pending_changes.run_git_command")
    async def test_no_changes_found(
        self, mock_run_git, mock_status, mock_find_repos, tool
    ):
        """Test when repositories have no changes"""
        mock_find_repos.return_value = ["/test/repo"]
        mock_status.return_value = {
            "branch": "main",
            "ahead": 0,
            "behind": 0,
            "staged_files": [],
            "unstaged_files": [],
            "untracked_files": [],
        }

        # No staged or unstaged files
        mock_run_git.side_effect = [
            (True, ""),  # staged files (empty)
            (True, ""),  # unstaged files (empty)
        ]

        request = ReviewPendingChangesRequest(path="/absolute/repo/path")
        result = await tool.prepare_prompt(request)

        assert result == "No pending changes found in any of the git repositories."

    @pytest.mark.asyncio
    @patch("tools.review_pending_changes.find_git_repositories")
    @patch("tools.review_pending_changes.get_git_status")
    @patch("tools.review_pending_changes.run_git_command")
    async def test_staged_changes_review(
        self,
        mock_run_git,
        mock_status,
        mock_find_repos,
        tool,
    ):
        """Test reviewing staged changes"""
        mock_find_repos.return_value = ["/test/repo"]
        mock_status.return_value = {
            "branch": "feature",
            "ahead": 1,
            "behind": 0,
            "staged_files": ["main.py"],
            "unstaged_files": [],
            "untracked_files": [],
        }

        # Mock git commands
        mock_run_git.side_effect = [
            (True, "main.py\n"),  # staged files
            (
                True,
                "diff --git a/main.py b/main.py\n+print('hello')",
            ),  # diff for main.py
            (True, ""),  # unstaged files (empty)
        ]

        request = ReviewPendingChangesRequest(
            path="/absolute/repo/path",
            original_request="Add hello message",
            review_type="security",
        )
        result = await tool.prepare_prompt(request)

        # Verify result structure
        assert "## Original Request/Ticket" in result
        assert "Add hello message" in result
        assert "## Review Parameters" in result
        assert "Review Type: security" in result
        assert "## Repository Changes Summary" in result
        assert "Branch: feature" in result
        assert "## Git Diffs" in result

    @pytest.mark.asyncio
    @patch("tools.review_pending_changes.find_git_repositories")
    @patch("tools.review_pending_changes.get_git_status")
    @patch("tools.review_pending_changes.run_git_command")
    async def test_compare_to_invalid_ref(
        self, mock_run_git, mock_status, mock_find_repos, tool
    ):
        """Test comparing to an invalid git ref"""
        mock_find_repos.return_value = ["/test/repo"]
        mock_status.return_value = {"branch": "main"}

        # Mock git commands - ref validation fails
        mock_run_git.side_effect = [
            (False, "fatal: not a valid ref"),  # rev-parse fails
        ]

        request = ReviewPendingChangesRequest(
            path="/absolute/repo/path", compare_to="invalid-branch"
        )
        result = await tool.prepare_prompt(request)

        # When all repos have errors and no changes, we get this message
        assert "No pending changes found in any of the git repositories." in result

    @pytest.mark.asyncio
    @patch("tools.review_pending_changes.ReviewPendingChanges.execute")
    async def test_execute_integration(self, mock_execute, tool):
        """Test execute method integration"""
        # Mock the execute to return a standardized response
        mock_execute.return_value = [
            Mock(
                text='{"status": "success", "content": "Review complete", "content_type": "text"}'
            )
        ]

        result = await tool.execute({"path": ".", "review_type": "full"})

        assert len(result) == 1
        mock_execute.assert_called_once()

    def test_default_temperature(self, tool):
        """Test default temperature setting"""
        from config import TEMPERATURE_ANALYTICAL

        assert tool.get_default_temperature() == TEMPERATURE_ANALYTICAL

    @pytest.mark.asyncio
    @patch("tools.review_pending_changes.find_git_repositories")
    @patch("tools.review_pending_changes.get_git_status")
    @patch("tools.review_pending_changes.run_git_command")
    async def test_mixed_staged_unstaged_changes(
        self,
        mock_run_git,
        mock_status,
        mock_find_repos,
        tool,
    ):
        """Test reviewing both staged and unstaged changes"""
        mock_find_repos.return_value = ["/test/repo"]
        mock_status.return_value = {
            "branch": "develop",
            "ahead": 2,
            "behind": 1,
            "staged_files": ["file1.py"],
            "unstaged_files": ["file2.py"],
        }

        # Mock git commands
        mock_run_git.side_effect = [
            (True, "file1.py\n"),  # staged files
            (True, "diff --git a/file1.py..."),  # diff for file1.py
            (True, "file2.py\n"),  # unstaged files
            (True, "diff --git a/file2.py..."),  # diff for file2.py
        ]

        request = ReviewPendingChangesRequest(
            path="/absolute/repo/path",
            focus_on="error handling",
            severity_filter="high",
        )
        result = await tool.prepare_prompt(request)

        # Verify all sections are present
        assert "Review Type: full" in result
        assert "Severity Filter: high" in result
        assert "Focus Areas: error handling" in result
        assert "Reviewing: staged and unstaged changes" in result

@@ -25,11 +25,13 @@ class TestServerTools:
        assert "debug_issue" in tool_names
        assert "analyze" in tool_names
        assert "chat" in tool_names
        assert "review_pending_changes" in tool_names
        assert "list_models" in tool_names
        assert "get_version" in tool_names

        # Should have exactly 7 tools
        assert len(tools) == 7
        # Should have exactly 8 tools
        assert len(tools) == 8

        # Check descriptions are verbose
        for tool in tools:

@@ -51,7 +51,7 @@ class TestThinkingModes:
        tool = AnalyzeTool()
        result = await tool.execute(
            {
                "files": ["test.py"],
                "files": ["/absolute/path/test.py"],
                "question": "What is this?",
                "thinking_mode": "minimal",
            }
@@ -80,7 +80,9 @@ class TestThinkingModes:
        mock_create_model.return_value = mock_model

        tool = ReviewCodeTool()
        result = await tool.execute({"files": ["test.py"], "thinking_mode": "low"})
        result = await tool.execute(
            {"files": ["/absolute/path/test.py"], "thinking_mode": "low"}
        )

        # Verify create_model was called with correct thinking_mode
        mock_create_model.assert_called_once()
@@ -129,7 +131,7 @@ class TestThinkingModes:
        tool = AnalyzeTool()
        await tool.execute(
            {
                "files": ["complex.py"],
                "files": ["/absolute/path/complex.py"],
                "question": "Analyze architecture",
                "thinking_mode": "high",
            }

@@ -2,11 +2,12 @@
Tests for individual tool implementations
"""

import json
from unittest.mock import Mock, patch

import pytest

from tools import AnalyzeTool, DebugIssueTool, ReviewCodeTool, ThinkDeeperTool
from tools import AnalyzeTool, DebugIssueTool, ReviewCodeTool, ThinkDeeperTool, ChatTool


class TestThinkDeeperTool:
@@ -187,3 +188,110 @@ class TestAnalyzeTool:
        assert "ARCHITECTURE Analysis" in result[0].text
        assert "Analyzed 1 file(s)" in result[0].text
        assert "Architecture analysis" in result[0].text


class TestAbsolutePathValidation:
    """Test absolute path validation across all tools"""

    @pytest.mark.asyncio
    async def test_analyze_tool_relative_path_rejected(self):
        """Test that analyze tool rejects relative paths"""
        tool = AnalyzeTool()
        result = await tool.execute(
            {
                "files": ["./relative/path.py", "/absolute/path.py"],
                "question": "What does this do?",
            }
        )

        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "error"
        assert "must be absolute" in response["content"]
        assert "./relative/path.py" in response["content"]

    @pytest.mark.asyncio
    async def test_review_code_tool_relative_path_rejected(self):
        """Test that review_code tool rejects relative paths"""
        tool = ReviewCodeTool()
        result = await tool.execute(
            {"files": ["../parent/file.py"], "review_type": "full"}
        )

        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "error"
        assert "must be absolute" in response["content"]
        assert "../parent/file.py" in response["content"]

    @pytest.mark.asyncio
    async def test_debug_issue_tool_relative_path_rejected(self):
        """Test that debug_issue tool rejects relative paths"""
        tool = DebugIssueTool()
        result = await tool.execute(
            {
                "error_description": "Something broke",
                "files": ["src/main.py"],  # relative path
            }
        )

        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "error"
        assert "must be absolute" in response["content"]
        assert "src/main.py" in response["content"]

    @pytest.mark.asyncio
    async def test_think_deeper_tool_relative_path_rejected(self):
        """Test that think_deeper tool rejects relative paths"""
        tool = ThinkDeeperTool()
        result = await tool.execute(
            {"current_analysis": "My analysis", "files": ["./local/file.py"]}
        )

        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "error"
        assert "must be absolute" in response["content"]
        assert "./local/file.py" in response["content"]

    @pytest.mark.asyncio
    async def test_chat_tool_relative_path_rejected(self):
        """Test that chat tool rejects relative paths"""
        tool = ChatTool()
        result = await tool.execute(
            {
                "prompt": "Explain this code",
                "files": ["code.py"],  # relative path without ./
            }
        )

        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "error"
        assert "must be absolute" in response["content"]
        assert "code.py" in response["content"]

    @pytest.mark.asyncio
    @patch("tools.AnalyzeTool.create_model")
    async def test_analyze_tool_accepts_absolute_paths(self, mock_model):
        """Test that analyze tool accepts absolute paths"""
        tool = AnalyzeTool()

        # Mock the model response
        mock_response = Mock()
        mock_response.candidates = [Mock()]
        mock_response.candidates[0].content.parts = [Mock(text="Analysis complete")]

        mock_instance = Mock()
        mock_instance.generate_content.return_value = mock_response
        mock_model.return_value = mock_instance

        result = await tool.execute(
            {"files": ["/absolute/path/file.py"], "question": "What does this do?"}
        )

        assert len(result) == 1
        response = json.loads(result[0].text)
        assert response["status"] == "success"
        assert "Analysis complete" in response["content"]

@@ -8,9 +8,9 @@ from utils import check_token_limit, estimate_tokens, read_file_content, read_files
class TestFileUtils:
    """Test file reading utilities"""

    def test_read_file_content_success(self, tmp_path):
    def test_read_file_content_success(self, project_path):
        """Test successful file reading"""
        test_file = tmp_path / "test.py"
        test_file = project_path / "test.py"
        test_file.write_text("def hello():\n    return 'world'", encoding="utf-8")

        content, tokens = read_file_content(str(test_file))
@@ -20,25 +20,43 @@ class TestFileUtils:
        assert "return 'world'" in content
        assert tokens > 0  # Should have estimated tokens

    def test_read_file_content_not_found(self):
    def test_read_file_content_not_found(self, project_path):
        """Test reading non-existent file"""
        content, tokens = read_file_content("/nonexistent/file.py")
        # Use a non-existent file within the project path
        nonexistent = project_path / "nonexistent" / "file.py"
        content, tokens = read_file_content(str(nonexistent))
        assert "--- FILE NOT FOUND:" in content
        assert "Error: File does not exist" in content
        assert tokens > 0

    def test_read_file_content_directory(self, tmp_path):
    def test_read_file_content_outside_project_root(self):
        """Test that paths outside project root are rejected"""
        # Try to read a file outside the project root
        content, tokens = read_file_content("/etc/passwd")
        assert "--- ERROR ACCESSING FILE:" in content
        assert "Path outside project root" in content
        assert tokens > 0

    def test_read_file_content_relative_path_rejected(self):
        """Test that relative paths are rejected"""
        # Try to use a relative path
        content, tokens = read_file_content("./some/relative/path.py")
        assert "--- ERROR ACCESSING FILE:" in content
        assert "Relative paths are not supported" in content
        assert tokens > 0

    def test_read_file_content_directory(self, project_path):
        """Test reading a directory"""
        content, tokens = read_file_content(str(tmp_path))
        content, tokens = read_file_content(str(project_path))
        assert "--- NOT A FILE:" in content
        assert "Error: Path is not a file" in content
        assert tokens > 0
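
Taken together, these tests pin down the guard-clause order in `read_file_content`; a sketch of that ordering, with error strings copied from the assertions and everything else assumed:

```python
import os

PROJECT_ROOT = os.environ.get("MCP_PROJECT_ROOT", os.getcwd())

def read_file_content_sketch(path: str) -> tuple[str, int]:
    # Guard clauses in the order the tests above suggest; token estimation
    # and the actual file read are crude stand-ins.
    root = os.path.realpath(PROJECT_ROOT)
    if not os.path.isabs(path):
        msg = f"--- ERROR ACCESSING FILE: {path} ---\nError: Relative paths are not supported"
    elif os.path.commonpath([os.path.realpath(path), root]) != root:
        msg = f"--- ERROR ACCESSING FILE: {path} ---\nError: Path outside project root"
    elif not os.path.exists(path):
        msg = f"--- FILE NOT FOUND: {path} ---\nError: File does not exist"
    elif not os.path.isfile(path):
        msg = f"--- NOT A FILE: {path} ---\nError: Path is not a file"
    else:
        with open(path, encoding="utf-8") as f:
            msg = f.read()
    return msg, max(1, len(msg) // 4)  # rough ~4 chars/token estimate, assumption only
```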

    def test_read_files_multiple(self, tmp_path):
    def test_read_files_multiple(self, project_path):
        """Test reading multiple files"""
        file1 = tmp_path / "file1.py"
        file1 = project_path / "file1.py"
        file1.write_text("print('file1')", encoding="utf-8")
        file2 = tmp_path / "file2.py"
        file2 = project_path / "file2.py"
        file2.write_text("print('file2')", encoding="utf-8")

        content, summary = read_files([str(file1), str(file2)])
@@ -62,23 +80,23 @@ class TestFileUtils:

        assert "Direct code:" in summary

    def test_read_files_directory_support(self, tmp_path):
    def test_read_files_directory_support(self, project_path):
        """Test reading all files from a directory"""
        # Create directory structure
        (tmp_path / "file1.py").write_text("print('file1')", encoding="utf-8")
        (tmp_path / "file2.js").write_text("console.log('file2')", encoding="utf-8")
        (tmp_path / "readme.md").write_text("# README", encoding="utf-8")
        (project_path / "file1.py").write_text("print('file1')", encoding="utf-8")
        (project_path / "file2.js").write_text("console.log('file2')", encoding="utf-8")
        (project_path / "readme.md").write_text("# README", encoding="utf-8")

        # Create subdirectory
        subdir = tmp_path / "src"
        subdir = project_path / "src"
        subdir.mkdir()
        (subdir / "module.py").write_text("class Module: pass", encoding="utf-8")

        # Create hidden file (should be skipped)
        (tmp_path / ".hidden").write_text("secret", encoding="utf-8")
        (project_path / ".hidden").write_text("secret", encoding="utf-8")

        # Read the directory
        content, summary = read_files([str(tmp_path)])
        content, summary = read_files([str(project_path)])

        # Check files are included
        assert "file1.py" in content
@@ -102,14 +120,14 @@ class TestFileUtils:
        assert "Processed 1 dir(s)" in summary
        assert "Read 4 file(s)" in summary

    def test_read_files_mixed_paths(self, tmp_path):
    def test_read_files_mixed_paths(self, project_path):
        """Test reading mix of files and directories"""
        # Create files
        file1 = tmp_path / "direct.py"
        file1 = project_path / "direct.py"
        file1.write_text("# Direct file", encoding="utf-8")

        # Create directory with files
        subdir = tmp_path / "subdir"
        subdir = project_path / "subdir"
        subdir.mkdir()
        (subdir / "sub1.py").write_text("# Sub file 1", encoding="utf-8")
        (subdir / "sub2.py").write_text("# Sub file 2", encoding="utf-8")
@@ -127,19 +145,19 @@ class TestFileUtils:
        assert "Processed 1 dir(s)" in summary
        assert "Read 3 file(s)" in summary

    def test_read_files_token_limit(self, tmp_path):
    def test_read_files_token_limit(self, project_path):
        """Test token limit handling"""
        # Create files with known token counts
        # ~250 tokens each (1000 chars)
        large_content = "x" * 1000

        for i in range(5):
            (tmp_path / f"file{i}.txt").write_text(large_content, encoding="utf-8")
            (project_path / f"file{i}.txt").write_text(large_content, encoding="utf-8")

        # Read with small token limit (should skip some files)
        # Reserve 50k tokens, limit to 51k total = 1k available
        # Each file ~250 tokens, so should read ~3-4 files
        content, summary = read_files([str(tmp_path)], max_tokens=51_000)
        content, summary = read_files([str(project_path)], max_tokens=51_000)

        assert "Skipped" in summary
        assert "token limit" in summary
@@ -149,10 +167,10 @@ class TestFileUtils:
        read_count = content.count("--- BEGIN FILE:")
        assert 2 <= read_count <= 4  # Should read some but not all

    def test_read_files_large_file(self, tmp_path):
    def test_read_files_large_file(self, project_path):
        """Test handling of large files"""
        # Create a file larger than max_size (1MB)
        large_file = tmp_path / "large.txt"
        large_file = project_path / "large.txt"
        large_file.write_text("x" * 2_000_000, encoding="utf-8")  # 2MB

        content, summary = read_files([str(large_file)])
@@ -161,15 +179,15 @@ class TestFileUtils:
        assert "2,000,000 bytes" in content
        assert "Read 1 file(s)" in summary  # File is counted but shows error message

    def test_read_files_file_extensions(self, tmp_path):
    def test_read_files_file_extensions(self, project_path):
        """Test file extension filtering"""
        # Create various file types
        (tmp_path / "code.py").write_text("python", encoding="utf-8")
        (tmp_path / "style.css").write_text("css", encoding="utf-8")
        (tmp_path / "binary.exe").write_text("exe", encoding="utf-8")
        (tmp_path / "image.jpg").write_text("jpg", encoding="utf-8")
        (project_path / "code.py").write_text("python", encoding="utf-8")
        (project_path / "style.css").write_text("css", encoding="utf-8")
        (project_path / "binary.exe").write_text("exe", encoding="utf-8")
        (project_path / "image.jpg").write_text("jpg", encoding="utf-8")

        content, summary = read_files([str(tmp_path)])
        content, summary = read_files([str(project_path)])

        # Code files should be included
        assert "code.py" in content
@@ -1,225 +0,0 @@
-"""
-Tool Selection Guide for Gemini MCP Server
-
-This module provides guidance for Claude on which tool to use for different scenarios.
-"""
-
-TOOL_BOUNDARIES = {
-    "analyze": {
-        "purpose": "Understanding and exploration (read-only analysis)",
-        "best_for": [
-            "Understanding code structure and architecture",
-            "Exploring unfamiliar codebases",
-            "Identifying patterns and dependencies",
-            "Documenting existing functionality",
-            "Learning how systems work",
-        ],
-        "avoid_for": [
-            "Finding bugs or security issues (use review_code)",
-            "Debugging errors (use debug_issue)",
-            "Extending existing analysis (use think_deeper)",
-        ],
-        "output": "Descriptive explanations and architectural insights",
-    },
-    "review_code": {
-        "purpose": "Finding issues and suggesting fixes (prescriptive analysis)",
-        "best_for": [
-            "Finding bugs, security vulnerabilities, performance issues",
-            "Code quality assessment with actionable feedback",
-            "Pre-merge code reviews",
-            "Security audits",
-            "Performance optimization recommendations",
-        ],
-        "avoid_for": [
-            "Understanding how code works (use analyze)",
-            "Debugging runtime errors (use debug_issue)",
-            "Architectural discussions (use think_deeper or chat)",
-        ],
-        "output": "Severity-ranked issues with specific fixes",
-    },
-    "debug_issue": {
-        "purpose": "Root cause analysis for errors (diagnostic analysis)",
-        "best_for": [
-            "Analyzing runtime errors and exceptions",
-            "Troubleshooting failing tests",
-            "Investigating performance problems",
-            "Tracing execution issues",
-            "System-level debugging",
-        ],
-        "avoid_for": [
-            "Code quality issues (use review_code)",
-            "Understanding working code (use analyze)",
-            "Design discussions (use think_deeper or chat)",
-        ],
-        "output": "Ranked hypotheses with validation steps",
-    },
-    "think_deeper": {
-        "purpose": "Extending and validating specific analysis (collaborative validation)",
-        "best_for": [
-            "Getting second opinion on Claude's analysis",
-            "Challenging assumptions and finding edge cases",
-            "Validating architectural decisions",
-            "Exploring alternative approaches",
-            "Risk assessment for proposed changes",
-        ],
-        "avoid_for": [
-            "Initial analysis (use analyze first)",
-            "Bug hunting (use review_code)",
-            "Open-ended brainstorming (use chat)",
-        ],
-        "output": "Extended analysis building on existing work",
-    },
-    "chat": {
-        "purpose": "Open-ended collaboration and brainstorming (exploratory discussion)",
-        "best_for": [
-            "Brainstorming solutions and approaches",
-            "Technology comparisons and recommendations",
-            "Discussing trade-offs and design decisions",
-            "Getting opinions on implementation strategies",
-            "General development questions and explanations",
-        ],
-        "avoid_for": [
-            "Analyzing specific code files (use analyze)",
-            "Finding bugs in code (use review_code)",
-            "Debugging specific errors (use debug_issue)",
-        ],
-        "output": "Conversational insights and recommendations",
-    },
-}
-
-DECISION_FLOWCHART = """
-Tool Selection Decision Flow:
-
-1. Do you have a specific error/exception to debug?
-   → YES: Use debug_issue
-
-2. Do you want to find bugs/issues in code?
-   → YES: Use review_code
-
-3. Do you want to understand how code works?
-   → YES: Use analyze
-
-4. Do you have existing analysis that needs extension/validation?
-   → YES: Use think_deeper
-
-5. Do you want to brainstorm, discuss, or get opinions?
-   → YES: Use chat
-"""
-
-COMMON_OVERLAPS = {
-    "analyze vs review_code": {
-        "confusion": "Both examine code quality",
-        "distinction": "analyze explains, review_code prescribes fixes",
-        "rule": "Use analyze to understand, review_code to improve",
-    },
-    "chat vs think_deeper": {
-        "confusion": "Both provide collaborative thinking",
-        "distinction": "chat is open-ended, think_deeper builds on specific analysis",
-        "rule": "Use think_deeper to extend analysis, chat for open discussion",
-    },
-    "debug_issue vs review_code": {
-        "confusion": "Both find problems in code",
-        "distinction": "debug_issue diagnoses runtime errors, review_code finds static issues",
-        "rule": "Use debug_issue for 'why is this failing?', review_code for 'what could go wrong?'",
-    },
-}
-
-
-def get_tool_recommendation(intent: str, context: str = "") -> dict:
-    """
-    Recommend the best tool based on user intent and context.
-
-    Args:
-        intent: What the user wants to accomplish
-        context: Additional context about the situation
-
-    Returns:
-        Dictionary with recommended tool and reasoning
-    """
-
-    # Keywords that strongly indicate specific tools
-    debug_keywords = [
-        "error",
-        "exception",
-        "failing",
-        "crash",
-        "bug",
-        "not working",
-        "trace",
-    ]
-    review_keywords = [
-        "review",
-        "issues",
-        "problems",
-        "security",
-        "vulnerabilities",
-        "quality",
-    ]
-    analyze_keywords = [
-        "understand",
-        "how does",
-        "what is",
-        "structure",
-        "architecture",
-        "explain",
-    ]
-    deeper_keywords = [
-        "extend",
-        "validate",
-        "challenge",
-        "alternative",
-        "edge case",
-        "think deeper",
-    ]
-    chat_keywords = [
-        "brainstorm",
-        "discuss",
-        "opinion",
-        "compare",
-        "recommend",
-        "what about",
-    ]
-
-    intent_lower = intent.lower()
-
-    if any(keyword in intent_lower for keyword in debug_keywords):
-        return {
-            "tool": "debug_issue",
-            "confidence": "high",
-            "reasoning": "Intent indicates debugging/troubleshooting a specific issue",
-        }
-
-    if any(keyword in intent_lower for keyword in review_keywords):
-        return {
-            "tool": "review_code",
-            "confidence": "high",
-            "reasoning": "Intent indicates finding issues or reviewing code quality",
-        }
-
-    if any(keyword in intent_lower for keyword in analyze_keywords):
-        return {
-            "tool": "analyze",
-            "confidence": "high",
-            "reasoning": "Intent indicates understanding or exploring code",
-        }
-
-    if any(keyword in intent_lower for keyword in deeper_keywords):
-        return {
-            "tool": "think_deeper",
-            "confidence": "medium",
-            "reasoning": "Intent suggests extending or validating existing analysis",
-        }
-
-    if any(keyword in intent_lower for keyword in chat_keywords):
-        return {
-            "tool": "chat",
-            "confidence": "medium",
-            "reasoning": "Intent suggests open-ended discussion or brainstorming",
-        }
-
-    # Default to chat for ambiguous requests
-    return {
-        "tool": "chat",
-        "confidence": "low",
-        "reasoning": "Intent unclear, defaulting to conversational tool",
-    }
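For reference, the removed guide routed intents by plain keyword matching; a call looked roughly like this (illustrative session against the now-deleted module, assuming it was importable as `tool_selection`):

```python
from tool_selection import get_tool_recommendation

print(get_tool_recommendation("review this module for security issues"))
# {'tool': 'review_code', 'confidence': 'high',
#  'reasoning': 'Intent indicates finding issues or reviewing code quality'}
```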
@@ -6,6 +6,7 @@ from .analyze import AnalyzeTool
 from .chat import ChatTool
 from .debug_issue import DebugIssueTool
 from .review_code import ReviewCodeTool
+from .review_pending_changes import ReviewPendingChanges
 from .think_deeper import ThinkDeeperTool

 __all__ = [
@@ -14,4 +15,5 @@ __all__ = [
     "DebugIssueTool",
     "AnalyzeTool",
     "ChatTool",
+    "ReviewPendingChanges",
 ]
@@ -16,7 +16,9 @@ from .base import BaseTool, ToolRequest
 class AnalyzeRequest(ToolRequest):
     """Request model for analyze tool"""

-    files: List[str] = Field(..., description="Files or directories to analyze")
+    files: List[str] = Field(
+        ..., description="Files or directories to analyze (must be absolute paths)"
+    )
     question: str = Field(..., description="What to analyze or look for")
     analysis_type: Optional[str] = Field(
         None,
@@ -50,7 +52,7 @@ class AnalyzeTool(BaseTool):
             "files": {
                 "type": "array",
                 "items": {"type": "string"},
-                "description": "Files or directories to analyze",
+                "description": "Files or directories to analyze (must be absolute paths)",
             },
             "question": {
                 "type": "string",
@@ -71,6 +71,32 @@ class BaseTool(ABC):
         """Return the Pydantic model for request validation"""
         pass

+    def validate_file_paths(self, request) -> Optional[str]:
+        """
+        Validate that all file paths in the request are absolute.
+        Returns error message if validation fails, None if all paths are valid.
+        """
+        # Check if request has 'files' attribute
+        if hasattr(request, "files") and request.files:
+            for file_path in request.files:
+                if not os.path.isabs(file_path):
+                    return (
+                        f"Error: All file paths must be absolute. "
+                        f"Received relative path: {file_path}\n"
+                        f"Please provide the full absolute path starting with '/'"
+                    )
+
+        # Check if request has 'path' attribute (for review_pending_changes)
+        if hasattr(request, "path") and request.path:
+            if not os.path.isabs(request.path):
+                return (
+                    f"Error: Path must be absolute. "
+                    f"Received relative path: {request.path}\n"
+                    f"Please provide the full absolute path starting with '/'"
+                )
+
+        return None
+
     async def execute(self, arguments: Dict[str, Any]) -> List[TextContent]:
         """Execute the tool with given arguments"""
         try:
@@ -78,6 +104,16 @@ class BaseTool(ABC):
             request_model = self.get_request_model()
             request = request_model(**arguments)

+            # Validate file paths
+            path_error = self.validate_file_paths(request)
+            if path_error:
+                error_output = ToolOutput(
+                    status="error",
+                    content=path_error,
+                    content_type="text",
+                )
+                return [TextContent(type="text", text=error_output.model_dump_json())]
+
             # Prepare the prompt
             prompt = await self.prepare_prompt(request)
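Since `validate_file_paths` only inspects attributes on the request object, the guard is easy to probe in isolation; a quick illustrative check (the `SimpleNamespace` stand-in and direct unbound call are for demonstration only, real requests are Pydantic models):

```python
# Illustrative only -- exercises the guard added to BaseTool above.
from types import SimpleNamespace

from tools.base import BaseTool

req = SimpleNamespace(files=["src/main.py"], path=None)
print(BaseTool.validate_file_paths(None, req))  # self is unused by the guard
# Error: All file paths must be absolute. Received relative path: src/main.py
# Please provide the full absolute path starting with '/'
```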
@@ -20,8 +20,9 @@ class ChatRequest(ToolRequest):
         ...,
         description="Your question, topic, or current thinking to discuss with Gemini",
     )
-    context_files: Optional[List[str]] = Field(
-        default_factory=list, description="Optional files for context"
+    files: Optional[List[str]] = Field(
+        default_factory=list,
+        description="Optional files for context (must be absolute paths)",
     )

@@ -49,10 +50,10 @@ class ChatTool(BaseTool):
                 "type": "string",
                 "description": "Your question, topic, or current thinking to discuss with Gemini",
             },
-            "context_files": {
+            "files": {
                 "type": "array",
                 "items": {"type": "string"},
-                "description": "Optional files for context",
+                "description": "Optional files for context (must be absolute paths)",
             },
             "temperature": {
                 "type": "number",
@@ -83,8 +84,8 @@ class ChatTool(BaseTool):
         user_content = request.prompt

         # Add context files if provided
-        if request.context_files:
-            file_content, _ = read_files(request.context_files)
+        if request.files:
+            file_content, _ = read_files(request.files)
             user_content = f"{request.prompt}\n\n=== CONTEXT FILES ===\n{file_content}\n=== END CONTEXT ==="

         # Check token limits
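Note the rename is a breaking change for callers of the chat tool: the `context_files` key is no longer read, and paths must now be absolute. Hypothetical before/after payloads:

```python
# Before this commit (no longer honored):
arguments = {"prompt": "Compare these approaches", "context_files": ["notes.md"]}

# After this commit -- field renamed and absolute paths required:
arguments = {"prompt": "Compare these approaches", "files": ["/home/user/project/notes.md"]}
```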
@@ -23,7 +23,8 @@ class DebugIssueRequest(ToolRequest):
         None, description="Stack trace, logs, or additional error context"
     )
     files: Optional[List[str]] = Field(
-        None, description="Files or directories that might be related to the issue"
+        None,
+        description="Files or directories that might be related to the issue (must be absolute paths)",
     )
     runtime_info: Optional[str] = Field(
         None, description="Environment, versions, or runtime information"
@@ -63,7 +64,7 @@ class DebugIssueTool(BaseTool):
             "files": {
                 "type": "array",
                 "items": {"type": "string"},
-                "description": "Files or directories that might be related to the issue",
+                "description": "Files or directories that might be related to the issue (must be absolute paths)",
             },
             "runtime_info": {
                 "type": "string",
@@ -16,7 +16,10 @@ from .base import BaseTool, ToolRequest
 class ReviewCodeRequest(ToolRequest):
     """Request model for review_code tool"""

-    files: List[str] = Field(..., description="Code files or directories to review")
+    files: List[str] = Field(
+        ...,
+        description="Code files or directories to review (must be absolute paths)",
+    )
     review_type: str = Field(
         "full", description="Type of review: full|security|performance|quick"
     )
@@ -55,7 +58,7 @@ class ReviewCodeTool(BaseTool):
             "files": {
                 "type": "array",
                 "items": {"type": "string"},
-                "description": "Code files or directories to review",
+                "description": "Code files or directories to review (must be absolute paths)",
             },
             "review_type": {
                 "type": "string",
tools/review_pending_changes.py (new file, 334 lines)
@@ -0,0 +1,334 @@
+"""
+Tool for reviewing pending git changes across multiple repositories.
+"""
+
+import json
+import os
+import re
+from typing import Any, Dict, List, Literal, Optional
+
+from pydantic import Field
+
+from config import MAX_CONTEXT_TOKENS
+from prompts.tool_prompts import REVIEW_PENDING_CHANGES_PROMPT
+from utils.git_utils import find_git_repositories, get_git_status, run_git_command
+from utils.token_utils import estimate_tokens
+
+from .base import BaseTool, ToolRequest
+
+
+class ReviewPendingChangesRequest(ToolRequest):
+    """Request model for review_pending_changes tool"""
+
+    path: str = Field(
+        ...,
+        description="Starting directory to search for git repositories (must be absolute path).",
+    )
+    original_request: Optional[str] = Field(
+        None,
+        description="The original user request or ticket description for the changes. Provides critical context for the review.",
+    )
+    compare_to: Optional[str] = Field(
+        None,
+        description="Optional: A git ref (branch, tag, commit hash) to compare against. If not provided, reviews local staged and unstaged changes.",
+    )
+    include_staged: bool = Field(
+        True,
+        description="Include staged changes in the review. Only applies if 'compare_to' is not set.",
+    )
+    include_unstaged: bool = Field(
+        True,
+        description="Include uncommitted (unstaged) changes in the review. Only applies if 'compare_to' is not set.",
+    )
+    focus_on: Optional[str] = Field(
+        None,
+        description="Specific aspects to focus on (e.g., 'logic for user authentication', 'database query efficiency').",
+    )
+    review_type: Literal["full", "security", "performance", "quick"] = Field(
+        "full", description="Type of review to perform on the changes."
+    )
+    severity_filter: Literal["critical", "high", "medium", "all"] = Field(
+        "all",
+        description="Minimum severity level to report on the changes.",
+    )
+    max_depth: int = Field(
+        5,
+        description="Maximum depth to search for nested git repositories to prevent excessive recursion.",
+    )
+    temperature: Optional[float] = Field(
+        None,
+        description="Temperature for the response (0.0 to 1.0). Lower values are more focused and deterministic.",
+        ge=0.0,
+        le=1.0,
+    )
+    thinking_mode: Optional[Literal["minimal", "low", "medium", "high", "max"]] = Field(
+        None, description="Thinking depth mode for the assistant."
+    )
+
+
+class ReviewPendingChanges(BaseTool):
+    """Tool for reviewing pending git changes across multiple repositories."""
+
+    def get_name(self) -> str:
+        return "review_pending_changes"
+
+    def get_description(self) -> str:
+        return (
+            "REVIEW PENDING GIT CHANGES - Comprehensive pre-commit review of staged/unstaged changes "
+            "or branch comparisons across multiple repositories. Searches recursively for git repos "
+            "and analyzes diffs for bugs, security issues, incomplete implementations, and alignment "
+            "with original requirements. Perfect for final review before committing. "
+            "Triggers: 'review pending changes', 'check my changes', 'validate changes', 'pre-commit review'. "
+            "Use this when you want to ensure changes are complete, correct, and ready to commit."
+        )
+
+    def get_input_schema(self) -> Dict[str, Any]:
+        return self.get_request_model().model_json_schema()
+
+    def get_system_prompt(self) -> str:
+        return REVIEW_PENDING_CHANGES_PROMPT
+
+    def get_request_model(self):
+        return ReviewPendingChangesRequest
+
+    def get_default_temperature(self) -> float:
+        """Use analytical temperature for code review."""
+        from config import TEMPERATURE_ANALYTICAL
+
+        return TEMPERATURE_ANALYTICAL
+
+    def _sanitize_filename(self, name: str) -> str:
+        """Sanitize a string to be a valid filename."""
+        # Replace path separators and other problematic characters
+        name = name.replace("/", "_").replace("\\", "_").replace(" ", "_")
+        # Remove any remaining non-alphanumeric characters except dots, dashes, underscores
+        name = re.sub(r"[^a-zA-Z0-9._-]", "", name)
+        # Limit length to avoid filesystem issues
+        return name[:100]
+
+    async def prepare_prompt(self, request: ReviewPendingChangesRequest) -> str:
+        """Prepare the prompt with git diff information."""
+        # Find all git repositories
+        repositories = find_git_repositories(request.path, request.max_depth)
+
+        if not repositories:
+            return "No git repositories found in the specified path."
+
+        # Collect all diffs directly
+        all_diffs = []
+        repo_summaries = []
+        total_tokens = 0
+        max_tokens = MAX_CONTEXT_TOKENS - 50000  # Reserve tokens for prompt and response
+
+        for repo_path in repositories:
+            repo_name = os.path.basename(repo_path) or "root"
+            repo_name = self._sanitize_filename(repo_name)
+
+            # Get status information
+            status = get_git_status(repo_path)
+            changed_files = []
+
+            # Process based on mode
+            if request.compare_to:
+                # Validate the ref
+                is_valid_ref, err_msg = run_git_command(
+                    repo_path,
+                    ["rev-parse", "--verify", "--quiet", request.compare_to],
+                )
+                if not is_valid_ref:
+                    repo_summaries.append(
+                        {
+                            "path": repo_path,
+                            "error": f"Invalid or unknown git ref '{request.compare_to}': {err_msg}",
+                            "changed_files": 0,
+                        }
+                    )
+                    continue
+
+                # Get list of changed files
+                success, files_output = run_git_command(
+                    repo_path,
+                    ["diff", "--name-only", f"{request.compare_to}...HEAD"],
+                )
+                if success and files_output.strip():
+                    changed_files = [
+                        f for f in files_output.strip().split("\n") if f
+                    ]
+
+                # Generate per-file diffs
+                for file_path in changed_files:
+                    success, diff = run_git_command(
+                        repo_path,
+                        [
+                            "diff",
+                            f"{request.compare_to}...HEAD",
+                            "--",
+                            file_path,
+                        ],
+                    )
+                    if success and diff.strip():
+                        # Format diff with file header
+                        safe_file_name = self._sanitize_filename(file_path)
+                        diff_header = f"\n--- BEGIN DIFF: {repo_name} / {file_path} (compare to {request.compare_to}) ---\n"
+                        diff_footer = f"\n--- END DIFF: {repo_name} / {file_path} ---\n"
+                        formatted_diff = diff_header + diff + diff_footer
+
+                        # Check token limit
+                        diff_tokens = estimate_tokens(formatted_diff)
+                        if total_tokens + diff_tokens <= max_tokens:
+                            all_diffs.append(formatted_diff)
+                            total_tokens += diff_tokens
+            else:
+                # Handle staged/unstaged changes
+                staged_files = []
+                unstaged_files = []
+
+                if request.include_staged:
+                    success, files_output = run_git_command(
+                        repo_path, ["diff", "--name-only", "--cached"]
+                    )
+                    if success and files_output.strip():
+                        staged_files = [
+                            f for f in files_output.strip().split("\n") if f
+                        ]
+
+                    # Generate per-file diffs for staged changes
+                    for file_path in staged_files:
+                        success, diff = run_git_command(
+                            repo_path, ["diff", "--cached", "--", file_path]
+                        )
+                        if success and diff.strip():
+                            safe_file_name = self._sanitize_filename(file_path)
+                            diff_header = f"\n--- BEGIN DIFF: {repo_name} / {file_path} (staged) ---\n"
+                            diff_footer = f"\n--- END DIFF: {repo_name} / {file_path} ---\n"
+                            formatted_diff = diff_header + diff + diff_footer
+
+                            # Check token limit
+                            from utils import estimate_tokens
+
+                            diff_tokens = estimate_tokens(formatted_diff)
+                            if total_tokens + diff_tokens <= max_tokens:
+                                all_diffs.append(formatted_diff)
+                                total_tokens += diff_tokens
+
+                if request.include_unstaged:
+                    success, files_output = run_git_command(
+                        repo_path, ["diff", "--name-only"]
+                    )
+                    if success and files_output.strip():
+                        unstaged_files = [
+                            f for f in files_output.strip().split("\n") if f
+                        ]
+
+                    # Generate per-file diffs for unstaged changes
+                    for file_path in unstaged_files:
+                        success, diff = run_git_command(
+                            repo_path, ["diff", "--", file_path]
+                        )
+                        if success and diff.strip():
+                            safe_file_name = self._sanitize_filename(file_path)
+                            diff_header = f"\n--- BEGIN DIFF: {repo_name} / {file_path} (unstaged) ---\n"
+                            diff_footer = f"\n--- END DIFF: {repo_name} / {file_path} ---\n"
+                            formatted_diff = diff_header + diff + diff_footer
+
+                            # Check token limit
+                            from utils import estimate_tokens
+
+                            diff_tokens = estimate_tokens(formatted_diff)
+                            if total_tokens + diff_tokens <= max_tokens:
+                                all_diffs.append(formatted_diff)
+                                total_tokens += diff_tokens
+
+                # Combine unique files
+                changed_files = list(set(staged_files + unstaged_files))
+
+            # Add repository summary
+            if changed_files:
+                repo_summaries.append(
+                    {
+                        "path": repo_path,
+                        "branch": status["branch"],
+                        "ahead": status["ahead"],
+                        "behind": status["behind"],
+                        "changed_files": len(changed_files),
+                        "files": changed_files[:20],  # First 20 for summary
+                    }
+                )
+
+        if not all_diffs:
+            return "No pending changes found in any of the git repositories."
+
+        # Build the final prompt
+        prompt_parts = []
+
+        # Add original request context if provided
+        if request.original_request:
+            prompt_parts.append(
+                f"## Original Request/Ticket\n\n{request.original_request}\n"
+            )
+
+        # Add review parameters
+        prompt_parts.append(f"## Review Parameters\n")
+        prompt_parts.append(f"- Review Type: {request.review_type}")
+        prompt_parts.append(f"- Severity Filter: {request.severity_filter}")
+
+        if request.focus_on:
+            prompt_parts.append(f"- Focus Areas: {request.focus_on}")
+
+        if request.compare_to:
+            prompt_parts.append(f"- Comparing Against: {request.compare_to}")
+        else:
+            review_scope = []
+            if request.include_staged:
+                review_scope.append("staged")
+            if request.include_unstaged:
+                review_scope.append("unstaged")
+            prompt_parts.append(
+                f"- Reviewing: {' and '.join(review_scope)} changes"
+            )
+
+        # Add repository summary
+        prompt_parts.append(f"\n## Repository Changes Summary\n")
+        prompt_parts.append(
+            f"Found {len(repo_summaries)} repositories with changes:\n"
+        )
+
+        for idx, summary in enumerate(repo_summaries, 1):
+            prompt_parts.append(f"\n### Repository {idx}: {summary['path']}")
+            if "error" in summary:
+                prompt_parts.append(f"⚠️ Error: {summary['error']}")
+            else:
+                prompt_parts.append(f"- Branch: {summary['branch']}")
+                if summary["ahead"] or summary["behind"]:
+                    prompt_parts.append(
+                        f"- Ahead: {summary['ahead']}, Behind: {summary['behind']}"
+                    )
+                prompt_parts.append(f"- Changed Files: {summary['changed_files']}")
+
+                if summary["files"]:
+                    prompt_parts.append("\nChanged files:")
+                    for file in summary["files"]:
+                        prompt_parts.append(f"  - {file}")
+                    if summary["changed_files"] > len(summary["files"]):
+                        prompt_parts.append(
+                            f"  ... and {summary['changed_files'] - len(summary['files'])} more files"
+                        )
+
+        # Add token usage summary
+        if total_tokens > 0:
+            prompt_parts.append(f"\nTotal diff tokens: ~{total_tokens:,}")
+
+        # Add the diff contents
+        prompt_parts.append("\n## Git Diffs\n")
+        if all_diffs:
+            prompt_parts.extend(all_diffs)
+        else:
+            prompt_parts.append("--- NO DIFFS FOUND ---")
+
+        # Add review instructions
+        prompt_parts.append("\n## Review Instructions\n")
+        prompt_parts.append(
+            "Please review these changes according to the system prompt guidelines. "
+            "Pay special attention to alignment with the original request, completeness of implementation, "
+            "potential bugs, security issues, and any edge cases not covered."
+        )
+
+        return "\n".join(prompt_parts)
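A sketch of driving the new tool directly, assuming `/home/user/myproject` contains at least one git repository and that `ReviewPendingChanges` satisfies all of `BaseTool`'s abstract hooks:

```python
# Illustrative usage, not from this commit; the path is hypothetical.
import asyncio

from tools.review_pending_changes import (
    ReviewPendingChanges,
    ReviewPendingChangesRequest,
)

tool = ReviewPendingChanges()
request = ReviewPendingChangesRequest(
    path="/home/user/myproject",  # must be an absolute path
    original_request="Enforce absolute paths across all tools",
    review_type="security",
)

# prepare_prompt assembles repo summaries plus per-file diffs into one prompt.
print(asyncio.run(tool.prepare_prompt(request)))
```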
@@ -27,7 +27,8 @@ class ThinkDeeperRequest(ToolRequest):
         description="Specific aspects to focus on (architecture, performance, security, etc.)",
     )
     files: Optional[List[str]] = Field(
-        None, description="Optional file paths or directories for additional context"
+        None,
+        description="Optional file paths or directories for additional context (must be absolute paths)",
     )

@@ -66,7 +67,7 @@ class ThinkDeeperTool(BaseTool):
             "files": {
                 "type": "array",
                 "items": {"type": "string"},
-                "description": "Optional file paths or directories for additional context",
+                "description": "Optional file paths or directories for additional context (must be absolute paths)",
             },
             "temperature": {
                 "type": "number",
@@ -5,9 +5,21 @@ File reading utilities with directory support and token management
 import os
 from pathlib import Path
 from typing import List, Optional, Tuple, Set
+import sys

 from .token_utils import estimate_tokens, MAX_CONTEXT_TOKENS

+# Get project root from environment or use current directory
+# This defines the sandbox directory where file access is allowed
+PROJECT_ROOT = Path(os.environ.get("MCP_PROJECT_ROOT", os.getcwd())).resolve()
+
+# Security: Prevent running with overly permissive root
+if str(PROJECT_ROOT) == "/":
+    raise RuntimeError(
+        "Security Error: MCP_PROJECT_ROOT cannot be set to '/'. "
+        "This would give access to the entire filesystem."
+    )
+

 # Common code file extensions
 CODE_EXTENSIONS = {
@@ -60,6 +72,46 @@ CODE_EXTENSIONS = {
 }

+
+def resolve_and_validate_path(path_str: str) -> Path:
+    """
+    Validates that a path is absolute and resolves it.
+
+    Args:
+        path_str: Path string (must be absolute)
+
+    Returns:
+        Resolved Path object
+
+    Raises:
+        ValueError: If path is not absolute
+        PermissionError: If path is outside allowed directory
+    """
+    # Create a Path object from the user-provided path
+    user_path = Path(path_str)
+
+    # Require absolute paths
+    if not user_path.is_absolute():
+        raise ValueError(
+            f"Relative paths are not supported. Please provide an absolute path.\n"
+            f"Received: {path_str}"
+        )
+
+    # Resolve the absolute path
+    resolved_path = user_path.resolve()
+
+    # Security check: ensure the resolved path is within PROJECT_ROOT
+    try:
+        resolved_path.relative_to(PROJECT_ROOT)
+    except ValueError:
+        raise PermissionError(
+            f"Path outside project root: {path_str}\n"
+            f"Project root: {PROJECT_ROOT}\n"
+            f"Resolved path: {resolved_path}"
+        )
+
+    return resolved_path
+
+
 def expand_paths(paths: List[str], extensions: Optional[Set[str]] = None) -> List[str]:
     """
     Expand paths to individual files, handling both files and directories.
@@ -78,7 +130,11 @@ def expand_paths(paths: List[str], extensions: Optional[Set[str]] = None) -> List[str]:
     seen = set()

     for path in paths:
-        path_obj = Path(path)
+        try:
+            path_obj = resolve_and_validate_path(path)
+        except (ValueError, PermissionError):
+            # Skip invalid paths
+            continue

         if not path_obj.exists():
             continue
@@ -121,13 +177,17 @@ def read_file_content(file_path: str, max_size: int = 1_000_000) -> Tuple[str, int]:
     Read a single file and format it for Gemini.

     Args:
-        file_path: Path to file
+        file_path: Path to file (must be absolute)
         max_size: Maximum file size to read

     Returns:
         (formatted_content, estimated_tokens)
     """
-    path = Path(file_path)
+    try:
+        path = resolve_and_validate_path(file_path)
+    except (ValueError, PermissionError) as e:
+        content = f"\n--- ERROR ACCESSING FILE: {file_path} ---\nError: {str(e)}\n--- END FILE ---\n"
+        return content, estimate_tokens(content)

     try:
         # Check if path exists and is a file
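The effect of the sandbox check, assuming the module lives at `utils/file_utils.py` and the server was started with `MCP_PROJECT_ROOT=/home/user/myproject` (all paths below are hypothetical):

```python
from utils.file_utils import resolve_and_validate_path

resolve_and_validate_path("/home/user/myproject/src/app.py")   # OK: returns resolved Path
resolve_and_validate_path("src/app.py")                        # ValueError: relative path
resolve_and_validate_path("/etc/passwd")                       # PermissionError: outside root
resolve_and_validate_path("/home/user/myproject/../secrets")   # PermissionError: resolve() escapes root
```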
utils/git_utils.py (new file, 164 lines)
@@ -0,0 +1,164 @@
+"""
+Git utilities for finding repositories and generating diffs.
+"""
+
+import os
+import subprocess
+from typing import Any, Dict, List, Optional, Tuple
+from pathlib import Path
+
+
+# Directories to ignore when searching for git repositories
+IGNORED_DIRS = {
+    "node_modules",
+    "__pycache__",
+    "venv",
+    "env",
+    "build",
+    "dist",
+    "target",
+    ".tox",
+    ".pytest_cache",
+}
+
+
+def find_git_repositories(start_path: str, max_depth: int = 5) -> List[str]:
+    """
+    Recursively find all git repositories starting from the given path.
+
+    Args:
+        start_path: Directory to start searching from
+        max_depth: Maximum depth to search (prevents excessive recursion)
+
+    Returns:
+        List of absolute paths to git repositories
+    """
+    repositories = []
+    start_path = Path(start_path).resolve()
+
+    def _find_repos(current_path: Path, current_depth: int):
+        if current_depth > max_depth:
+            return
+
+        try:
+            # Check if current directory is a git repo
+            git_dir = current_path / ".git"
+            if git_dir.exists() and git_dir.is_dir():
+                repositories.append(str(current_path))
+                # Don't search inside .git directory
+                return
+
+            # Search subdirectories
+            for item in current_path.iterdir():
+                if item.is_dir() and not item.name.startswith("."):
+                    # Skip common non-code directories
+                    if item.name in IGNORED_DIRS:
+                        continue
+                    _find_repos(item, current_depth + 1)
+
+        except PermissionError:
+            # Skip directories we can't access
+            pass
+
+    _find_repos(start_path, 0)
+    return sorted(repositories)
+
+
+def run_git_command(repo_path: str, command: List[str]) -> Tuple[bool, str]:
+    """
+    Run a git command in the specified repository.
+
+    Args:
+        repo_path: Path to the git repository
+        command: Git command as a list of arguments
+
+    Returns:
+        Tuple of (success, output/error)
+    """
+    try:
+        result = subprocess.run(
+            ["git"] + command, cwd=repo_path, capture_output=True, text=True, timeout=30
+        )
+
+        if result.returncode == 0:
+            return True, result.stdout
+        else:
+            return False, result.stderr
+
+    except subprocess.TimeoutExpired:
+        return False, "Command timed out"
+    except Exception as e:
+        return False, str(e)
+
+
+def get_git_status(repo_path: str) -> Dict[str, Any]:
+    """
+    Get the current git status of a repository.
+
+    Args:
+        repo_path: Path to the git repository
+
+    Returns:
+        Dictionary with status information
+    """
+    status = {
+        "branch": "",
+        "ahead": 0,
+        "behind": 0,
+        "staged_files": [],
+        "unstaged_files": [],
+        "untracked_files": [],
+    }
+
+    # Get current branch
+    success, branch = run_git_command(repo_path, ["branch", "--show-current"])
+    if success:
+        status["branch"] = branch.strip()
+
+    # Get ahead/behind info
+    if status["branch"]:
+        success, ahead_behind = run_git_command(
+            repo_path,
+            [
+                "rev-list",
+                "--count",
+                "--left-right",
+                f'{status["branch"]}@{{upstream}}...HEAD',
+            ],
+        )
+        if success:
+            if ahead_behind.strip():
+                parts = ahead_behind.strip().split()
+                if len(parts) == 2:
+                    status["behind"] = int(parts[0])
+                    status["ahead"] = int(parts[1])
+        # else: Could not get ahead/behind status (branch may not have upstream)
+
+    # Get file status
+    success, status_output = run_git_command(repo_path, ["status", "--porcelain"])
+    if success:
+        for line in status_output.strip().split("\n"):
+            if not line:
+                continue
+
+            status_code = line[:2]
+            path_info = line[3:]
+
+            # Handle staged changes
+            if status_code[0] == "R":
+                # Format is "old_path -> new_path" for renamed files
+                if " -> " in path_info:
+                    _, new_path = path_info.split(" -> ", 1)
+                    status["staged_files"].append(new_path)
+                else:
+                    status["staged_files"].append(path_info)
+            elif status_code[0] in ["M", "A", "D", "C"]:
+                status["staged_files"].append(path_info)
+
+            # Handle unstaged changes
+            if status_code[1] in ["M", "D"]:
+                status["unstaged_files"].append(path_info)
+            elif status_code == "??":
+                status["untracked_files"].append(path_info)
+
+    return status
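A short illustration of how the three helpers compose (the workspace path is hypothetical):

```python
from utils.git_utils import find_git_repositories, get_git_status, run_git_command

for repo in find_git_repositories("/home/user/workspace", max_depth=3):
    status = get_git_status(repo)
    print(f"{repo}: {status['branch']} (+{status['ahead']}/-{status['behind']})")

    # run_git_command returns (True, stdout) on success, (False, stderr) otherwise.
    ok, stat = run_git_command(repo, ["diff", "--stat"])
    if ok and stat.strip():
        print(stat)
```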