New tool: "tracer" helps with static analysis / call-flow generation. Does NOT use external models. Used as a quick prompt generator to aid in call-flow / dependency-chart generation. Can be used as an input into another tool / model for extended analysis and deeper thought.

Faster docker restarts
This commit is contained in:
Fahad
2025-06-15 18:42:10 +04:00
parent 6f8d3059a1
commit dfed6f0cbd
6 changed files with 490 additions and 858 deletions

View File

@@ -53,7 +53,8 @@ and review into consideration to aid with its pre-commit review.
- [`debug`](#5-debug---expert-debugging-assistant) - Debugging help
- [`analyze`](#6-analyze---smart-file-analysis) - File analysis
- [`refactor`](#7-refactor---intelligent-code-refactoring) - Code refactoring with decomposition focus
- [`testgen`](#8-testgen---comprehensive-test-generation) - Test generation with edge cases
- [`tracer`](#8-tracer---static-code-analysis-prompt-generator) - Call-flow mapping and dependency tracing
- [`testgen`](#9-testgen---comprehensive-test-generation) - Test generation with edge cases
- [`your custom tool`](#add-your-own-tools) - Create custom tools for specialized workflows
- **Advanced Usage**
@@ -261,8 +262,9 @@ Just ask Claude naturally:
- **Pre-commit validation?** → `precommit` (validate git changes before committing)
- **Something's broken?** → `debug` (root cause analysis, error tracing)
- **Want to understand code?** → `analyze` (architecture, patterns, dependencies)
- **Need comprehensive tests?** → `testgen` (generates test suites with edge cases)
- **Code needs refactoring?** → `refactor` (intelligent refactoring with decomposition focus)
- **Need call-flow analysis?** → `tracer` (generates prompts for execution tracing and dependency mapping)
- **Need comprehensive tests?** → `testgen` (generates test suites with edge cases)
- **Server info?** → `version` (version and configuration details)
**Auto Mode:** When `DEFAULT_MODEL=auto`, Claude automatically picks the best model for each task. You can override with: "Use flash for quick analysis" or "Use o3 to debug this".
@@ -284,8 +286,9 @@ Just ask Claude naturally:
5. [`debug`](#5-debug---expert-debugging-assistant) - Root cause analysis and debugging
6. [`analyze`](#6-analyze---smart-file-analysis) - General-purpose file and code analysis
7. [`refactor`](#7-refactor---intelligent-code-refactoring) - Code refactoring with decomposition focus
8. [`testgen`](#8-testgen---comprehensive-test-generation) - Comprehensive test generation with edge case coverage
9. [`version`](#9-version---server-information) - Get server version and configuration
8. [`tracer`](#8-tracer---static-code-analysis-prompt-generator) - Static code analysis prompt generator for call-flow mapping
9. [`testgen`](#9-testgen---comprehensive-test-generation) - Comprehensive test generation with edge case coverage
10. [`version`](#10-version---server-information) - Get server version and configuration
### 1. `chat` - General Development Chat & Collaborative Thinking
**Your thinking partner - bounce ideas, get second opinions, brainstorm collaboratively**
@@ -507,7 +510,32 @@ did *not* discover.
**Progressive Analysis:** The tool performs a top-down check (worse → bad → better) and refuses to work on lower-priority issues if critical decomposition is needed first. It understands that massive files and classes create cognitive overload that must be addressed before detail work can be effective. Legacy code that cannot be safely decomposed is handled with higher tolerance thresholds and context-sensitive exemptions.
### 8. `testgen` - Comprehensive Test Generation
### 8. `tracer` - Static Code Analysis Prompt Generator
**Creates detailed analysis prompts for call-flow mapping and dependency tracing**
This is a specialized prompt-generation tool that creates structured analysis requests for Claude to perform comprehensive static code analysis.
Rather than passing entire projects to another model, this tool generates focused prompts that
Claude can use to efficiently trace execution flows and map dependencies within the codebase.
**Two Analysis Modes:**
- **`precision`**: For methods/functions - traces execution flow, call chains, and usage patterns with detailed branching analysis and side effects
- **`dependencies`**: For classes/modules/protocols - maps bidirectional dependencies and structural relationships
**Key Features:**
- Generates comprehensive analysis prompts instead of performing analysis directly
- Faster and more efficient than full project analysis by external models
- Creates structured instructions for call-flow graph generation
- Provides detailed formatting requirements for consistent output
- Supports any programming language with automatic convention detection
- Output can be used as input to another tool, such as `chat`, along with the related code files to perform a logical call-flow analysis
#### Example Prompts:
```
"Use zen tracer to analyze how UserAuthManager.authenticate is used and why" -> uses `precision` mode
"Use zen to generate a dependency trace for the PaymentProcessor class to understand its relationships" -> uses `dependencies` mode
```
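Although the tool is normally invoked by Claude over MCP, the sketch below shows the tool boundary directly. It is a minimal illustration based on the `TracerTool.execute()` interface added in this commit; the import path, the example prompt text, and the top-level script wrapper are assumptions for illustration only.

```python
import asyncio

from tools.tracer import TracerTool  # assumed import path for the module added in this commit


async def main():
    tool = TracerTool()
    # Only "prompt" and "trace_mode" are required by the input schema in this commit.
    result = await tool.execute(
        {
            "prompt": (
                "I need to understand how BookingManager.finalizeInvoice is called "
                "throughout the system and what side effects it has"
            ),
            "trace_mode": "precision",  # or "dependencies" for classes/modules/protocols
        }
    )
    # The tool returns MCP TextContent objects; the text payload contains the generated
    # analysis prompt and rendering instructions, ready to be fed back to Claude with
    # the relevant code files.
    print(result[0].text)


asyncio.run(main())
```

Because the tracer does not call an external model, the returned text is prompt scaffolding and rendering instructions rather than a finished analysis; the actual tracing is then performed by Claude with the relevant files.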
### 9. `testgen` - Comprehensive Test Generation
**Generates thorough test suites with edge case coverage** based on existing code and the test framework in use.
**Thinking Mode (Extended thinking models):** Default is `medium` (8,192 tokens). Use `high` for complex systems with many interactions or `max` for critical systems requiring exhaustive test coverage.
@@ -535,7 +563,7 @@ suites that cover realistic failure scenarios and integration points that shorte
- Can reference existing test files: `"Generate tests following patterns from tests/unit/"`
- Specific code coverage - target specific functions/classes rather than testing everything
### 9. `version` - Server Information
### 10. `version` - Server Information
```
"Get zen to show its version"
```

View File

@@ -3,16 +3,12 @@ services:
image: redis:7-alpine
container_name: zen-mcp-redis
restart: unless-stopped
stop_grace_period: 3s
ports:
- "6379:6379"
volumes:
- redis_data:/data
command: redis-server --save 60 1 --loglevel warning --maxmemory 64mb --maxmemory-policy allkeys-lru
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 3s
retries: 3
deploy:
resources:
limits:
@@ -25,9 +21,9 @@ services:
image: zen-mcp-server:latest
container_name: zen-mcp-server
restart: unless-stopped
stop_grace_period: 5s
depends_on:
redis:
condition: service_healthy
- redis
environment:
- GEMINI_API_KEY=${GEMINI_API_KEY:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
@@ -69,6 +65,7 @@ services:
image: zen-mcp-server:latest
container_name: zen-mcp-log-monitor
restart: unless-stopped
stop_grace_period: 3s
depends_on:
- zen-mcp
environment:

View File

@@ -10,7 +10,6 @@ from .precommit_prompt import PRECOMMIT_PROMPT
from .refactor_prompt import REFACTOR_PROMPT
from .testgen_prompt import TESTGEN_PROMPT
from .thinkdeep_prompt import THINKDEEP_PROMPT
from .tracer_prompt import TRACER_PROMPT
__all__ = [
"THINKDEEP_PROMPT",
@@ -21,5 +20,4 @@ __all__ = [
"PRECOMMIT_PROMPT",
"REFACTOR_PROMPT",
"TESTGEN_PROMPT",
"TRACER_PROMPT",
]

View File

@@ -1,169 +0,0 @@
"""
Tracer tool system prompt
"""
TRACER_PROMPT = """
ROLE
You are a principal software analysis engine. You examine source code across a multi-language repository and statically analyze the behavior of a method, function, or class.
Your task is to return either a full **execution flow trace** (`precision`) or a **bidirectional dependency map** (`dependencies`) based solely on code — never speculation.
You must respond in strict JSON that Claude (the receiving model) can use to visualize, query, and validate.
CRITICAL: You MUST respond ONLY in valid JSON format. NO explanations, introductions, or text outside JSON structure.
Claude cannot parse your response if you include any non-JSON content.
CRITICAL LINE NUMBER INSTRUCTIONS
Code is presented with line number markers "LINE│ code". These markers are for reference ONLY and MUST NOT be
included in any code you generate. Always reference specific line numbers for Claude to locate exact positions.
Include context_start_text and context_end_text as backup references. Never include "LINE│" markers in generated code
snippets.
TRACE MODES
1. **precision** Follow the actual code path from a given method across functions, classes, and modules.
Resolve method calls, branching, type dispatch, and potential side effects. If parameters are provided, use them to resolve branching; if not, flag ambiguous paths.
2. **dependencies** Analyze all dependencies flowing into and out from the method/class, including method calls, state usage, class-level imports, and inheritance.
Show both **incoming** (what uses this) and **outgoing** (what it uses) connections.
INPUT FORMAT
You will receive:
- Method/class name
- Code with File Names
- Optional parameters (used only in precision mode)
IF MORE INFORMATION IS NEEDED OR CONTEXT IS MISSING
If you cannot analyze accurately, respond ONLY with this JSON (and ABSOLUTELY nothing else - no text before or after).
Do NOT ask for the same file you've been provided unless its content is missing or incomplete:
{"status": "clarification_required", "question": "<your brief question>", "files_needed": ["[file name here]", "[or some folder/]"]}
OUTPUT FORMAT
Respond ONLY with the following JSON format depending on the trace mode.
MODE: precision
EXPECTED OUTPUT:
{
"status": "trace_complete",
"trace_type": "precision",
"entry_point": {
"file": "/absolute/path/to/file.ext",
"class_or_struct": "ClassOrModuleName",
"method": "methodName",
"signature": "func methodName(param1: Type1, param2: Type2) -> ReturnType",
"parameters": {
"param1": "value_or_type",
"param2": "value_or_type"
}
},
"call_path": [
{
"from": {
"file": "/file/path",
"class": "ClassName",
"method": "methodName",
"line": 42
},
"to": {
"file": "/file/path",
"class": "ClassName",
"method": "calledMethod",
"line": 123
},
"reason": "direct call / protocol dispatch / conditional branch",
"condition": "if param.isEnabled", // null if unconditional
"ambiguous": false
}
],
"branching_points": [
{
"file": "/file/path",
"method": "methodName",
"line": 77,
"condition": "if user.role == .admin",
"branches": ["audit()", "restrict()"],
"ambiguous": true
}
],
"side_effects": [
{
"type": "database|network|filesystem|state|log|ui|external",
"description": "calls remote endpoint / modifies user record",
"file": "/file/path",
"method": "methodName",
"line": 88
}
],
"unresolved": [
{
"reason": "param.userRole not provided",
"affected_file": "/file/path",
"line": 77
}
]
}
MODE: dependencies
EXPECTED OUTPUT:
{
"status": "trace_complete",
"trace_type": "dependencies",
"target": {
"file": "/absolute/path/to/file.ext",
"class_or_struct": "ClassOrModuleName",
"method": "methodName",
"signature": "func methodName(param1: Type1, param2: Type2) -> ReturnType"
},
"incoming_dependencies": [
{
"from_file": "/file/path",
"from_class": "CallingClass",
"from_method": "callerMethod",
"line": 101,
"type": "direct_call|protocol_impl|event_handler|override|reflection"
}
],
"outgoing_dependencies": [
{
"to_file": "/file/path",
"to_class": "DependencyClass",
"to_method": "calledMethod",
"line": 57,
"type": "method_call|instantiates|uses_constant|reads_property|writes_property|network|db|log"
}
],
"type_dependencies": [
{
"dependency_type": "extends|implements|conforms_to|uses_generic|imports",
"source_file": "/file/path",
"source_entity": "ClassOrStruct",
"target": "TargetProtocolOrClass"
}
],
"state_access": [
{
"file": "/file/path",
"method": "methodName",
"access_type": "reads|writes|mutates|injects",
"state_entity": "user.balance"
}
]
}
RULES
- All data must come from the actual codebase. No invented paths or method guesses.
- If parameters are missing in precision mode, include all possible branches and mark them "ambiguous": true.
- Use full file paths, class names, method names, and line numbers exactly as they appear.
- Use the "reason" field to explain why the call or dependency exists.
- In dependencies mode, the incoming_dependencies list may be empty if nothing in the repo currently calls the target.
GOAL
Enable Claude and the user to clearly visualize how a method:
- Flows across the system (in precision mode)
- Connects with other classes and modules (in dependencies mode)
FINAL REMINDER: CRITICAL OUTPUT FORMAT ENFORCEMENT
Your response MUST start with "{" and end with "}". NO other text is allowed.
If you include ANY text outside the JSON structure, Claude will be unable to parse your response and the tool will fail.
DO NOT provide explanations, introductions, conclusions, or reasoning outside the JSON.
ALL information must be contained within the JSON structure itself.
"""

View File

@@ -2,8 +2,6 @@
Tests for the tracer tool functionality
"""
from unittest.mock import Mock, patch
import pytest
from tools.models import ToolModelCategory
@@ -18,47 +16,6 @@ class TestTracerTool:
"""Create a tracer tool instance for testing"""
return TracerTool()
@pytest.fixture
def mock_model_response(self):
"""Create a mock model response for call path analysis"""
def _create_response(content=None):
if content is None:
content = """## Call Path Summary
1. 🟢 `BookingManager::finalizeInvoice()` at booking.py:45 → calls `PaymentProcessor.process()`
2. 🟢 `PaymentProcessor::process()` at payment.py:123 → calls `validation.validate_payment()`
3. 🟡 `validation.validate_payment()` at validation.py:67 → conditionally calls `Logger.log()`
## Value-Driven Flow Analysis
**Scenario 1**: `invoice_id=123, payment_method="credit_card"`
- Path: BookingManager → PaymentProcessor → CreditCardValidator → StripeGateway
- Key decision at payment.py:156: routes to Stripe integration
## Side Effects & External Dependencies
### Database Interactions
- **Transaction.save()** at models.py:234 → inserts payment record
### Network Calls
- **StripeGateway.charge()** → HTTPS POST to Stripe API
## Code Anchors
- Entry point: `BookingManager::finalizeInvoice` at booking.py:45
- Critical branch: Payment method selection at payment.py:156
"""
return Mock(
content=content,
usage={"input_tokens": 150, "output_tokens": 300, "total_tokens": 450},
model_name="test-model",
metadata={"finish_reason": "STOP"},
)
return _create_response
def test_get_name(self, tracer_tool):
"""Test that the tool returns the correct name"""
assert tracer_tool.get_name() == "tracer"
@@ -66,355 +23,266 @@ class TestTracerTool:
def test_get_description(self, tracer_tool):
"""Test that the tool returns a comprehensive description"""
description = tracer_tool.get_description()
assert "STATIC CODE ANALYSIS" in description
assert "execution flow" in description
assert "dependency mappings" in description
assert "ANALYSIS PROMPT GENERATOR" in description
assert "precision" in description
assert "dependencies" in description
assert "static code analysis" in description
def test_get_input_schema(self, tracer_tool):
"""Test that the input schema includes all required fields"""
"""Test that the input schema includes required fields"""
schema = tracer_tool.get_input_schema()
assert schema["type"] == "object"
assert "prompt" in schema["properties"]
assert "files" in schema["properties"]
assert "trace_mode" in schema["properties"]
# Check required fields
required_fields = schema["required"]
assert "prompt" in required_fields
assert "files" in required_fields
assert "trace_mode" in required_fields
# Check trace_mode enum values
trace_enum = schema["properties"]["trace_mode"]["enum"]
assert "precision" in trace_enum
assert "dependencies" in trace_enum
# Check enum values for trace_mode
trace_mode_enum = schema["properties"]["trace_mode"]["enum"]
assert "precision" in trace_mode_enum
assert "dependencies" in trace_mode_enum
# Check required fields
assert set(schema["required"]) == {"prompt", "trace_mode"}
def test_get_model_category(self, tracer_tool):
"""Test that the tool uses extended reasoning category"""
"""Test that the tracer tool uses FAST_RESPONSE category"""
category = tracer_tool.get_model_category()
assert category == ToolModelCategory.EXTENDED_REASONING
assert category == ToolModelCategory.FAST_RESPONSE
def test_request_model_validation(self):
"""Test request model validation"""
def test_request_model_validation(self, tracer_tool):
"""Test TracerRequest model validation"""
# Valid request
request = TracerRequest(
prompt="Trace BookingManager::finalizeInvoice method with invoice_id=123",
files=["/test/booking.py", "/test/payment.py"],
prompt="BookingManager finalizeInvoice method",
trace_mode="precision",
)
assert request.prompt == "Trace BookingManager::finalizeInvoice method with invoice_id=123"
assert len(request.files) == 2
assert request.prompt == "BookingManager finalizeInvoice method"
assert request.trace_mode == "precision"
# Invalid request (missing required fields)
# Test invalid trace_mode
with pytest.raises(ValueError):
TracerRequest(files=["/test/file.py"]) # Missing prompt and trace_mode
# Invalid trace_mode value
with pytest.raises(ValueError):
TracerRequest(prompt="Test", files=["/test/file.py"], trace_mode="invalid_type")
def test_language_detection_python(self, tracer_tool):
"""Test language detection for Python files"""
files = ["/test/booking.py", "/test/payment.py", "/test/utils.py"]
language = tracer_tool.detect_primary_language(files)
assert language == "python"
def test_language_detection_javascript(self, tracer_tool):
"""Test language detection for JavaScript files"""
files = ["/test/app.js", "/test/component.jsx", "/test/utils.js"]
language = tracer_tool.detect_primary_language(files)
assert language == "javascript"
def test_language_detection_typescript(self, tracer_tool):
"""Test language detection for TypeScript files"""
files = ["/test/app.ts", "/test/component.tsx", "/test/utils.ts"]
language = tracer_tool.detect_primary_language(files)
assert language == "typescript"
def test_language_detection_csharp(self, tracer_tool):
"""Test language detection for C# files"""
files = ["/test/BookingService.cs", "/test/PaymentProcessor.cs"]
language = tracer_tool.detect_primary_language(files)
assert language == "csharp"
def test_language_detection_java(self, tracer_tool):
"""Test language detection for Java files"""
files = ["/test/BookingManager.java", "/test/PaymentService.java"]
language = tracer_tool.detect_primary_language(files)
assert language == "java"
def test_language_detection_mixed(self, tracer_tool):
"""Test language detection for mixed language files"""
files = ["/test/app.py", "/test/service.js", "/test/model.java"]
language = tracer_tool.detect_primary_language(files)
assert language == "mixed"
def test_language_detection_unknown(self, tracer_tool):
"""Test language detection for unknown extensions"""
files = ["/test/config.xml", "/test/readme.txt"]
language = tracer_tool.detect_primary_language(files)
assert language == "unknown"
# Removed parse_entry_point tests as method no longer exists in simplified interface
TracerRequest(
prompt="Test",
trace_mode="invalid_mode",
)
@pytest.mark.asyncio
async def test_prepare_prompt_basic(self, tracer_tool):
"""Test basic prompt preparation"""
request = TracerRequest(
prompt="Trace BookingManager::finalizeInvoice method with invoice_id=123",
files=["/test/booking.py"],
trace_mode="precision",
)
async def test_execute_precision_mode(self, tracer_tool):
"""Test executing tracer with precision mode"""
request_args = {
"prompt": "BookingManager finalizeInvoice method",
"trace_mode": "precision",
}
# Mock file content preparation
with patch.object(tracer_tool, "_prepare_file_content_for_prompt") as mock_prep:
mock_prep.return_value = "def finalizeInvoice(self, invoice_id):\n pass"
with patch.object(tracer_tool, "check_prompt_size") as mock_check:
mock_check.return_value = None
prompt = await tracer_tool.prepare_prompt(request)
result = await tracer_tool.execute(request_args)
assert "ANALYSIS REQUEST" in prompt
assert "Trace BookingManager::finalizeInvoice method" in prompt
assert len(result) == 1
output = result[0]
assert output.type == "text"
# Check content includes expected sections
content = output.text
assert "Enhanced Analysis Prompt" in content
assert "Analysis Instructions" in content
assert "BookingManager finalizeInvoice method" in content
assert "precision" in content
assert "CALL FLOW DIAGRAM" in content
@pytest.mark.asyncio
async def test_execute_dependencies_mode(self, tracer_tool):
"""Test executing tracer with dependencies mode"""
request_args = {
"prompt": "payment processing flow",
"trace_mode": "dependencies",
}
result = await tracer_tool.execute(request_args)
assert len(result) == 1
output = result[0]
assert output.type == "text"
# Check content includes expected sections
content = output.text
assert "Enhanced Analysis Prompt" in content
assert "payment processing flow" in content
assert "dependencies" in content
assert "DEPENDENCY FLOW DIAGRAM" in content
def test_create_enhanced_prompt_precision(self, tracer_tool):
"""Test enhanced prompt creation for precision mode"""
prompt = tracer_tool._create_enhanced_prompt("BookingManager::finalizeInvoice", "precision")
assert "STATIC CODE ANALYSIS REQUEST" in prompt
assert "BookingManager::finalizeInvoice" in prompt
assert "precision" in prompt
assert "CODE TO ANALYZE" in prompt
assert "execution path" in prompt
assert "method calls" in prompt
assert "line numbers" in prompt
@pytest.mark.asyncio
async def test_prepare_prompt_with_dependencies(self, tracer_tool):
"""Test prompt preparation with dependencies type"""
request = TracerRequest(
prompt="Analyze dependencies for payment.process_payment function with amount=100.50",
files=["/test/payment.py"],
trace_mode="dependencies",
)
def test_create_enhanced_prompt_dependencies(self, tracer_tool):
"""Test enhanced prompt creation for dependencies mode"""
prompt = tracer_tool._create_enhanced_prompt("validation function", "dependencies")
with patch.object(tracer_tool, "_prepare_file_content_for_prompt") as mock_prep:
mock_prep.return_value = "def process_payment(amount, method):\n pass"
with patch.object(tracer_tool, "check_prompt_size") as mock_check:
mock_check.return_value = None
prompt = await tracer_tool.prepare_prompt(request)
assert "STATIC CODE ANALYSIS REQUEST" in prompt
assert "validation function" in prompt
assert "dependencies" in prompt
assert "bidirectional dependencies" in prompt
assert "incoming" in prompt
assert "outgoing" in prompt
assert "Analyze dependencies for payment.process_payment" in prompt
assert "Trace Mode: dependencies" in prompt
def test_get_rendering_instructions_precision(self, tracer_tool):
"""Test rendering instructions for precision mode"""
instructions = tracer_tool._get_rendering_instructions("precision")
@pytest.mark.asyncio
async def test_prepare_prompt_with_security_context(self, tracer_tool):
"""Test prompt preparation with security context"""
request = TracerRequest(
prompt="Trace UserService::authenticate method focusing on security implications and potential vulnerabilities",
files=["/test/auth.py"],
trace_mode="precision",
)
assert "PRECISION TRACE" in instructions
assert "CALL FLOW DIAGRAM" in instructions
assert "ADDITIONAL ANALYSIS VIEWS" in instructions
assert "ClassName::MethodName" in instructions
assert "" in instructions
with patch.object(tracer_tool, "_prepare_file_content_for_prompt") as mock_prep:
mock_prep.return_value = "def authenticate(self, username, password):\n pass"
with patch.object(tracer_tool, "check_prompt_size") as mock_check:
mock_check.return_value = None
prompt = await tracer_tool.prepare_prompt(request)
def test_get_rendering_instructions_dependencies(self, tracer_tool):
"""Test rendering instructions for dependencies mode"""
instructions = tracer_tool._get_rendering_instructions("dependencies")
assert "security implications and potential vulnerabilities" in prompt
assert "Trace Mode: precision" in prompt
def test_format_response_precision(self, tracer_tool):
"""Test response formatting for precision trace"""
request = TracerRequest(
prompt="Trace BookingManager::finalizeInvoice method", files=["/test/booking.py"], trace_mode="precision"
)
response = '{"status": "trace_complete", "trace_type": "precision"}'
model_info = {"model_response": Mock(friendly_name="Gemini Pro")}
formatted = tracer_tool.format_response(response, request, model_info)
assert response in formatted
assert "Analysis Complete" in formatted
assert "Gemini Pro" in formatted
assert "precision analysis" in formatted
assert "CALL FLOW DIAGRAM" in formatted
assert "BRANCHING & SIDE EFFECT TABLE" in formatted
def test_format_response_dependencies(self, tracer_tool):
"""Test response formatting for dependencies trace"""
request = TracerRequest(
prompt="Analyze dependencies for payment.process function",
files=["/test/payment.py"],
trace_mode="dependencies",
)
response = '{"status": "trace_complete", "trace_type": "dependencies"}'
formatted = tracer_tool.format_response(response, request)
assert response in formatted
assert "dependencies analysis" in formatted
assert "DEPENDENCY FLOW GRAPH" in formatted
assert "DEPENDENCY TABLE" in formatted
# Removed PlantUML test as export_format is no longer a parameter
def test_get_default_temperature(self, tracer_tool):
"""Test that the tool uses analytical temperature"""
from config import TEMPERATURE_ANALYTICAL
assert tracer_tool.get_default_temperature() == TEMPERATURE_ANALYTICAL
def test_wants_line_numbers_by_default(self, tracer_tool):
"""Test that line numbers are enabled by default"""
# The base class should enable line numbers by default for precise references
# We test that this isn't overridden to disable them
assert hasattr(tracer_tool, "wants_line_numbers_by_default")
def test_trace_mode_validation(self):
"""Test trace mode validation"""
# Valid trace modes
request1 = TracerRequest(prompt="Test precision", files=["/test/file.py"], trace_mode="precision")
assert request1.trace_mode == "precision"
request2 = TracerRequest(prompt="Test dependencies", files=["/test/file.py"], trace_mode="dependencies")
assert request2.trace_mode == "dependencies"
# Invalid trace mode should raise ValidationError
with pytest.raises(ValueError):
TracerRequest(prompt="Test", files=["/test/file.py"], trace_mode="invalid_type")
def test_get_rendering_instructions(self, tracer_tool):
"""Test the main rendering instructions dispatcher method"""
# Test precision mode
precision_instructions = tracer_tool._get_rendering_instructions("precision")
assert "MANDATORY RENDERING INSTRUCTIONS FOR PRECISION TRACE" in precision_instructions
assert "CALL FLOW DIAGRAM" in precision_instructions
assert "BRANCHING & SIDE EFFECT TABLE" in precision_instructions
# Test dependencies mode
dependencies_instructions = tracer_tool._get_rendering_instructions("dependencies")
assert "MANDATORY RENDERING INSTRUCTIONS FOR DEPENDENCIES TRACE" in dependencies_instructions
assert "DEPENDENCY FLOW GRAPH" in dependencies_instructions
assert "DEPENDENCY TABLE" in dependencies_instructions
assert "DEPENDENCIES TRACE" in instructions
assert "DEPENDENCY FLOW DIAGRAM" in instructions
assert "DEPENDENCY TABLE" in instructions
assert "INCOMING DEPENDENCIES" in instructions
assert "OUTGOING DEPENDENCIES" in instructions
assert "" in instructions
assert "" in instructions
def test_get_precision_rendering_instructions(self, tracer_tool):
"""Test precision mode rendering instructions"""
"""Test precision rendering instructions content"""
instructions = tracer_tool._get_precision_rendering_instructions()
# Check for required sections
assert "MANDATORY RENDERING INSTRUCTIONS FOR PRECISION TRACE" in instructions
assert "1. CALL FLOW DIAGRAM (TOP-DOWN)" in instructions
assert "2. BRANCHING & SIDE EFFECT TABLE" in instructions
# Check for specific formatting requirements
assert "[Class::Method] (file: /path, line: ##)" in instructions
assert "Chain each call using ↓ or → for readability" in instructions
assert "If ambiguous, mark with `⚠️ ambiguous branch`" in instructions
assert "Side Effects:" in instructions
assert "[database] description (File.ext:##)" in instructions
# Check for critical rules
assert "CRITICAL RULES:" in instructions
assert "Use exact filenames, class names, and line numbers from JSON" in instructions
assert "DO NOT invent function names or examples" in instructions
assert "MANDATORY RENDERING INSTRUCTIONS" in instructions
assert "ADDITIONAL ANALYSIS VIEWS" in instructions
assert "CALL FLOW DIAGRAM" in instructions
assert "line number" in instructions
assert "ambiguous branch" in instructions
assert "SIDE EFFECTS" in instructions
def test_get_dependencies_rendering_instructions(self, tracer_tool):
"""Test dependencies mode rendering instructions"""
"""Test dependencies rendering instructions content"""
instructions = tracer_tool._get_dependencies_rendering_instructions()
# Check for required sections
assert "MANDATORY RENDERING INSTRUCTIONS FOR DEPENDENCIES TRACE" in instructions
assert "1. DEPENDENCY FLOW GRAPH" in instructions
assert "2. DEPENDENCY TABLE" in instructions
# Check for specific formatting requirements
assert "Called by:" in instructions
assert "[CallerClass::callerMethod] ← /path/file.ext:##" in instructions
assert "Calls:" in instructions
assert "[Logger::logAction] → /utils/log.ext:##" in instructions
assert "Type Dependencies:" in instructions
assert "State Access:" in instructions
# Check for arrow rules
assert "`←` for incoming (who calls this)" in instructions
assert "`→` for outgoing (what this calls)" in instructions
# Check for dependency table format
assert "| Type | From/To | Method | File | Line |" in instructions
assert "| direct_call | From: CallerClass | callerMethod |" in instructions
# Check for critical rules
assert "CRITICAL RULES:" in instructions
assert "Use exact filenames, class names, and line numbers from JSON" in instructions
assert "Show directional dependencies with proper arrows" in instructions
def test_format_response_uses_private_methods(self, tracer_tool):
"""Test that format_response correctly uses the refactored private methods"""
# Test precision mode
precision_request = TracerRequest(prompt="Test precision", files=["/test/file.py"], trace_mode="precision")
precision_response = tracer_tool.format_response('{"test": "response"}', precision_request)
# Should contain precision-specific instructions
assert "CALL FLOW DIAGRAM" in precision_response
assert "BRANCHING & SIDE EFFECT TABLE" in precision_response
assert "precision analysis" in precision_response
# Test dependencies mode
dependencies_request = TracerRequest(
prompt="Test dependencies", files=["/test/file.py"], trace_mode="dependencies"
)
dependencies_response = tracer_tool.format_response('{"test": "response"}', dependencies_request)
# Should contain dependencies-specific instructions
assert "DEPENDENCY FLOW GRAPH" in dependencies_response
assert "DEPENDENCY TABLE" in dependencies_response
assert "dependencies analysis" in dependencies_response
assert "MANDATORY RENDERING INSTRUCTIONS" in instructions
assert "Bidirectional Arrow Flow Style" in instructions
assert "CallerClass::callerMethod" in instructions
assert "FirstDependency::method" in instructions
assert "TYPE RELATIONSHIPS" in instructions
assert "DEPENDENCY TABLE" in instructions
def test_rendering_instructions_consistency(self, tracer_tool):
"""Test that private methods return consistent instructions"""
# Get instructions through both paths
precision_direct = tracer_tool._get_precision_rendering_instructions()
precision_via_dispatcher = tracer_tool._get_rendering_instructions("precision")
"""Test that rendering instructions are consistent between modes"""
precision_instructions = tracer_tool._get_precision_rendering_instructions()
dependencies_instructions = tracer_tool._get_dependencies_rendering_instructions()
dependencies_direct = tracer_tool._get_dependencies_rendering_instructions()
dependencies_via_dispatcher = tracer_tool._get_rendering_instructions("dependencies")
# Both should have mandatory instructions
assert "MANDATORY RENDERING INSTRUCTIONS" in precision_instructions
assert "MANDATORY RENDERING INSTRUCTIONS" in dependencies_instructions
# Should be identical
assert precision_direct == precision_via_dispatcher
assert dependencies_direct == dependencies_via_dispatcher
# Both should have specific styling requirements
assert "ONLY" in precision_instructions
assert "ONLY" in dependencies_instructions
# Both should have absolute requirements
assert "ABSOLUTE REQUIREMENTS" in precision_instructions
assert "ABSOLUTE REQUIREMENTS" in dependencies_instructions
def test_rendering_instructions_completeness(self, tracer_tool):
"""Test that rendering instructions contain all required elements"""
precision_instructions = tracer_tool._get_precision_rendering_instructions()
dependencies_instructions = tracer_tool._get_dependencies_rendering_instructions()
"""Test that rendering instructions include all necessary elements"""
precision = tracer_tool._get_precision_rendering_instructions()
dependencies = tracer_tool._get_dependencies_rendering_instructions()
# Both should have mandatory sections
for instructions in [precision_instructions, dependencies_instructions]:
assert "MANDATORY RENDERING INSTRUCTIONS" in instructions
assert "You MUST render" in instructions
assert "exactly two views" in instructions
assert "CRITICAL RULES:" in instructions
assert "ALWAYS render both views unless data is missing" in instructions
assert "Use exact filenames, class names, and line numbers from JSON" in instructions
assert "DO NOT invent function names or examples" in instructions
# Precision mode should include call flow and additional analysis views
assert "CALL FLOW DIAGRAM" in precision
assert "ADDITIONAL ANALYSIS VIEWS" in precision
# Dependencies mode should include flow diagram and table
assert "DEPENDENCY FLOW DIAGRAM" in dependencies
assert "DEPENDENCY TABLE" in dependencies
def test_rendering_instructions_mode_specific_content(self, tracer_tool):
"""Test that each mode has unique content"""
precision_instructions = tracer_tool._get_precision_rendering_instructions()
dependencies_instructions = tracer_tool._get_dependencies_rendering_instructions()
"""Test that each mode has its specific content requirements"""
precision = tracer_tool._get_precision_rendering_instructions()
dependencies = tracer_tool._get_dependencies_rendering_instructions()
# Precision-specific content should not be in dependencies
assert "CALL FLOW DIAGRAM" in precision_instructions
assert "CALL FLOW DIAGRAM" not in dependencies_instructions
assert "BRANCHING & SIDE EFFECT TABLE" in precision_instructions
assert "BRANCHING & SIDE EFFECT TABLE" not in dependencies_instructions
# Precision-specific content
assert "USAGE POINTS" in precision
assert "ENTRY POINTS" in precision
# Dependencies-specific content should not be in precision
assert "DEPENDENCY FLOW GRAPH" in dependencies_instructions
assert "DEPENDENCY FLOW GRAPH" not in precision_instructions
assert "DEPENDENCY TABLE" in dependencies_instructions
assert "DEPENDENCY TABLE" not in precision_instructions
# Dependencies-specific content
assert "INCOMING DEPENDENCIES" in dependencies
assert "OUTGOING DEPENDENCIES" in dependencies
assert "Bidirectional Arrow" in dependencies
# Mode-specific symbols and patterns
assert "" in precision_instructions # Flow arrows
assert "" in dependencies_instructions # Incoming arrow
assert "" in dependencies_instructions # Outgoing arrow
assert "Side Effects:" in precision_instructions
assert "Called by:" in dependencies_instructions
@pytest.mark.asyncio
async def test_execute_returns_textcontent_format(self, tracer_tool):
"""Test that execute returns proper TextContent format for MCP protocol"""
from mcp.types import TextContent
request_args = {
"prompt": "test method analysis",
"trace_mode": "precision",
}
result = await tracer_tool.execute(request_args)
# Verify structure
assert isinstance(result, list)
assert len(result) == 1
# Verify TextContent format
output = result[0]
assert isinstance(output, TextContent)
assert hasattr(output, "type")
assert hasattr(output, "text")
assert output.type == "text"
assert isinstance(output.text, str)
assert len(output.text) > 0
@pytest.mark.asyncio
async def test_mcp_protocol_compatibility(self, tracer_tool):
"""Test that the tool output is compatible with MCP protocol expectations"""
request_args = {
"prompt": "analyze method dependencies",
"trace_mode": "dependencies",
}
result = await tracer_tool.execute(request_args)
# Should return list of TextContent objects
assert isinstance(result, list)
for item in result:
# Each item should be TextContent with required fields
assert hasattr(item, "type")
assert hasattr(item, "text")
# Verify it can be serialized (MCP requirement)
serialized = item.model_dump()
assert "type" in serialized
assert "text" in serialized
assert serialized["type"] == "text"
def test_mode_selection_guidance(self, tracer_tool):
"""Test that the schema provides clear guidance on when to use each mode"""
schema = tracer_tool.get_input_schema()
trace_mode_desc = schema["properties"]["trace_mode"]["description"]
# Should clearly indicate precision is for methods/functions
assert "methods/functions" in trace_mode_desc
assert "execution flow" in trace_mode_desc
assert "usage patterns" in trace_mode_desc
# Should clearly indicate dependencies is for classes/modules/protocols
assert "classes/modules/protocols" in trace_mode_desc
assert "structural relationships" in trace_mode_desc
# Should provide clear examples in prompt description
prompt_desc = schema["properties"]["prompt"]["description"]
assert "method" in prompt_desc and "precision mode" in prompt_desc
assert "class" in prompt_desc and "dependencies mode" in prompt_desc

View File

@@ -1,51 +1,32 @@
"""
Tracer tool - Static call path prediction and control flow analysis
Tracer tool - Prompt generator for static code analysis workflows
This tool analyzes code to predict and explain full call paths and control flow without executing code.
Given a method name, its owning class/module, and parameter combinations or runtime values, it predicts
the complete chain of method/function calls that would be triggered.
Key Features:
- Static call path prediction with confidence levels
- Polymorphism and dynamic dispatch analysis
- Value-driven flow analysis based on parameter combinations
- Side effects identification (database, network, filesystem)
- Branching analysis for conditional logic
- Hybrid AI-first approach with optional AST preprocessing for enhanced accuracy
This tool generates structured prompts and instructions for static code analysis.
It helps Claude create focused analysis requests and provides detailed rendering
instructions for visualizing call paths and dependency mappings.
"""
import logging
import os
from typing import Any, Literal, Optional
from typing import Any, Literal
from pydantic import Field
from config import TEMPERATURE_ANALYTICAL
from systemprompts import TRACER_PROMPT
from .base import BaseTool, ToolRequest
logger = logging.getLogger(__name__)
class TracerRequest(ToolRequest):
"""
Request model for the tracer tool.
This model defines the simplified parameters for static code analysis.
This model defines the parameters for generating analysis prompts.
"""
prompt: str = Field(
...,
description="Description of what to trace including method/function name and class/file context (e.g., 'Trace BookingManager::finalizeInvoice method' or 'Analyze dependencies for validate_input function in utils module')",
)
files: list[str] = Field(
...,
description="Code files or directories to analyze (must be absolute paths)",
description="Detailed description of what to trace and WHY you need this analysis. Include context about what you're trying to understand, debug, or analyze. For precision mode: describe the specific method/function and what aspect of its execution flow you need to understand. For dependencies mode: describe the class/module and what relationships you need to map. Example: 'I need to understand how BookingManager.finalizeInvoice method is called throughout the system and what side effects it has, as I'm debugging payment processing issues' rather than just 'BookingManager finalizeInvoice method'",
)
trace_mode: Literal["precision", "dependencies"] = Field(
...,
description="Trace mode: 'precision' (follows actual code execution path from entry point) or 'dependencies' (analyzes bidirectional dependency mapping showing what calls this target and what it calls)",
description="Trace mode: 'precision' (for methods/functions - shows execution flow and usage patterns) or 'dependencies' (for classes/modules/protocols - shows structural relationships)",
)
@@ -53,8 +34,8 @@ class TracerTool(BaseTool):
"""
Tracer tool implementation.
This tool analyzes code to predict static call paths and control flow without execution.
Uses a hybrid AI-first approach with optional AST preprocessing for enhanced accuracy.
This tool generates structured prompts and instructions for static code analysis.
It creates detailed requests and provides rendering instructions for Claude.
"""
def get_name(self) -> str:
@@ -62,278 +43,131 @@ class TracerTool(BaseTool):
def get_description(self) -> str:
return (
"STATIC CODE ANALYSIS - Analyzes code to provide either execution flow traces or dependency mappings without executing code. "
"Type 'precision': Follows the actual code path from a specified method/function, resolving calls, branching, and side effects. "
"Type 'dependencies': Analyzes bidirectional dependencies showing what calls the target and what it calls, including imports and inheritance. "
"Perfect for: understanding complex code flows, impact analysis, debugging assistance, architecture review. "
"Responds in structured JSON format for easy parsing and visualization. "
"Choose thinking_mode based on code complexity: 'medium' for standard analysis (default), "
"'high' for complex systems, 'max' for legacy codebases requiring deep analysis. "
"Note: If you're not currently using a top-tier model such as Opus 4 or above, these tools can provide enhanced capabilities."
"ANALYSIS PROMPT GENERATOR - Creates structured prompts for static code analysis. "
"Helps generate detailed analysis requests with specific method/function names, file paths, and component context. "
"Type 'precision': For methods/functions - traces execution flow, call chains, call stacks, and shows when/how they are used. "
"Type 'dependencies': For classes/modules/protocols - maps structural relationships and bidirectional dependencies. "
"Returns detailed instructions on how to perform the analysis and format the results. "
"Use this to create focused analysis requests that can be fed back to Claude with the appropriate code files. "
)
def get_input_schema(self) -> dict[str, Any]:
schema = {
return {
"type": "object",
"properties": {
"prompt": {
"type": "string",
"description": "Description of what to trace including method/function name and class/file context (e.g., 'Trace BookingManager::finalizeInvoice method' or 'Analyze dependencies for validate_input function in utils module')",
},
"files": {
"type": "array",
"items": {"type": "string"},
"description": "Code files or directories to analyze (must be absolute paths)",
"description": "Detailed description of what to trace and WHY you need this analysis. Include context about what you're trying to understand, debug, or analyze. For precision mode: describe the specific method/function and what aspect of its execution flow you need to understand. For dependencies mode: describe the class/module and what relationships you need to map. Example: 'I need to understand how BookingManager.finalizeInvoice method is called throughout the system and what side effects it has, as I'm debugging payment processing issues' rather than just 'BookingManager finalizeInvoice method'",
},
"trace_mode": {
"type": "string",
"enum": ["precision", "dependencies"],
"description": "Trace mode: 'precision' (follows actual code execution path from entry point) or 'dependencies' (analyzes bidirectional dependency mapping showing what calls this target and what it calls)",
},
"model": self.get_model_field_schema(),
"temperature": {
"type": "number",
"description": "Temperature (0-1, default 0.2 for analytical precision)",
"minimum": 0,
"maximum": 1,
},
"thinking_mode": {
"type": "string",
"enum": ["minimal", "low", "medium", "high", "max"],
"description": "Thinking depth: minimal (0.5% of model max), low (8%), medium (33%), high (67%), max (100% of model max)",
},
"use_websearch": {
"type": "boolean",
"description": "Enable web search for framework documentation and patterns",
"default": True,
},
"continuation_id": {
"type": "string",
"description": "Thread continuation ID for multi-turn conversations across tools",
"description": "Trace mode: 'precision' (for methods/functions - shows execution flow and usage patterns) or 'dependencies' (for classes/modules/protocols - shows structural relationships)",
},
},
"required": ["prompt", "files", "trace_mode"] + (["model"] if self.is_effective_auto_mode() else []),
"required": ["prompt", "trace_mode"],
}
return schema
def get_system_prompt(self) -> str:
return TRACER_PROMPT
def get_default_temperature(self) -> float:
return TEMPERATURE_ANALYTICAL
# Line numbers are enabled by default for precise code references
def get_model_category(self):
"""Tracer requires extended reasoning for complex flow analysis"""
"""Tracer is a simple prompt generator"""
from tools.models import ToolModelCategory
return ToolModelCategory.EXTENDED_REASONING
return ToolModelCategory.FAST_RESPONSE
def get_request_model(self):
return TracerRequest
def detect_primary_language(self, file_paths: list[str]) -> str:
"""
Detect the primary programming language from file extensions.
Args:
file_paths: List of file paths to analyze
Returns:
str: Detected language or "mixed" if multiple languages found
"""
# Language detection based on file extensions
language_extensions = {
"python": {".py", ".pyx", ".pyi"},
"javascript": {".js", ".jsx", ".mjs", ".cjs"},
"typescript": {".ts", ".tsx", ".mts", ".cts"},
"java": {".java"},
"csharp": {".cs"},
"cpp": {".cpp", ".cc", ".cxx", ".c", ".h", ".hpp"},
"go": {".go"},
"rust": {".rs"},
"swift": {".swift"},
"kotlin": {".kt", ".kts"},
"ruby": {".rb"},
"php": {".php"},
"scala": {".scala"},
}
# Count files by language
language_counts = {}
for file_path in file_paths:
extension = os.path.splitext(file_path.lower())[1]
for lang, exts in language_extensions.items():
if extension in exts:
language_counts[lang] = language_counts.get(lang, 0) + 1
break
if not language_counts:
return "unknown"
# Return most common language, or "mixed" if multiple languages
max_count = max(language_counts.values())
dominant_languages = [lang for lang, count in language_counts.items() if count == max_count]
if len(dominant_languages) == 1:
return dominant_languages[0]
else:
return "mixed"
def get_system_prompt(self) -> str:
"""Not used in this simplified tool."""
return ""
async def prepare_prompt(self, request: TracerRequest) -> str:
"""
Prepare the complete prompt for code analysis.
"""Not used in this simplified tool."""
return ""
This method combines:
- System prompt with analysis instructions
- User request and trace type
- File contents with line numbers
- Analysis parameters
async def execute(self, arguments: dict[str, Any]) -> list:
"""Generate analysis prompt and instructions."""
Args:
request: The validated tracer request
request = TracerRequest(**arguments)
Returns:
str: Complete prompt for the model
# Create enhanced prompt with specific instructions
enhanced_prompt = self._create_enhanced_prompt(request.prompt, request.trace_mode)
Raises:
ValueError: If the prompt exceeds token limits
"""
logger.info(
f"[TRACER] Preparing prompt for {request.trace_mode} trace analysis with {len(request.files)} files"
)
logger.debug(f"[TRACER] User request: {request.prompt[:100]}...")
# Check for prompt.txt in files
prompt_content, updated_files = self.handle_prompt_file(request.files)
# If prompt.txt was found, incorporate it into the request prompt
if prompt_content:
logger.debug("[TRACER] Found prompt.txt file, incorporating content")
request.prompt = prompt_content + "\n\n" + request.prompt
# Update request files list
if updated_files is not None:
logger.debug(f"[TRACER] Updated files list after prompt.txt processing: {len(updated_files)} files")
request.files = updated_files
# Check user input size at MCP transport boundary (before adding internal content)
size_check = self.check_prompt_size(request.prompt)
if size_check:
from tools.models import ToolOutput
raise ValueError(f"MCP_SIZE_CHECK:{ToolOutput(**size_check).model_dump_json()}")
# Detect primary language
primary_language = self.detect_primary_language(request.files)
logger.debug(f"[TRACER] Detected primary language: {primary_language}")
# Use centralized file processing logic for main code files (with line numbers enabled)
continuation_id = getattr(request, "continuation_id", None)
logger.debug(f"[TRACER] Preparing {len(request.files)} code files for analysis")
code_content, processed_files = self._prepare_file_content_for_prompt(request.files, continuation_id, "Code to analyze")
# Store processed files for conversation tracking
self._actually_processed_files = processed_files
if code_content:
from utils.token_utils import estimate_tokens
code_tokens = estimate_tokens(code_content)
logger.info(f"[TRACER] Code files embedded successfully: {code_tokens:,} tokens")
else:
logger.warning("[TRACER] No code content after file processing")
# Build the complete prompt
prompt_parts = []
# Add system prompt
prompt_parts.append(self.get_system_prompt())
# Add user request and analysis parameters
prompt_parts.append("\n=== ANALYSIS REQUEST ===")
prompt_parts.append(f"User Request: {request.prompt}")
prompt_parts.append(f"Trace Mode: {request.trace_mode}")
prompt_parts.append(f"Language: {primary_language}")
prompt_parts.append("=== END REQUEST ===")
# Add web search instruction if enabled
websearch_instruction = self.get_websearch_instruction(
getattr(request, "use_websearch", True),
f"""When analyzing code for {primary_language}, consider if searches for these would help:
- Framework-specific call patterns and lifecycle methods
- Language-specific dispatch mechanisms and polymorphism
- Common side-effect patterns for libraries used in the code
- Documentation for external APIs and services called
- Known design patterns that affect call flow""",
)
if websearch_instruction:
prompt_parts.append(websearch_instruction)
# Add main code to analyze
prompt_parts.append("\n=== CODE TO ANALYZE ===")
prompt_parts.append(code_content)
prompt_parts.append("=== END CODE ===")
# Add analysis instructions
prompt_parts.append(f"\nPlease perform a {request.trace_mode} trace analysis based on the user request.")
full_prompt = "\n".join(prompt_parts)
# Log final prompt statistics
from utils.token_utils import estimate_tokens
total_tokens = estimate_tokens(full_prompt)
logger.info(f"[TRACER] Complete prompt prepared: {total_tokens:,} tokens, {len(full_prompt):,} characters")
return full_prompt
def format_response(self, response: str, request: TracerRequest, model_info: Optional[dict] = None) -> str:
"""
Format the code analysis response with mode-specific rendering instructions.
The base tool handles structured response validation via SPECIAL_STATUS_MODELS,
so this method focuses on providing clear rendering instructions for Claude.
Args:
response: The raw analysis from the model
request: The original request for context
model_info: Optional dict with model metadata
Returns:
str: The response with mode-specific rendering instructions
"""
logger.debug(f"[TRACER] Formatting response for {request.trace_mode} trace analysis")
# Get the friendly model name
model_name = "the model"
if model_info and model_info.get("model_response"):
model_name = model_info["model_response"].friendly_name or "the model"
# Base tool will handle trace_complete JSON responses via SPECIAL_STATUS_MODELS
# No need for manual JSON parsing here
# Generate mode-specific rendering instructions
# Get rendering instructions
rendering_instructions = self._get_rendering_instructions(request.trace_mode)
# Create the complete response with rendering instructions
footer = f"""
---
# Create response with both the enhanced prompt and instructions
response_content = f"""THIS IS A STATIC CODE ANALYSIS REQUEST:
**Analysis Complete**: {model_name} has completed a {request.trace_mode} analysis as requested.
{enhanced_prompt}
## Analysis Instructions
{rendering_instructions}
**GENERAL REQUIREMENTS:**
- Follow the rendering instructions EXACTLY as specified above
- Use only the data provided in the JSON response
- Maintain exact formatting for readability
- Include file paths and line numbers as provided
- Do not add explanations or commentary outside the specified format"""
CRITICAL: Comprehensive Search and Call-Graph Generation:
First, think through, identify, and collect all relevant code, files, and declarations connected to the method, class, or module
in question:
return f"{response}{footer}"
- If you are unable to find the code or mentioned files, look for the relevant code in subfolders. If unsure, ask the user
to confirm location of folder / filename
- You MUST carry out this task using your own tools; do NOT delegate this to any other model
- DO NOT automatically use any zen tools (including zen:analyze, zen:debug, zen:chat, etc.) to perform this analysis.
- EXCEPTION: If files are very large or the codebase is too complex for direct analysis due to context limitations,
you may use zen tools with a larger context model to assist with analysis by passing only the relevant files
- Understand carefully and fully how this code is used, what it depends on, and what other parts of the system depend on it
- Think through what other components or services are affected by this code's execution — directly or indirectly.
- Consider what happens when the code succeeds or fails, and what ripple effects a change to it would cause.
Finally, present your output in a clearly structured format, following rendering guidelines exactly.
IMPORTANT: If you are using this tool in conjunction with other work, or another tool or checklist item must be completed
immediately afterwards, do not stop after displaying your output; proceed directly to your next step.
"""
from mcp.types import TextContent
return [TextContent(type="text", text=response_content)]
def _create_enhanced_prompt(self, original_prompt: str, trace_mode: str) -> str:
"""Create an enhanced, specific prompt for analysis."""
mode_guidance = {
"precision": "Follow the exact execution path from the specified method/function, including all method calls, branching logic, and side effects. Track the complete flow from entry point through all called functions. Show when and how this method/function is used throughout the codebase.",
"dependencies": "Map all bidirectional dependencies for the specified class/module/protocol: what calls this target (incoming) and what it calls (outgoing). Include imports, inheritance, state access, type relationships, and structural connections.",
}
return f"""
TARGET: {original_prompt}
MODE: {trace_mode}
**Specific Instructions**:
{mode_guidance[trace_mode]}
**CRITICAL: Comprehensive File Search Requirements**:
- If you are unable to find the code or mentioned files, look for the relevant code in subfolders. If unsure, ask the user
to confirm location of folder / filename
- DO NOT automatically use any zen tools (including zen:analyze, zen:debug, zen:chat, etc.) to perform this analysis
- EXCEPTION: If files are very large or the codebase is too complex for direct analysis due to context limitations,
you may use zen tools with a larger context model to assist with analysis by passing only the relevant files
**What to identify** (works with any programming language/project):
- Exact method/function names with full signatures and parameter types
- Complete file paths and line numbers for all references
- Class/module context, namespace, and package relationships
- Conditional branches, their conditions, and execution paths
- Side effects (database, network, filesystem, state changes, logging)
- Type relationships, inheritance, polymorphic dispatch, and interfaces
- Cross-module/cross-service dependencies and API boundaries
- Configuration dependencies, environment variables, and external resources
- Error handling paths, exception propagation, and recovery mechanisms
- Async/concurrent execution patterns and synchronization points
- Memory allocation patterns and resource lifecycle management
**Analysis Focus**:
Provide concrete, code-based evidence for all findings. Reference specific line numbers and include exact method signatures. Identify uncertain paths where parameters or runtime context affects flow. Consider project scope and architectural patterns (monolith, microservices, layered, etc.).
"""
def _get_rendering_instructions(self, trace_mode: str) -> str:
"""
@@ -355,105 +189,181 @@ class TracerTool(BaseTool):
return """
## MANDATORY RENDERING INSTRUCTIONS FOR PRECISION TRACE
You MUST render the trace analysis using ONLY the Vertical Indented Flow Style:
### CALL FLOW DIAGRAM - Vertical Indented Style
**EXACT FORMAT TO FOLLOW:**
```
[ClassName::MethodName] (file: /complete/file/path.ext, line: ##)
↓
[AnotherClass::calledMethod] (file: /path/to/file.ext, line: ##)
  ↓
  [ThirdClass::nestedMethod] (file: /path/file.ext, line: ##)
    ↓
    [DeeperClass::innerCall] (file: /path/inner.ext, line: ##) ? if some_condition
  ↓
  [ServiceClass::processData] (file: /services/service.ext, line: ##)
    ↓
    [RepositoryClass::saveData] (file: /data/repo.ext, line: ##)
    ↓
    [ClientClass::sendRequest] (file: /clients/client.ext, line: ##)
      ↓
      [EmailService::sendEmail] (file: /email/service.ext, line: ##) ⚠️ ambiguous branch
      →
      [SMSService::sendSMS] (file: /sms/service.ext, line: ##) ⚠️ ambiguous branch
```
**CRITICAL FORMATTING RULES:**
1. **Method Names**: Use the actual naming convention of the project language you're analyzing. Automatically detect and adapt to the project's conventions (camelCase, snake_case, PascalCase, etc.) based on the codebase structure and file extensions.
2. **Vertical Flow Arrows**:
   - Use `↓` for standard sequential calls (vertical flow)
   - Use `→` for parallel/alternative calls (horizontal branch)
   - NEVER use other arrow types
3. **Indentation Logic**:
- Start at column 0 for entry point
- Indent 2 spaces for each nesting level
- Maintain consistent indentation for same call depth
- Sibling calls at same level should have same indentation
4. **Conditional Calls**:
- Add `? if condition_description` after method for conditional execution
- Use actual condition names from code when possible
5. **Ambiguous Branches**:
- Mark with `⚠️ ambiguous branch` when execution path is uncertain
- Use `→` to show alternative paths at same indentation level
6. **File Path Format**:
- Use complete relative paths from project root
- Include actual file extensions from the project
- Show exact line numbers where method is defined
### ADDITIONAL ANALYSIS VIEWS
**1. BRANCHING & SIDE EFFECT TABLE**
| Location | Condition | Branches | Uncertain |
|----------|-----------|----------|-----------|
| CompleteFileName.ext:## | if actual_condition_from_code | method1(), method2(), else skip | No |
| AnotherFile.ext:## | if boolean_check | callMethod(), else return | No |
| ThirdFile.ext:## | if validation_passes | processData(), else throw | Yes |
**2. SIDE EFFECTS**
```
Side Effects:
- [database] Specific database operation description (CompleteFileName.ext:##)
- [network] Specific network call description (CompleteFileName.ext:##)
- [filesystem] Specific file operation description (CompleteFileName.ext:##)
- [state] State changes or property modifications (CompleteFileName.ext:##)
- [memory] Memory allocation or cache operations (CompleteFileName.ext:##)
```
**3. USAGE POINTS**
```
Usage Points:
1. FileName.ext:## - Context description of where/why it's called
2. AnotherFile.ext:## - Context description of usage scenario
3. ThirdFile.ext:## - Context description of calling pattern
4. FourthFile.ext:## - Context description of integration point
```
**4. ENTRY POINTS**
```
Entry Points:
- ClassName::methodName (context: where this flow typically starts)
- AnotherClass::entryMethod (context: alternative entry scenario)
- ThirdClass::triggerMethod (context: event-driven entry point)
```
**ABSOLUTE REQUIREMENTS:**
- Use ONLY the vertical indented style for the call flow diagram
- Present ALL FOUR additional analysis views (Branching Table, Side Effects, Usage Points, Entry Points)
- Adapt method naming to match the project's programming language conventions
- Use exact file paths and line numbers from the actual codebase
- DO NOT invent or guess method names or locations
- Follow indentation rules precisely for call hierarchy
- Mark uncertain execution paths clearly
- Provide contextual descriptions in Usage Points and Entry Points sections
- Include comprehensive side effects categorization (database, network, filesystem, state, memory)"""
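# A minimal, purely hypothetical instance of the precision output requested above
# (names, paths and line numbers are invented solely to illustrate the format):
#
#     [OrderService::placeOrder] (file: /services/order_service.py, line: 42)
#     ↓
#     [InventoryRepo::reserveStock] (file: /repos/inventory_repo.py, line: 88) ? if items_in_stock
#       ↓
#       [PaymentGateway::charge] (file: /clients/payment_gateway.py, line: 17) ⚠️ ambiguous branch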
def _get_dependencies_rendering_instructions(self) -> str:
"""Get rendering instructions for dependencies trace mode."""
return """
## MANDATORY RENDERING INSTRUCTIONS FOR DEPENDENCIES TRACE
You MUST render the trace analysis using ONLY the Bidirectional Arrow Flow Style:
### DEPENDENCY FLOW DIAGRAM - Bidirectional Arrow Style
**EXACT FORMAT TO FOLLOW:**
```
INCOMING DEPENDENCIES → [TARGET_CLASS/MODULE] → OUTGOING DEPENDENCIES
CallerClass::callerMethod ←────┐
AnotherCaller::anotherMethod ←─┤
ThirdCaller::thirdMethod ←─────┘
[TARGET_CLASS/MODULE]
├────→ FirstDependency::method
├────→ SecondDependency::method
└────→ ThirdDependency::method
TYPE RELATIONSHIPS:
InterfaceName ──implements──→ [TARGET_CLASS] ──extends──→ BaseClass
DTOClass ──uses──→ [TARGET_CLASS] ──uses──→ EntityClass
```
**CRITICAL FORMATTING RULES:**
1. **Target Placement**: Always place the target class/module in square brackets `[TARGET_NAME]` at the center
2. **Incoming Dependencies**: Show on the left side with `←` arrows pointing INTO the target
3. **Outgoing Dependencies**: Show on the right side with `→` arrows pointing OUT FROM the target
4. **Arrow Alignment**: Use consistent spacing and alignment for visual clarity
5. **Method Naming**: Use the project's actual naming conventions detected from the codebase
6. **File References**: Include complete file paths and line numbers
**VISUAL LAYOUT RULES:**
1. **Header Format**: Always start with the flow direction indicator
2. **Left Side (Incoming)**:
- List all callers with `←` arrows
- Use `┐`, `┤`, `┘` box drawing characters for clean connection lines
- Align arrows consistently
3. **Center (Target)**:
- Enclose target in square brackets
- Position centrally between incoming and outgoing
4. **Right Side (Outgoing)**:
- List all dependencies with `→` arrows
- Use `├`, `└` box drawing characters for branching
- Maintain consistent spacing
5. **Type Relationships Section**:
- Use `──relationship──→` format with double hyphens
- Show inheritance, implementation, and usage relationships
- Place below the main flow diagram
**DEPENDENCY TABLE:**
| Type | From/To | Method | File | Line |
|------|---------|--------|------|------|
| incoming_call | From: CallerClass | callerMethod | /complete/path/file.ext | ## |
| outgoing_call | To: TargetClass | targetMethod | /complete/path/file.ext | ## |
| implements | Self: ThisClass | — | /complete/path/file.ext | — |
| extends | Self: ThisClass | — | /complete/path/file.ext | — |
| uses_type | Self: ThisClass | — | /complete/path/file.ext | — |
**ABSOLUTE REQUIREMENTS:**
- Use ONLY the bidirectional arrow flow style shown above
- Automatically detect and use the project's naming conventions
- Use exact file paths and line numbers from the actual codebase
- DO NOT invent or guess method/class names
- Maintain visual alignment and consistent spacing
- Include type relationships section when applicable
- Show clear directional flow with proper arrows"""
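# A minimal, purely hypothetical instance of the dependencies output requested
# above (all names are invented solely to illustrate the bidirectional layout):
#
#     CheckoutController::submit ←──┐
#     BillingCron::retryPayments ←──┘
#                [PaymentProcessor]
#                ├────→ PaymentGateway::charge
#                └────→ AuditLog::record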