Major new addition: refactor tool
Supports decomposing large components and files, finding code smells, and identifying modernization and code organization opportunities. Fix those mega-classes today! Line numbers are now added to embedded code so the model can reference specific lines in its responses to Claude.
@@ -259,6 +259,23 @@ All tools that work with files support **both individual files and entire direct
"Generate tests following patterns from tests/unit/ for new auth module"
```

**`refactor`** - Intelligent code refactoring with decomposition focus

- `files`: Code files or directories to analyze for refactoring opportunities (required)
- `prompt`: Description of refactoring goals, context, and specific areas of focus (required)
- `refactor_type`: codesmells|decompose|modernize|organization (required)
- `model`: auto|pro|flash|o3|o3-mini|o4-mini|o4-mini-high (default: server default)
- `focus_areas`: Specific areas to focus on (e.g., 'performance', 'readability', 'maintainability', 'security')
- `style_guide_examples`: Optional existing code files to use as style/pattern reference
- `thinking_mode`: minimal|low|medium|high|max (default: medium, Gemini only)
- `continuation_id`: Thread continuation ID for multi-turn conversations

```
"Analyze legacy codebase for decomposition opportunities" (auto mode picks best model)
"Use pro to identify code smells in the authentication module with max thinking mode"
"Use pro to modernize this JavaScript code following examples/modern-patterns.js"
"Refactor src/ for better organization, focus on maintainability and readability"
```
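
The natural-language prompts above are all you normally type; Claude translates them into an MCP `tools/call` request behind the scenes. As a rough sketch of what such a request might look like for the `refactor` tool, here is a small Python example. The file paths, argument types, request `id`, and the `build_refactor_call` helper are illustrative assumptions, not the server's actual API surface.

```python
import json

def build_refactor_call(request_id: int, arguments: dict) -> str:
    """Assemble a JSON-RPC 2.0 `tools/call` request for an MCP server (illustrative sketch)."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "refactor",       # tool name as documented above
            "arguments": arguments,   # tool-specific parameters
        },
    }
    return json.dumps(request, indent=2)

# Hypothetical invocation mirroring the documented parameters; paths and values are placeholders.
print(build_refactor_call(1, {
    "files": ["src/auth/"],                             # required
    "prompt": "Break up the legacy auth module",        # required
    "refactor_type": "decompose",                       # codesmells|decompose|modernize|organization
    "model": "pro",                                      # optional, defaults to the server default
    "focus_areas": ["maintainability", "readability"],  # optional
    "thinking_mode": "high",                             # optional, Gemini only
}))
```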

## Collaborative Workflows

### Design → Review → Implement
@@ -284,6 +301,14 @@ suspect lies the bug and then formulate and implement a bare minimal fix. Must n
with zen in the end using gemini pro to confirm we're okay to publish the fix
```

### Refactor → Review → Implement → Test
```
Use zen to analyze this legacy authentication module for decomposition opportunities. The code is getting hard to
maintain and we need to break it down. Use gemini pro with high thinking mode to identify code smells and suggest
a modernization strategy. After reviewing the refactoring plan, implement the changes step by step and then
generate comprehensive tests with zen to ensure nothing breaks.
```
### Tool Selection Guidance
To help choose the right tool for your needs:
@@ -292,14 +317,17 @@ To help choose the right tool for your needs:
2. **Want to find bugs/issues in code?** → Use `codereview`
3. **Want to understand how code works?** → Use `analyze`
4. **Need comprehensive test coverage?** → Use `testgen`
5. **Want to refactor/modernize code?** → Use `refactor`
6. **Have analysis that needs extension/validation?** → Use `thinkdeep`
7. **Want to brainstorm or discuss?** → Use `chat`

**Key Distinctions:**
- `analyze` vs `codereview`: analyze explains, codereview prescribes fixes
- `chat` vs `thinkdeep`: chat is open-ended, thinkdeep extends specific analysis
- `debug` vs `codereview`: debug diagnoses runtime errors, review finds static issues
- `testgen` vs `debug`: testgen creates test suites, debug just finds issues and recommends solutions
- `refactor` vs `codereview`: refactor suggests structural improvements, codereview finds bugs/issues
- `refactor` vs `analyze`: refactor provides actionable refactoring steps, analyze provides understanding
## Working with Large Prompts
@@ -30,6 +30,29 @@ Simulator tests replicate real-world Claude CLI interactions with the MCP server
**Important**: Simulator tests require `LOG_LEVEL=DEBUG` in your `.env` file to validate detailed execution logs.

#### Monitoring Logs During Tests

**Important**: The MCP stdio protocol interferes with stderr output during tool execution. While server startup logs appear in `docker compose logs`, tool execution logs are only written to file-based logs inside the container. This is a known limitation of the stdio-based MCP protocol and cannot be fixed without changing the MCP implementation.

To monitor logs during test execution:

```bash
# Monitor main server logs (includes all tool execution details)
docker exec zen-mcp-server tail -f -n 500 /tmp/mcp_server.log

# Monitor MCP activity logs (tool calls and completions)
docker exec zen-mcp-server tail -f /tmp/mcp_activity.log

# Check log file sizes (logs rotate at 20MB)
docker exec zen-mcp-server ls -lh /tmp/mcp_*.log*
```

**Log Rotation**: All log files are configured with automatic rotation at 20MB to prevent disk space issues. The server keeps:
- 10 rotated files for mcp_server.log (200MB total)
- 5 rotated files for mcp_activity.log (100MB total)
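
Rotation like this can be configured with Python's standard `logging.handlers.RotatingFileHandler`. The sketch below mirrors the limits listed above but is only an approximation of how such a setup might look, not the server's actual logging code; the `make_rotating_logger` helper is a hypothetical name.

```python
import logging
from logging.handlers import RotatingFileHandler

MB = 1024 * 1024

def make_rotating_logger(name: str, path: str, backups: int) -> logging.Logger:
    """Create a logger whose file rotates at 20MB, keeping `backups` old files."""
    handler = RotatingFileHandler(path, maxBytes=20 * MB, backupCount=backups)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger

# Limits matching the documentation: 10 backups for the server log, 5 for the activity log.
server_log = make_rotating_logger("mcp_server", "/tmp/mcp_server.log", backups=10)
activity_log = make_rotating_logger("mcp_activity", "/tmp/mcp_activity.log", backups=5)
```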

**Why logs don't appear in docker compose logs**: The MCP stdio_server captures stderr during tool execution to prevent interference with the JSON-RPC protocol communication. This means that while you'll see startup logs in `docker compose logs`, you won't see tool execution logs there.
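
To make the capture concrete, here is a minimal, purely illustrative Python sketch of the general technique: diverting `sys.stderr` to a file while a tool runs so stray writes cannot corrupt the JSON-RPC stream on stdout. The real capture happens inside the MCP stdio transport, and the function and log path below are placeholders.

```python
import contextlib
import sys

def run_tool_with_captured_stderr(tool, log_path: str):
    """Run `tool` while redirecting stderr to a log file (illustrative only)."""
    with open(log_path, "a") as log_file, contextlib.redirect_stderr(log_file):
        return tool()  # anything written to stderr during the call lands in the log file

def noisy_tool():
    # Simulates a tool that emits diagnostics on stderr while producing a result.
    print("tool execution detail", file=sys.stderr)
    return "ok"

result = run_tool_with_captured_stderr(noisy_tool, "/tmp/demo_tool.log")
```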
#### Running All Simulator Tests
```bash
# Run all simulator tests