Updated instructions.

Author: Fahad
Date: 2025-06-16 19:22:29 +04:00
Parent: b528598360
Commit: 5f69ad4049

@@ -91,6 +91,17 @@ class ExampleRequest(ToolRequest):
        default="detailed",
        description="Output format: 'summary', 'detailed', or 'actionable'"
    )

    # New features - images and web search support
    images: Optional[list[str]] = Field(
        default=None,
        description="Optional images for visual context (file paths or base64 data URLs)"
    )
    use_websearch: Optional[bool] = Field(
        default=True,
        description="Enable web search for documentation and current information"
    )
```
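The two new fields behave like ordinary keyword defaults. A dependency-free sketch of the same shape (the real model uses pydantic's `Field` with descriptions, as shown above):

```python
from dataclasses import dataclass
from typing import Optional

# Dependency-free sketch of the request fields above;
# the real model uses pydantic Field() with descriptions.
@dataclass
class ExampleRequestSketch:
    output_format: str = "detailed"
    images: Optional[list[str]] = None
    use_websearch: Optional[bool] = True

req = ExampleRequestSketch(images=["diagram.png"])
print(req.use_websearch)  # True (the default)
```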
### 3. Implement the Tool Class
@@ -152,6 +163,16 @@ class ExampleTool(BaseTool):
                "description": "Thinking depth: minimal (0.5% of model max), "
                "low (8%), medium (33%), high (67%), max (100%)",
            },
            "use_websearch": {
                "type": "boolean",
                "description": "Enable web search for documentation and current information",
                "default": True,
            },
            "images": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Optional images for visual context",
            },
            "continuation_id": {
                "type": "string",
                "description": "Thread continuation ID for multi-turn conversations",
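Schema `default` values like the one on `use_websearch` are what callers fall back to when an argument is omitted. A stdlib-only sketch of that merge step (`apply_defaults` is a hypothetical helper, not part of the server API):

```python
# Stdlib-only sketch: filling omitted arguments from schema defaults.
# apply_defaults is a hypothetical helper, not a server API.
schema_properties = {
    "use_websearch": {"type": "boolean", "default": True},
    "images": {"type": "array", "items": {"type": "string"}},
    "continuation_id": {"type": "string"},
}

def apply_defaults(arguments: dict, properties: dict) -> dict:
    """Return a copy of arguments with missing schema defaults filled in."""
    merged = dict(arguments)
    for name, spec in properties.items():
        if name not in merged and "default" in spec:
            merged[name] = spec["default"]
    return merged

print(apply_defaults({"images": ["diagram.png"]}, schema_properties))
# use_websearch is filled in from its schema default
```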
@@ -331,14 +352,16 @@ Find the `TOOLS` dictionary in `server.py` and add your tool:
```python
TOOLS = {
    "thinkdeep": ThinkDeepTool(),
    "codereview": CodeReviewTool(),
    "debug": DebugIssueTool(),
    "analyze": AnalyzeTool(),
    "chat": ChatTool(),
    "listmodels": ListModelsTool(),
    "precommit": Precommit(),
    "testgen": TestGenerationTool(),
    "refactor": RefactorTool(),
    "tracer": TracerTool(),
    "example": ExampleTool(),  # Add your tool here
}
```
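Registering the tool in `TOOLS` is what makes it reachable by name. A minimal sketch of how such a registry dispatches (stub classes stand in for the real tools):

```python
# Stub tools standing in for the real implementations.
class ChatTool:
    def run(self, prompt: str) -> str:
        return f"chat: {prompt}"

class ExampleTool:
    def run(self, prompt: str) -> str:
        return f"example: {prompt}"

TOOLS = {"chat": ChatTool(), "example": ExampleTool()}

def dispatch(name: str, prompt: str) -> str:
    # Look the tool up by name and delegate to it.
    tool = TOOLS.get(name)
    if tool is None:
        raise ValueError(f"Unknown tool: {name}")
    return tool.run(prompt)

print(dispatch("example", "hello"))  # example: hello
```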
@@ -462,6 +485,16 @@ Add your tool to the README.md in the tools section:
```markdown
### Available Tools
- **thinkdeep** - Extended thinking and reasoning for complex problems
- **codereview** - Professional code review with bug and security analysis
- **debug** - Debug and root cause analysis for complex issues
- **analyze** - General-purpose file and code analysis
- **chat** - General chat and collaborative thinking
- **listmodels** - List all available AI models and their capabilities
- **precommit** - Pre-commit validation for git changes
- **testgen** - Comprehensive test generation with edge cases
- **refactor** - Intelligent code refactoring suggestions
- **tracer** - Static analysis for tracing code execution paths
- **example** - Brief description of what the tool does
  - Use cases: [scenario 1], [scenario 2]
  - Supports: [key features]
@@ -470,6 +503,25 @@ Add your tool to the README.md in the tools section:
## Advanced Features
### Token Budget Management
The server provides a `_remaining_tokens` parameter that tools can use for dynamic content allocation:
```python
# In execute method, you receive remaining tokens:
async def execute(self, arguments: dict[str, Any]) -> list[TextContent]:
    # Access remaining tokens if provided (the parameter may be absent)
    remaining_tokens = arguments.get('_remaining_tokens')

    if remaining_tokens:
        # Use for file content preparation
        file_content = self._prepare_file_content_for_prompt(
            files,
            continuation_id,
            "Analysis files",
            max_tokens=remaining_tokens - 5000  # Reserve for response
        )
```
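A small helper makes the budget arithmetic explicit while guarding against `_remaining_tokens` being absent (the helper name and fallback numbers are illustrative, not part of the server API):

```python
# Hypothetical helper: compute the file-content budget from the
# _remaining_tokens argument. The constants are illustrative.
DEFAULT_BUDGET = 100_000
RESPONSE_RESERVE = 5_000

def file_token_budget(arguments: dict) -> int:
    # Fall back to a default budget when the server did not supply one.
    remaining = arguments.get("_remaining_tokens") or DEFAULT_BUDGET
    # Never return a negative budget.
    return max(remaining - RESPONSE_RESERVE, 0)

print(file_token_budget({"_remaining_tokens": 20_000}))  # 15000
print(file_token_budget({}))  # 95000 (falls back to the default budget)
```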
### Understanding Conversation Memory

The `continuation_id` feature enables multi-turn conversations using the conversation memory system (`utils/conversation_memory.py`). Here's how it works:
@@ -505,12 +557,15 @@ Tools can return special status responses for complex interactions. These are de
```python
# Currently supported special statuses:
SPECIAL_STATUS_MODELS = {
    "clarification_required": ClarificationRequest,
    "full_codereview_required": FullCodereviewRequired,
    "focused_review_required": FocusedReviewRequired,
    "test_sample_needed": TestSampleNeeded,
    "more_tests_required": MoreTestsRequired,
    "refactor_analysis_complete": RefactorAnalysisComplete,
    "trace_complete": TraceComplete,
    "resend_prompt": ResendPromptRequest,
    "code_too_large": CodeTooLargeRequest,
}
```
@@ -595,6 +650,30 @@ websearch_instruction = self.get_websearch_instruction(
full_prompt = f"{system_prompt}{websearch_instruction}\n\n{user_content}"
```
### Image Support
Tools can now accept images for visual context:
```python
# In your request model:
images: Optional[list[str]] = Field(
    None,
    description="Optional images for visual context"
)

# In prepare_prompt:
if request.images:
    # Images are automatically validated and processed by base class
    # They will be included in the prompt sent to the model
    pass
```
Image validation includes:
- Size limits based on model capabilities
- Format validation (PNG, JPEG, GIF, WebP)
- Automatic base64 encoding for file paths
- Model-specific image count limits
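Those rules can be sketched as a standalone checker (the extension list mirrors the formats above; the image-count limit here is a placeholder, since real limits are model-specific):

```python
# Sketch of the validation rules listed above. ALLOWED_EXTENSIONS mirrors
# the supported formats; MAX_IMAGES is a placeholder for a model-specific limit.
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}
MAX_IMAGES = 4

def validate_images(images: list[str]) -> None:
    if len(images) > MAX_IMAGES:
        raise ValueError(f"Too many images (max {MAX_IMAGES})")
    for img in images:
        if img.startswith("data:image/"):
            continue  # already a base64 data URL
        if not any(img.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
            raise ValueError(f"Unsupported image format: {img}")

validate_images(["diagram.png", "data:image/jpeg;base64,AAAA"])  # passes
```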
## Best Practices

1. **Clear Tool Descriptions**: Write descriptive text that helps Claude understand when to use your tool
@@ -614,6 +693,8 @@ full_prompt = f"{system_prompt}{websearch_instruction}\n\n{user_content}"
3. **Don't Hardcode Models**: Use model categories for flexibility
4. **Don't Forget Tests**: Every tool needs tests for reliability
5. **Don't Break Conventions**: Follow existing patterns from other tools
6. **Don't Overlook Images**: Validate image limits based on model capabilities
7. **Don't Waste Tokens**: Use remaining_tokens budget for efficient allocation
## Testing Your Tool
@@ -655,6 +736,42 @@ Before submitting your PR:
- [ ] Appropriate model category selected
- [ ] Tool description is clear and helpful
## Model Providers and Configuration
The Zen MCP Server supports multiple AI providers:
### Built-in Providers
- **Anthropic** (Claude models)
- **Google** (Gemini models)
- **OpenAI** (GPT and O-series models)
- **X.AI** (Grok models)
- **Mistral** (Mistral models)
- **Meta** (Llama models via various providers)
- **Groq** (Fast inference)
- **Fireworks** (Open models)
- **OpenRouter** (Multi-provider gateway)
- **Deepseek** (Deepseek models)
- **Together** (Open models)
### Custom Endpoints
- **Ollama** - Local models via `http://host.docker.internal:11434/v1`
- **vLLM** - Custom inference endpoints
### Prompt Templates
The server supports prompt templates for quick tool invocation:
```python
PROMPT_TEMPLATES = {
    "thinkdeep": {
        "name": "thinkdeeper",
        "description": "Think deeply about the current context",
        "template": "Think deeper about this with {model} using {thinking_mode} thinking mode",
    },
    # Add your own templates in server.py
}
```
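These templates are plain `str.format` strings, so substitution can be sketched as (the model name here is just an example value):

```python
# Templates are plain str.format strings; the filled-in values are examples.
template = "Think deeper about this with {model} using {thinking_mode} thinking mode"
prompt = template.format(model="gemini-2.5-pro", thinking_mode="high")
print(prompt)
```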
## Example: Complete Simple Tool

Here's a minimal but complete example tool: